News & Insight

6 Experts on the Future of Artificial Intelligence

Artificial Intelligence: everyone talks about it, some are scared of it, and very few actually understand it. Many AI experts warn that companies treating AI competition like an arms race creates significant ethical concerns around misinformation, discrimination, and privacy. So we’re highlighting quotes from six experts on the future of artificial intelligence, its immediate challenges, and its potential threats.

From the Experts:

Elon Musk – Tesla, SpaceX, and recently founded X.ai

“There’s a strong probability that it will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity. Hopefully, that chance is small, but it’s not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong.”

Geoffrey Hinton – “Godfather of AI”, retired Google Researcher, Turing Award Recipient

Hinton recently retired from Google to speak freely about the dangers of this technology. “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. Unlike biological intelligences like human beings, [AI systems] can learn separately, they share their knowledge ‘instantly.’ So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”


Yoshua Bengio – Professor of Computer Science at the University of Montreal, Turing Award Recipient

“I’ve been speaking for many years about the long-term danger to society of having very powerful tools at our disposal. It’s like any technology in the sense that the more powerful they are, the more useful they can be, but also the more dangerous they can be if they’re misused. And I’ve had the impression for many years that the way our society is organized, in each country but also globally, is not adequate to face the challenges that very powerful technologies bring — in particular AI, but I’m also thinking of biotechnology in the future, which is something great that, I think, can be incredibly useful for humanity as well. But the more powerful it is, the more wisdom we need in the way we deal with it… It also means that companies are in a bit of a haste and may not take the precautions that would otherwise be warranted. So, in the short term, I think we need to accelerate the countermeasures.”

Find the full interview here.

Steve Omohundro – Founder, Vision and Learning Group and the Center for Complex Systems Research

“We have shown that all advanced AI systems are likely to exhibit a number of basic drives. It is essential that we understand these drives in order to build technology that enables a positive future for humanity. […] The rapid pace of technological progress suggests that these issues may become of critical importance soon.”


Stuart Russell – Professor of Computer Science at Berkeley

“Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures.”


Ray Kurzweil – Google Scientist

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence — the human biological machine intelligence — of our civilization a billion-fold.”


Stay tuned as we dive deeper into artificial intelligence, machine learning, and how they impact both business and life. You can also check out our blog for other relevant industry topics, or find our podcast on YouTube, in audio form, or wherever you get your podcasts.
