

5 Experts on the Future of Artificial Intelligence

Artificial intelligence: everyone talks about it, some are scared of it, and very few actually understand it. Questions abound about implementation, regulation, copyright law, and even how existing AI tech works. So we’re highlighting quotes from five experts on the future of artificial intelligence and their concerns about immediate challenges and potential threats.

From the Experts:

Stuart Russell

On how existing AI tech works: In a conversation with CNBC, Stuart Russell, professor of computer science at the University of California, Berkeley, explained why large language models like ChatGPT are one piece of the artificial general intelligence puzzle, not the whole solution. “Right now, what a lot of people are excited about are large language models. They are a product of AI, but they are not AI. It’s sort of like confusing physics with cell phones, right? Cell phones are a product of physics – they’re not the same thing as physics… And I think most people within the field believe that the large language models are part of the solution. One metaphor that I find helpful is to think about a jigsaw puzzle. And, if we can fit it all together, we’ll have general purpose intelligent systems. And we think that these large language models are one piece of that jigsaw puzzle, but we haven’t yet figured out what shape that piece is, and so we don’t know how to fit it together with the other pieces. And the reason we haven’t figured out what shape the piece is, is because we have really no idea what’s going on inside.”

He describes these neural networks as “a chain-link fence. And every little link in that circuit is tunable. And as you tune those connection strengths in the circuit, the output of the circuit will change. And say that circuit has about a trillion links — a chain-link fence covering 500 square miles. And then you are training it with 20 or 30 trillion words, and you’re just tweaking all those links to get it to be good at predicting the next word. And then you hope for the best.”

Charles Higgins

On AI reaching the general public: In a conversation with Business Insider, co-founder of AI-training startup Tromero, Charles Higgins, said, “I think 2024 will be the year where we actually start seeing widespread adoption of all these AI tools… With a model like Gemini, the important part is accessibility. It’s already integrated into products you are used to and use. So using an AI suite of tools is going to become the norm rather than the exception.”

Dr. Andres Guadamuz

On copyright law and AI: With several lawsuits underway around the world, many are asking whether it’s legal to train AI models on copyrighted content. Dr. Andres Guadamuz, professor of intellectual property law at the University of Sussex, said, “It’s an open question in most countries. I think that in 2024 we’re going to get potentially one or two decisions that will help clarify things — but this is going to take a long time, probably between four to five years to properly settle.”

Trevor Back

On AI impacting jobs: At the AI Summit New York 2023, Trevor Back, chief product officer at Speechmatics, noted, “There’s definitely going to be a difference between those that use AI and those that don’t. If you don’t use AI, you are going to struggle since most roles will use some form of AI in the way that they act. Curiosity is good. You have to say ‘I’m not afraid of what the machine might do to me. I’m interested in what it can do for me.’”

Yoshua Bengio

On AI regulation: Many lawmakers have been in a tailspin over AI regulation since the release of ChatGPT in the fall of 2022, and President Biden published an Executive Order on AI regulation in October 2023. In July 2023, Yoshua Bengio, known as one of the godfathers of AI, spoke at a US Senate committee hearing on principles of AI regulation. Addressing new technologies like ChatGPT and others, he said, “These advancements have led many top AI researchers, including myself, to revise our estimates of when human-level intelligence could be achieved, previously thought to be decades or even centuries away. We now believe it could be within a few years or decades. The shorter timeframe, say five years, is really worrisome because we’ll need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future… if this technology goes wrong, it could go terribly wrong.”

Later in his statement, Bengio outlined the three areas he thought national and global leaders should focus on: “I firmly believe that urgent efforts, preferably in the coming months, are required in the following three areas. First, the coordination of highly agile national and international regulatory frameworks and liability incentives that bolster safety. This would require licenses for people and organizations with standardized duties to evaluate and mitigate potential harm, allow independent audits, and restrict AI systems with unacceptable levels of risk. Second, because the current methodologies are not demonstrably safe, significantly accelerate global research endeavors focused on AI safety, enabling the informed creation of essential regulations, protocols, safe AI methodologies, and governance structures. And third, research on countermeasures to protect society from potential rogue AIs, because no regulation is going to be perfect. This research in AI and international security should be conducted in several highly secure and decentralized labs operating under multilateral oversight to mitigate an AI arms race. Given the significant potential for detrimental consequences, we must allocate substantial additional resources to safeguard our future.”

Stay tuned to our blog as we dive deeper into artificial intelligence, machine learning, and how they impact both business and life. You can also check out our blog for other relevant industry topics, or our podcast on YouTube, the audio version, or wherever you get your podcasts.
