
General Artificial Intelligence: What, When, & How

Since the dot-com boom of the late 90s, the goal of artificial intelligence developers has been to create technology that is as smart as humans… or smarter. We’ve seen great strides in automation and augmentation, but what’s the next step? This is where artificial general intelligence (AGI) enters the scene.

Entertainment & AI

Perhaps no industry better brings the theoretical, practical, and ethical concerns of AI to the masses than Hollywood. Since before the dot-com boom, Hollywood has poured billions of dollars into exploring the many facets of AI with audiences all over the world through movies like The Matrix, Blade Runner, Minority Report (a personal favorite of mine), and, of course, Arnold Schwarzenegger’s blockbuster Terminator series. I, Robot, a Will Smith sci-fi action flick, addresses the Three Laws of Robotics written by science fiction author Isaac Asimov in his 1942 short story Runaround. Netflix’s remake of Lost in Space includes an alien race creating thinking robots made in their likeness. And if you’re looking for a real mind-bender, check out Ex Machina.

While these films certainly sensationalize, and in some cases glamorize, the arrival of sentient AI, there is no doubt that they make one think. What would thinking robots mean for humanity? Do they make humans, or at least human work, irrelevant? How do we control them while ensuring they don’t control us? These are only a few of the questions scientists, ethicists, and tech investors are trying to answer.

Many of these movies deal with what scientists would call not just artificial intelligence, but artificial general intelligence.

What is Artificial General Intelligence?

Asking researchers and developers to define AGI reveals many different answers. In addition to the questions asked earlier, many also wonder what constitutes “intelligence” in the first place. Does true AI require sentience (which AI has yet to achieve)? Or is it more a matter of scaling existing technology?

First, let’s look at the history of AGI. The term “artificial general intelligence” was coined by AI researcher Shane Legg. He suggested it while conversing with his former colleague Ben Goertzel about Goertzel’s upcoming compilation of essays on superhuman AI, which was eventually titled Artificial General Intelligence and published in 2007. Legg had met Goertzel while working at Webmind, an AI company founded by Goertzel that aimed to create “general” intelligence technology.

Perhaps the most commonly accepted definition of AGI describes a machine with a human-like ability to generalize. In other words, AGI should be able to apply knowledge learned from one set of information to new information in other areas without being taught (programmed) how to. Within this realm of generalization, AGI technology could range from a multi-tool machine to a superhuman AI.

According to this definition, an AGI that can learn without being taught could also, potentially, improve itself at an exponential rate. In 1966 British mathematician Irving J. Good captured the endgame of such advanced technology: “the first ultraintelligent machine is the last invention that man need ever make.” Indeed, an AGI that surpasses human intellect would be able to self-improve without human input.

But the “general” part of the term has made AGI a bit of a catchall for all future iterations and ideations of AI that we haven’t yet achieved. The definition laid out above isn’t shared by everyone. Disagreements about what AGI even is, how it does or could work, and whether pursuing it is even wise run rampant. Let’s review the qualms of some of the world’s leading tech minds.

The Divisions of AGI

In recent weeks, talk of AGI has arisen regarding a new AI development from Google’s DeepMind, which was founded by Shane Legg, Demis Hassabis, and Mustafa Suleyman in 2010 and bought by Google in 2014. This new AI machine, dubbed Gato, is described as “a multi-modal, multi-task, multi-embodiment generalist policy.” If you’re anything like me, you’re wondering what in the world that means.

Essentially, Gato is an artificial intelligence machine that operates on a much larger scale than many other AI systems. Whereas personal assistant-type technologies, like Siri and Alexa, perform very specific tasks based on a narrow model, Gato is a single model trained to handle hundreds of these narrow tasks in one neat package. OpenAI, which Elon Musk co-founded, has developed similarly ambitious tech called GPT-3, which is essentially a very complex large language model.
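
To make the contrast concrete, here’s a minimal Python sketch of the “generalist policy” idea: a single model that serializes very different kinds of input into one flat token stream, versus a separate narrow model per task. The names (GeneralistPolicy, encode, act) and the toy tokenization are hypothetical illustrations of mine, not DeepMind’s or OpenAI’s actual code.

```python
# Illustrative sketch only: one shared model across task types vs. one model per task.
# GeneralistPolicy, encode, and act are made-up names, not DeepMind's API.

class GeneralistPolicy:
    def __init__(self, model):
        self.model = model  # a single sequence model reused for every task

    def encode(self, observation):
        """Serialize text, pixels, or sensor readings into one flat token stream."""
        if isinstance(observation, str):
            return [ord(ch) for ch in observation]      # toy text "tokenizer"
        return [int(v) % 256 for v in observation]      # toy numeric/pixel tokens

    def act(self, observation):
        tokens = self.encode(observation)
        return self.model(tokens)  # the same weights chat, caption, and control


# A "narrow" assistant, by contrast, needs a separate model for each task:
narrow_models = {
    "weather": lambda query: "It looks sunny.",  # placeholder single-purpose models
    "timer": lambda query: "Timer set.",
}

policy = GeneralistPolicy(model=lambda tokens: sum(tokens) % 2)  # dummy stand-in model
print(policy.act("hello"), policy.act([3, 1, 4, 1, 5]))
```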

Dr. Nando de Freitas, a lead researcher at DeepMind, argues that this is the answer to AGI. In a May 14 tweet thread, the scientist replied to an opinion piece by The Next Web author Tristan Greene that questioned the potential for humans to achieve true AGI (or AGI as defined earlier) in the near future. Greene argues that while Gato is undoubtedly an impressive piece of artificial intelligence, it’s still not artificial general intelligence.

De Freitas disagreed: “My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities… Solving these scaling challenges is what will deliver AGI.”

What Are Other People Saying?

But, like Greene, many AI scientists believe that simply scaling a system like Gato won’t produce true AGI. Here are quick summaries of the opinions of some of today’s leading AI minds:

  • Gary Marcus, founder and CEO of Robust.AI: Marcus argues that Gato and GPT-3 aren’t really AGI but instead an iteration of what he calls Alt Intelligence. He says, “Alt Intelligence isn’t about building machines that solve problems in ways that have to do with human intelligence. It’s about using massive amounts of data – often derived from human behavior – as a substitute for intelligence.” Marcus believes that Alt Intelligence is still highly valuable, but he’s also saying “let’s call a spade a spade.”
  • Elon Musk, founder and CEO of Tesla, CEO of SpaceX, and co-founder of OpenAI: In a May 30 tweet, Musk stated that he believed we could arrive at AGI in 2029. Marcus shot back in a later Substack post criticizing Musk’s view, pointing out Musk’s failure to account for outliers, the sheer size of the problem of AI, the fact that humans have yet to develop a reliable method for building complex cognitive systems, and Musk’s historically “spotty” prediction timelines. But if Musk’s track record with mind-blowing technology means anything, we certainly can’t rule out his voice.
  • Shane Legg, co-founder of DeepMind: As noted earlier, Legg first came up with the term “artificial general intelligence” for Ben Goertzel’s 2007 book Artificial General Intelligence. His ruminations about AI imagine a machine a user could interact with as one would another human.
  • Jerome Pesenti, Vice President of AI at Facebook: Pesenti has said he doesn’t agree with the term “artificial general intelligence” and goes so far as to say he doesn’t even know what it means. And, since it seems that all roads lead back to Elon Musk, Pesenti replied to Musk’s prediction that we would conquer AGI in the next five years, saying, “@elonmusk has no idea what he’s talking about when he talks about AI. There is no such thing as AGI and we are nowhere near matching human intelligence. #noAGI”
  • Julian Togelius, associate professor at the Department of Computer Science and Engineering at New York University: Also on Twitter, Togelius states, “Belief in AGI is like belief in magic. It is a way of abandoning rational thought and expressing hope/fear for something that cannot be understood.”
  • Andrew Ng, co-founder of Coursera, former head of Google Brain: Ng is a rather vocal critic of AGI pursuits, noting, “[t]here’s tons of progress in AI, but that does not imply there’s any progress in AGI.” He believes it’s more valuable to focus on real problems that need timely and practical solutions. Though he calls the prospect of AGI “exciting,” Ng acknowledges the controversy that arises when scientists try to make specific claims about it.

To summarize these opinions: many AGI commentators and researchers believe that AGI is at the very least more complex than scaling existing models, at most impossible, and in any case unlikely to be attained in the near future.

These questions also make one wonder: what is the true end goal of AI research? Is it AGI? And if AGI is the goal, should it be to achieve human-like intelligence? Or to reach for superhuman capability?

Then, to add yet more fuel to the fire, there is the distinction between “intelligence” and “consciousness.” We’ve yet to achieve a truly “intelligent” machine, so how can we fantasize about a conscious machine when philosophers and scientists have not even defined what consciousness means for humans?

This raises the next interesting question.

What Makes AI “Intelligent”?

Legg has defined what he believes is the difference between a narrow, one-task model and a machine that can generalize. He defines them, respectively, as “one-algorithm” and “one-brain.”

Some of the most recent AI breakthroughs include AlphaZero, an AI machine also developed by DeepMind. It learned how to play Go, chess, and the Japanese game shogi. But once it had mastered one game, its memory had to be wiped so that the algorithm could learn the next one. This exemplifies Legg’s one-algorithm model, where the same algorithm can be applied to different problems, but only one at a time.

In the same way that a human can use a single brain to perform varied skills, like playing chess, firing up a grill, or completing a math problem, Legg’s one-brain approach would mean the machine wouldn’t need a different algorithm and a fresh memory each time it learned a new task. It could approach multiple problems or tasks with the same system.
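
As a rough illustration of that distinction (the train function and the dictionary-based agents below are stand-ins of my own, not anything DeepMind has published), the one-algorithm setting wipes the learner clean between tasks, while the one-brain setting lets a single system accumulate every skill:

```python
# Toy contrast between Legg's "one-algorithm" and "one-brain" framings.
# train() and the dict-based agents are hypothetical stand-ins, not real DeepMind code.

def train(agent, game):
    agent.setdefault("skills", set()).add(game)  # stand-in for a full training run
    return agent

# One-algorithm: the same algorithm masters each game, but memory is wiped in between.
for game in ["go", "chess", "shogi"]:
    fresh_agent = {}          # blank slate every time
    train(fresh_agent, game)  # the finished agent knows only this one game

# One-brain: a single persistent agent keeps learning without being reset.
one_brain = {}
for game in ["go", "chess", "shogi"]:
    train(one_brain, game)    # skills accumulate in the same system

print(one_brain["skills"])    # all three games live in one "brain"
```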

Since human brains are the best example we have of general intelligence, many scientists model contemporary methods of machine learning on them. These methods, called “deep learning,” are based on the human brain’s neural pathways. IBM states:

“Deep learning neural networks, or artificial neural networks, attempt to mimic the human brain through a combination of data inputs, weights, and bias. These elements work together to accurately recognize, classify, and describe objects within the data. Deep neural networks consist of multiple layers of interconnected nodes, each building upon the previous layer to refine and optimize the prediction or categorization.”
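
As a minimal, framework-free sketch of what that quote describes, namely inputs flowing through layers of interconnected nodes via weights and a bias, here is a toy two-layer forward pass in NumPy. The layer sizes and random data are arbitrary, chosen only to show the structure:

```python
# Minimal NumPy sketch of the layered "weights and bias" structure described above.
# Layer sizes and inputs are arbitrary toy values.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                       # one input example with 4 features

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # first layer: 4 inputs -> 8 nodes
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)     # second layer: 8 nodes -> 3 outputs

hidden = np.maximum(0, x @ W1 + b1)               # weighted sum + bias, ReLU activation
logits = hidden @ W2 + b2                         # each layer builds on the previous one

probs = np.exp(logits) / np.exp(logits).sum()     # softmax: raw scores -> probabilities
print(probs)                                      # three probabilities that sum to 1
```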

Many proponents of deep learning believe that achieving AGI is a matter of creating the right algorithms and then arranging them in the correct cognitive structure.

While numerous AGI companies, like OpenAI, are pursuing deep learning, others believe that deep learning is holding the industry back. This group argues that designing the right cognitive structure is the real key to AGI. Theoretically, if you have the parts connected correctly, then plugging the algorithms into the network is simple (as simple as programming an AI algorithm can be!).

But modeling AI algorithms and cognitive structures is a slippery slope, especially when there’s still much more to learn about the human brain.

Humans & AI Learn Differently

Creating these models is all easier said than done. These AI systems, like Gato, GPT-3, and others, can perform human-like behaviors, but that doesn’t mean they’re “thinking” like humans. At least so says University of Melbourne cognitive neuroscience Ph.D. candidate James Fodor.

He explains that artificial neural networks (neural nets) learn using something called backpropagation, which means the network essentially works backward to correct its errors. In very basic terms, once an AI system makes a guess or offers a prediction based on its existing knowledge, the difference between that guess and the correct answer is sent back through the network, and its internal connections are adjusted so the next guess is a little better. This is the most common way to train AI systems.
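
To make “working backward to correct errors” concrete, here is a bare-bones gradient-descent loop for a single linear unit: the model guesses, the error is measured, and that error is pushed back to nudge the weights. It’s a toy sketch of the general idea with made-up data, not how Gato or GPT-3 are actually trained.

```python
# Toy illustration of learning by propagating errors backward (single linear unit).
# The data and learning rate are arbitrary; real systems repeat this across many layers.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -2.0]) + 1.0      # the hidden "true" rule the model must learn

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    pred = X @ w + b                      # forward pass: the machine's "guess"
    error = pred - y                      # how wrong was each guess?
    w -= lr * (X.T @ error) / len(X)      # send the error backward to adjust the weights
    b -= lr * error.mean()                # ...and the bias

print(w, b)                               # approaches [3, -2] and 1 as errors shrink
```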

But this method is very different from the way that humans learn. Neuroscientists largely agree that the brain does not learn via backpropagation. Humans learn by applying conceptual knowledge to new situations, whereas these AI systems rely on statistics over vast amounts of data to make predictions.

Remember, scientists don’t even fully understand how humans learn, so basing AI algorithms and networks on something we don’t completely grasp is kind of like trying to run before you can walk. Fodor concludes that we must discover more about the human brain and how it learns before we can develop AI that truly functions like a brain.

An AGI Timeline

Most scientists, researchers, and AGI investors aren’t as bold as Elon Musk in predicting when we’ll achieve AGI. Because of the sheer breadth and depth of the issue, putting AGI on a timeline is akin to predicting when humans will inhabit Mars.

There’s no doubt that Gato, AlphaZero, GPT-3, and other AI machines are significant developments. But they are mere mimicry of human cognition, and imitation is not the real thing.

So what’s the likelihood of AI outstripping human intelligence? Well, it’s not something that anyone alive today need lose sleep over. To dream of AGI 20 years ago, much less discuss it seriously in a professional context, was laughable. And even though it seems we’re not much closer to achieving AGI than we were two decades ago, the pursuit of artificial general intelligence in the AI world is undoubtedly more commonplace—and controversial—than ever before.
