

Artificial General Intelligence: What, When, & How

Since the field’s founding in the 1950s, the goal of artificial intelligence developers has been to create technology that is as smart as humans… or smarter. We’ve seen great strides in automation and augmentation, but what’s the next step? This is where artificial general intelligence (AGI) enters the scene.

Entertainment & AI

Perhaps no industry better presents the theoretical, practical, and ethical concerns of AI to the masses than Hollywood. For decades, studios have poured billions of dollars into exploring the many facets of AI with audiences all over the world through movies like The Matrix, Blade Runner, Minority Report (a personal favorite of mine), and, of course, Arnold Schwarzenegger’s blockbuster Terminator series. I, Robot, a Will Smith sci-fi action flick, dramatizes the Three Laws of Robotics written by science fiction author Isaac Asimov in his 1942 short story Runaround. Netflix’s remake of Lost in Space features an alien race creating thinking robots made in their likeness. And if you’re looking for a real mind-bender, check out Ex Machina.

While these films certainly sensationalize, and in some cases glamorize, the advent of sentient AI, there is no doubt that they make one think. What would thinking robots mean for humanity? Would they make humans, or at least human work, irrelevant? How do we control them while ensuring they don’t control us? These are only a few of the questions scientists, ethicists, and tech investors are trying to answer.

Many of these movies deal with what scientists would call not just artificial intelligence, but artificial general intelligence.

What is Artificial General Intelligence?

Asking researchers and developers to define AGI yields many different answers. Beyond the questions raised earlier, many also wonder what constitutes “intelligence” in the first place. Does true AI require sentience (something no AI has yet achieved)? Or is it more a matter of scaling existing technology?

First, let’s look at the history of AGI. The term “artificial general intelligence” is commonly credited to AI researcher Shane Legg. He suggested it to his former colleague Ben Goertzel, who was preparing a compilation of essays on superhuman AI, eventually titled Artificial General Intelligence and published in 2007. Legg had met Goertzel while working at Webmind, an AI company founded by Goertzel that aimed to create “general” intelligence technology.

Perhaps the most commonly accepted definition of AGI describes a machine with a human-like ability to generalize. In other words, an AGI should be able to apply knowledge learned from one set of information to a new set of information, across multiple domains, without being taught (programmed) how to do so. Within this realm of generalization, AGI technology could range from a multi-tool machine to a superhuman AI.

According to this definition, an AGI that can learn without being taught could also, potentially, self-improve at an exponential rate. In 1965, British mathematician Irving J. Good captured the endgame of such advanced technology: “the first ultraintelligent machine is the last invention that man need ever make.” Indeed, an AGI that surpasses human intellect could keep improving itself without human input.

But the “general” part of the term has made AGI a bit of a catchall for every future iteration and ideation of AI that we haven’t yet achieved. The definition laid out above isn’t universally accepted. Divisions abound over what AGI even is, how it does or could work, and whether its pursuit is even wise. Before we dive into the differing opinions of the world’s AI leaders, let’s review some of the latest AI developments.

Recent AI Innovations

Most of us are probably familiar with ChatGPT by now. Launched in November 2022, ChatGPT is a generative AI model developed by OpenAI (co-founded by Elon Musk and Sam Altman, among others; Musk left its board in 2018). The user “chats” with the AI, prompting it to produce humanlike text responses. Right now, you can use ChatGPT for free!

OpenAI’s recent major innovation is GPT-4, described as “a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.”
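For readers who want to go beyond the chat window, models like GPT-4 can also be called programmatically. Below is a minimal sketch using OpenAI’s official openai Python package (the v1.x client interface); the prompt and system message are illustrative placeholders, not anything prescribed by OpenAI.

```python
# A minimal sketch of calling a GPT model through OpenAI's Python library
# (v1.x interface). Requires `pip install openai` and an API key exported
# as the OPENAI_API_KEY environment variable; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # a text-capable GPT-4 model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is AGI?"},
    ],
)

print(response.choices[0].message.content)  # the model's text output
```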

Google’s DeepMind, creator of Gato, plans to release another generative AI, called Gemini, in the near future. Like GPT-4, Gemini will be a large language model. Demis Hassabis, DeepMind’s CEO, says that Gemini will have additional capabilities, like planning and the ability to solve problems.

March 2023 was a big month for AI! You can check out this article from Medium for more detail about AI innovations from the world’s leading tech companies.

What Are Experts Saying?

Many voices are competing to dominate the AI conversation. Check out our recent blog, which outlines what experts are saying about AI: where it’s been and where it’s going.

To summarize, many AGI commentators and researchers believe that achieving AGI is, at the very least, more complex than scaling existing models (and, at most, impossible), and that it is likely unattainable in the near future.

These questions also make one wonder what the true end goal of AI research is. Is it to create AI that can generalize? And if AGI is the goal, should it be to achieve human-like intelligence? Or to reach for superhuman capability?

Then, to add yet more fuel to the fire, there is the separation between “intelligence” and “consciousness.” We’ve yet to achieve a truly “intelligent” machine, so how can we fantasize about a conscious machine when philosophers and scientists have not defined what consciousness even means for humans?

This raises the next interesting question.

What Makes AI “Intelligent”?

Legg has described what he believes separates a narrow, one-task model from a machine that can generalize. He calls the two approaches, respectively, “one-algorithm” and “one-brain.”

Some of the most notable recent AI breakthroughs include AlphaZero, an AI system also developed by DeepMind. It learned how to play Go, chess, and the Japanese game shogi. But once it had mastered one game, its memory had to be wiped before the algorithm could learn a new one. This exemplifies Legg’s one-algorithm model: a single algorithm that can be applied to many problems, but only one at a time.

In the same way that a human uses one brain to perform varied skills, like playing chess, firing up a grill, or completing a math problem, Legg’s one-brain approach would mean the machine wouldn’t need a different algorithm and a fresh memory each time it learned a new task. It could approach multiple problems or tasks with the same system.
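To make the distinction concrete, here is a deliberately simplified Python sketch; the class names and the trivial train() stand-in are hypothetical illustrations, not anything from DeepMind’s actual systems.

```python
def train(examples):
    """Stand-in for a real learning procedure; it just records the examples."""
    return list(examples)

class OneAlgorithm:
    """Legg's 'one-algorithm' pattern: the same learner is reused,
    but memory is wiped between tasks, so mastering shogi erases chess."""
    def __init__(self):
        self.memory = {}

    def learn(self, task, examples):
        self.memory = {}                      # wipe everything learned so far
        self.memory[task] = train(examples)

class OneBrain:
    """Legg's 'one-brain' pattern: one persistent system that keeps
    accumulating skills, the way a human brain does."""
    def __init__(self):
        self.memory = {}

    def learn(self, task, examples):
        self.memory[task] = train(examples)   # earlier skills stay available

brain = OneBrain()
brain.learn("chess", ["e4 e5", "Nf3 Nc6"])
brain.learn("shogi", ["P-7f P-3d"])
print(sorted(brain.memory))                   # ['chess', 'shogi'], both kept
```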

Since human brains are the best example we have of general intelligence, many scientists model contemporary methods of machine learning on them. These methods, called “deep learning,” are based on the human brain’s neural pathways. IBM states:

“Deep learning neural networks, or artificial neural networks, attempt to mimic the human brain through a combination of data inputs, weights, and bias. These elements work together to accurately recognize, classify, and describe objects within the data. Deep neural networks consist of multiple layers of interconnected nodes, each building upon the previous layer to refine and optimize the prediction or categorization.”
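As a rough illustration of what IBM is describing, here is a minimal two-layer network sketched in Python with NumPy: data inputs flow through weights and biases, and each layer builds on the previous one to produce a classification. The layer sizes and random inputs are arbitrary choices for the example, not from IBM.

```python
import numpy as np

# A toy two-layer "deep" network: input -> hidden layer -> class probabilities.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                    # one data input with 4 features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # layer-1 weights and bias
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)  # layer-2 weights and bias

hidden = np.maximum(0.0, x @ W1 + b1)          # ReLU nodes: first layer output
scores = hidden @ W2 + b2                      # second layer refines the result
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over 3 candidate classes
print(probs)                                   # the network's classification
```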

Many proponents of deep learning believe that achieving AGI is a matter of creating the right algorithms and then arranging them in the correct cognitive structure. While numerous AGI companies, like OpenAI, are pursuing deep learning, others believe the approach is holding the industry back. This second group argues that building the right cognitive architecture is the real key to AGI: theoretically, once the parts are connected correctly, plugging the algorithms into the network is the simple part (as simple as programming an AI algorithm can be!).

Developers must also contend with the fact that current deep-learning models can’t set their own goals, another marker of intelligence and autonomy.

But modeling AI algorithms and cognitive structures on the brain is tricky business, especially when there’s still so much left to learn about the human brain.

Humans & AI Learn Differently

Creating these models is all easier said than done. These AI systems, like Gato, GPT-3, and others, can perform human-like behaviors, but that doesn’t mean they’re “thinking” like humans. At least, so says University of Melbourne cognitive neuroscience Ph.D. candidate James Fodor.

He explains that artificial neural networks (neural nets) learn using something called backpropagation, meaning the networks essentially work backward to correct their errors. In very basic terms, once an AI system makes a guess about a problem, or offers a prediction based on existing knowledge, the error in that output is sent backward through the network so it can adjust itself. This is the most common way to train AI systems.
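As a bare-bones illustration of that guess-and-correct loop, here is a one-weight “network” in Python learning the rule y = 2x by gradient descent. Real backpropagation applies the same idea layer by layer through a deep network; all the numbers here are arbitrary.

```python
import numpy as np

# The network's entire "knowledge" is a single weight w; the true rule is y = 2x.
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs                            # correct answers used to grade guesses
w = 0.0                                  # start out knowing nothing
lr = 0.05                                # learning rate: size of each correction

for step in range(100):
    guess = w * xs                       # forward pass: make a prediction
    error = guess - ys                   # how wrong was each prediction?
    grad = 2.0 * np.mean(error * xs)     # backward pass: gradient of the MSE
    w -= lr * grad                       # adjust the weight to shrink the error

print(round(w, 3))                       # close to 2.0: the rule was "learned"
```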

But this method is very different from the way that humans learn. Neuroscientists largely agree that backpropagation can’t be used in the brain. Humans learn by applying conceptual knowledge to new information, whereas AI uses complex statistical data to make predictions.

Remember, scientists don’t even fully understand how humans learn, so basing AI algorithms and networks on something we don’t completely grasp is kind of like trying to run before you can walk. Fodor concludes that we must discover more about the human brain and how it learns before we can develop AI that truly functions like a brain.

An AGI Timeline

Most scientists, researchers, and AGI investors aren’t as bold as Elon Musk in predicting when we’ll achieve AGI. Because of the sheer breadth and depth of the issue, putting AGI on a timeline is akin to predicting when humans will inhabit Mars (which, coincidentally, Musk is also trying to achieve).

There’s no doubt that Gato, AlphaZero, GPT-3, and other AI systems are significant developments. But they are mere mimicry of human cognition. Bill Gates points out in a March 2023 blog post that none of the AI innovations of 2022 moved us significantly closer to AGI, or “strong AI”: today’s AI can’t control the physical world or set its own goals, and right now it has no meaningful independence.

So what’s the likelihood of AI outstripping human intelligence? Well, it’s probably not something that anyone alive today need lose sleep over. To dream of AGI 20 years ago, much less discuss it seriously in a professional context, was laughable. And even though it seems we’re only marginally closer to achieving AGI than we were two decades ago, the pursuit of artificial general intelligence in the AI world is undoubtedly more commonplace—and controversial—than ever before.
