I am here in Washington, D.C. for the World Future conference. Yesterday I attended the “Education Summit” and the opening keynote addresses, but have not yet had time to compile those notes (and, like many conferences, this one has no Internet access in any of the conference rooms – I won’t comment on the irony there).

Today I got wise and began taking my notes in more of a narrative to make them easier to post here. This means I am starting my blog posts with today’s sessions and will, eventually, backtrack to yesterday. The first session of the morning was on the future of Artificial Intelligence. These are my notes – I hope they are reasonably accurate.

You know the scientist in Independence Day who is utterly fascinated with the aliens – the one with the long brown hair and beard? Ben Goertzel is the spitting image of that guy.

Goertzel is a mathematician and ex-academic who works in the software industry. His startup companies, Novamente and Biomind, each address artificial intelligence in different ways.


I. Artificial General Intelligence versus Narrow AI

Goertzel tells us that artificial general intelligence is the ability to achieve complex goals in complex environments using limited computational resources. Narrow AI is where the vast majority of academic research takes place – the example you’d be most familiar with is a computer program that can play chess or checkers (like IBM’s Deep Blue).

Google is the most well-known example of an AI company today, with both its search engine and its AdSense software. Its economic success is a testament to the power of narrow AI applications. Other well-known examples include the characters not controlled by humans in video games, which are essentially narrow AI programs. Goertzel predicts an incredible increase in the complexity of these non-player characters in the next decade.

The movie industry predicted that there would be human-level AI by the year 2001 (remember HAL?). Today, in 2008, however, there is nothing close to a consensus on how exactly we are going to achieve human-level AI (or whether we should). Goertzel suggests that we need to better understand how the brain works before we can realistically build human-level AI. He mentions IBM’s experiment to simulate a mouse brain – a simulation that took 10 seconds of computing time for every one second of brain function. Soon, the hardware will theoretically be available to simulate human-level AI, and as the power of computing increases, it isn’t unreasonable to expect that eventually human-level AI will be available on your desktop.

However, narrow AI still dominates the field, and it has real limitations. Goertzel gives the example of googling the phrase “How many years does a pig live?” – Google gets pretty close to the correct answer. Change this to “How many years does a dead pig live?” or “How many years does a pig in captivity live?” – and the results are essentially nonsense given the questions. So while we have some narrow AI search capability, there is a “common sense bottleneck.” Narrow AI can’t yet handle NLP (natural language processing). Current AI can parse a sentence and diagram its structure, but it has difficulty drawing the correct meaning. Goertzel gives another great example: consider the variety of meanings of the sentence “I saw the man with the telescope.” Does that mean you looked through the telescope at the man? You saw a man with your own eyes, and the man had a telescope? Or you are sawing the man in half using a sharp telescope? Today, you can go online and play with chatbots, but it is always painfully obvious that they are limited by their logical programming.
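To make the ambiguity concrete, here is a small sketch – my own illustration, not Goertzel’s – using the open-source NLTK toolkit. A toy grammar admits two different parse trees for the same seven words, and nothing in the syntax alone tells the machine which reading is meant:

```python
import nltk

# A toy grammar containing exactly the ambiguity in the sentence:
# the prepositional phrase "with the telescope" can attach either
# to the verb (the seeing was done with the telescope) or to the
# noun (the man has the telescope).
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> 'I' | Det N | Det N PP
VP -> V NP | V NP PP
PP -> P NP
Det -> 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw the man with the telescope".split()

# The chart parser returns every structure the grammar allows --
# here, two distinct trees for the same string of words.
for tree in parser.parse(sentence):
    print(tree)
```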

Goertzel tells us that 90% of the AI community thinks that narrow AI will eventually lead to general human-level AI, but he disagrees. He thinks there will instead be dramatic breakthroughs in general AI, citing Moore’s law and the exponential growth of computing. Computers are getting faster and faster and cheaper and cheaper, so hardware is unlikely to be the obstacle in AI systems. He also mentions that improvements in the efficiency and temporal resolution of brain scanning mean our understanding of how the brain functions will advance faster and faster.

Where are we now? We have fast computers that are internetworked, decent virtual worlds, somewhat decent robots, narrow AI algorithms, and a basic understanding of human cognitive architecture.

II. The Novamente and OpenCog AGI Projects

The Novamente cognition engine is a Goertzel project that attempts to merge several general AI algorithms. Novamente seeded some of its code to a project called OpenCog, a site that provides open-source AI code in hopes of increasing the speed at which AI is developed.

The formulation of the general AI problem is basically this: we supply the AI with certain goals, like “explore your environment” or “meet new people.” Given the context it observes, the AI works to achieve its goals (a minimal sketch of this loop follows the list below). Goertzel describes five key aspects of general AI design:

  1. Knowledge Representation (we learn how to do this as we grow up)
  2. Cognitive Architecture (areas for language, areas for action, etc.)
  3. Knowledge Creation
  4. Environment / Education (including physical & virtual robotics)
  5. Emergent Structures and Dynamics (what structures emerge in the AI system? The sense of self – who am I?)
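To pin down that goal-driven formulation, here is a minimal sketch of the loop in Python. It is purely my own illustration, not Novamente code, and every name in it (Agent, score_action, and so on) is hypothetical:

```python
import random

# A purely illustrative sketch of the goal-driven loop described
# above: the designer supplies goals, and the agent repeatedly
# observes its context and picks the action it predicts will best
# advance those goals. All names here are hypothetical.

class Agent:
    def __init__(self, goals):
        self.goals = goals        # e.g. "explore your environment"
        self.knowledge = []       # aspect 1: knowledge representation

    def score_action(self, action, context):
        # Stand-in for real inference (aspect 3, knowledge creation):
        # estimate how well an action serves the goals in this context.
        return random.random()

    def step(self, context):
        actions = context["available_actions"]
        best = max(actions, key=lambda a: self.score_action(a, context))
        # Remember what was done where, so future choices can improve.
        self.knowledge.append((context["state"], best))
        return best

agent = Agent(goals=["explore your environment", "meet new people"])
context = {"state": "empty room", "available_actions": ["walk", "look", "greet"]}
print(agent.step(context))
```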

It is after this point that Goertzel’s explanations of AI really start flying fast and over my head, but I’ll continue with some of the bits that I can make sense of.

Goertzel makes a startling point about human-level AI – would you really want to build an AI with human-style intelligence, but 100 times more of it? Or would you rather build a pseudo-human AI that has well-defined goals in well-defined contexts?

There is a deep technical problem at the heart of all AI algorithms, called the combinatorial explosion problem. Narrow AI works well on small-scale problems, but when the scale of the problem increases, the algorithms break down – they are not scalable. Goertzel thinks the way around this is to hook up a bunch of different narrow AI algorithms so that they can help each other through interadaptation. For example, take logical inference, probabilistic evolutionary program learning, and economic attention allocation and let them work together. You can read more about this in his book, The Hidden Pattern, which lays out a “novel philosophy of mind” and explains the AI approach that Goertzel is taking.
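As a rough feel for the “economic attention allocation” piece of that mix, here is a toy sketch of my own – not anything from Novamente or The Hidden Pattern – in which cooperating narrow components bid for a fixed compute budget and the recently useful ones receive larger shares:

```python
# A toy sketch (my own construction, not Novamente's design) of the
# "economic attention allocation" idea: cooperating narrow components
# receive compute in proportion to how useful they have recently been.

def allocate_attention(budget, usefulness):
    total = sum(usefulness.values())
    return {name: budget * score / total
            for name, score in usefulness.items()}

# Hypothetical recent-usefulness scores for two cooperating components.
usefulness = {
    "logical_inference": 3.0,
    "evolutionary_program_learning": 5.0,
}

for name, share in allocate_attention(100.0, usefulness).items():
    print(f"{name}: {share:.1f} compute units")
```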

III. The Marriage of AGI and Virtual Worlds

One of the important AI problems is creating embodiment for the AI. How can an AI system get the information to build its intelligence if it has no embodiment? Goertzel believes that a “body” is extremely convenient, but not absolutely necessary. A body makes it easier for us to teach and understand an AI, since that’s how we developed our own intelligence.

However, there is some wisdom to letting AI learn in virtual worlds. Unfortunately, there is not currently any virtual world where an AI can easily use tools. When you make an object in a graphics program, you have to place “sockets” on the object and pre-define how the socket of one object will interact with the sockets of other objects. All of the possible actions on objects are preprogrammed, and the animations – built by graphic designers – are limited to what has been programmed.
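Here is a hypothetical toy model of that socket scheme – not any real engine’s API – which shows why the AI can only ever do what a designer anticipated:

```python
# A hypothetical toy model of the socket scheme described above: two
# objects can interact only if a designer pre-authored a matching
# socket pair and its animation.

class VirtualObject:
    def __init__(self, name, sockets):
        self.name = name
        self.sockets = sockets    # socket name -> canned animation

    def interact(self, socket, other):
        if socket in self.sockets and socket in other.sockets:
            return self.sockets[socket]   # play the pre-made animation
        return None   # nobody anticipated this pairing: nothing happens

hammer = VirtualObject("hammer", {"grip": "swing_animation"})
hand = VirtualObject("avatar_hand", {"grip": "hold_animation"})
print(hammer.interact("grip", hand))   # swing_animation (pre-defined)
print(hammer.interact("pry", hand))    # None -- never programmed
```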

There needs to be software integration between robot simulators and virtual world engines (like Gazebo with OpenSim). Goertzel says this seems feasible, and not an unreasonable goal for the next few years. We are hopefully moving towards a point where virtual worlds can be a viable option for AI embodiment. However, one problem with virtual worlds is that there is no sense of touch. It’s hard to say, he says, whether AI could become intelligent, in the human sense, without a sense of touch.

IV. Example Application: Virtual Pet Brain

As a stepping-stone to more ambitious projects, Goertzel’s Novamente is working on a “pet brain,” and he showed us a few prototypes for a virtual dog and a virtual parrot. Goertzel experimented with building a virtual AI dog in Second Life, but moved on to Multiverse (a platform I’ve mentioned on this blog) and OpenSim because of their superior animation capabilities.

Training the dogs in the virtual world involves this process:

Teach – Imitate – Reinforce – Correct

(this is much like the process of teaching math, to tell you the truth)

For example, you can teach the dog to sit by sitting yourself. To teach fetch, you might show the dog the “fetch” interaction with another virtual person, then tell the dog to copy the action of the person.
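Here is a hypothetical sketch of how that Teach – Imitate – Reinforce – Correct cycle might look in code (my own toy version, not Novamente’s pet brain):

```python
# A toy version of the Teach-Imitate-Reinforce-Correct cycle: the pet
# records demonstrated actions and strengthens or weakens each
# association based on the trainer's feedback.

class VirtualPet:
    def __init__(self):
        self.behaviors = {}    # command -> [learned action, confidence]

    def teach(self, command, demonstrated_action):
        # Teach: the trainer (or another avatar) demonstrates.
        self.behaviors[command] = [demonstrated_action, 1.0]

    def imitate(self, command):
        # Imitate: the pet copies the action it observed.
        return self.behaviors[command][0]

    def reinforce(self, command, reward):
        # Reinforce (+) or Correct (-): feedback adjusts confidence.
        self.behaviors[command][1] += reward

pet = VirtualPet()
pet.teach("sit", "sit_down")        # trainer sits; pet records it
print(pet.imitate("sit"))           # pet performs "sit_down"
pet.reinforce("sit", +1.0)          # praise strengthens the behavior
pet.reinforce("sit", -0.5)          # correction weakens it
```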

The next step Goertzel is working on is a virtual talking parrot that tries to learn language skills. If the parrot can learn from its language interactions with humans, learning will accelerate by harnessing the power of all the users. The hope is that this can get them past the NLP bottleneck and teach the AI common-sense interpretation through experimentation and correction. For example, we know how to interpret the meaning of a sentence because we have learned from experience and context. Parsing a paragraph of language is quite complex – every sentence might have five words that could each be interpreted in five different ways – this is that combinatorial explosion problem again (see the quick arithmetic below).
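The back-of-the-envelope arithmetic makes the problem obvious:

```python
# Back-of-the-envelope arithmetic for the parsing explosion above:
# five ambiguous words, each with five possible readings, give
# 5**5 candidate interpretations for a single sentence.
words, readings = 5, 5
per_sentence = readings ** words
print(per_sentence)        # 3125 interpretations for one sentence
# A four-sentence paragraph compounds to 3125**4 (~9.5e13) readings.
print(per_sentence ** 4)
```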

An AI goes through five developmental stages: infantile, concrete, formal, reflexive, and full self-modification. Current AI systems have not really advanced beyond infantile. The interesting thing is that an AI could theoretically reach the full self-modification stage because it has access to its own source code. We humans, however, cannot modify our own source code.

Goertzel leaves us with this thought: the final step in the progression of human-level AI would be to integrate it with the human brain. He agrees with Vernor Vinge’s prediction that we should reach human-level AI within about 30 years. Goertzel says he is also crazy enough to believe that we could do it in the next decade with a pretty concerted effort – although that is not likely to happen. Could creating human-level AI lead to a bad outcome? Sure. But a lot of other bad outcomes are possible without it. A human-level AI system would hopefully be more rational and ethical than human beings – and working in tandem with us, it could make for a safer world.
