Artificial Intelligence: Curse or Blessing

In a previous post, see here, I presented robotics as one of the technological mega-trends that will shape the coming 100 years. What does that mean for us? Should we be scared? Should we be excited? Will street life change as humans and humanoid robots move around together?

Is this what it will look like?

Answering these questions will take much more than one post, which is good, as it gives me more to write about and gives all of us more exciting technological questions to ponder. Here I will start with a rough sketch of the relation between robotics, intelligence and artificial intelligence (AI for short). Then I will consider the direction AI might take: will it become a menace, as in the movie The Matrix, or a benign technology that catapults humanity into the future, as e.g. Ray Kurzweil predicts in The Singularity Is Near: When Humans Transcend Biology?

One particularly well-thought-through work on AI is Luke Muehlhauser's Facing the Intelligence Explosion (just €0.99 in Amazon's Kindle store).

Intelligence can be defined as the ability of an agent (human, animal or machine) to achieve goals in a wide range of environments. This definition focuses on achieving goals, i.e. on instrumental rationality, and it has two dimensions: first, the degree to which goals are achieved, and second, the breadth of environments in which the agent can achieve them.
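One way to make this definition precise (a sketch of my own, following Shane Legg and Marcus Hutter's universal-intelligence measure rather than anything in Muehlhauser's book) is:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here \pi is the agent, E is a set of environments, V_\mu^\pi is the agent's expected goal achievement in environment \mu, and the weight 2^{-K(\mu)} favors simpler environments (K being Kolmogorov complexity). The formula combines exactly the two dimensions above: how well goals are achieved, and across how wide a range of environments.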

We have already realized high levels of goal achievement in machines; think of IBM's specialized supercomputer Deep Blue and its success in beating Garry Kasparov in a six-game match in 1997. There are many examples of highly specialized expert systems in medicine (diagnostics), finance and engineering. However, all of these combine a very high goal-achievement rate with a very narrow environment. Deep Blue was very good at playing chess, but utterly useless at, for example, recognizing pictures of animals.
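To make the two dimensions concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Muehlhauser's book; the names intelligence_profile, Policy and the two toy environments are invented for the example). It scores an agent on achievement and on breadth, and a Deep-Blue-style narrow agent scores perfectly in one environment and zero in the other:

    from typing import Callable, Dict, Tuple

    Policy = Callable[[str], str]            # maps an observation to an action
    Environment = Callable[[Policy], float]  # runs an agent, returns a goal score in [0, 1]

    def intelligence_profile(agent: Policy,
                             environments: Dict[str, Environment]) -> Tuple[float, float]:
        """Return (achievement, breadth) for an agent:
        achievement = mean goal score across all environments,
        breadth = fraction of environments with a score above zero."""
        scores = [env(agent) for env in environments.values()]
        achievement = sum(scores) / len(scores)
        breadth = sum(1 for s in scores if s > 0) / len(scores)
        return achievement, breadth

    # Two toy environments: one rewards playing chess, the other labeling images.
    environments: Dict[str, Environment] = {
        "chess": lambda agent: 1.0 if agent("chess position") == "play chess" else 0.0,
        "images": lambda agent: 1.0 if agent("animal photo") == "label image" else 0.0,
    }

    # A Deep-Blue-style narrow agent: it plays chess no matter what it observes.
    narrow_agent: Policy = lambda observation: "play chess"

    print(intelligence_profile(narrow_agent, environments))  # -> (0.5, 0.5)

The narrow agent tops one environment and fails the other, so both its achievement and its breadth come out at 0.5; a truly general agent would have to score well across many, very different environments.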

Recently, MIT's AI system ConceptNet 4 was given an IQ test and scored on a par with a four-year-old child (see here). So on the combination of goal achievement and breadth of environments, AI still has a long way to go. But it is good to realize that we have already come a long way, considering how briefly we have been working on this technology.

Why do we develop AI? First of all, because scientists try things out, and one of the things they try is whether they can build an intelligent machine. More important, however, is that we need AI and we need robotics. As you read this, you are using AI systems: the operating system of your PC contains AI, your car's electronics contain AI, most Internet services you use make extensive use of AI, your doctor uses it, and so does the mechanic who takes care of your car. AI supports us in achieving goals. AI makes people more productive. And that will be badly needed in the coming 100 years: population growth on Earth is rapidly declining, and the size of the productive population will shrink. So we will need technology to take over from us, or at least to make each and every one of us very much more productive!

I have been using both terms, robotics and AI. They are not the same, but they are closely related: a robot consists of an AI part (the "brain") and a mechanical part (the "body"), and the AI is the logic governing the robot. In this post I concentrate on the AI dimension of this technology.

As pointed out, current AI systems match a four-year-old. We expect the intelligence of these systems to keep increasing until, at some moment in time, it surpasses the human level. Kurzweil expects this to happen around the year 2040; I think it will take a little longer, but the exact timing is irrelevant to this argument. At some moment in the coming 100 years, computers will be more intelligent than human beings. What does this mean? It means that computers will be better than humans at achieving their goals across a broad range of environments.

This realization forces the following question on us: will the goals pursued by machines be compatible with the goals of humans, or will these goals conflict?

What if the goals conflict? Consider, for example, that computers consist mainly of metals, that metals tend to oxidize in an oxygen-rich environment, and that humans need oxygen: there is a clear conflict of interest. Some brainstorming will produce many more potential conflicts. Currently we humans are clearly superior to these machines in achieving our goals. But what happens when the machines surpass us?

We can view this potential conflict in terms of a Darwinian survival of the fittest. For the past several thousand years, humans have been the fittest on Earth, but that position is about to be taken over by machines.

Evolution?

When you talk with people who have been thinking about AI, robotics and the future, you will find optimists and pessimists. The optimists have faith that robots will be good for us and will support humanity in its conquest of the cosmos. The pessimists paint a picture in which the stronger species always eats, kills, destroys or obliterates the weaker one. Muehlhauser makes a strong point that neither of these scenarios is pre-programmed. Different things can happen and do happen in the Universe and on Earth. Some of these things we consider good, some we consider bad. When an asteroid makes a close pass by the Earth, we consider that a good thing; when it hits the Earth, we consider that bad, as it would probably wipe out humanity. Both scenarios are possible. Our concepts of good and bad have no influence on the probability of events!

In the same way, our preferences regarding the aims that robots and AI should pursue have no impact on whether AI will be benign or malign by the time it surpasses humans in intelligence. That is, unless we steer the development of AI in a desirable direction; unless we start putting as much effort into AI safety as we are putting into AI capability. This is not an easy topic. Again I refer to Asimov, who formulated the three laws of robotics (see e.g. Asimov's I, Robot); reality will be much more complicated than that.

But, as Muehlhauser argues, whether AI becomes a blessing or the end of humanity depends on the research and engineering effort that we start putting into this issue as of today.

Good Guys?