Artificial intelligence is very much a hot topic for engineers, investors, corporations, economists, and fiction writers. Unfortunately, there is a reality gap between fiction writers and the rest. Why is there such a gap between the current state of artificial intelligence and how it is depicted in fiction? How does this gap affect the general public’s perception of what is possible today? What does this mean for the current debate surrounding artificial intelligence and ethics?
Artificial intelligence has advanced rapidly in the past decade. What was considered ‘intelligent’ in a machine five years ago seems commonplace today. But previously, the emphasis was on what the technology did, be it making a recommendation, finding a trend in data, or intelligently routing a network. Most people don’t see these advances directly, or when they do, they aren’t necessarily impressed with what they see. Much of the work in such technology happens under the hood, in resolution engines and algorithms that aren’t very visible or vibrant.
Recent advances have made artificial intelligence more apparent in everyday life. Technology that interacts with us directly, through voice recognition and human-sounding speech generation, makes it seem like there is more intelligence in these programs. In fact, these changes merely close the gap between existing capabilities and everyday life by improving the machine-human interface.
These rapid improvements in interface have generated a misconception – namely, that machines are becoming more human. Seeing the results of an intelligent recommendation program is one thing; talking with an assistant who answers questions in a human voice is another. Because the technology seems more real, more personal, we tend to view it as more intelligent.
In past years, the writers who gave us Robbie or HAL or C-3PO were clearly engaged in science fiction. Since there were no relatable real-world instances of such beings, readers were left with the feeling that these characters existed well in the future. Even when these characters broke moral boundaries, we did not feel the need to engage in serious discourse on the issues they raised.
Today, when writers create a robotic character, a mischievous android, or a malevolent program, moral questions seem to arise. Are robots slaves? Should androids have rights? Will the machines rise against us? Although earlier literature touched on these very issues, they once seemed vague and futuristic, not worth fretting about. As interfaces to the existing technology improve, these issues suddenly seem more topical and urgent.
The truth is that although artificial intelligence has advanced significantly in the past decade, it is nowhere near jumping the gap to human-like behavior. Bad decisions and human emotions in artificial intelligence technology are still a long way off. We are nowhere near the moment of singularity, where a program achieves so-called ‘consciousness’. Although current technology can mimic many human-like characteristics, there is no need to arm ourselves against a computer revolt, at least not yet.
However, you might not know that if you listen to some technology leaders. We are warned that artificial intelligence is taking a sinister turn, that we should be careful where we go, that this technology is an existential threat to mankind. That may well be true in the coming decades and centuries, but today we face more danger from buggy programs than from evil ones.
The rhetoric on the near-term dangers of artificial intelligence seems to be an attention-seeking effort by those who want to stay in the news. The danger is that we will see a backlash against the development of such technology, much as we saw backlashes against other technologies, stem cell research for example, that were grounded in uninformed and irrational fears. Although we should discuss artificial intelligence and ethics freely and openly, we should be in no hurry to stifle advances based on imaginative storytelling and fictional apocalypses.