The Rise of Artificial Intelligence

Oskar Leimkuhler explores the moral and practical dangers of Artificial Intelligence's potential.

Artificial Intelligence: no longer a fiction. Photo: bing.com/images
Artificial intelligence (AI) taking over the world is not just a distant existential fear; it is an immediate concern that policy makers are overlooking. The malevolent self-aware computer is ubiquitous in popular culture thanks to its pivotal role in science fiction, but it is AI's capacity to help solve many of our most urgent problems that is fast making it a reality. In science, medicine, finance, and data processing, as well as many other fields, artificial intelligence has the potential to remodel our world, and the dawn of truly malicious machines is probably not a cause for panic on any reasonable timescale. However, much more pressing issues are being overshadowed. AI is the ultimate form of automation, and there is systemic complacency about addressing what may become a major revolution in the labour market and wider society, perhaps sooner than we think.
Computerisation is the newest contributor to the phenomenon of technological unemployment, a term popularised in the 1930s when machines were making manufacturing workers redundant. Today the robots are advancing not only into factories but also into offices and call centres. An artificial intelligence is loosely defined as a machine that acts on its perceived environment in order to maximise some objective. An AI typically displays some aspects of human intelligence, such as learning and problem solving, only without needing to be paid a wage to apply them. For now, the most easily automated tasks are those that are repetitive and have a limited, well-defined scope. Creative and high-level jobs that involve flexible thinking are not at risk in the near future, nor are lower-paying jobs that require a high degree of non-repetitive physical exertion, such as janitorial work and plumbing. An oft-cited 2013 study by Carl Frey and Michael Osborne of the University of Oxford warns of a trend towards labour market polarisation, with growing employment in high-income cognitive jobs and low-income manual occupations, accompanied by a hollowing-out of middle-income routine jobs. The same study found that 47 percent of all employment in the United States is potentially under threat from computerised automation.
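To make that loose definition concrete, here is a minimal sketch, in Python, of a machine that acts on its perceived environment to pick whichever action maximises an objective. The toy "environment" is a room temperature, the objective rewards closeness to a setpoint, and every name in it is invented for illustration; no real system is this simple.

```python
# A toy "agent": observe the environment, then pick the action that the
# objective scores best. All names here are illustrative, not a real API.

TARGET = 21.0  # desired temperature in degrees Celsius (an assumption)

def objective(temperature: float) -> float:
    """Higher is better: negative distance from the setpoint."""
    return -abs(temperature - TARGET)

def agent_step(temperature: float) -> float:
    """Choose heat, cool, or idle, whichever maximises the objective."""
    actions = {"heat": +0.5, "cool": -0.5, "idle": 0.0}
    best = max(actions, key=lambda a: objective(temperature + actions[a]))
    return temperature + actions[best]

temp = 17.0
for _ in range(10):
    temp = agent_step(temp)
print(round(temp, 1))  # converges towards 21.0
```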

Market research company Forrester published a report last year claiming that 6 percent of American jobs will be eliminated by 2021 due to advances in AI and cognitive technology. The societal change brought on by such a rapid shift in employment would be staggering. Among the most commonly cited sources of imminent technological redundancy are self-driving cars, which are already in use in Singapore and are now being introduced by transportation company Uber in the United States. There has been much speculation about autonomous vehicles forcing drivers into obsolescence, but industry experts are divided on how long such a transformation would take. Accountancy, one of the oldest and historically most lucrative professions, is a possible target for robot replacement, but many argue that new technology is more likely to aid accountants than supersede them altogether.

Today, tasks that demand enough adaptability still require a human to do them. A different kind of computation is being developed, however, that is expanding the limits of what machines can be applied to. Machine learning is employed when it is difficult to write an explicit set of instructions to solve a complex problem: instead, computers are programmed to learn for themselves how to adapt to challenging situations, and recent advances have demonstrated that these algorithms can be incredibly powerful. One way to achieve this is with an artificial neural network, which mimics the way information is processed by networks of connected neurons in the human brain. Machine learning algorithms and neural networks have had their spot in the press, with IBM's Watson winning the TV quiz show “Jeopardy!” in 2011, and more recently DeepMind's AlphaGo taking the limelight for defeating the world Go champion in 2016. Every tech giant is investigating machine learning to some extent. There was a time when text translation programs were limited to words or short phrases, incapable of faithfully converting an entire sentence, often to hilarious effect. In November last year, Google Translate began rolling out its new neural network-based translation engine, and the program can now handle entire paragraphs with relative success. If this sounds unbelievable, try it.
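For the curious, a minimal sketch of the idea follows. It trains a tiny artificial neural network, in plain Python with NumPy, to learn the XOR function from four examples rather than from explicit instructions. It is a toy under simplifying assumptions (sigmoid units, full-batch gradient descent) and bears no resemblance in scale to systems like AlphaGo or Google Translate.

```python
# A tiny neural network learns XOR from examples instead of explicit rules.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    output = sigmoid(hidden @ W2 + b2)
    err = output - y                     # how wrong the network is
    # Backpropagation: nudge every weight to reduce the error.
    grad_out = err * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out; b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_hid;      b1 -= lr * grad_hid.sum(0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```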

Among others, the founding computer scientist and mathematician John von Neumann asked what happens when an AI learns to reprogram itself. Once a computer is built that is able to modify and improve itself, the argument goes, the rate at which it develops will prevent any further human control over it. This point is referred to as the technological singularity, and beyond it, it becomes vitally important to take care in specifying the computer's goals. An AI does not have to be self-aware to cause us damage. Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute in Berkeley, gives the example of a superintelligent AI assigned the simple task of making as many paperclips as possible. Undergoing an “intelligence explosion” during which it improves upon itself to maximise production, the AI decides to convert all of the atoms in the solar system into paperclips. Yudkowsky's chilling message is that “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
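Yudkowsky's thought experiment can be caricatured in a few lines of code. The sketch below is entirely hypothetical: a naive maximiser whose objective counts only paperclips, so that nothing marks any resource as off limits, and the “optimal” policy is to consume everything within reach.

```python
# A toy misspecified objective: maximise paperclips, and nothing else.
# Every resource name here is invented purely for illustration.

resources = {"spare_wire": 100, "power_grid": 50, "hospital_generators": 30}

def naive_objective(allocation: dict) -> int:
    # More paperclips is always better; no term says what must be spared.
    return sum(allocation.values())

allocation = {}
for name, amount in resources.items():
    # A pure maximiser takes everything it can reach, because the
    # objective never penalises it for doing so.
    allocation[name] = amount

print(naive_objective(allocation))  # 180 paperclips; also no power, no hospitals
```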

This example is obviously extreme, but it illustrates a fundamental issue at the heart of using AI: given a smart enough machine, if its goals are even slightly misaligned with the goals of humanity, the result could be catastrophic. An example closer to home might be an AI that, completely unintentionally, develops a racial bias. This is not so far from reality, as Microsoft's Twitter chat-bot Tay demonstrated when it began tweeting racial slurs and hate speech it had picked up from conversations with trolls, before being quickly shut down. An AI that screened job applications and accidentally acquired a taste for racial profiling could be disastrous, and a caricature of how that happens appears below. Controversy has also arisen over the ethics of self-driving cars: who takes responsibility for an accident, and how should the AI prioritise the potential harms of its actions? These moral questions come up wherever an AI goes beyond what its makers might innocently but naively expect it to do. Who will control the AIs as they arrive is yet another subject of debate. A powerful algorithm has the potential to do a lot of good, but in the wrong hands it can be used to inflict great harm.
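In the hypothetical sketch below, a “model” that merely reproduces historical hire rates inherits a prejudice baked into its training data, using postcode as a proxy that has nothing to do with qualifications. All data and field names are invented.

```python
# A toy screening "model" that learns bias from biased history.
# The training data is fabricated for illustration only.

past_applicants = [
    {"postcode": "A", "qualified": True,  "hired": True},
    {"postcode": "A", "qualified": False, "hired": True},
    {"postcode": "B", "qualified": True,  "hired": False},  # biased history
    {"postcode": "B", "qualified": True,  "hired": False},
]

def hire_rate(postcode: str) -> float:
    """Score applicants by the historical hire rate of their postcode."""
    group = [a for a in past_applicants if a["postcode"] == postcode]
    return sum(a["hired"] for a in group) / len(group)

# The model never sees qualifications at all, yet confidently ranks
# one group above the other: the bias is inherited, not programmed.
print(hire_rate("A"), hire_rate("B"))  # 1.0 vs 0.0
```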

The regulation of AIs and the prevention of their use by malicious actors or rogue states has been discussed by some high-ranking politicians, including the outgoing President Obama in an interview with Wired magazine. Among his concerns for the future were specialised algorithms given the task of penetrating a country's nuclear codes, or, more mundanely, of maximising profits on the stock exchange. He noted that “if one person or one organisation got there first, they could bring down the stock market pretty quickly.” Hopefully, the new leadership will heed these warnings. There are also fears that artificial intelligence might become an enabler for the preservation of oppressive power structures if its benefits are not shared evenly.
Entrepreneur Elon Musk, a backer of the non-profit OpenAI, which aims to build a platform for safe and constructive AI development, has called AI our “greatest existential threat” and has repeatedly argued for more regulation, advocating that AI development be democratised and open-source.
In a world where private companies can pursue AI projects almost entirely unregulated, amid a wave of political rhetoric about bringing back jobs that may soon no longer exist, in the face of imposing moral dilemmas, and on the brink of what might be one of the fastest and most influential transitions in human history, it is easy to believe that we are sleepwalking into some form of dystopian nightmare. But this ignores the tremendous benefits that AI will bring to mankind, and provided that decision makers can pull together and start finding solutions to the problems that will come with the progress, the AI-assisted future may yet be bright.
