My outlook on the future changed irrevocably in January 2016. That month, it was announced that AlphaGo, an algorithm designed by Google’s artificial intelligence (AI) unit DeepMind, had beaten the reigning European Go champion. Within a few months, it had also defeated one of the best players in the world. While I had been following advances in AI since reading The Lights in the Tunnel shortly after its publication in 2009, I was stunned by the speed with which deep learning had allowed something purely digital (an algorithm) to outmaneuver something biological (a human).
The reason the Go victory is so meaningful is that, unlike chess, Go is too complicated for every potential game to be either programmed in or analyzed through brute computing power. Consequently, to win, the algorithm makes decisions using something akin to what we might call instinct – an amorphous term that combines a heightened talent for pattern recognition with a knack for assessing imprecise trade-offs, that is, for making calls when time or information is lacking. Until the Go victories, I had thought I would be able to retire from my career in the financial industry before being replaced by a computer. But since the job description of a portfolio manager can be distilled to “one who analyzes disparate data from many sources to discern patterns and make decisions involving imprecise trade-offs,” the question of my personal future suddenly became more proximate.
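For a sense of scale behind that claim, here is a quick back-of-the-envelope sketch in Python. The branching factors and game lengths are widely cited approximations rather than figures from this post, but they show why enumerating every possible Go game was never an option.

```python
import math

# Rough game-tree size: (branching factor) ^ (typical game length), expressed in log10 terms.
# The inputs are common textbook approximations, not exact values.
chess_exp = 80 * math.log10(35)    # chess: ~35 legal moves per turn, ~80 plies -> ~10^123
go_exp = 150 * math.log10(250)     # Go: ~250 legal moves per turn, ~150 moves -> ~10^360

print(f"Chess game tree: ~10^{chess_exp:.0f}")
print(f"Go game tree:    ~10^{go_exp:.0f}")
# For comparison, the observable universe holds roughly 10^80 atoms,
# so brute-force enumeration of Go games is simply off the table.
```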
The uncertainty envelops more than just my career, though; it extends to a variety of other professions rooted in human capital: medicine, law, education, even technology itself. That repetitive tasks will inevitably be ceded to machines has been in little doubt since the Luddites destroyed factory machinery in early 19th-century England. But humans have always had time to adapt to the technological developments of the past, building new professions and using technological advances to enhance the productivity of human workers rather than replace them. This time, however, Moore’s Law – which has correctly predicted a doubling in transistors per chip roughly every two years since 1965 – has brought us to the infamous second half of the chessboard, where progress begins to defy the human brain’s ability to properly comprehend it. As a consequence, we may not have the time to adjust that we have always had before, and entire professions could migrate to the machines before humans have developed complementary careers.
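To see why the “second half of the chessboard” deserves its reputation, here is a minimal Python sketch of the rice-grain parable behind the phrase: one grain on the first square, doubling on each square thereafter. Nothing here is specific to Moore’s Law; it simply shows how the last 32 doublings dwarf the first 32.

```python
# Grains of rice on a 64-square chessboard, doubling on each square.
first_half = sum(2**i for i in range(32))       # squares 1-32: ~4.3 billion grains
second_half = sum(2**i for i in range(32, 64))  # squares 33-64: ~18.4 quintillion grains

print(f"First half:  {first_half:,}")
print(f"Second half: {second_half:,}")
print(f"The second half holds {second_half // first_half:,} times as many grains")
```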
A previous post highlighted a report from Oxford academics predicting that nearly half of jobs are at risk of automation, but that report was written when even experts thought algorithms capable of something like the Go victories were at least a decade away. In the early days of AI, researchers posited ideas such as neural networks – the forerunners of today’s deep learning – that would model AI on the human brain and allow computers to learn things for themselves rather than relying on explicitly programmed rules. However, the transistors of the 20th century were not up to the challenge, and the field endured what is known as “the AI winter,” when new discoveries were rare. As noted above, though, Moore’s Law kept churning out more power per chip, and each of us now walks around with a smartphone holding more computing power than all of NASA had in 1969 during the first moon landing.
Deep learning is what made AlphaGo’s conquests possible, and it is going to propel digital beings’ abilities beyond those of biological beings in more areas than just games. IBM’s Watson famously beat Ken Jennings and Brad Rutter at Jeopardy! in 2011, but more recently, Watson has been reading MRI scans and answering medical questions. Lemonade, an insurance company, uses AI instead of human underwriters, sometimes paying claims in as little as three seconds. ROSS Intelligence, another recent startup, targets the legal field with a product that uses AI to analyze legal questions; its tagline is “Do more than humanly possible.” IBM has made Watson a cornerstone of its latest shift in business strategy, as Big Blue fights to stay relevant while the computing world moves farther and farther away from hardware. The company offers assistance to developers who build business apps on Watson’s capabilities, such as Red Ant, a retail-focused application that mines customer data to predict behaviors and predilections in an effort to enhance customer loyalty via “fully connected retail experiences.”
So far, the examples given point only to fewer people employed in each of the affected industries alongside demand for new types of workers – AI app developers, for example – in a story that seems similar to the technological advances of the past. Once again, though, the pace of change is what sets this instance apart. In the past, although individual groups of humans lost their livelihoods, there were new types of jobs for their children to learn. And while AI app developers will have more career longevity than, say, most radiologists or paralegals – or portfolio managers – the power of deep learning portends a time when AI develops its own apps.
Some people claim that there are jobs that will always have to be done by a human, with anything deemed to require “emotional intelligence” topping the list – judges, therapists, nurses, caregivers. However, emotional intelligence is essentially an enhanced skill for observation, and computers are superior to humans at almost every type of surveillance: micro-expressions, physiological responses, body language. To those who believe emotional intelligence is about response rather than observation, studies indicate that people are more open during therapy when they believe they are conversing with a computer. Apparently algorithms aren’t deemed as judgmental as other humans – at least not yet.
To be fair to humans, there are areas where we are still incontestably superior to AI-directed machines. Dexterity is one. Your Roomba vacuum will learn the contours of your living room furniture and know to avoid the stairs, but it can’t pick up the delicate vase on the side table or dust around it. Assembly-line machines can learn how to grasp specific objects, but humans are still more efficient at adjusting pressure to suit the task, which is why many types of fruit are still harvested by hand. Humans are also more energy efficient. In a 1965 article, Fred Singer summarized 1950s test pilot Albert Scott Crossfield’s assessment of why robots wouldn’t replace him: “Man is the lowest-cost, 150-pound, nonlinear, all-purpose computing system which can be mass-produced by unskilled labor.”
Relying on my relative energy efficiency as a career strategy doesn’t suit me, though, and since my background is in fixed income, I am programmed (as it were) to consider the downside risks. I used to believe that I would make it through my entire career before automation swept the financial industry. After all, there are still stock traders, even though the infrastructure of the stock market – exchanges, nearly continuous data, relative simplicity (e.g., rarely more than a few classes of shares, compared with the hundreds of bonds outstanding for many large companies) – should have made them obsolete decades ago. However, even if I am wrong about the coming pace of change, if there is even a small probability that I am right, the downside is immeasurable. For anyone who doesn’t live by expected value calculations, here’s the math: a small probability of something with potentially infinite awfulness far outweighs a large probability of something that’s pretty great but not absolutely amazing. In other words, after AlphaGo’s victory, when that small probability got a little bigger, I decided I’d prefer to spend my time learning about the future – no matter how distant – rather than clinging to the past. I left my job as a portfolio manager and am now taking classes and writing about the way forward.
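For those who want to see the arithmetic, here is a toy expected-value comparison in Python. The probabilities and payoffs are entirely hypothetical stand-ins, not numbers from my own analysis; the point is only that a small chance of a huge loss can dominate a large chance of a modest gain.

```python
# Toy expected-value comparison (all numbers are hypothetical stand-ins).
p_disruption = 0.05            # small probability my profession is automated away soon
loss_if_disrupted = -1_000     # proxy for a "potentially infinite" downside
gain_if_not = 10               # a good-but-ordinary outcome otherwise

expected_value = p_disruption * loss_if_disrupted + (1 - p_disruption) * gain_if_not
print(expected_value)          # -40.5: the rare catastrophe dominates the likely gain
```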
A big reason for my decision – and the part that makes the downside immeasurable – isn’t based on practical or economic considerations. My concerns are existential. First, although it has been shown that early hunter-gatherers likely worked fewer hours than modern humans do, identity is now bound up with work, especially in Western society. How will we redefine ourselves and find purpose without compensation? If capitalism evolves into either feudalism or socialism as jobs disappear, will the transition be violent or bloodless? Second, as digital beings develop further, we may reach a point where biology is obsolete. This may evoke overtones of The Terminator or Screamers, but it isn’t clear that such physical antagonism is inevitable. We may instead choose to evolve into something digital ourselves in order to compete. What are the implications of choosing to augment ourselves? Will there be enough resources for everyone to choose augmentation, or will this drive further inequality? What happens to the human soul in the transfer of identity from a body made of cells to one made of silicon?
Some scholars, like Robert Gordon, don’t believe we need to worry. And it is true that, even among people who see strong AI as inevitable, there is disagreement about the timeline: as soon as 2045, or hundreds of years from now, by which point even the famous seven-generation rule of intergenerational duty will have been discharged. Nevertheless, given the colossal implications, rather than waste time debating the timeline it seems rational to at least start discussing a strategy for the human response to being replaced – especially if the replacement we are talking about is not just as workers but as biological beings.
*Much of what I touch on in this piece has been developed more fully by other authors. If you are interested:
The Best Non-Technical AI Introduction Out There
A Capitalist Solution to a World Without Work
Transhumanism: a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.
Ray Kurzweil – The Shepherd of the Singularity Movement (Singularity is defined in this case as the point where machines outrun all human capabilities.)
Kam Moud says
I am enjoying reading your posts! Definitely has made me think deeper about the future.
Disgruntled Rationalist says
Thanks, Kam! Hopefully discussions of AI reach a broad enough audience that we can have proactive rather than reactive policy discussions….
Floyd Frank says
AI is a great concept that is becoming more and more real. Good! It will free us to pursue those forms of living that cannot be replicated, amplified and accelerated by number-crunching tools.
Disgruntled Rationalist says
I’m not sure that there is anything that can’t be replicated (unless you want to get tricky about the definition of consciousness, which opens up the question of relevance). That is why I wrote in an earlier post that art is so important. There is room for more than one cubist master, for instance, and it shouldn’t matter if one is biological and one is digital!