One of the most persistent fears about turning over tasks to machines is their susceptibility to hacking. Although I touched on it briefly in a previous post on neural implants, the topic comes up so frequently in my discussions that it deserves its own piece. One concern involves external devices such as internet-connected kitchen appliances and self-driving cars, where cybersecurity issues are already widely discussed. Implanting technology into humans, however, especially our brains, involves another level of conversation. Human brains are not entirely analogous to computers, despite decades of science fiction arguing to the contrary, and there are raging debates about the definition of consciousness. Nevertheless, when it comes to hacking, the meaning is clear: unauthorized access.
In order to protect our electronic devices, we rely on layers of security, including anti-virus software, passwords, biometrics, and key codes, and while some of this offers only perfunctory protection, it is clearly better than nothing. Although passwords can be broken, they at least serve as digital equivalents to heavy furniture pushed against doors, requiring more than a simple lock-picking kit to break through. Our mistake as humans is believing that we have similar barriers to the portals of our brains. Unless we are part of the tin-foil-hat brigade, we generally believe our skulls and our silence are sufficient defense against the theft of our thoughts. Hacking isn’t always about theft, though. At its most insidious, it is about manipulation, and neither bone nor tin foil can protect us from that.
Technology, for all its wonderful promise, aids the exploitation of our neural weaknesses. In economics, value is defined by scarcity, and attention is emerging as the scarcest resource of the modern world, with the most powerful companies competing over access to our awareness. In that competition, they exploit the dopamine pathways that are a design feature of our paleolithic brains. Whereas environmental catalysts for dopamine activation used to be few and generally linked to survival – sex, eating – technology has broken that link. A seemingly mundane example is the ubiquitous pull-to-refresh feature, which encourages users to stay engaged with the webpage or app, swiping downward hoping for new stories or messages as if pulling the lever on a slot machine to see whether this time they will win the jackpot.
Like many tech innovators, the inventor of this feature has qualms about his legacy. He invented a solution to a specific problem but failed to incorporate second-order effects, a distressingly common phenomenon amongst the tech intelligentsia. In response, many have championed a spartan online existence as the answer – deleting social media accounts, limiting internet use. However, Luddites are regularly hacked, too.
Those dopamine pathways that technologists have learned to exploit are the same avenues the advertising industry has trodden for as long as there have been people wanting to sell things. The result is that most of us regularly make purchases based on the subtle persuasion of forces we never notice. For instance, the mere exposure effect is a well-studied psychological phenomenon whereby we develop a fondness for something – a brand, for instance – merely because we are familiar with it. This is why companies angle for product placement in movies and TV shows.
We have little control over this sort of manipulation, but it has nevertheless been deemed protected speech under the First Amendment. Most of the time, it may seem harmless enough to try to part people from their money in exchange for something they may not need but might still gain pleasure from. However, steering consumers toward entire categories of goods with deleterious health effects – junk food, for example – makes defending advertising as free speech awkward, particularly once we understand how little say we have over neurotransmitters like dopamine. In response to certain stimuli, they wash through our brains whether we want them to or not, and, unless we have trained as Buddhist monks, we react accordingly.
The same sort of maneuvering occurs in our daily interactions with our coworkers, friends, and family. Toddlers are particularly adept at exploiting their parents’ preprogrammed neural pathways for personal gain. Somehow, though, whether we are responding to the subliminal messaging of the advertising agencies or reacting subconsciously to complex interpersonal cues, we still believe we retain agency, that we are fully in control of the choices we make.
When I was a child, adults commonly admonished against certain activities – raucous roughhousing, imbibing alcohol, eating excessive sugar – by warning that such things “killed brain cells.” At the time, we all believed that we each had a certain number of these precious neurons endowed at birth, like a neurological trust fund, and any withdrawal was permanent. We know now that the adult brain actually has impressive powers of regeneration. However, part of the tradeoff is that the neuroplasticity that allows us to adapt and respond quickly to new patterns in our environment also makes us susceptible to persuasion.
While there is something sinister about the idea of someone – or some thing – invading our minds and directing our thoughts by hacking a neural chip, we have to recognize that this happens constantly already, even without the implants discussed in my prior post. It is not at all clear why we accept one form so blithely and fear the other so passionately. Similarly, many people feel queasy at the idea of a robot conducting tasks where human lives are at stake – surgery, driving – despite evidence that robots make far fewer errors. The only answer I can come up with for this conundrum is another nefarious feature of our evolution: our pathological fear of the “other.” For the time being, algorithms are still unmistakably “other.” They do not, however, make us any more susceptible to hacking than we already are. If we are going to fear them, we need to come up with a better reason.