In one of my performance reviews when I worked in the financial industry, my boss told me that I was too scary and unapproachable to advance further in my career.* The corporate world’s current remediation for something like this is a sort of executive boot camp, where instructors gather peer evaluations and then run through a series of courses and one-on-one sessions to help an employee at least sublimate the perceived personality problems. Instead of expensive training sessions, though, what if neural implants could replace the corporate coaches? What if my boss, after explaining my weaknesses, had handed me a voucher for the local clinic and explained that if I wanted to stay with the company, I would need to get the appropriate rewiring in my brain?
This may sound rather far-fetched, but we are already obsessively attached to our technology. There are almost five billion mobile phone users globally, and over 23 million people own Fitbits. Embedding technology in the brain is not particularly new, either. The first cochlear implant was successfully performed in Melbourne, Australia, in 1982, and in 2002 the FDA approved battery-operated neurostimulators in the US to help alleviate symptoms for Parkinson’s patients via deep brain stimulation. So far, neural implants have benefited specific patient groups, and the idea that we can pick traits and alter them at will is extremely simplistic, especially given the complexity of the human brain. However, given the pace of advancement and the magnitude of the potential effects, this is something we should at least talk about.
In the future, instead of encountering repetitive tasks and saying, “There’s an app for that,” people will encounter difficulties and say, “There’s a chip for that.” The list of possible traits that could be altered is endless and stretches across almost all human activities:
- An aspiring politician who wants to stop blushing when he is interviewed by the press.
- A poker player who wants to eliminate her tell.
- An athlete who wants to divert neural resources to his motor cortex to improve his reaction time and dexterity.
- A chef who wants to enhance her ability to differentiate flavors.
- A police officer who wants to get better at spotting lies.
At least in the short term, augmentation could mean tradeoffs – enhancing one part of the brain would likely come at the cost of deemphasizing another – and that has implications for economic specialization, not to mention the possibility of unintended consequences. However, just as the brains of older adults show activity across a greater number of regions than those of younger adults when performing specialized tasks, technologically enhanced brains could do the same thing, possibly even more quickly, obviating potential problems from over-specialization.
A more obvious concern – and one that many people are probably familiar with after the recent surge in sales of 1984 – is who determines which traits to target for “improvement.” For instance, what would have happened if the roles in the opening scenario had been reversed and I had been the boss? Would I have suggested that my subordinate needed to learn to be more direct in order not to waste so much of other people’s time? Most workplaces have a definitive culture, and people already self-censor parts of themselves to fit in better with coworkers, so perhaps the idea of getting help from technology isn’t worrying, especially if the chips in question could be switched off during non-work hours. Creativity might suffer, given a lack of diverse thought, but that is a problem of economics, not of ethics, and could presumably be solved by an algorithmic optimization of the exact number of each personality type needed to maximize overall productivity in a particular industry. In this scenario, people would still have control over where they chose to work, and chips would become akin to the employee badges that almost all corporate office workers wear around their necks or clipped to their belts.
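As a purely illustrative aside, the “algorithmic optimization” imagined above could be framed as a simple linear program. The sketch below is not from the original post: the personality types, productivity weights, and headcount limits are invented assumptions, and it uses SciPy’s linprog only to show the shape of such a calculation.

```python
# Toy sketch: choose how many employees of each (hypothetical) personality
# type to hire so that total productivity is maximized under a headcount cap.
# All labels and numbers are invented for illustration only.
from scipy.optimize import linprog

types = ["type_a", "type_b", "type_c"]
productivity = [1.2, 1.0, 1.5]      # assumed output per employee of each type

c = [-p for p in productivity]      # linprog minimizes, so negate to maximize
A_ub = [[1, 1, 1]]                  # one constraint: total headcount...
b_ub = [100]                        # ...capped at 100 employees
bounds = [(10, None)] * 3           # keep at least 10 of each type on staff

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print({t: float(x) for t, x in zip(types, result.x)})
# -> {'type_a': 10.0, 'type_b': 10.0, 'type_c': 80.0}
```

Notably, even this toy version piles every spare slot onto whichever type scores highest on the chosen metric, which is exactly the homogenizing pressure on creativity described above.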
A more dystopian scenario emerges, though, if chips are required for any job or are mandated for participation in civic activities such as voting, as explored, for instance, in The Mandibles: A Family, 2029-2047. In this situation, who holds the power to determine which traits are desirable is a crucial question, one without a “good” answer. The best solution is probably to try to avoid the situation in the first place, but that would take coordinated social action, especially since the technology in question is not broadly owned. Finding ways to encourage broader ownership of stocks – particularly those of companies that own patents on the technologies of the future – is one way to help alleviate this (a variation of Jerry Kaplan’s proposal in Humans Need Not Apply), but it wouldn’t necessarily save a population if capitalism is overthrown (Argentina) or gradually diminished (Turkey). Perhaps implantable AI is the type of technology that demands open source development. It might slow advancements in the short term, but it could eliminate the possibility of too much concentrated power in the future.
This post has purposely not brought up hacking as a cause for unease. The reason is that it isn’t clear that technology is any more susceptible to hacking than biology is. Although humans have a more developed prefrontal cortex than other animals, we still rely overwhelmingly on our emotions rather than logic for decision-making. Psychological research also shows that we are adept at ignoring evidence that does not comport with our already-established, visceral views, and we are often not even aware of internalized stereotypes (one of which was explored in a previous post). Consequently, our brains are vulnerable to emotional hijacking. Just as a computer hacker could conceivably adjust the purpose of a neural chip, entities such as politicians, marketers, and salespeople are already manipulating our behavior without our explicit approval or even knowledge. At least with neural implants, though, we still have a choice about whether to introduce a potential source of exploitation. Hopefully we will have a robust, society-wide conversation about neural implants before they become inevitable.
*This particular performance review deserves its own post at some future time, but, for the record, it is true that I do not suffer inefficiency gladly.
Floyd says
How interesting. Back in the old days we had prefrontal lobotomies to ensure acceptable behavior. A lot of people would choose a chip that guaranteed a good feeling for themselves, but I hate the idea that appropriate behavior could be forced on anyone.
Our complex brains with their balance of rational and emotional responses will, I hope, allow future humans to outnumber future cyborgs.
Disgruntled Rationalist says
It isn’t at all clear to me that our rational and emotional responses are “balanced.” The research seems to indicate that we still rely more heavily on emotion, which is why our beliefs are so resistant to change, even when contrary (rational!) evidence is provided. In fact, if you want to further entrench someone’s views, one of the best ways is to present rational evidence that they are wrong. If they are human (and not a cyborg), they will probably find any number of ways to maintain their already established beliefs. This seems like a handicap to our species in the modern world, and it seems like one that technologists (or AI) are likely to take advantage of. That the end result is the triumph of biology seems unlikely.