[T]he problem is the hijacking of the human mind: systems that are better and better at steering what people are paying attention to, and better and better at steering what people do with their time than ever before. These are things like “Snapchat streaks,” which is hooking kids to send messages back and forth with every single one of their contacts every day. These are things like autoplay, which causes people to spend more time on YouTube or on Netflix. These are things like social awareness cues, which by showing you how recently someone has been online or knowing that someone saw your profile, keep people in a panopticon.
The premise of hijacking is that it undermines your control. This system is better at hijacking your instincts than you are at controlling them. You’d have to exert an enormous amount of energy to control whether these things are manipulating you all the time. And so we have to ask: How do we reform this attention economy and the mass hijacking of our mind?
Tristan Harris, Design Ethicist, Center for Humane Technology, in a Wired interview.
It’s No Secret
By now it is self-evident to everyone who’s paying attention that culture and habits are having a hard time keeping up with emerging technology. Beset by bottomless input and devices that proactively work to capture our attention, many people report dissatisfaction with their engagement with social networking, for example, even as they grow ever more attached to the medium.
But for the moment I want to assume all that. I want to assume that there exists an intrusive, engaging set of technology systems that hold our attention and that both cause and allow us to disseminate information to one another. I want to assume that these systems are being improved, both deliberately and accidentally, to do this better. And I want to think about a possible consequence of the convergence of their user-manipulating methodology with their facilitation of the propagation of memes.
I don’t think it is safe to presume it impossible that someone, or some corporation, or some national government, or some technology system will, even if purely by accident, stumble upon what amounts to a zero-day exploit of the human mind.
I mean by this some set of stimuli that reliably changes the track of the experiencer’s life in a way that truly short-circuits their ability to reason about its content or to respond in a way that is fully rational; whatever it is about such a thing that compels us may not even be consciously available to the experiencer to analyze. Our minds would simply be changed, and in a way that is meaningfully distinct from merely hearing an argument, suffering a pain, or learning a fact.
And if one of the things this causes a person to do is to propagate the set of stimuli, then we could have not just a mind virus but a mind pandemic. The consequences could be far greater than humanity spending a larger fraction of its time staring at phones.
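The pandemic metaphor can be made concrete with a toy branching-process calculation. This is a minimal sketch of my own, not anything from the essay: the function name, the reproduction numbers, and the generation counts are illustrative assumptions, but the underlying arithmetic shows why self-propagating stimuli are an all-or-nothing affair.

```python
# Toy branching-process model of meme spread (illustrative assumptions only).
# "r0" is the average number of new people each exposed person shares with;
# the epidemiological analogy is the point of the sketch.

def cumulative_exposed(r0: float, generations: int, seed: int = 1) -> int:
    """Total people exposed after a number of sharing generations,
    assuming each exposed person passes the stimuli to r0 others on average."""
    total = seed
    current = seed
    for _ in range(generations):
        current = current * r0  # each generation multiplies the newly exposed
        total += current
    return round(total)

# With r0 = 2, one seed reaches 2,047 people in ten sharing generations
# and over two million in twenty; with r0 = 0.8 the chain fizzles out
# at a handful of people. The threshold, not the starting size, decides.
```

The design point is the same as in epidemiology: whether the thing dies out or explodes depends almost entirely on whether each exposure produces more or less than one further exposure, which is why a stimulus that reliably compels sharing is qualitatively different from one that merely often does.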
Make no mistake: This is already happening. At the moment, it looks like Twitter bots and Facebook campaigns and pictures of kittens, with consequences that are quite weird enough, thank you. But this mostly confirms people in what they already believe, or reinforces what they already like, or causes them to look at more kittens. Suppose, though, there were something about a kitten picture, or a set of kitten pictures in the context of some other input, that implanted ideational content in its consumers significant enough that people seeing them would buy different products, elect different leaders, or endorse different philosophies. That would be a different thing, and I’m not sure we’d see it coming. We would just be sharing kitten pictures, not realizing that we were participating in a Snapchat streak that tends to make people fear democracy or something.
So I’m really not talking about anything terribly novel; just this, only more so, or slightly different, such that there emerges a new constellation of ideas or dispositions of the mind that is not predictable and tends to reinforce itself, like a fad, or a style, or a social movement, but some aspect of which operates truly under the radar of consciousness, by virtue of impulses that resist introspection or ideas that resist dismissal.
An interestingly similar concept is explored in the novel Cyteen, by C. J. Cherryh. In it, a future spacefaring civilization has the ability to rapidly educate people with wonderfully anachronistic “tape” that plays them information while they sleep. This civilization colonizes sterile worlds, having by tape equipped the colonists with a set of memes such that their culture will inevitably evolve through known stages toward a prescribed final form, like the efflorescence of genes elaborating themselves in a developing fetus. The colonists begin their societies knowing the predicates of arguments that support, say, a given form of government, and inevitably invent it when conditions become fertile.
But I don’t think that MindVirus 1.0 is likely to be as benign or predictable as that. Imagine a news story taken out of context and thereby causing a panic. You’d like to think that somewhere along the line people’s skepticism would kick in and put the brakes on. But it’s not hard at all for me to imagine that a fictive or exaggerated report of, say, an imminent nuclear attack or an outbreak of Ebola, complete with a “share this” button, could go viral. It’s not hard to imagine the news picking it up with the sort of “it is being reported” coverage that sticks eyeballs to screens awaiting confirmation. Given the magnitude of the reported threat, one could forgive people for erring on the side of survival and taking extreme actions, especially if they saw that others already had done so.
But I’m thinking of something less obvious than that. What’s creepy is the idea that something subliminal, or something following from diverse and seemingly unconnected cues, could come together in a non-obvious way that implants novel ideas in people, or causes such ideas to emerge, or disposes people to act differently. What separates this from ordinary learning, communication, or propaganda is not superficially obvious, but I think if you squint and peer, you’ll see it: If I believe this, and find that popular, this other thing follows, and it’s harder to go back than forward. Once I believe it, it seems to falsify the alternatives. I am proposing that this sequence can be manipulated in a non-obvious and novel way through technology.
Technology companies already are profiting enormously from causing people to consume and share ideas, and their methods are largely driven by black-box AI that nobody understands. So I’m really only talking about some evolution, however subtle, of what they are already doing that induces some psychological phase change in the users, some strongly emergent combination or intensity of effects, with properties that are not predicted by the rules of the present regime, that motivate behavior, and that are transmissible. Given the speed at which such a system could propagate on the internet, the results might be quite surprising indeed. And given the potential power of such a system, I would be very surprised if nobody is trying to develop such a thing to be operated deliberately.
Given the vast resources already committed to propaganda and addictive technology, I want to suggest to you that this is likelier than you probably believe, and I don’t think the motives of those who may be seeking to do it deliberately can be presumed to be benign.
Featured Image: Voice of America, Public Domain