Why surveillance capitalists resent human freedom and autonomy.

The recent Netflix film The Social Dilemma played as a stunning confessional. Ex-executives of Google, Facebook, Twitter, and other companies aired their guilty consciences on screen, issuing a warning: the social media platforms they helped to create pose a grave danger to society and to individual wellbeing. As ex-Facebook executive and venture capitalist Chamath Palihapitiya put it, he helped invent "tools that are ripping apart the social fabric of how society works". Facebook's founding president, Sean Parker, likewise claims that Facebook is designed to exploit human vulnerability. How can we, in good faith, still maintain the delusion that this club of big-tech defectors has it wrong, and that those still espousing the infantile ideals of tech utopia have it right? In fact, The Social Dilemma may understate the breadth and pervasiveness of the problem. The concentration of power in these companies alone is concerning. Zuckerberg owns a controlling share of Facebook and so personally directs the information flows of the biggest communications network in the world, something unprecedented in human history. That level of power and control is already being exercised in alarming ways. Surveillance capitalism, the modus operandi of Google, Facebook, Microsoft, Amazon, and others, is a novel attack on human freedom and individual autonomy.

Before we explore their mode of operation, what about the deleterious effects of the medium of the internet itself? New media technologies are not neutral amplifiers; different mediums influence our minds in very different ways. The internet affects the way we read, think, and remember, capacities we should cherish if we hope to pursue justice and understanding and to meet the challenges of our time. Research by Ziming Liu of San Jose State University found that the internet promotes browsing, scanning, skimming, and non-linear reading. According to Liu, "the digital environment tends to encourage people to explore many topics extensively, but at a more superficial level", while "hyperlinks distract people from reading and thinking deeply." Reading this way, we risk letting our capacities to understand subjects in depth and to critically integrate new information decay, by virtue of our neuroplasticity. Simply having your phone in your field of vision has been shown to decrease cognitive and mnemonic performance.

Tech moguls proudly declare the apparent "outsourcing" of human memory a progressive triumph, one which increases access to information whilst unburdening our biological memory. In their world the mind is the brain, and the brain is a supercomputer. In this sense, as internet critic Nicholas Carr says, they are "being misled by a metaphor". The brain is not a computer; it is an analogue, organic system. Human memories are alive and fluid, whereas computer 'memories' (the term really stretches the metaphor) are dead, static data. Human memories aren't digital facts; they are made up of complex, non-binary schemas. Evoking our biological memories is deeply entangled with how we understand ideas and how they relate to our lived experience; it's entangled with how we think. And of course, unlike our digital 'brains', we have both short-term and long-term memories, and consolidating short-term memories into the long-term bank takes time and reflection. It is our long-term memory, and the skills associated with feeding it, that our internet use leaves sclerotic.

Our devices and platforms are designed to hold our attention with constant waves of fresh information; they are deliberately engineered to be as addictive as possible. The competition for our attention is now a lucrative market, and our attention is sought by a plethora of apps, sites, and platforms all at once, every day. The net result is near-constant distraction, which makes deep reading, thinking, synthesis, and creative work harder. Plato warned of the effect the technology of writing would have on human thought, yet writing came to supplement human memory; the internet, by contrast, Carr describes as "a technology of forgetfulness". The digital environment overloads our short-term working memory. The information flows exceed our attentional capacity and refuse us the precious respite we need to consolidate long-term memories and cultivate personal understanding.

Today, our digital environment goes beyond the implicit impacts of the medium of the internet. Not just in the sense that the speed of the media cycle encourages reactivity, tribalism, and cultural warfare, though it does, but in that surveillance capitalists want to actively suppress deep thinking, reflection, free choice, and individual autonomy, precisely because all of these things threaten their financial imperatives. These imperatives, and the market outcomes surveillance capitalists seek, are stunningly elucidated in Harvard professor Shoshana Zuboff's book The Age of Surveillance Capitalism.

Zuboff is sometimes referred to as the Karl Marx of her age, and this seems appropriate. Considering that this new market form prides itself on 'ubiquity' and defends itself with sophisticated propaganda campaigns, Zuboff somehow broke the spell, stripping away the utopian euphemisms to expose the dangerous ideology and economic imperatives that drive the tech giants. According to Zuboff, surveillance capitalism unilaterally, undemocratically, and without consent claims the right to monetise our human experience. It's sometimes thought that these companies access only the information we provide to them, perhaps along with our browsing history and location data. But this is a drop in the ocean compared to the data they derive about us. Cross-site tracking on the web, always-on tracking on our devices, microphones (sometimes hidden) in 'home' devices, Roomba vacuums that map and sell the floor plans of homes, facial and voice recognition discerning mood, browsing habits, social habits, and more: all turn out to be fruitful sources. From all this raw data, surveillance capitalists can compute what we're likely to think, feel, and do more accurately than standardised personality testing can. Sure, a minority of this data feeds the 'virtuous circle', wherein just enough data is derived and used to improve services. But beyond that lies the 'behavioural surplus'. Contrary to what tech moguls might prefer us to think, this additional data is not "exhaust" that would otherwise be "wasted"; it would not exist at all had our experience not been rendered as data in the first place. And the amount derived far exceeds what is required to provide adequate services for free. Not by a long shot. So why derive this behavioural surplus? Why seek this historically unprecedented capacity to understand who we all are and what we'll all do?

Although the motto "If you're not paying, you are the product" prompts a useful shift in perspective, this predictive capacity is the real product. From our behaviour, emotions, experiences, personalities, habits, and heartbreaks, 'prediction products' are derived, and these are incredibly attractive to third-party customers seeking highly determined demand. First came targeted advertising. In 2000, amid the bursting of the dot-com bubble and mounting pressure from shareholders, Google caved on its earlier resistance to the advertiser model. It adopted the model, and within four years its advertising business was accruing annual revenues in the billions of dollars: the birth of surveillance capitalism. Surveillance capitalists have since widened their customer base. Facial recognition technology, for example, trained on the photos we provide to surveillance platforms, is sold down a chain of third-party buyers; in one case it has assisted military operations in tracking Uighurs imprisoned in Chinese "re-education camps".

Those early levels of predictive certainty were just the beginning. Surveillance capitalists want to determine outcomes more accurately still, by modifying human behaviour en masse. Zuboff writes: "surveillance capitalists declare their right to modify others' behaviour for profit according to methods that bypass human awareness, individual decision rights…and autonomy and self-determination." So, in addition to predicting with high accuracy what we'll think, feel, and do, these companies now seek, without consent, to modify what we think, feel, and do, all in alignment with the preferred outcomes of their customers. As early as 2012, Facebook's own researchers boasted, in an article published in Nature, that they had successfully influenced "the real-world voting behaviour of millions of people" in the US midterm elections. More recently, Pokémon Go, developed by Niantic, a company spun out of Google, herded users into franchises that paid to have more Pokémon featured in their stores. Companies were paying surveillance capitalists to direct unsuspecting gamers into their businesses, providing lucrative foot traffic and shedding uncertainty of demand. Remind me: just who are the 'users' in all this? To achieve this seamless efficiency, a great deal of our individual freedom, consent, and privacy is sacrificed. Without uncertainty, how can there be freedom? With so much of our society built around these platforms, and considering their foreclosure of alternative mediums (their monopolisation), are we really consenting? When the most precious and personal parts of ourselves are claimed as behavioural surplus, what of our privacy, what of our inner lives?

Robust research has shown that the more we use Facebook, the more anxious, envious, and depressed it is likely to make us. Usage-limitation recommendations make for useful PR optics at Google, Android, and Facebook, but they'd never really have us withdraw. To them, our reflection, our sanctuary, the serenity we enjoy when we break away from the addictive dopamine feedback loops that characterise our lives online, are all anathema. Our freedom, our right to determine our own future, our right to read and reflect deeply and at length: all of these threaten their surveillance revenues. Our inefficiency, creativity, spontaneity, hesitation, and deep reflection, namely those things which make us human, are all made sacrificial lambs on the road to 'Society 5.0', the tech-entrepreneurial wet dream wherein a handful of men undemocratically exercise unprecedented control over knowledge and freedom. Whilst we plead politely for big-tech platforms to censor hate, they do the bidding of states like Israel, whose conditions of entry for Facebook include compliance in suppressing pro-Palestinian voices online (see also: Facebook appoints Israeli censor to oversight board). Instead of letting these private tyrannies control and erode individuals, society, and democracy, they should be subject to democratic control. This doesn't mean we shouldn't have social media, with all the beautiful connections it can foster, but it does mean that we should not tacitly condone surveillance capitalism. We would surely flood the streets if a state exercised such concentrated power and influence; indeed, people have done so in Hong Kong in recent years. So why doesn't the global, privatised, for-profit version evoke the same opposition? It certainly should.
