
Things have jobs: pillows are made for comfort, scissors are made to cut, and digital devices are made to track your every move
Carissa Véliz is an associate professor in philosophy at the Institute for Ethics in AI and a fellow at Hertford College, both at the University of Oxford, UK. She works on privacy, technology, moral and political philosophy, and public policy. She is the editor of the Oxford Handbook of Digital Ethics (2021), and the author of Privacy Is Power (2020) and Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI (2026).
Edited by Nigel Warburton
8 May 2026 (aeon.co)
Prometheus might have handed humanity fire, but he certainly did not give us a smartphone. Digital technology is not God-given. Nor is digital technology a natural kind, an object of nature, like strawberries or lakes. We don’t find smartphones growing from trees. The digital gadgets that populate our lives – smartphones, laptops, smartwatches and more – are artefacts.
An artefact exists because human beings have created it. Hammers, laws and symphonies are artefacts too. Their existence depends on human minds and purposes. No artefact, in all the richness of its details, is inevitable. That’s partly because artefacts are designed by human beings, and there are choices in design – choices that could’ve been different. Every one of the letters you are reading right now, for instance, could’ve had a different shape by design.
Shape is not the only choice we make in designing artefacts. In addition to making choices about the sensorial attributes of artefacts – their tactile qualities, what they look, sound, smell and sometimes taste like – we make choices about what the artefact is supposed to do. An artefact is created for a purpose; it’s intended to do some things and not others. Pillows are supposed to be comfortable, pens are meant to smoothly transfer ink onto paper, and toasters should brown your bread.
Some artefacts do many things. A perfect chair, say an Eames chair, is both an object of beauty, something that is pleasurable to behold, and a useful tool for the comfort of the body. I edited my last book in an Eames chair so comfortable that it allowed me to focus on the content of the book instead of worrying about my body.
Digital technology does many things too. A smartphone can enable you to make calls, send emails, and track how many minutes you meditate to destress from the calls and the emails, and then track how many hours you’ve been on your smartphone, and stress about that. Mixed into the dough from which digital technology is baked are two original sins that pervade most gadgets, apps and platforms alike: surveillance and prediction; more specifically, surveillance at the service of prediction. Both lead to social control.
For the most part, digital technology has been developed by computer scientists, engineers, data analysts and ambitious businessmen (yes, mostly men) with little to no consideration of the impact their technology could have on democracy.
That’s partly because, when the fundamental building blocks of the digital and the online were designed, it was hard to envision that they would grow to be what they are today: something everyone has access to, every second of the day, including through gadgets small enough to fit into a pocket. The internet was originally designed as a tool for researchers to communicate easily with one another; it wasn’t meant to be a major means of communication for ordinary citizens.
But another contributing factor is undoubtedly that the people designing our gadgets tend to be well versed in programming, business, mathematics and other fields distant from a deep understanding of ethics or politics. (That said, there are notable exceptions like Reid Hoffman, who co-founded LinkedIn, one of the least toxic social media platforms we have.) Considerations about how technology could impact democracy were largely not part of the design of our digital environment. And whenever political considerations did enter, they tended to take the form of an anti-government bent.
There is some tension between Peter Thiel’s supposed defence of freedom, and the systems of mass surveillance, prediction and control he’s building
Some of the pioneers of the digital strike me as naive idealists, assuming that freedom and fairness will magically come from having no government interference. ‘A Declaration of the Independence of Cyberspace’ (1996) is one of the most iconic documents of the early internet era, written by John Perry Barlow, as quirky a character as they come. A Republican and an anarchist, Barlow was raised as a devoted member of the Church of Jesus Christ of Latter-day Saints; he was also a cattle rancher and a lyricist for the Grateful Dead.
The beginning of his declaration reads:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
And it ends with:
We will create a civilisation of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.
Those strike me as extremely naive sentiments: barely plausible, at best, when the internet was populated by a handful of nerdy guys, and utterly unrealistic once it started encompassing millions of people from around the world, including thieves, drug dealers and human traffickers, not to mention swathes of terrifyingly ordinary trolls who silence people they don’t like (women, often). Where did Barlow think fairness was going to come from?
In other instances, digital anarchism or libertarianism seems anything but naive. Peter Thiel is a German-American venture capitalist and a conservative political activist. He was the first outside investor in Facebook – you know, the company that popularised a model of surveillance for social media that led to the Cambridge Analytica scandal – and a co-founder of Palantir – you know, the company named after J R R Tolkien’s omniscient crystal balls in The Lord of the Rings, which was partly funded by the CIA, helps governments surveil their populations, and whose ads recently read ‘we build to dominate’. Before that, he co-founded PayPal, which he conceived of as a payment service to shield people from governmental reach.
Thiel is famous for his libertarianism and for expressing sceptical views about democracy. Although he is not shy about voicing his opinions, many of his perspectives seem contradictory, and listening to him – from the content of his words to the intensity of his goggle-eyed demeanour – can be a surreal experience. Thiel’s own biographer Max Chafkin said he finds what Thiel thinks an unsettling mystery.
Surely it doesn’t escape Thiel that there is some tension between his supposed defence of freedom and the systems of mass surveillance, prediction and control that he is building. One rather depressing hypothesis is that Thiel is nothing more complex or sophisticated than an opportunist: someone who is mostly interested in earning money and gaining dominance over others; someone who is fighting for freedom for himself and his buddies, not caring if it comes at the price of slavery for everyone else. Sometimes Ockham’s razor is right: the simplest explanation is the correct one. Whatever Thiel’s intentions might be, what is clear is that digital technology has not been and is still not being designed to support democracy.
One of the most deceptive narratives technology companies have been successfully peddling is that technology is neutral and it’s our use of the technology that determines whether it’s good or bad. How very convenient, to put all moral responsibility on the shoulders of people; I’m sure that is an entirely coincidental implication and not at all self-serving (eye roll). The crudest way to rebut this argument is to consider an extreme technology: a chemical weapon that would obliterate all life on Earth. Would it be morally acceptable to develop such an artefact under the pretence that technology is always neutral? No. Such a technology would not be neutral, and neither is any other technology.
Every piece of technology is an artefact, which means that it has been designed by someone to do something, and that fact alone strips it of neutrality. Even a blank page is not neutral: it’s inviting you to write (or to make a paper plane). Technology is never neutral because it embodies the belief in the value of what it was designed to do. You make an artefact because you think there is value in it doing what it does.
Philosophers describe the embodied values of artefacts by pointing out that artefacts have affordances. An affordance is what an artefact invites you to do. A paper book invites you to read it; it’s made to be light, to fit comfortably in your hands, be easy to store, and its words are designed to inform you, or persuade you, or move you. It’s the result of thousands of years of refinements to make it more portable, durable, reproducible – from stone tablets to papyrus scrolls to bound codices to the printed book. Of course, you can use a book as a hammer, or a brick, or a projectile, but since it wasn’t designed for that, you’d be better off using a hammer, or a brick, or a baseball. Affordances are how the designer of an object communicates with its user.
Because surveillance affords control, when it comes to politics, it tends to decrease freedom
Surveillance tools afford control; they invite you to keep track of things and people, or people through their things, or people as things. Given what highly social creatures we are, we are typically more interested in tracking people than things.
You keep track of people to have some amount of control over them. In some cases, it’s justified. It’s part of a parent’s job to watch a toddler at all times of the day in case they run into danger, and the world is a very dangerous place for a toddler. Parents are superheroes who intervene before heads hit the floor, hands touch hot objects, and marbles get swallowed; to be able to intercede on behalf of children, parents need to keep watch over their little ones. Parents surveil children to predict disasters and avert them.
The case to surveil becomes much harder when it comes to autonomous adults. First, when adult human beings are autonomous, their ability to keep reasonably safe is greatly enhanced; we get better at not falling head first, not touching hot objects, and not swallowing marbles. Second, other things being equal, the desires of autonomous people about how they lead their lives should be respected, and most people don’t want someone watching over their shoulders (again, other things being equal). Third, because surveillance affords control, when it comes to politics, it tends to decrease freedom, which tends to be bad for liberal democracies. It is no coincidence that authoritarian regimes rely on surveillance of their citizens.
Parents who surveil toddlers tend to do so benevolently, for the child’s own good. But not all watchers are as benevolent. People can have their own agenda, which is not always aligned with your best interests. Whoever surveils you gains more power over you by virtue of learning more about you, which makes it easier to predict what you’ll do next, and use that information in their favour. If I learn that you have gone to the movies every Monday night for the past year, I can use that information to predict that you’ll be at the movies next Monday night and plan to rob your house then.
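To see how little sophistication this takes, here is a minimal sketch in Python – with made-up data and a hypothetical log format, not any real tracking system – of how a bare record of where someone has been seen already affords prediction. Real surveillance works on vastly more data, but the logic is the same: habits become forecasts.

```python
from collections import Counter
from datetime import datetime

# A hypothetical log of sightings, as a watcher might collect them: (timestamp, place).
sightings = [
    ("2025-01-06 20:15", "cinema"),   # a Monday night
    ("2025-01-13 20:05", "cinema"),   # the next Monday night
    ("2025-01-20 19:55", "cinema"),   # and the one after that
    ("2025-01-15 13:00", "office"),   # a Wednesday afternoon
]

def predict(log, weekday, hour, min_share=0.6):
    """Guess where someone will be at a given weekday and hour,
    based only on where they have been seen at that time before."""
    places = Counter()
    for stamp, place in log:
        seen = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        if seen.weekday() == weekday and abs(seen.hour - hour) <= 1:
            places[place] += 1
    if not places:
        return None
    place, count = places.most_common(1)[0]
    # Commit to a guess only if the habit is regular enough.
    return place if count / sum(places.values()) >= min_share else None

# Monday is weekday 0: three Monday-night sightings make "cinema" the confident guess.
print(predict(sightings, weekday=0, hour=20))  # -> cinema
```

The power here lies entirely in the data, not in any cleverness of the code: whoever holds the log holds the prediction.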
Most digital tools, as they are currently designed, are built to surveil. They collect as much data as possible by default, and your ability to constrain that data collection is limited at best. Your phone has not only a microphone and a camera but also an accelerometer, a gyroscope, a compass, a barometer, a light sensor, a proximity sensor, a humidity sensor, iris and fingerprint scanners, and GPS, among many other items on a long list. And that’s just your phone.
Many apps on our devices – maps, online documents and social media – were designed precisely with the creation of personal data in mind, to make our lives more trackable by machines. That they are useful to us is just the lure that makes us inadvertently give up the data that is then used to control us.
The American political scientist James C Scott argued that states work to make society ‘legible’; digital records make society legible to machines, and arguably the same applies whether the watcher is a state, a corporation or an AI. Without a quantified record of the comings, goings, communications and purchases of people, institutions are blind. When we turn analogue records into digital ones, we make data much more easily retrievable, and we make it easier for that data to be copied and shared. Finally, digital records also make it easier for computers to analyse information: to identify people, catalogue them, track their income and location, and predict their lives.
The surveillance machinery we have built exists at the service of a prediction machinery. We collect so much data to monetise it, and we monetise it by using it to predict, or by selling it to others who want to use it for the purposes of prediction. But efforts to predict people’s behaviour are intrinsically linked to efforts to control it, because the easiest way to predict the future is to influence it; preferably to determine it.
We have seen similar follies before, for instance in the Soviet Union’s desire to plan the economy, or in East Germany’s Stasi, but the technology then was nowhere near as powerful a tool of surveillance and control as it is today. In the 1960s, if you wanted to surveil one person, you had to hire someone else to bug them or follow them. Today, we surveil everyone by default: just tap into the data collected by the spies in people’s pockets (smartphones), on their wrists and fingers (smartwatches and rings), in their work tools (laptops), and in the public sphere (CCTV cameras), and you’ll have more on any person than the Stasi could ever dream of.
You might think that there is nothing to worry about because, while the Soviet Union and East Germany had a clear political agenda, today’s surveillance is mostly about earning a profit. But, as the old adage goes, power corrupts, and surveillance confers too much power on the watchers. Furthermore, human beings are political animals, as Aristotle pointed out. It’s only a matter of time before things get heated and the powerful reveal their political colours, as we’re increasingly seeing tech barons like Thiel do.
Predictions about people have a tendency to become self-fulfilling prophecies
Even disregarding the terrifying implications of surveillance (how much more surveillance can liberal democracies take before falling into societies of control?), an overreliance on prediction is a democratic problem in itself, and today we are using prediction more than ever.
Predictions might sound like descriptions of the world, or like facts, but they are neither. When we analyse them as assertions, it becomes clear that they are what the philosopher J L Austin called ‘speech acts’: language that does something rather than merely describing something. Predictions are often veiled commands, implicitly telling us what to do. When someone like Thiel prophesies that, if we fear or regulate technology, we will hasten the coming of the Antichrist, the message is an order. Paraphrasing, Thiel is telling people not to stand in the way of his technology, or else terrible things will happen. That he has a major financial stake in the technology is something he doesn’t remark on as much.
Predictions also invite foul play. Predictions about people have a tendency to become self-fulfilling prophecies, which creates the temptation to unduly influence the future. For example, politicians have bet on themselves in prediction markets to try to sway public opinion, making them look more popular than they are.
Predictions also stand in contrast to justice, and yet we are using algorithmic predictions to make decisions about sentencing and bail. Justice is supposed to give each person what they deserve on the basis of what they’ve done or who they are, not on the basis of who other people think they will become, which is what a prediction is. If we punish someone or deny them an opportunity on the basis of clear and contestable criteria, on the basis of facts, those decisions can be challenged; they can be proven wrong. But if we punish or deny opportunities based on predictions, there is no way to contest them. Since they are about the future, they cannot be proven false in the present.
Furthermore, if people’s behaviour becomes more predictable, there’s a good chance that’s because it’s being conditioned or even determined. At an extreme, the surest way to predict someone’s death is to murder them. People living in authoritarian regimes can be more predictable because their behaviour is being constrained by tyranny.
Healthy democracies are all about embracing and managing uncertainty. It’s only when we don’t know what the results of elections will be that we have true democracy. If we knew what the election results would be, there’d be no point in holding the elections.
We have been using prophecy since before the Oracle of Delphi. We have waged wars, married, and bet our livelihoods on account of predictions. Every day, we put our lives and those of others on the line based on forecasts. It’s about time we thought more carefully about the ethics of prediction. When is it appropriate to make predictions and when is it not? Who is entitled to make which predictions? What do we owe the subjects of prediction? What are ethical methods of coming up with predictions, and what are ethical uses of prediction?
In some ways, our current prophetic environment is not that dissimilar to that of ancient Greece, where the most important decisions were often made through the filter of divination, from the Oracles of Delphi and Dodona to freelance soothsayers and seers. Philosophy arose as the voice of reason partly as a reaction against a context dominated by prophecies and myth.
When the priest Lampon declared that the finding of a single-horned ram prophesied that Pericles would overcome his political adversaries and become the sole political leader of Athens, the philosopher Anaxagoras had more questions. He instructed that the skull be cut in two, revealing an underdeveloped brain that could explain the single horn. To Anaxagoras, the physiological explanation was more satisfying than the magical one.
We should demand safer products that can be more supportive of democracies
Anaxagoras was also well known for his cosmological theories, taking a step towards demystifying the sun in a context in which ordinary Greeks prayed at daybreak to the sun-god Helios. Denying the divinity of the sun was a serious offence because it risked angering the gods and bringing punishment upon the whole community. It is no coincidence that Anaxagoras, Socrates and Aristotle were all denounced for impiety.
Today, some aspects of tech have become such an ideology that to criticise fundamental elements like surveillance and predictions feels like an act of heresy. But perhaps philosophy can rise to the occasion once more. I don’t mean academic philosophy, although that would be nice. I mean critical thinking more generally. We should be asking more questions of our prophets. We should be less naive about prediction and surveillance, and we should demand safer products that can be more supportive of democracies. Technological systems designed to surveil, predict, and control are ideal for an authoritarian takeover.
Larry Ellison, the chairman of the tech giant Oracle, has predicted a modern surveillance state in which ‘citizens will be on their best behaviour’ because we’re constantly watched. Hannah Arendt argued that it’s pointless to argue with a potential murderer about whether his future victim is dead or alive. The only appropriate response is to ‘rescue the person whose death is predicted’. When today’s prophets are predicting the death of our democracy and building the systems to undermine it, the only appropriate response is to rescue it.
Let’s make Prometheus proud.