Almost every aspect of our individual and collective lives, today, is touched upon or mediated by technology. But this doesn’t make technology something easily or inherently comprehensible, whether or not we are digital natives (a phrase that conceals as much as it reveals). Indeed, many of the systems we use every day are specifically designed to slip beneath conscious notice: to elude understanding, manipulate perception, or present a semblance of seamless ease based upon near-incomprehensible inner workings.
How can we push back against this? One of the most important things we can do when it comes to thinking critically about technology has little to do with hardware or software, and everything to do with us. It’s this:
- We need to think twice about the words we’re using—and the unexamined assumptions caught up within them.
Consider some of the most fundamental vocabulary we apply to machines. For a start, we relentlessly anthropomorphize our tools: that is, we apply the same language to them as we do to our own actions and intentions. We ask what they want, what they’re trying to do, what they see or are considering or understand. At the same time, we use the language of technology as a metaphor for our own thoughts and feelings: we describe ourselves processing information, logging a particular event, needing to reset or reboot or shut down in the face of difficulties, and so on.
All of this is understandable. But it also brings with it a dangerous lack of precision – and the risk that our analyses of what’s going on in the world perpetuate rather than challenge pervasive misunderstandings around the very different ways in which humans and machine systems analyse information, enact decisions, and are best treated.
No current machine can see, consider or understand the world in anything like the human sense. Artificial Intelligence, one of the most widely used and abused phrases in the technological lexicon, is nothing whatsoever like human intelligence; indeed, it spans entire families of algorithmic activities that have more in common with statistics than cognition. Smart systems aren’t smart in the sense that people are; computer memory is nothing like biological memory; and, if a person is trying to think about several things at the same time, they are neither computing, processing nor multi-tasking in anything like the way these words were coined to describe.
Words matter: entire worldviews are at work within them. Have you ever undertaken gig employment, participated in the sharing economy or used cloud computing? Would you feel differently about doing these things if they were described as insecure temporary labour, largely unregulated online asset-sweating or massive bunkers full of servers?
Perhaps most importantly of all, we need to beware of the assumption that humans and machines necessarily exist in some kind of competition, or that their differences automatically translate into opposition (just think of how many headlines have been devoted in recent years to machines taking or stealing people’s jobs, as if this were something machines themselves woke up one day and decided to do, rather than a strategy chosen by the wholly human directors of particular organisations).
Machines don’t wake up in the morning and decide to do anything, ever, because wanting and liking and choosing are not things they are capable of. Even the most complex machine learning systems in the world are unthinking deciders, powerful and implacable in equal measure. Praise, blame, intentions, hopes and fears all belong exclusively in the human realm.
This isn’t to say that we can or should draw a clear line between human and technological activities. Much of the activity in a human brain takes place beneath conscious notice; and much of the activity that constitutes a human mind doesn’t exclusively involve the brain. Indeed, human minds aren’t fully constrained or defined even by the bodies they occupy. From maps to clocks to mobile phones, countless inventions and machines contribute to our identities and augment our cognition. But these machines lack any direction or purpose without humans to determine their objectives. To reiterate the point I made above:
- Even the most sophisticated tools are ultimately extensions of their creators’ intentions: uncomprehending artefacts, relentless in their pursuit of whatever somebody has determined they should pursue.
Technology, in other words, is something with which we spend our lives in constant dialogue. It’s part and parcel of our societies and our identities, of our apprehensions and the connective fabric of our communities – and it’s only by engaging with it as such that we can hope to view and critique our creations with any clarity.