From radio to a phone in our pocket in 50 years: how has tech taken control of our lives so quickly?

Cartoon of a laptop-user surrounded by technology and robotic arms - Zhenqing Du

In 1965 my dad brought home a transistor radio from the television factory where he worked. At home, we still had the stately valve-amp radiogram that took up half the parlour, where my mother had listened to Churchill’s radio broadcasts while my dad was fighting in the war. As a small child in the 1960s, I liked to sit behind the humming radiogram, watching the orange glow of the glass valves. It was fairy-like and warm.

Those valves, as the Brits called them, were vacuum tubes. They were invented in Britain, in 1904, by John Ambrose Fleming – really as a spin-off from the incandescent light bulb, a filament inside an evacuated glass container. When hot, the filament releases electrons into the vacuum; it’s called the Edison Effect (technical term, thermionic emission). Thomas Edison had invented the lightbulb in 1879, and Fleming realised that if he put a second electrode into a similar evacuated envelope, like a light bulb, this second electrode (the anode) would attract the electrons released from the heated cathode filament, and create a current. Vacuum tubes become easy to imagine if you think about old-fashioned filament light bulbs.

Remember (no, probably not, but I am old) how light bulbs used to get really hot? That was wasted energy generated as heat, not light, hence the term, “more heat than light”, and the wonderful expression reminiscent of my entire childhood – “incandescent with rage”. Gentle low-energy bulbs just don’t offer the same opportunity for third-degree burns or social commentary.

But back to the vacuum tube. The vacuum tube was the early enabler of broadcast signals, whether the telephone network, radio or TV, and of course, early computers. Vacuum tubes do their job, but the glass is easy to break, and they are bulky and energy inefficient, as the whole tube gets heated up when the cathode is heated up. Early computers were huge because vacuum tubes and miles of connecting wires take up a lot of space, as well as using masses of electricity. The pretty orange glow they give off is waste.

In 1947, at Bell Labs in New Jersey, it was observed that when two separate point contacts, made of gold, were applied to a crystal of germanium (atomic number 32), a signal was produced where the output was greater than the input. Energy was not being wasted as heat loss. The guys described the discovery as transconductance within the varistor family (varistors are electronic components with a varying resistance depending on the input). This was a great description for jubilant electrical engineers, but it was never going to sell anything. An internal competition at Bell Labs championed the suffix ISTOR as being sci-fi-like and futuristic, and TRANS was clear and simple, so the brand-new world-changing product soon became known as the transistor.

By the mid-1950s, in America, Chrysler was offering an in-car all-transistor radio – which was better than your wife sitting on the passenger seat underneath a 20lb set of glowing valves. But it was Sony, in 1957, who manufactured the world’s first pocket-sized, mass-market transistor radio, the TR-63. These came in funky colours like green, yellow and red. They looked modern. (Radiograms were brown or cream and looked like your parents’ wardrobe.) Best of all, the Sony could fit in a pocket – well, depending on the size of your pocket. The story went that Sony reps had special shirts with an oversize breast pocket.

But, whatever the outfit, the device was cool and neat and contemporary. No cathode meant no glow and no heat-up time. No longer would it take a few minutes, after the familiar click of the Bakelite switch, for the BBC World Service to crackle out of our set. The TR-63 ran on a 9-volt battery and boasted six transistors. Take off the back and here’s the circuit board looking like a badly packed 1950s suitcase. This, though, is the beginning of the future – with the buzzwords we all know and love: instant, portable, personal.

By the early 1960s, transistors were replacing vacuum tubes at the cutting edge of technological development. Best of all they were small – and their property of smallness changed everything. The first transistor measured around a half-inch. They were placed on a printed circuit board. The integrated circuit – transistors etched, not onto germanium, but silicon – arrived at the end of the 1950s, and by 1971 Intel had squeezed an entire processor onto a single chip. And then transistors got smaller and smaller and smaller, like something out of the genie world. So small that your iPhone 12 has 11.8 billion of them.

I think that needs a pause. Six transistors on the 1957 Sony portable TR-63; 11.8 billion in your hand right now. But in between then and now, quite a bit has happened – including the moon landing.

In 1969 Apollo 11 landed on the moon. Michio Kaku, theoretical physicist and author, put it like this: “Today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon.” That doesn’t mean your phone can fly you to the moon – but it is a useful comparison when thinking about the exponential increase in computing capacity in such a small amount of time.

So, what are we doing with the 100,000 times faster processing speeds in our iPhones? Well, mainly, playing games. We’re smart but we’re still apes. Pass the banana.

Thinking of bananas, remember the banana-shaped phone in The Matrix movies? The movies that make it seem inevitable that our world is only a simulation? That banana was a Nokia 8110, once a world-leading mobile phone. But not a smartphone. The 1996 Nokia 9000 Communicator was the first mobile phone with internet connectivity – in a really limited way. Smartphones – digitally enabled devices that can do more than make a call – came into the world via IBM in 1994 with the Simon Personal Communicator. It was clunky, but alongside calls it could manage emails, and even faxes.

Almost thirty years earlier, in 1966, in her novel Rocannon’s World, sci-fi writer and general genius Ursula K Le Guin had devised the ansible – really a texting/email device that worked between worlds. One end was fixed, the other end was portable. We would be waiting a while for that to hit Planet Earth.

In 1999 Research In Motion released the first BlackBerry, with its signature Qwerty keyboard. Like an ansible, with its keyboard and screen, the BlackBerry was built around messaging; later models could do calls, but its main function was always email. We had to get into the 21st century for the Apple iPhone.

In 2007, when Apple was already making mega-money with its iPod, Steve Jobs was persuaded to “do” a phone that would handle everything the iPod did, plus make calls, send emails and texts, and access the internet. To do that Apple turned the humble phone into what Apple did best – computers. Safari-enabled, the iPhone wasn’t really a phone at all – it was a pocket computer.

A year later, in 2008 – the year of the global economic crash – Apple added the App Store, which is the beginning of what we think of as a truly smart phone: a phone that is globally connected, and that can be customised (personalised) by the user. It was a prescient move – a move driven by hackers and developers, who realised that what a phone is for isn’t making calls. Since the revolution in communication that is Facebook, a phone has become primarily a social-media device. Now we go on Instagram, Snapchat, WhatsApp, Twitter, YouTube, play games, check BuzzFeed, order food and cabs. We Google the internet, ask Siri, click on Spotify or Sonos, and sometimes, maybe, make a call.

When is a phone not a phone? Google’s soon-to-be-realised dream of ambient computing – really the internet of things, where all smart devices, from fridges to phones, are connected – includes, at a later date, connecting humans directly to its services, and to one another, via a nanochip implant in our brain. This will be the ultimate, and planned, end of staring at your phone – an activity that presently involves 97 per cent of Americans and 37 per cent of the world. The timeline of the smartphone 2007–20?? may be one of the shortest in the history of any world-changing invention.

In 1964, when Arthur C Clarke made his predictions of a future where “we can be in instant contact wherever we may be [with] our friends anywhere on earth, even if we don’t know their actual physical location,” he saw the exponential impact of the transistor, but he also understood that network communication depends on satellites. The first man-made satellite in space was Sputnik 1 in 1957. It looked like a steel beach ball with feelers. Today, there are thousands of satellites in space – mostly put there by nation states for scientific research. Others are for mutual co-operation, such as telecoms, and the global GPS system that tells you (and others) where you are. TV and phone signals depend on our satellite network; signals are sent upwards to a satellite, and instantly relayed back down to earth again. This avoids annoying signal-blockers, like mountains, and saves thousands of miles of land-routed cable network.

Elon Musk’s SpaceX programme, Starlink, controls more than 25 per cent of all satellites in space, and he is seeking permission to get 12,000 up there by 2025, and eventually 42,000. There are risks to all this, including light pollution and energy guzzling. As with so much of tech, most of us just don’t know what is going on, and by the time we find out it will be too late to regulate. Musk is aggressively anti-regulation.

And who owns space? Not Elon Musk. This is another kind of land-grab. Another kind of enclosure. Governments will have to regulate space – if they don’t, it’s already been stolen. The 1967 Outer Space Treaty declared space to be a common good of mankind. By 2015 the Commercial Space Launch Competitiveness Act had a different wording: “to engage in the commercial exploration and exploitation of space resources”. New technology. Old business model.

A satellite is crazily simple – as well as being enormously complex. Sputnik 1 really was the size of a beach ball. Like every satellite, it had antennae and a power source: the antennae send and receive information; the power source can be a battery or solar panels. On the journey from sci-fi to Wi-Fi – when a vision of the future becomes the phone in your pocket – it is transistors and satellites that join the dots. We think of the computer as the ultimate invention of the 20th century, yet without transistors and satellites your home computer would still be running on vacuum tubes, taking up the whole of your spare bedroom, and you’d be dialling up via your landline.

Are you old enough to remember scrabbling to connect via the telephone line and hearing the wheecracklebuzzbuzzbass of the slow-motion dial-up modem? Actually, it’s not that long ago. This is not a light-bulb moment. I live in the countryside, and even in 2009 I had no broadband. I was trying to conduct a love affair with a cool New Yorker living in London. She was fully connected. I was pretending to be.

Most mornings saw me propping my laptop on the bread board and running an extension to the solitary phone socket in the understairs cupboard. I made the mistake of leaving the extension in place and a week later the mice had chewed through everything. Mice love cable. I had no phone and no internet. Progress wasn’t on my side.

But what is Wi-Fi? What it’s not is wireless fidelity.

Wi-Fi started out as “IEEE 802.11b Direct Sequence”. It’s radio waves. Plain old radio waves with a geeky label. Nobody was going to buy into that except a Dalek. So, in 1999, the brand consulting firm Interbrand made a pun on hi-fi, which really is high-fidelity, and came up with the catchy name and icon we all know so well.

In that same millennial-moment year, when we were partying with Prince like it’s 1999, Apple launched the first Wi-Fi-enabled laptop. That is so recent. So near in time. Broadband internet was city-wide across the world by 2000. That felt like a true new beginning for a true new century. And look what happened next.

Google had started out as a small search engine in 1998. The telephone directory-style headline-only internet searches were boring and slow. Stanford students Sergey Brin and Larry Page thought they could do better – and by 2003 Google had become the default search for Yahoo. Google went public in 2004, the same year that Facebook joined the world – or the world joined Facebook. Those first 10 years of the new century were incredible: Wikipedia 2001, YouTube 2005, Twitter 2006, Instagram 2010.

Even old forms, like reading, caught the revolution as the iPad and Kindle kicked off mega-sales of ebook publishing. Those sales and those devices didn’t destroy the book though, any more than the car destroyed the bicycle. A physical book, like an apple or an egg, seems to me to be a perfect form. But a perfect form that is still evolving – like the bicycle. Not everything in this world is destined to be replaced by something else.

What about humans, though? Are we going to be replaced – or at least become less and less relevant – or are we evolving? In the next decade – 2020 onwards – the internet of things will start the forced evolution and gradual dissolution of Homo sapiens as we know it. But before we get to the internet of things and a world of connected devices – and some directly connected humans – let’s go back to the internet itself, to see how far we have come, and where we might be going.

Back in late-1960s America, soon after the Summer of Love, the Advanced Research Projects Agency Network (ARPANET) adopted a British packet-switching system to transmit limited data between research institutions. The more familiar term, INTERNET – really just internetworking – came into use in the 1970s, to describe a collection of networks linked by a common protocol.
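
If you like to see the machinery rather than take it on trust, here is a toy sketch of the packet-switching idea in a few lines of Python: chop a message into numbered pieces, let them travel separately and arrive in any order, then reassemble them at the far end. It is the principle only, not any real network protocol.

```python
# A toy sketch (not a real protocol) of packet switching: number the
# pieces, let them arrive in any order, reassemble by sequence number.
import random

def to_packets(message, size=8):
    # chop the message into (sequence_number, chunk) pairs
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # the receiver sorts by sequence number and glues the chunks back together
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("One message, chopped up and sent by many routes.")
random.shuffle(packets)     # packets may take different paths and arrive out of order
print(reassemble(packets))  # the original message comes out the other end
```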

It was Tim Berners-Lee, an Englishman working at the physics lab CERN, in Switzerland, who developed HTML. HTML (hypertext mark-up language) linked up an information system accessible from any node (computer) in the network. In 1990 the World Wide Web as we know it came into existence. Think of the internet as the hardware and the web as the software.
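
And here is a toy sketch of the other half of that division of labour. The scrap of HTML below is invented, as are its URLs; the snippet simply pulls out the hyperlinks, the threads that stitch documents on different machines into a navigable web.

```python
# A toy node in the web: a scrap of (invented) HTML and the hyperlinks
# inside it, which are what join documents on different machines together.
from html.parser import HTMLParser

page = """
<html><body>
  <h1>A node in the web</h1>
  <p>See also <a href="http://example.org/history">history</a>
  and <a href="http://example.org/satellites">satellites</a>.</p>
</body></html>
"""

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":   # an <a href="..."> hyperlink
            self.links += [value for name, value in attrs if name == "href"]

collector = LinkCollector()
collector.feed(page)
print(collector.links)   # the edges that connect this document to others
```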

By 2010, the web had become the way for billions of us across the world to access the internet. And, of course, we had Google as our search engine. The bigger the internet, the more sophisticated the search needs to be. The question now, though, is, are we being nudged as we search? Do we really want advertising in our face whenever we type in a word? Do we want our data tracked and traced and repackaged? Do we want to be profiled by an algorithm? Why can’t I buy something online without clicking ACCEPT on their privacy policy – which really means that what I’ve just bought is not private at all?

Personalising the web is where the money is. Your web – where everything is tailored to “help” you navigate faster, get to what you want, often via what you might be persuaded to want – is the new consumer model where the customer pays twice: with cash for the goods, and with the free gift of information about ourselves.

That information is valuable. Even when we aren’t buying stuff, when we are browsing around or using social media, we are being strip-mined for our data. Ads aren’t just selling you any old stuff – they are trying to sell you stuff your cookie trail tells them you might be persuaded to buy. More worrying than the selling, though, is that your newsfeed is algorithmically tailored to what you “want” to hear about. Our clicks and likes determine the so-called Editors’ Picks, making sure that the little we know – and all our personal bias – will be looped back to us again and again, ensuring more clicks and likes in the echo chamber of “choice”.
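
A deliberately crude sketch of that loop, with invented topics and numbers: the feed recommends whatever you have clicked most, you click what you are shown, and the loop narrows. No real platform is this simple, but the dynamic is the point.

```python
# A crude model of the feedback loop: recommend what was clicked most,
# click what is recommended, repeat. Topics and numbers are invented.
from collections import Counter

topics = ["politics", "cats", "science", "sport"]
clicks = Counter({"cats": 1})        # one stray click to begin with

def todays_feed(clicks, topics, n=3):
    # rank topics by how often they have been clicked before
    return sorted(topics, key=lambda t: -clicks[t])[:n]

for day in range(5):
    feed = todays_feed(clicks, topics)
    clicks[feed[0]] += 1             # the top recommendation gets the click
    print(f"day {day}: {feed}")
# within a few days the same topic sits at the top, every day
```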

Access to different ideas and a wider world view just disappears. It’s censored. Not by a censor, of course, because that would be totalitarian – but by what looks like personal choice, your very own personal choices, nudged a little, just for you.

Most of life is about being wrong, making mistakes, changing our minds. Web profiling means you need never be wrong, never seem to make a mistake, never have to change your mind. You’ll be sold what you have already bought. You will read what you have already read. Amplified.

This will get more interesting/worrying as Siri and Alexa grow up – or if Google figures out how to develop a genuine personal assistant for each of us. Siri and Alexa are fun, but all they really do is connect with your existing systems – open Amazon Storytime, play music from your Sonos library, reorder your cat food, turn on the Nest thermostat controls, and search the web multiple times faster than you can. An AI PA will be a mini-me – a neural network (a series of algorithms trained to recognise patterns) that learns my wants and needs, my favourite foods, my travel preferences, restaurants, calls, bills I forget to pay, birthdays I forget to remember, and all my digital photographs, my texts, my emails. My self, wherever I have hidden it.
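
For the curious, here is about the smallest possible version of “an algorithm trained to recognise patterns”: a single artificial neuron learning the logical AND of two inputs. Real neural networks stack millions of these; this toy shows only the principle of nudging weights until the outputs match the examples.

```python
# The smallest "pattern recogniser": one artificial neuron learning
# the logical AND of two inputs by nudging its weights. Illustrative only.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0          # start off knowing nothing

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

for _ in range(10):                   # a few passes over the examples
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)
        w1 += 0.1 * error * x1        # nudge each weight in whatever
        w2 += 0.1 * error * x2        # direction reduces the error
        bias += 0.1 * error

print([predict(x1, x2) for (x1, x2), _ in examples])   # -> [0, 0, 0, 1]
```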

And what about my politics? My dirty secrets? Even my thoughts? Here’s Larry Page on how Google will go in the near future:

Our ultimate ambition is to transform the overall Google experience, making it beautifully simple. Almost automagical, because we know what you want and can deliver it instantly.

And while what you want is being known and delivered, all of that will be tracked. Cookies, remember, are little pieces of tracking data that websites store on your computer. When you, personally, are no longer doing the searching, because your personal PA is doing it for you, there will be no effective privacy preference.
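
Concretely, a cookie is just a named value that a website asks your browser to keep and hand back on every later visit. The identifier below is invented, but the Set-Cookie and Cookie headers it stands in for are the real mechanism.

```python
# A cookie is a named value a site asks your browser to keep and send back.
# The identifier below is invented; the Set-Cookie/Cookie mechanism is real.
from http.cookies import SimpleCookie

jar = SimpleCookie()
# what a server's Set-Cookie header might carry (hypothetical values)
jar.load("visitor_id=a1b2c3d4; Path=/; Max-Age=31536000")

# on every later visit the browser volunteers it back, so the site can
# recognise you and stitch your visits into a profile
for name, morsel in jar.items():
    print(f"Cookie sent back: {name}={morsel.value}")
```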

But in fact there really isn’t one now. Even innocent-seeming apps like the weather and ride-sharing are infested with tracking code. A mini-me PA will be a seductive choice. Why wouldn’t we want an able, considerate, smart helper who is always available, and mostly free? That used to be called a wife. But then feminism spoiled the party.

She or he could, of course, in time, be a double agent. In a Blade Runner world, I could be turned in to the authorities by my own virtual mini-me. And she’ll know where the money is. Where the bodies are. Who my friends are, and how to find them.

Can I run away? In a cashless world I will be using my phone to pay for everything at first – and then iris recognition, or fingerprints, or chip implant will do away with the need for external devices. I will be my own device. No need for a wallet, or a phone, or a set of keys, or an office-swipe. I will be free. And followed everywhere. Or, rather, there will be no need to follow me because my location will be obvious.

Sci-fi. Usually a dystopia for the hero and friends, and a utopia to those who benefit from or accept the situation. I suspect that the future will not be so binary. What’s for sure is that “privacy is an anachronism” (thank you, Mark Zuckerberg). Like every other system in the coming world, I too will always be on, always be known, always be available, even while I sleep, dream, or think about things. Soon the interface between me and mini-me – my Google self and myself – will be redundant. We will merge.

Think about a world where there are no private thoughts, no private actions. The internet of things will allow any object to act as a computer. Your fridge will tally the food you buy and eat. If you sign up to a dieting app, the fridge will “help” you by directly ordering the food you should be eating. The fridge will also self-lock if you break the rules. Desperate folks trying to hack their own fridge is a new torture coming your way soon. Smart beds will be able to monitor – and assist – a good night’s sleep, warming or cooling the bed, managing light flow, and reporting your state to your automated doctor; you might need medication. You might not be fit to drive today. Did you have sex? No? Is your relationship healthy? Perhaps you need a counsellor, or Viagra. In the smart kitchen, the toaster will remind you that today is a no-carbs day. The smart toilet will assess the contents of your initial evacuation. (I am not making this up.)

Leaving aside the monetisation of every breath you take, there will be advantages to being seamlessly connected. Chore-work and bore-work can be taken care of – who wants to go to a supermarket, report the faulty boiler, wait in for the plumber, or manage their health, when check-ups with the GP can be made by your monitoring implants, and smart homes will run themselves, including taking receipt of goods, and letting in the plumber, whose movements will be visible on your phone and whose access will be strictly timed? When you are gaining so much, does it matter if there are no secrets, and perhaps no self, anymore?

The new reality will not be sold to us as surveillance, with its totalitarian overtones. The future will be sold to us as empowerment. Elon Musk’s Neuralink company is working on brain-computer interfaces – threads that will allow someone to control a computer via their thoughts. Human trials were slated to begin as early as 2020. The current aim of the tech is to help people with paralysis, a laudable aim. Musk’s eventual aim, though, seems to be symbiosis with AI, so that humans don’t get left behind in the intelligence game.

Modern medicine has already reset human biology. We live twice as long as our ancestors at the start of the Industrial Revolution. The rich, who have access to the best of everything, are doing very well. Naturally, they want to do better, which is why Silicon Valley is investing in research that will stall, or reverse, physical and cognitive decline.

Cognitive decline for humans is as real as muscle loss and organ failure. In the end, our biology beats us. AI systems, embodied or not, suffer no such losses and no such decline. AI systems can augment, version-up, get smarter. If humans are, say, HomoSapiens3, the post-Industrial Revolution version, we shall have to get to HS4 pretty soon if we want to stay in the game. Merging with the AI we are developing is a logical outcome. Outcomes, though, often resist prediction.

What can be predicted is personalisation. As we saw, personalisation began with that first transistor radio back in the 1960s: at last, a small portable device just for you – no need to sit round the family radio. Go your own way.

Personalisation was enthusiastically adopted by the laptop and smartphone industry. Simultaneous with our fully public-data-harvested-known selves is the personalisation of that self. It will be “your” smart implant. “Your” smart car/house/lifestyle/insurance/portfolio/personal shopper/fitness guru/therapist/PA. Tailored to you, your tracker-helpers will change and develop seamlessly (frictionlessly) as you do.

That’s a clever marketing move. Personalisation isn’t just about the product anymore. It’s the concept. Personalisation is being offered in place of the old-fashioned, outmoded idea of privacy.

But why is privacy problematic in our internet-of-everything future? Privacy is friction. In economics-speak, friction is the opposite of flow. Friction is whatever impedes the data flowing from you and all that you do, to the interested parties who want to make money out of you, and/or control/nudge your behaviour. It is as simple as that.

Or am I just an analogue human who likes the idea of being off-grid sometimes? At present that is still possible if you leave your phone at home, walk wherever you want to go, pay cash where there is no CCTV (both increasingly difficult moves, I admit), don’t browse the internet for a few days. Soon, though, as smart devices and smart implants become normal, you won’t be logging in. You’re in. You’re on. For life. From sci-fi to Wi-Fi to my-wi.

Former CEO of Google Eric Schmidt, sitting next to Facebook COO Sheryl Sandberg at Davos in 2015, put it like this:

The internet will disappear. There will be so many IP addresses, there will be so many devices, sensors, things you are wearing, things you are interacting with, that you won’t even sense it. It will be part of your presence all the time.

For young people – digital natives who have grown up with a phone and Facebook, whose every move is on Insta, and who want to be influencers themselves – questions about protection, privacy, content-screening and platform responsibility are being asked by an NGO called 5Rights. 5Rights was founded in the UK in 2015 by filmmaker Beeban Kidron. When I asked her about 5Rights, she pointed out that over a billion underage young people are online every day, for hours every day, and treated by the platforms they use as if they are adults.

Content is easy to access, hard to monitor. Online grooming is a particular threat. For example, during the 2020 Covid lockdown, in the UK (just the UK here, folks) around 9 million attempts to view child-abuse images were blocked in one month alone.

Kids are tech savvy but tech vulnerable. 5Rights wants to see kids protected in the digital world just as they are in the physical world. In the physical world we do make a distinction between children and adults – a distinction hard-won during the Industrial Revolution. We don’t want our kids working in sweatshops, but we seem unconcerned about exploitation via their phones. That includes addictive gaming and porn habits, as well as the suicidal misery of “likes”.

Data collection that starts early in life amounts to a conquest of that life. And, as we have seen in the stand-off between China and Hong Kong, forced data removal from popular sharing sites like TikTok can be used to persecute or prosecute young people, to monitor their behaviour, and no doubt to influence their political “choices” later. In China’s case the data-snatch is clearly political. That’s not the point though. China is doing in an obvious way what is being done quietly and covertly in the “free” Western world every day. Our data is not anonymous. Who has a “right” to this data? To sell it? To snatch it? To package it?

What happens, though, when there is less, or even no, division between the online worlds and the physical world? When Eric Schmidt’s prediction of the end of the internet and the start of the internet of everything happens? How do we protect anyone when the internet is always on and we are always on it? I loved that great story of the online-addict teenager whose mother confiscated all her devices and turned off the Wi-Fi. The kid realised she could send tweets from the home’s smart fridge. The whole thing may well have been a hoax, but Reddit offers full instructions on how to tweet from a Samsung fridge, if you have one. The point of this story is that the goal of our digital masters in Silicon Valley is that there will be no offline. No need to hack your fridge. And yet…

It may be that all of this – privacy, data usage – is a temporary problem. At present we imagine human interests and human actors as dominant in all our scenarios. If, though, AI does become superintelligent – a player and not just a tool – then the future for humans may be irrelevant. I mean, how much data will AI need on a species being consigned to the Museum of History?

When I talk to people about the future, many believe that the world’s head-in-sand attitude to climate catastrophe will make that other kind of sand – silicon – irrelevant, in or out of a valley. We’ll be fighting for food, not tweeting from our fridges. Others believe that developing super-intelligent AI, as swiftly as we can, is our best chance at survival. Until 2020, none of us was thinking about viruses as the wipe-out call. Now we are.

Ironically, although the world may become much poorer because of Covid-19, the virus is a chance for the tech giants to get much richer, and to get more control. And not just Amazon doing home delivery. Eric Schmidt has been talking enthusiastically about homeschooling for all (except the rich, you can be sure), using the platforms that have begun to replace contact during the pandemic. If we are at home, we will need to be connected in new ways. That’s an opportunity for the connectors. Virtual-reality avatar sessions are being trialled by Facebook. I am sure they will become as real for us as Zoom is now.

More worryingly, track and trace everywhere, including even a visit to the pub, is going to allow levels of surveillance that civil-liberties groups would have taken years over, arguing about privacy and usage. That’s all gone now. Being watched equals being safe.

But what about energy constraints? AI is an energy guzzler, and even if we extracted all the fossil fuel left on the planet it wouldn’t be enough for the kind of super-future envisioned by Ray Kurzweil or Elon Musk. That’s why the premise of The Matrix is that humans are just battery packs in an AI simulation.

Optimists say that the energy requirements of an AI future will force the world towards low-carbon solutions. The market will drive the change because it must.

But there are other constraints on an AI future too. Intel co-founder Gordon Moore came up with his own law: Moore’s law observes that roughly every two years the number of transistors that can be fitted onto a microchip doubles. Fifty years after that first Intel chip, the computing power that once filled a whole building with hardware now fits into your handbag. And it uses much less power. That’s progress.
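
The arithmetic is easy to check on the back of an envelope, or in a few lines of code. Take the roughly 2,300 transistors of Intel’s first chip, the 4004 of 1971, as a starting point and double every two years: by 2021 you reach tens of billions, the right order of magnitude for the chip in a phone today. Round numbers, not precise counts.

```python
# Back-of-the-envelope Moore's law: start from the roughly 2,300
# transistors of Intel's 4004 (1971) and double every two years.
transistors, year = 2300, 1971
while year < 2021:
    transistors *= 2
    year += 2

# tens of billions by 2021: the right order of magnitude for a modern phone chip
print(f"{year}: ~{transistors:,} transistors")
```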

There is a limit to progress though – and unless we switch our systems we are pretty much at the limit. Put simply, there’s no more room, at the small scale of the laptop or phone, to keep doubling the number of transistors. No matter how tiny, they still take up (some) physical space.

The next jump to faster processing speeds and more commands will be quantum computing. There’s a rumour that China has made the breakthrough already. If they have, they aren’t telling anybody. Google and IBM are both claiming to be nano-close.

In a digital computer, transistors work as switches on the familiar zero-and-one principle. “Bits” of information each hold a 1 or a 0. A quantum “bit”, or qubit, is different. Very different. By harnessing subatomic weirdness, a qubit can be a 0 and a 1 at the same time.

Numbers-wise, 8 bits make a byte. Your smartphone memory could have two gigabytes – that’s 2 x 8 billion bits – but a few dozen qubits is way, way beyond that. According to Dario Gil, director of IBM’s research unit in Yorktown Heights, NY:

Imagine you had 100 perfect qubits. You would need to devote every atom of Planet Earth to store bits to describe the state of that quantum computer. By the time you had 280 perfect qubits you would need every atom in the universe to store all the ones and zeros.
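
The arithmetic behind that claim is the brutal growth of two-to-the-power-n: n ordinary bits are, at any moment, in exactly one of 2^n states, but writing down the general state of n qubits takes 2^n numbers, the amplitudes. The atom count below is the usual ballpark for the observable universe.

```python
# n bits are in one of 2**n states; describing n qubits takes 2**n numbers.
for n in (8, 100, 280):
    print(f"{n} (qu)bits -> 2**{n} = {2 ** n:.3e}")

atoms_in_observable_universe = 1e80              # the usual ballpark figure
print(2 ** 280 > atoms_in_observable_universe)   # True: more numbers than atoms
```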

At present, IBM Q System One lives like a reclusive rock star inside a 9-foot cube of black glass only accessible through 700lb doors a half-inch thick. Quantum computers must be absolutely remote from any entanglement with reality – entanglement affects the outcome, as anyone who has ever fallen in love knows. So, we are building a god so remote that it must live in an inaccessible temple visited only by high priests in special clothes. The high priests can ask the questions and interpret the answers. Quantum computing may be the future, but that story is like a Pharaoh-dream from the past.

Where is all this heading? What’s for sure is that fewer and fewer people will know how the systems that control us actually work. We’re not talking here about fixing the washing machine. While there is no consensus on the future, there is consensus that total connectivity will happen – to the internet, to our devices, to our machines, to each other. When you customise your connectivity it will feel like yours. Actually, it will feel like you. And you will feel as if you have chosen it. An avatar Sinatra singing “I did it my-wi”. And a whole new questioning of what is “you” comes bubbling up.

My-wi is religious in its own way. Mark Zuckerberg has talked about Facebook as a “global church”, connecting people to something bigger than themselves. It may be bigger – it may be smaller – but it will be connected. Here’s George Orwell in his novel Nineteen Eighty-Four:

You had to live – did live, from habit that became instinct – in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinised.

The success story of Homo sapiens has been one of infinite adaptability. Adapting ourselves to the Machine Age was an unprecedented break with our evolutionary past. We mourn the price to Planet Earth, but few would wish to return to a pre-1800 world. We dislike the Intrusion of Everything, but who would want a world without smartphones and Google? Perhaps, though, we would prefer a world that may be less democratic, but that could also be less stressful for the next stage of our development.

My-wi might leave us as little children are: cared for, fed, safe, watched over, with plenty of fun stuff and free stuff, and with someone else deciding the big stuff. No reason to believe that who decides will always be in human form.

Extracted from 12 Bytes: How We Got Here. Where We Might Go Next, by Jeanette Winterson (Jonathan Cape, £16.99), out on July 29. To order for £14.99, call 0844 871 1514 or see books.telegraph.co.uk
