Digital v Human

I thought I’d share the preface to the new book, Digital v Human. I’ll share some more in a couple of months when things are more polished.

Preface: Taming the Future

The future casts a long shadow, in my case all the way back to Australia in 2006 when I was asked to write a book called Future Files about where I thought the world was heading over the next 50 years. But the future was always an excuse, used more as a satirical mirror than a distorting crystal ball. What I was interested in then, and remain interested in today, is people and how they respond to new ideas and events. Also, how we relate and respond to each other, which is what this book is about. It is people’s lives, their deepest dreams, what they believe in and what they are afraid of, that captivate me, not the latest ephemeral gadget, although these things can, and do, influence each other.

Future Files must have hit a nerve, because the book ended up being published in 16 languages. Part of the reason was timing – there hadn’t been a book about the distant future for a long time. But I was also lucky. I wrote then that debt levels were unsustainable and that a systemic shock to the financial system was inevitable. This was: “not a debt mountain, but an avalanche waiting to descend…. big banks, in particular, will come under increasing scrutiny about their lending practices, and there will be calls for salary and profit caps.”

Nothing like a page of prophecy to sell some books, although I’m still waiting patiently for the European Union to “Splinter and ultimately collapse” and for the day when “Women with facial lines will be highly desirable.” It seems that I made the mistake of thinking we’d tire of pixelated perfection, but evidently not. We haven’t grown tired of debt either, in which case I suspect that history will very soon repeat itself in the form of another spectacular financial crash.

But the main reason the book sold well was due to an emerging epidemic of anxiety and insecurity. The world was changing and readers were seeking a narrative that explained where things were going. The book provided a comforting cloak of reassurance to those grieving the loss of an imagined future.

The distant future had once been hopeful and at times rather fun. It was a preview of coming attractions. But by late 2007 people had given up hope of seeing flying cars or owning personal jetpacks. All people wanted to know was whether everything would turn out all right. Would there be a comforting resolution after the explosive opening sequence? Would computer-generated effects continue to enthral or would the computer move from all-conquering hero to sinister villain lurking behind our flickering screens?

This dystopian discomfort was likely linked to a feeling that events had got out of control. Things were unfolding too fast for people to comprehend. Gone were the days when you could start a broken-down car by yourself or understand how a camera worked. Even by 2007, it wasn’t just financially engineered credit default swaps or ‘additionality’ linked to carbon credits that were incomprehensible – you almost needed a degree in complex systems theory simply to switch on a domestic washing machine. Seriously, do we really need 40+ washing choices, including the incomprehensible option to wash your clothes later?

Complexity, synonymous in engineering terms with instability, had become a hallmark of the early 21st Century and the world’s axis had shifted toward the outskirts of normal. This was unsettling, especially to anyone brought up in an analogue, western-centric world where globalization had meant Americanization and cheap washing machines.

There have always been generational overlays to future fatigue. As the writer Douglas Adams observed: “Anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilization as we know it, until it’s been around for about ten years when it gradually turns out to be alright really.”

Despite this, sometime after the Millennium (probably after the explosive events of 9/11 or, possibly, after the premature death of Douglas Adams) the future became obscured. The dream that we once called the future soured and its shadow became awkward and indistinct. This wasn’t true for everyone. How one imagines and responds to the future has always depended upon who you are and where you are. The future is a mental construct projected from things that have been experienced.

In large parts of Asia and Africa rapidly rising incomes and opportunities mean that optimism is in the ascendant; while across swathes of the US and Europe declining real incomes mean that it’s doom and gloom that’s projected forwards. Nevertheless, by 2008 the US financial crisis that had started with people borrowing too much money had become a global problem and unfolding events had created a vortex into which many age-old certainties were sucked.

If we had been able to remember the past and not over-react to the present we might have been all right. If the crisis had occurred much earlier, ignorance may have remained bliss. There was once less information and both people and money were less connected, which meant fewer systemic risks.

A study conducted by Angela Dimoka, director of the Center for Neural Decision Making at Temple University in the US, found that as information is increased, so too is activity in the dorsolateral prefrontal cortex, a region of the brain associated with decision-making and the control of emotions. But if information flow keeps increasing, activity in this region suddenly falls off. The reason for this is that part of our brain has essentially left the building. When incoming information reaches a tipping point the brain protects itself by shutting down certain functions. Key implications include a tendency for anxiety and stress levels to soar and for people to abstain from making important decisions.

Fast-forward a few years and some e-vangelists started looking at the world through Google Glass and other augmented reality devices. But others, a majority perhaps, put on rose-tinted spectacles and framed their gaze backwards. On the fringes, some squinted scornfully and aspired towards self-loathing. Others suggested that the very idea of human progress had become impoverished. Maybe they had a point, but there was no redemptive framework in clear sight.

What this amounts to is a clash between those racing towards the future and others fleeing from it. A similar tension between faith and scepticism plays out between Islamic fundamentalism and liberal agnosticism. Some fundamentalists would like to reinstate a 7th Century legal framework, while some online libertarians would like to escape legal constraints altogether.

Western self-loathing remains an especially odd development. On most measures that matter – life expectancy, infant mortality, literacy, extreme poverty, starvation, the number of women in education and employment – life has never been better for most of the planet. If you doubt this you have clearly not been paying attention.

But despite the good news about the expansion of the global middle class, the electrification of Africa or survival rates for cancer, we focus instead on doomsday forecasts about rogue asteroids, global pandemics and employment-eating robots.

These, along with climate change, obesity, resource depletion, biodiversity loss and pollution, are serious issues, but I’d suggest that they are focal points for deeper anxieties and are unlikely to be terminal for the human race. So why are we feeling so miserable when there’s so little to feel miserable about?

Prior to 9/11 (or the fall of the Berlin Wall in 1989 or the financial crisis of 2008…take your pick) people believed that they had a clear view of what lay ahead. With 20/20 hindsight such views were delusional. But the detail was irrelevant. At least people had a sense of direction, from which they could construct a narrative to make sense of things. For many people life was hard, but they knew where they stood, which is why countries like Russia long to go backwards and reconstruct previous certainties along with territorial borders.

Today many people feel that the future has evaporated, or else that they are being held hostage by some unknowable and uncontrollable force. But this is nonsense. Firstly, certain elements of the future are pre-determined. Demographics retain a high degree of certainty, while geography and geology impose a number of constraints. Parts of the future can therefore be found on the flood plains and tributaries of history.

Secondly, the collective psychology of nations, influenced again by the past, can suggest a sense of direction. Thirdly, there’s technology. It’s true that technology is neutral, but only if you take humans out of the equation. Once technology reaches human hands it creates change.

More specifically, it is at the nexus of human history, human nature and what many regard as increasingly inhuman technologies that I’d expect the largest tensions to bubble up over the years ahead, especially as we struggle to adapt our slowly evolving monkey brains to the rapidly changing technological landscape.

These thoughts were on my radar in 2006 when I wrote that: “to a large degree, the history of the next 50 years will be about the relationship between technology and people,” but I believe that I underplayed the significance of this statement.

This is odd, because it is a point well made by Alvin and Heidi Toffler in Future Shock, published in 1970, of which I have a well-thumbed copy. Their book argued that the perception of too much change over too short a period of time would create psychological problems and mental instability at both an individual and societal level.

You might argue that they were wrong (it didn’t happen) or that they were right, but that they got their timing very wrong (futurists often use the line “give it time” in relation to suspect forecasts). One might also speculate about who gets listened to and what gets believed and why, but this isn’t the time or place.

Personally, I think the Tofflers anticipated something of significance and if my book has a dramatic chase scene this would be it – our desire for change and renewal crashing up against our need for permanence and stability. Will we be forced to adapt to new technologies and global norms or will we insist that our new technologies adapt to us, deleting, controlling or escaping from them as necessary?

How, for instance, should technology serve humanity and what, ultimately, is its purpose? Should all forms of automation and artificial intelligence be made to exist within an agreed moral framework and where, if anywhere, should the line be drawn in relation to what humans and machines are permitted to do?

Should humans and machines be allowed to merge, creating augmented, partially synthetic or cyber-human hybrids and, if so, where would this leave any remaining natural, pure or organic humans?

Whatever happens we should never lose faith, because the future is always wide open. The future is shaped by the choices that we make and these choices can always be challenged and changed, even at the last minute.

In one sense the problem we currently face is not technology but humans, though more about us later. One thing we should certainly do is worry far less about what might happen over the coming decades and focus far more upon what it is that we, as individuals and institutions, want to happen. And it is not necessarily logic, but rather our deepest hopes and our darkest fears, that will help shape this.

The aim of this book is not prediction, but illustration. It is a critique of how we live now and a discussion about how we might live next. It is about who we are and where we could be going and about the need for human beings to remain central to any new digital interests or perspectives. Hopefully, the shadow cast by the future will henceforth be our own and will provide a small degree of comfort rather than bewilderment.

Richard Watson, somewhere in England, May 2015

6 thoughts on “Digital v Human”

  1. I’m not sure if you’ve read it, but you really should read “Dennis M. Bushnell, Future Strategic Issues/Future Warfare [Circa 2025]”; he goes over the coming trends in technology and how they may play out. Bushnell was chief scientist at NASA Langley Research Center, where he was responsible for technical oversight and advanced program formulation. His report is not some wild-eyed fanaticism; it’s based on reasonable trends. Link.

    https://archive.org/details/FutureStrategicIssuesFutureWarfareCirca2025

    Page 19 shows the capability of the human brain and a timeline for human-level computation.
    Page 70 gives the computing-power trend: around 2025 we get human-level computation for $1,000. 2025 is bad enough, but notice it says “…By 2030, PC has collective computing power of a town full of human minds…”. This is coming very fast and is the biggest challenge that humans have ever faced.

    The only way that this can have no meaning is if computers go crazy with human or higher-than-human-level computation. This idea comes from Larry Niven, Pournelle and other great sci-fi writers in the grand space-opera tradition. I just don’t believe it. Ever since this computing trend was established, sci-fi has had a hard time dealing with it. Iain M. Banks has a great series, the Culture series, where the computers become partners with us, but we have no assurance that this will be the case.

    The real problem is not that the machine is smart, it’s that it has no empathy for humans. How does empathy work? I don’t think anyone knows. It’s hard enough to program intelligence. We know people with no empathy cause all sorts of problems. What about psychopathic super-machines? It’s frightening.

    I see three futures that are coming very fast. They will probably be settled within 15 to 20 years. One is where the robots do all the manual labor for us: growing our food, doing all the dirty work, while we take it easy with plenty for all. Along with this could be nuclear power, especially molten salt reactors, which could easily provide all the needed energy for the whole planet at very high rates. With enough energy, city farming could leave lots of room for nature. It might be too boring for some, but it’s better than number two.
    Number two: the wealthy and the psychopaths control the robots and kill off everyone not “in the club”. I have no doubt, although it would take several pages of links to begin to prove it, that psychopaths in very large numbers control many positions in our government and corporations. Even worse, being psychopaths, after killing all the normal people they would quickly move on to killing each other off. There would be very few or no people left. Maybe even nothing but robots, like in the Berserker series of science fiction, that have no function but to end life.
    The third is not much better. We get the AIs, but something goes wrong. Say a corn-growing AI in Iowa notices all the areas where it’s not growing corn. Somehow it comes to the conclusion that the humans are screwing up its ability to maximize corn production, makes a virus to kill all humans and proceeds to cover the Earth with corn. After doing so, the AI is deliriously happy.

    The reason people fear the future is that the present is already really scary. Look at all the bugs and viruses in a simple personal computer and think about the odds that something will be detrimental in an AI’s programming.

  2. Reading this again, I think that your key point is that such machines have no empathy. To which I might be tempted to add (well, I have added it in the book!) that many of the people designing these machines are ‘on the spectrum’ and have no empathy either…

  3. A highly readable and thought-provoking intro. If Future Files is any precedent, I look forward to a provocative read when the full book is published.

  4. Empathy appears to be a handicap in the digital future.
    Perhaps empathy will be a lost art in the future; I do notice business is being run by the non-empathetic.

    I for one am nostalgic for the days when empathy existed.
