Not 100% true. It’s finished in the sense that I hit the required word count, very mechanical, but it still needs a very big polish. The image above is London, which I’m finding far more conducive to writing. Weirdly, fewer distractions. Or maybe it’s the right kind of distractions. I can go into the fray or withdraw from it. In the country it’s almost too quiet and the famous flying dog* can drive me nuts. I’m seriously thinking of taking a week off and going to Greece on my own to do the final edit.
* Maybe I didn’t mention this? He jumped out of an upstairs window a few weeks back. Maybe that’s another book? The Day the Dog Jumped Out of the Window.
It’s done. Oliver and I have sent it off for an edit. If you are interested here’s the intro.
People have always been curious about what lies over the horizon or around the next corner. Books that speculate about the shape of things to come, especially those making precise or easily understandable predictions, have been especially popular over the years. Interest has not diminished of late. Indeed, the number of books seeking to uncover or explain the future has exploded. The reason for this, which ironically no futurist appears to have foreseen, is that rapid technological change has combined with world-changing events to create a future that is characterised by uncertainty and thus anxiety. The world offers more promise than ever before, but there are also more threats to our continued existence.
During the preparation of this book we have seen the sudden collapse of Egypt’s Mubarak regime and the domino effect it has had on the Middle East; the emergent recession in parts of the United Kingdom; the economic plight of Italy, Greece and Spain; the medieval atrocities being perpetrated in Syria; the continuing demise of Barack Obama and the introduction of the iPad, which is selling over one million units every month – not to mention 9/11-style attacks on confidential government and commercial data and John Galliano being caught on a phonecam making racist remarks in Paris.
In short, the future is not what it used to be and needs rescuing. There is now a high degree of volatility in everything from politics and financial markets to food prices, sport and weather, and this is creating ubiquitous unease – especially among generations that grew up in eras that were characterised, with 20/20 hindsight, by relative stability and simplicity. A world more like Downton Abbey than Cowboys & Aliens!
Thus the interest in books that explain what is going on right now, where things are likely to go next and what we should do about it. But there’s a problem with all these books about the future. Indeed, there’s a fatal flaw with almost all of our thinking about what will happen next. Regardless of our deep desires, a singular future doesn’t exist and there is no heavenly salvation in sight. The present, let alone the future, is highly uncertain and we are even starting to question what happened in the past. At a recent futures summit in Provence, Grigori Yavlinsky, the ex-presidential candidate, admitted to us that the most uncertain thing about Russia was its past. Logically, if the future is uncertain there must be more than one future.
There are, of course, different ways in which the future might unfold, and suggesting, as many futurists and technologists do, that one particular future is inevitable is not only inaccurate but dangerously misleading. What is worse is when we are asked to assign a probability to this future emerging rather than that one. Linear analysis and the extrapolation of current events is a very straight road, and it leaves us exposed to unforeseen shocks coming at us from all sides.
As the historian Niall Ferguson has observed: “It is an axiom among those who study science fiction and other literature concerned with the future that those who write it are, consciously or unconsciously, reflecting on the present.” Or as we like to say – all futures are contemporary futures in the same way that all prediction is based upon past experience. This is one reason why so many predictions about the future go so horribly and hilariously wrong.
For example, in 1894, an article in The Times newspaper suggested that every street in London would eventually be buried under nine feet of horse manure. Why would this be so? Because London was rapidly expanding and so too was the amount of horse-drawn transport. Londoners would, it seemed at the time, soon be up horse manure creek without a paddle.
What the writer didn’t foresee, of course, was that at exactly this time the horseless carriage was being developed in Germany by Daimler and Benz and their new invention would change everything. But four years later, in 1898, Karl Benz made exactly the same mistake by extrapolating from the present. He predicted that the global demand for automobiles would not surpass one million. Why? Because of a lack of chauffeurs! The automobile had been invented but the idea of driving one oneself had not. Thus it was inevitable, he thought, that the world would eventually run out of highly skilled chauffeurs and the development of the automobile would come to the end of the road.
The preoccupation with trends analysis is doubly misleading. Not only must trends be lined up with ‘discontinuities’, counter-trends, anomalies and wild cards, which have a nasty habit of jumping into view from left field, they are also retrospective and not ‘futuristic’ at all. A trend is an unfolding event or disposition, which we trace back to its initiation, and trends tell us nothing about the direction or velocity of future events. It is true that occasionally an idea or event occurs that is so significant that history is divided into periods of before and after. The steam engine, the automobile, the microprocessor, the mobile phone, the world wide web, the collapse of the Berlin Wall, 9/11, Google, Facebook and Amazon are all, arguably, examples.
But even here there is confusion. We all have a particular lens through which we see the world and no two individuals ever experience the present in the same way. Moreover, our memory can play tricks. As a result, there is always more than one reality or worldview as we like to call it.
Equally, it is not a binary world. It is a systemic one in which influences making for change interact with each other in complex and surprising ways. It is also a world where it is rare for a new idea to totally extinguish an old idea, especially one that has been in common usage for a very long time. For instance, despite the facility with which mobile technology can deliver ‘media content’, there’s still something reassuring about the daily newspaper dropping with a thud on to the hall mat as we begin another day.
Moreover, while means of delivery, business models, materials, competitors, profit margins and even companies may change radically the deep human needs (e.g. the desire to tell or listen to stories) often remain relatively constant.
Change happens rapidly, but in most instances it takes decades, often generations, for something new to result in a linked extinction event. The slow pace of fast change is observable; we all witness it. Consider the destroyer at the Maritime Museum in Sydney’s Darling Harbour – built in 1943 – which still has a fax machine in its ops room.
More often than not different individuals and institutions will experience present and future in slightly different ways depending on where they live, what they do and how they have grown up (i.e. more than one reality again). There is more than one present let alone more than one future.
This is a good thing, as is the level of uncertainty that surrounds the future. Indeed, in many respects this is one of the most interesting times ever to be alive, because almost everything that we think we know, or take for granted, is capable of being challenged or changed, often at a fundamental level. Even human nature, if Joel Garreau, the author of Radical Evolution, is to be believed.
It is the view of the authors of this book that the only rigorous way that one can deal with a future that is so uneven and disjointed is to create a framework that reveals a set of alternative futures covering a number of different possibilities.
This technique, called scenario planning or more properly scenario thinking, originated as a form of war-gaming or battle planning in military circles and was then picked up by, amongst others, Royal Dutch Shell, the oil company, as a way of dealing with ambiguity and uncertainty. In Shell’s case, scenarios correctly anticipated both the 1973 oil crisis, which hiked prices dramatically, and the corresponding price falls almost a decade later.
Other instances where scenarios have foreseen what few others could include Adam Kahane’s Mont Fleur Scenarios in South Africa in 1992, which foresaw a peaceful transition to democratic rule, and two sets of scenarios created by the authors of this book – for a major bank in 2005 and on the future of the teaching profession in 2006/7 – both of which identified futures around the global financial collapse that occurred in 2008/9.
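The core mechanic behind this kind of scenario thinking is often a 2x2 matrix: cross two critical uncertainties, each with two opposing poles, and you get four distinct futures to explore. A minimal sketch follows; the axis labels are hypothetical illustrations, not the axes used in this book.

```python
from itertools import product

def scenario_matrix(axis_a, axis_b):
    """Cross two critical uncertainties (each a pair of opposing poles)
    to produce the classic 2x2 set of four scenario cells."""
    return list(product(axis_a, axis_b))

# Hypothetical axes, for illustration only.
economy = ("sustained growth", "prolonged stagnation")
society = ("open and collaborative", "closed and individualistic")

for cell in scenario_matrix(economy, society):
    print(cell)
```

Each of the four cells then gets fleshed out into a narrative about what living in that world would actually be like.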
This, then, is a book about the future that offers readers a number of alternatives for dissection and discussion. It is not a book about trends, although key trends within demographics, technology, energy, the economy, environment, food, water and geopolitics are commented upon in depth. Equally, it is not a ‘how to’ book about scenario planning, although in the second half of the book the authors explain how scenario planning works and outline how each of the four different scenarios presented in the book was developed. Rather it is a book that considers a number of critical questions and then uses a robust and resilient process to unleash four detailed scenarios about what it might be like to live in the world in 2040, from a variety of different perspectives. It is not simply about where today’s trends might take us but about what the world in 2040 might be like.
It is not our intention to predict the future. We are not seeking, as it were, to get the future totally right. This is impossible. Our aim is rather to prevent people from getting the future seriously wrong. This is achievable, but only if we give ourselves the chance to think bravely and creatively. The book is intended to form part of a deep conversation. It is designed to open people’s minds to what is going on right now and create a meaningful debate about some of the choices we face and where some of the things that we are choosing to do – or allowing to happen – right now may go next.
It is intended to alert individuals and organizations to a broad range of longer-term issues, assumptions and decisions and to firmly place a few of them on the long-range radar for careful monitoring and further analysis. It is about challenging fundamental assumptions and re-framing viewpoints, including whether or not people are asking the right questions. And in this context disagreement is a valuable tool.
Most of all, perhaps, we would like to liberate people’s attitudes towards the future. In all our work we discover that people from all kinds of professions and backgrounds want to make a difference – to generate change as well as adapt to it. As Peter Senge once remarked: “Vision becomes a living thing only when most people believe they can shape their future.” So, yes, people need to understand the opportunities and threats that lie ahead, but also to consider in which direction they would like to travel.
For example, is mankind on the cusp of another creative renaissance, one characterised by radical new ideas, scientific and technological breakthroughs, material abundance and extraordinary opportunities for a greater proportion of the world’s people, or are we in a sense at the end of civilisation, a new world characterised by high levels of volatility, anxiety and uncertainty?
Are we entering a peaceful period where serious poverty, infant mortality, adult literacy, physical security and basic human rights are all addressed by collective action or are we moving more towards an increasingly individualistic and selfish era in which urban overcrowding, the high cost of energy and food, water shortages, social inequality, unemployment, nationalism and increasingly authoritarian government combine to create a new age of misery and rage?
Some urban economists and sociologists are predicting a future in which between one and two billion people will be squatters in ‘edge cities’ attached to major conurbations – Mexico City, Mumbai, Beijing and more – while others believe in the concept of a smart planet in which our expertise delivers a triumphal response to the drivers of change and we create local, self-managed, inclusive communities which resonate with traditional democratic values.
Just what does the future have in store for us? This is what this book aims to find out.
I’m listening to Old Ideas by Leonard Cohen (I love it but the kids really hate it!) while trying to work out whether gamification can be justified as one of the ’50 big ideas’ in one of my new books. It’s significant, but I think I should dump it and replace it with synthetic biology.
Here’s the page…
Gamification is the application of online gaming techniques, like gaining points or status, to engage the attention or alter the behaviour of individuals or communities. Wearable devices linked to game-like systems, for instance, could induce overweight people to take more exercise or eat healthy foods.
Gamification works on three principles: First, people can be competitive (with themselves and with others). Second, people will share certain kinds of information. Third, people like to be rewarded. That’s why if you regularly buy a coffee at your local coffee shop you might end up with a nice badge courtesy of a company like Foursquare. And perhaps why, if you drink enough coffee at the same place, you might be crowned the coffee shop king – for a day. Or there’s Chore Wars, where people battle the washing up in return for virtual points or avatar energy boosts.
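Those three principles are simple enough to sketch in code. Here is a toy version of the coffee-shop example above – a check-in tally that crowns the most frequent visitor for the day. The names and the ‘king’ mechanic are purely illustrative, not Foursquare’s actual API.

```python
from collections import Counter

def crown_king(checkins, venue):
    """Toy gamification mechanic: tally check-ins per user at a venue
    and crown the most frequent visitor (principles 1 and 3: competition
    and reward). Returns None if nobody has checked in there."""
    tallies = Counter(user for user, v in checkins if v == venue)
    if not tallies:
        return None
    king, count = tallies.most_common(1)[0]
    return king

# Illustrative data only (principle 2: people share this information).
checkins = [("ana", "cafe"), ("ben", "cafe"), ("ana", "cafe"), ("ana", "gym")]
print(crown_king(checkins, "cafe"))  # ana has the most cafe check-ins
```

The whole apparatus of badges, leaderboards and streaks is essentially variations on this loop: record a behaviour, rank it, reward it.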
These are mundane examples, but there are better ones. Life Circle is a mobile app that allows blood banks to keep track of where potential blood donors are in real time. Clever, but the really smart bit is that blood donors can synchronise this with social networks to engage in a bit of competitive activity concerning who’s given the most blood or who’s donated most often. Endomondo is another example whereby users can track their workouts, challenge their friends and analyse their fitness training.
Similar techniques might be employed to get people to fill in tax returns, stop smoking, give up drugs, remember to take their drugs, drink less, walk more, vote, sleep, remain married, use contraception, cycle, recycle or revise for exams. Education, for example, especially in the early years, is all about goals, points, scores and prizes, so why not leverage a few online tricks to improve exam results or to switch students into less popular educational courses or institutions? Farmville running kindergarten services? It’s not impossible.
How could anyone possibly have a problem with this? This is surely fairly harmless activity. Making everything fun and social is simply a way to get people, especially younger people, to do things they don’t really want to do or haven’t really thought about doing. Just a way of tapping into the fact that hundreds of millions of people spend billions of hours playing online games and feel pretty good about themselves both during and after. Why not use this desire for competition, recognition and respect to increase participation in new product trials or boost the loyalty of voters towards your particular brand of government?
The answer to this is that turning the world into a game benefits certain interest groups. For example, if you can get people to do things for you for status or feelings of accomplishment, you may not have to pay other people to do it for you. In other words, your harmless game play is actually adding to the unemployment line.
According to Gartner, a research firm, more than 50% of companies will add gamification techniques to their innovation processes by 2015. But getting users to co-create or co-filter products or services, or to act as data entry clerks, by offering virtual rewards or status also means that companies don’t have to put time and effort into improving inferior products or services themselves. Moreover, it seems infantile to treat all customers and citizens as though they are animated superheroes on a secret mission to save the planet. Isn’t a virtual badge – or a real one for that matter – a rather superficial substitute for real-life engagement with other human beings?
On one level, gamification is a smart tool to get people to do what is in their best interest over the longer term. On another, it can be seen as a manipulative way of getting individuals to conform to a subjective set of rules or goals, or to suit short-term commercial interests.
Here’s the contents from one of my new books. The only thing that’s bugging me is whether synthetic biology should be featured as one of the key ideas. Currently it’s mentioned under some of the other ideas.
1. Politics & Power
Cyber & Drone Warfare
Power Shift East
2. Energy & Environment
Beyond Fossil Fuels
3. Urban Landscape
Local Energy Networks
Next Generation Transport
Extra-Legal & Feral Slums
4. Technological Change
An Internet of Things
Quantum & DNA Computing
5. Health & Wellbeing
Personalised Genome Sequencing
Medical Data Mining
6. Social & Economic Dimensions
What (& Where) is Work?
The Pursuit of Happiness
7. Towards a Post-Human Society
8. Space: The Final Frontier
Alt.Space & Space Tourism
Solar Energy from Space
SETI Post-Detection Protocol
9. Doomsday Scenarios
Biohazards & Plagues
Mobile & Wireless Radiation
Super-Volcanoes & Mega-Quakes
The Sixth Mass Extinction
10. Unanswered Questions
The Nature of Consciousness
The Fabric of Reality
A New God?
I like this. I am using an edited version of this quote in my new book, Future Minds, but I rather like this longer version. It reminds me of a guy at IBM that I once did some work for. He was a smoker and talked about his “thinking sticks.” Only problem is he’s not really allowed to use them anymore (or at least the places where he can “think” are being restricted).
This is a quote from Charles Constable, Director of Strategy at Channel Five Television (thanks Charles).
“Often the ‘spark’ comes when I am not supposed to be thinking. I’m afraid I am a smoker – now sentenced to pursue this awful habit outside. I think smoking is about relaxing (for me at least) – so I let my mind stop being boxed in by whatever I was doing before hand. That’s when it gets to work on its own, and that’s when it works most laterally – both in terms of what it ‘chooses’ to decide to mull on and in terms of connections it makes between things. I sometimes find it hard to retain the thoughts when having to get back to the day job of the next immediate challenge – usually have to write it down or say it to someone. This works particularly well late at night or when it’s quiet. Or alternatively – in the bath… …a bit of a cliché but true…I think the other time I think well is when I am stealing ideas from others! People say things, which lead you to make good, new connections – to see things in ways you had not previously. I’ve often said that the best ideas I have came from someone else. This is where ‘sparks’ can be molded into something more concrete that you can really do something with. So at work I like to think with 1 or (no more than) 2 people through an iterative thought process. Two brains are often better than one for really good constructive thinking. Too many brains and the process gets tough.”
Getting back to physical offices, it’s not just the workers that are starting to disappear but the paperwork too. Historically, paper has always been an important part of office life and the idea of a paperless office has been a symbol for modernity and efficiency since the early 1960s. The early theory was that computerisation would eventually render physical paper in physical offices obsolete. Unfortunately, what happened was the exact opposite. From about 1990 to 2001 paper consumption increased, not least because people had more material to print and because printing was more convenient and cheaper. But since 2001 paper use has started to fall. Why?
The reason is partly sociological. Generation Y, the generation born roughly at the same time as the personal computer, has started working in offices and these workers are comfortable reading things on screens and storing or retrieving information digitally. Moreover, digital information can be tagged, searched and stored in more than one place, so Gen Y are fully aware of the advantages of digital paper and digital filing. All well and good, you might think, but I’m not so sure.
One of the great advantages of paper over pixels is that paper provides greater sensory stimulus. Some studies have suggested that a lack of sensory stimulation not only leads to increased stress but that memory and thinking are also adversely affected.
For example, one study found that after two days of complete isolation, the memory capacity of volunteers had declined by 36%. More worryingly, all of the subjects became more suggestible. This was a fairly extreme study, but surely a similar principle could apply to physical offices versus virtual offices, or to information held on paper versus information held on computer (i.e. digital files or interactive screens may actually reduce the amount of interaction with ideas).
Now I’m not suggesting that digital information can’t sometimes be stimulating but I am saying that physical information (especially paper files, books, newspapers and so on) is easier on the eye. Physical paper is faster to scan and easier to annotate. As we’ve seen in an earlier chapter, paper also seems to stimulate thinking in a way that pixels do not. Indeed, in my experience the only real advantage of digital files over physical files is cost or the fact that they are easier to distribute.
There are some forms of information that do need to be widely circulated, but with most, the wider the circulation list, the lower the importance of the information or the lower the real need for action or input. As for the ability to easily distribute information, this can seriously backfire. Technology is creating social isolation because there is no longer any physical need to visit other people in person. Paperless offices are clearly a good idea on many levels, but I wonder what the effects will be over the longer term. What I’m getting at here is that offices aren’t just about work any more than schools are just about exams. Physical interaction is a basic human need and we will pay a very high price if we reduce all relationships (and information) to the lowest-cost formats.
This is a pre-edit extract from my new book, Future Minds, out UK October 2010 (Australia/NZ April 2011).
“Google knows everything” – Nick, aged 8.
This is a book about how the digital era is changing our minds. It is about how new digital objects and environments, such as the internet, mobile phones and e-books are re-wiring our brains — at home, at work and at play.
Technology clearly has a lot to do with this, although in many instances it is not technology’s fault per se. Rather it is the way that many trends are combining and technology is either facilitating this confluence or accelerating and amplifying the effects. This may sound alarming but it needn’t be. We have created these digital technologies using imagination and ingenuity and it is surely within our grasp to decide how best to use them — or when not to.
But can something as seemingly innocent as a Google search or a mobile phone call really change the way that people think and act? I believe they can — and do.
This thought occurred to me one morning when I was looking out into space from the rooftop of a hotel in Sydney. But then I reflected. Would I have thought this if I were on the phone, looking at a computer screen, in a basement office in London?
I think the answer is no. The hotel was a calm and relaxed environment with expansive harbour views, whereas an office can be a box of digital distractions. Modern life is indeed changing the quality of our thinking, but perhaps the clarity to see this only comes with a certain distance or detachment.
Does this matter? I think it does. Mobile phones, computers and iPods have become a central feature of everyday life in hundreds of millions of households around the world. There are currently more than one billion personal computers and more than four billion mobile phones*(1) on the planet. In 2005, 12% of US newlyweds met online, while kids aged 5-16 now spend, on average, around six hours every day in front of some kind of screen. This technological ubiquity must surely be resulting in significant attitudinal and behavioural shifts – but what are they? The answer is that nobody is really quite sure. The technology is too new (the internet is barely 5,000 days old) and our knowledge of the human mind is still too limited.
We do know the human brain is ‘plastic.’ It responds to any new stimulus or experience. Our thinking is therefore framed by the tools we choose to use. This has been the case for millennia, but we have had millennia to consider the consequences. This has arguably changed. We are now so connected through digital networks that a culture of rapid response has developed. We are so continually available that we have left ourselves no time to properly think about what we are doing. We have become so obsessed with asking whether something can be done that we have left no time to consider whether something should be done. Perhaps the way our brains are constructed means that we just can’t see what is going on.
Moreover, the digital age (the internet, search engines and screens in general and mobile phones and digital books in particular) is chipping away at our ability to concentrate. As Professor Mark Bauerlein, author of The Dumbest Generation points out, screen reading “conditions minds against quiet, concentrated study, against imagination unassisted by visuals, against linear sequential analysis of texts, against an idle afternoon with a detective story and nothing else”. We are therefore in danger of developing a new generation that has plenty of answers but few good questions. A generation that is connected and collaborative but one that is also impatient, isolated and detached from reality. A generation that is unable to think in the ‘real’ world.
It’s not just the new generations either. We all scroll through our days without thinking deeply about what we are really doing or where we are ultimately going. We are turning into whirling dervishes, frantically moving from place to place in search of superficial ecstasy, unaware that many of the things we most yearn for are being trampled by our own feet. It is only when we stop moving and the dust settles that we can see this destruction clearly. Our attention and relationships are becoming atomised too. We are connected globally, but our physical relationships are becoming wafer thin and ephemeral. Digital objects and environments influence how we all think and are profoundly shaping how we interact.
Ultimately, I believe the quality of our thinking – and ultimately our decisions – is suffering. Digital devices are turning us into a society of scatterbrains. If any piece of information can be recalled at the click of a mouse, why bother to learn anything? We are all becoming google-eyed. If GPS*(2) can allow us to find anything in an instant, why master map reading? But what if one day the technology doesn’t work? What then?*(3)
It is the right kind of thinking – what I call deep thinking – that makes us uniquely human. This is the type of thinking that is associated with new insights and ideas that move the world forward. It is thinking that is rigorous, focused, deliberate, independent, original, imaginative and reflective. But deep thinking like this can’t be done in a hurry or in an environment full of noise and interruptions. It can’t be done in 140 characters or less. It can’t be done when you are doing three things at once.
Yes it’s possible to walk and chew gum at the same time but I am concerned about what happens when you add a Twitter stream, a Kindle and an iPod into the mix. In short, what happens to the quality of our thinking when we never really sit still or completely switch off?
Why does all this matter? Because a knowledge revolution is replacing human brawn with human brains as the primary tool of economic production.*(4) It is now intellectual capital (i.e. the product of human minds) that matters most. But we are on the cusp of another revolution. In the future, our minds will compete with smart machines for employment and even human affection. Hence, being able to think in ways that machines cannot, will become vitally important. Put another way, machines are becoming adept at matching stored knowledge to patterns of human behaviour, so we are shifting from a world where people are paid to accumulate and distribute information to an innovation economy where people will be rewarded as conceptual thinkers. Yet this is precisely the type of thinking that is currently under attack.
So how should we as individuals, organisations and institutions (the latter being those deliberately built environments where we spend most of our lives) be dealing with the changing way that people think? How can we harness the potential of new digital objects and environments whilst minimising their downsides?
Personally, I think we need to do a little less and think a little more. We need to slow things down. Not all the time but occasionally. We need to stop confusing movement with progress and get away from the idea that all communication and decision making has to be done instantly. The tyranny of the next financial quarter is just as damaging to deep thinking as a noisy office fitted with fluorescent lighting.
I’m sure that by writing this book I will be accused by some people of going backwards, or of being a pastist. But remember that some of the tried and tested technologies of yesteryear have grown old precisely because they are good and we should think twice before deleting them. Equally, being a member of the Tech No movement doesn’t mean smashing the nearest digital device. It simply means questioning potential consequences or asking for some level of balance. It is about arguing that we need a little more of this and a little less of that.
This is a book about work, education, time, space, books, baths, sleep, music and other things that influence our thinking. It is about how something as physical, finite and flimsy as a 1.5 kg box of proteins and carbohydrates can generate something as infinite and potentially valuable as an idea. Hence, it is for anyone who’s curious about their own thinking and for everyone who’s interested in unleashing the extraordinary potential of the human mind.
Whether you are interested in how to deal with too much information, constant partial attention, our obsession with busyness, leisure guilt, the myth of multi-tasking, the sex life of ideas, or the rise of the screenager, this book explores the different aspects of how digital objects and environments are re-wiring our brains – and makes some practical suggestions about what we can do about it.
* (1) Half of British children aged between 5 and 9 now own a mobile phone. For 7 to 15-year-olds the figure is 75%. This is despite government advice that no child under 16 should be using one. The average age at which children in the UK now acquire a mobile phone is 8.
* (2) I interviewed someone for a job recently and one of her questions was whether or not she could use my car. I said she could, so she asked whether my car had a GPS in it. It doesn’t. She turned the job down. I wish her luck, whatever direction her life goes in. The point here is that GPS and Google give us information but they do not impart understanding and in some cases they can prevent us from properly planning ahead.
*(3) We assume the internet will always work. But what if it doesn’t? A US think-tank (Nemertes Research) says internet use is rising by 60% each year worldwide. Unless we can increase capacity they claim ‘brownouts’ (frozen screens, download delays etc) will become commonplace, relegating the internet to the status of a toy. How would you cope with that?
* (4) A study by McKinsey & Company, a management consultancy, claims that 85% of new jobs created in the US between 1998 and 2006 involved “knowledge work”.