This topic is huge, and rapidly evolving, so I’d like to restrict myself to a small area. But before I do, I’d like to add a bit of context and discuss forecasting and prediction more generally. In short, why do we get so many things about the future so utterly wrong?
In 1886, the engineer Karl Benz predicted: “The worldwide demand for automobiles will not surpass one million.” Eight years later, in 1894, an article appeared in the Times newspaper in London predicting: “In 50 years, every street in London will be buried under 9 feet of horse manure.”
How could they have got things so wrong? Simple. Both predictions were based on extrapolating current (at the time) trends. Put another way, both rested on critically false assumptions.
In the case of Karl Benz, his mistake was to assume that cars would always require a chauffeur and that the supply of skilled chauffeurs would eventually run out.
In the case of London being buried under horse manure, the mistake was to assume that the volume of horse-drawn transport would increase indefinitely alongside the population.
The article also totally missed, or at the very least misjudged, the disruptive impact of motorised transport, which Mr Benz had invented eight years earlier.
This is one reason why I’m such a fan of scenario planning – it’s a philosophy that rejects the idea of a singular future. Scenario planning insists that there are multiple possibilities and encourages the idea of things being part of a wider system. It focuses on external rather than internal shocks. In short, it’s a framework for dealing with ambiguity, uncertainty and combinations of external events. It doesn’t claim to predict the future in the sense of helping people get things 100% right, but used well it can get you to places you’ve not previously imagined and prevent individuals – and especially institutions – from getting things 100% wrong.
Why else, apart from projecting current trends and personal experience forward, do we make big mistakes? The main reason, I would suggest, is because of how our brains are wired and what we choose to do with the rest of our bodies each and every day.
Let’s start with our brains.
Our brain’s default setting is to believe what it already knows, largely because the brain is lazy and this is the easier, more economical, energy-conserving position. Confirmation bias means that our brains prefer to deal with information and ideas they are already familiar with, because the pathways for this information to travel down have already been built and traffic flows freely. This is why we subconsciously hunt for facts that fit preconceived ideas, or play bad cop and fit up evidence to support pre-established views.
Unfamiliar information and ideas, in contrast, have difficulty entering our brains, because new pathways need to be built to deal with new data or experiences. Hence our natural neural default is a combination of “Yes, I know what that is, I’ve seen it before” mixed up with “I have no idea what this is so I’m going to ignore it.” The result is that critical information is missed, ignored or not prioritized.
However, when the brain is especially busy it takes all this to extremes and starts to believe things that it would ordinarily question or distrust. I’m sure that you know where I’m going with this but in case you are especially busy – or on Twitter – let me spell it out for you.
If you are very busy there is every chance that your brain will not listen to reason and you will end up supporting information, or ideas, that are dangerous or perhaps you will support people that seek to do you, or others, harm. Fakery, insincerity and big fat lies all prosper in a world that is too busy or too distracted to listen properly. Hence the importance of occasionally switching our devices, and ourselves, off.
There are some other well-known reasons for getting things wrong. The first is something called sunk cost. This is essentially the idea that people sink time and money into things and therefore continue to back actions or strategies well past the point of logic in a quest to get either their time or their money back.
This is similar to the endowment effect, which says that people behave differently with things that they own (think of spending other people’s research money versus your own). Other reasons include egocentric bias (usually fatal when combined with alpha-male competitive behaviour, the collapse of RBS in the UK perhaps being a prime example), overconfidence, expediency and conformity. Here’s a classic example of a mistake cited by Joseph Hallinan in his book ‘Why We Make Mistakes’.
“A man walks into a bar. The man’s name is Burt Reynolds. Yes, that Burt Reynolds. Except this is early in his career, and nobody knows him yet – including a guy at the end of the bar with huge shoulders. Reynolds sits down two stools away. Suddenly the man starts yelling obscenities at a couple seated at a table nearby. Reynolds tells him to watch his language. That’s when the guy with the huge shoulders turns on Reynolds.” Here’s how Reynolds recalls the event: “I remember looking down and planting my right foot on this brass rail for leverage, and then I came around and caught him with a tremendous right to the side of the head…he just flew off the stool and landed on his back in the doorway, about fifteen feet away. And it was while he was in mid-air that I saw…that he had no legs.”
How could this happen? Easy. Reynolds was distracted. He looked but he didn’t see. He had a view of reality that was influencing what he saw. He thought he knew the context, and what might happen next, so he didn’t question his assumptions.
Our brains aren’t alone in deceiving us either. Most of us wake up at the same time each morning, leave our houses in the same manner, take the same route to work, read the same newspapers and websites, hang out with the same people at work, and then go home and sit down to do more or less the same things we always do. This is fine, it’s comfortable and convenient, but we are restricting our experience (what our brains are familiar with). This means that we are allowing the lens through which we view the world to become distorted, or at the very least narrowed.
It also means that the raw material from which new insights and new ideas are made is similarly restricted.
But I have some good news for you. It needn’t be like this.
First, you can exercise your brain much as you can exercise your other muscles.
You can feed it a diet of cerebral snacks that make it stronger and more resilient.
How can you do this? Go to places you’ve never been to before, talk to people you don’t know in spheres that you aren’t familiar with, read magazines you wouldn’t normally read and above all indulge in intellectual promiscuity and encourage serendipity.
The second bit of good news is we are getting better at forecasting and prediction, largely because digital and wireless technology is allowing us to see things that were previously hidden from view. As many people know, using digital money and shopping online leaves vast trails of data, as does walking around with a phone that’s switched on, joining Facebook or searching for things on Google.
In short, it’s becoming far easier to know where people are all of the time and what they are doing. It’s even becoming possible to work out what they are likely to do next based on historical patterns of observed behaviour and connections overlaid with basic human psychology.
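To make that concrete, here is a minimal sketch of the kind of logic involved. It assumes a simple first-order Markov model (the next place you visit depends only on where you are now), and the movement history and place names are entirely invented; real systems are far more elaborate, layering in time of day, social connections and demographics.

```python
from collections import defaultdict

# A hypothetical, anonymised movement history: the places one person
# was observed at, in chronological order. (Invented data.)
history = ["home", "cafe", "office", "cafe", "office", "gym",
           "home", "cafe", "office", "gym", "home"]

# Count transitions between consecutive locations. This is a
# first-order Markov model: the next place is predicted from the
# current one alone.
transitions = defaultdict(lambda: defaultdict(int))
for current, following in zip(history, history[1:]):
    transitions[current][following] += 1

def predict_next(location):
    """Return the most likely next location and its estimated probability."""
    counts = transitions[location]
    total = sum(counts.values())
    if total == 0:
        return None, 0.0  # never observed at this location before
    best = max(counts, key=counts.get)
    return best, counts[best] / total

place, prob = predict_next("office")
print(f"After 'office', the most likely next stop is {place} (p={prob:.2f})")
# After 'office', the most likely next stop is gym (p=0.67)
```

Even a toy like this shows why data trails matter: given enough observations, “where next?” stops being a mystery and becomes a statistics problem.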
Is this a good or a bad thing? That’s not really for me to say, largely because it can be a bit of both, depending on whether the surrender of information is consensual and on what a second or third party does with the data. Interestingly, one thing I have observed studying other people is that many of us are developing what can only be called an addiction to mobile devices, which I would suggest is harming our ability to think deeply about matters of substance. But more on that later (Part 2).
A great many people also seem to have little or no idea what they are doing with regard to surrendering personal information or information about their precise whereabouts. Maybe this is generational? Gen Y and below know that privacy is dead and have gotten over it. Gen X is deeply worried and Baby Boomers have absolutely no idea what’s going on.
There is perhaps also the thought that many people now live in the here and now. Either they are unconcerned about what might happen to them (or their data) in the future (having their identity stolen, say, or being burgled because they broadcast too much information about themselves or their current location), or they have little or no appreciation of history (especially the historical misuse of technology). Maybe it’s simply that much of this technology is too new and its effects are largely unknown or misunderstood.
Talking of new technology, another thing that I’ve noticed is how little people are aware of what’s already happening now, let alone what’s likely to happen next.
For example, when I show a general audience things like Google Flu Trends, 23andMe or Life Circle + they are amazed. They are even more gobsmacked by London Underground’s suicide algorithm, the Department of Homeland Security’s Malicious Intent Detectors or DARPA’s Total Information Awareness project.
Imagine their reaction when they hear about what’s starting to happen with artificial intelligence, brain-to-machine interfaces, real-time crime mapping, prediction markets and mood-mining facial recognition technologies.
Overall, what I think is happening here is an explosion of connectivity, which is driving transparency, the growth of collaborative consumption and the dematerialisation of goods and services. It’s also driving personalisation and various location-specific services.
The personalisation point, which at times spins off into unrestrained narcissism, is especially interesting, because it appears to clash with collaboration on one level.
Collaboration (co-creation, collaborative filtering and Web 2.0 generally) has what appears to be a collective ethos, which clashes with the individualistic ethos of personalisation.
I don’t have an answer as to whether ‘me’ or ‘we’ is in the ascendant, but it’s something I’m watching.
Moving back to the privacy point: in a future that’s highly connected, where digital data is difficult to contain, and where fear and anxiety are in the ascendant (created, I’d argue, by a mixture of Future Shock from the accelerating effects of technology, globalisation, the asymmetric nature of modern conflict and the volatility of complex, highly connected systems), I suspect that most people will indeed give up a degree of privacy and freedom in return for the promise of certainty, simplicity and risk-free environments.
Whether this promise can ever be delivered is, of course, an entirely different matter, especially when you stop to consider how the uses of technology tend to cascade.
We should also, perhaps, remember that privacy is to some extent a modern invention. Our historical default was largely openness, collaboration and transparency, because this allowed villages to thrive. Maybe we’re just returning to the village?
This may not be a good thing. Perhaps a loss of privacy (and to some extent secrecy) will lead to a reduction in original thinking and experimental behaviour, simply because people don’t want to be seen by the rest of the village as stupid.
But let’s move on.
The second area I’d like to focus on is how some of the technologies I’ve just mentioned – especially digital, wireless and screen technologies – are changing how people think and act, and what this could mean.
I don’t claim to be an expert in this area, but I did recently write a book about what the digital era, and screen culture in particular, is doing to our thinking, especially the thinking of people who are younger than Google (under 14).
Here’s a quote from Cass Sunstein, a Professor of Law and Political Science at the University of Chicago:
“The Internet makes it far easier for us to restrict ourselves to groups of like-minded people – to live in echo chambers of our own devising. In this way, the Internet is creating an increase, in many places, of social fragmentation, and hence an increase in both intolerance and incivility, as people end up seeing their fellow citizens as stupid, or malicious, or despicable. This problem is increased by the fact that much of the Internet is intolerant and far from civil… This isn’t healthy for democracy or tolerance, because it encourages people to choose teams, rather than to think issues through. For many people the Internet is aggravating this problem.”
And here’s another, from Thomas Friedman of the New York Times:
“At its best, the Internet can educate more people faster than any media tool. At its worst, it can make people dumber faster than any media tool. Because the Internet has an aura of “technology” surrounding it, the uneducated believe information from it even more. They don’t realize that the Internet, at its ugliest, is just an open sewer: an electronic conduit for untreated, unfiltered information. Just when you might have thought you were all alone with your extreme views, the Internet puts you together with a community of people from around the world who hate all the things and people you do. You can scrap the BBC and just get your news from those Websites that reinforce your own stereotypes.”
Is this fair? It’s hard to say, because the internet, Google and Facebook, for example, are all still too new, and because we still don’t know enough about how the human brain works to understand precisely what impact things like BBM, YouTube and Twitter are having.
However, one thing is fairly certain: because of the way our brains work, soaking up whatever stimuli are lying around, these technologies are having some kind of impact. That said, I’d suggest that these impacts are not isolated, but part of a much wider system of influences.
For example, I am sceptical of claims that Facebook and Twitter were responsible for the Arab Spring. What I think happened was that mobile phones and social media, like most forms of technology, acted as an accelerant to an already existing condition.
In other words, the dry wood was already lying around. What was the wood?
I’d suggest a relatively large number of 16–24-year-olds in the population, high levels of youth unemployment, reasonably high levels of education, state repression of the media, corrupt and bureaucratic governments and possibly food inflation.
The spark, in the case of Tunisia, was Mohamed Bouazizi setting himself alight in response to official harassment. Social media merely fanned these flames and, critically, thereafter provided a way for people to bypass government information sources and self-organise against incumbent powers.
There are some parallels here with the London riots, although the strongest connection, in my view, was the use of social media and mobiles to organise protest – by definition evading the pyramidal command-and-control structures of the police. The other potential connection is the issue of fairness. In the case of the UK, this appeared to be a mixture of three things.
1) A culture of self-entitlement, with individuals asking: “Where’s my share of the pie?” This is not a bad question in light of the recent UK MPs’ expenses fiasco and the idea it fostered that anything is more or less OK so long as you don’t get caught.
2) Income polarisation – bankers earning bonuses that were often unrelated to wider performance. Gains from speculation belong to the individuals involved, but any losses, or wider social impact, belong to society at large, or so the theory goes.
3) There wasn’t much else to do and the weather was nice!
Interestingly, this self-organisation is indicative of what the internet and social media enable at a much broader level: the instant aggregation of opinion and the creation of highly fluid, collaborative and often leaderless networks, pitched against rigid, highly structured bureaucracies with very clear chains of command.
To be continued…