From the New Yorker, ‘What cafes did for liberalism’ by Adam Gopnik (December 2018).
Article right here. Thanks to Simon, over in Sydney, for this little gem.
Film screening at the Royal College of Art next week….
How are you? How’s life? In 2015, a YouGov survey found that 65% of the British (and a whopping 81% of the French) thought that the world was getting worse. But it’s not; it’s been getting better for decades. On almost any measure that matters, life is demonstrably better now than it has been in the past. In fact, 2016 was the best year ever for humanity, according to Philip Collins writing in The Times. In 2016, extreme poverty affected less than 10% of the world’s population for the first time. If that’s not cause enough for celebration, 2016 also saw global emissions from fossil fuels falling for the third year running and the death penalty becoming illegal in over half of all countries. Nicholas Kristof, writing in the New York Times, echoed the optimistic perspective: child mortality is now half what it was back in 1990, and more than 300,000 people every day are gaining access to electricity for the first time. Similar good news stories can be found in statistics about human life-spans (more than double what they were a century ago: a mere 31 years in 1931, for example), the number of women in education and work, basic sanitation and clean water. It’s the same story with literacy, freedom and even violence.
So why does it feel like things are terrible? Why are so many people longing for the good old days? The answer, most likely, is global media and ubiquitous connectivity. Ignorance is no longer bliss. We are exposed to endless headlines about Brexit, Trump, Putin, Syria, terrorism, climate change and North Korea 24/7. There’s no escape.
Our response to this tends to be one of two things. Either we conclude that the world is indeed going to hell, so we might as well enjoy ourselves, or we become profoundly anxious, depressed and cynical about everyone and everything. But as members of what’s been termed the New Optimist movement (a term meant to evoke Richard Dawkins’s New Atheists) point out, this doom and gloom is deeply irrational. The pessimistic mood simply ignores the facts and underestimates the power of the human imagination. Moreover, while a worrisome mind was useful in the past (when looking out for threats outside your cave could literally save your life), applying the same fight-or-flight mind-set today can lead to spirals of despondency. The fact that news now circulates the globe faster than it can be properly analysed doesn’t help either. Add to this a deluge of digital opinion that is at best subjective and more often false or misleading, and it’s hardly surprising that so many people feel, to put it mildly, unsettled and disorientated. Another explanation for pessimism lies in our cognitive biases, and especially our general inability to properly assess risk or probability. For example, more people died in motorcycle accidents in the US in 2001 than died in the Twin Towers attack on 9/11, yet it is the latter that shapes our sense of danger.
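To make that base-rate point concrete, here’s a rough back-of-envelope comparison. This is a sketch only: the figures are approximate, assuming roughly 3,200 US motorcyclist deaths in 2001, about 2,977 victims of the 9/11 attacks, and a US population of around 285 million at the time.

\[
P(\text{motorcycle death, 2001}) \approx \frac{3{,}200}{285{,}000{,}000} \approx 1.1 \times 10^{-5},
\qquad
P(\text{9/11 death}) \approx \frac{2{,}977}{285{,}000{,}000} \approx 1.0 \times 10^{-5}
\]

On those assumptions the two annual risks are the same order of magnitude, yet one dominated our fears for years while the other barely registered.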
So, should we relax? Yes and no. Yes, in the sense that we need to put things in proper perspective, look at the real numbers and assess the actual probabilities. No, in the sense that just because we’ve had it good for the last 50 or 100 years doesn’t mean that our run of good luck will naturally continue. Maybe the last 100 years is simply a blip (extrapolating from recent personal experience or data is usually what goes wrong when it comes to long-term forecasting).
Another downside of global connectivity is that risk is now globally networked and systemic, meaning that one lunatic with the nuclear codes or a nasty biological virus could wipe us all out tomorrow. There are still things that could go seriously wrong, as David Runciman, a professor of politics at Cambridge, points out.
OK, so around 120 countries out of 193 are now democracies (up from 40 in 1972), but this could change. Cyber-terrorism could bring us to our knees for extended periods too, and while the people throughout history who have warned of the end of the world have always been wrong, they only need to be right once. Hence a degree of caution, or cynicism, can be useful. It’s also worth noting that people who become too depressed about things tend not to be motivated to fix them (although, arguably, the same applies if you are too optimistic). Any mind-set can become a self-fulfilling prophecy. Also, while it’s indisputable that globally, or on average, things are good and getting better, this isn’t true for everyone, everywhere. Local exceptions apply. A more nuanced criticism of the rational optimist view is that insisting things are great is a convenient way of saying don’t change anything, which is to say leave free-market capitalism and existing political structures well alone.
Overall, people will choose to believe whatever they want to believe and pick the facts that support their world view, but one thing that’s still missing, perhaps, is vision. We are increasingly stuck in the present, ignorant of our deep history and seduced and distracted by an internet that’s fuelled by our attention. If, instead of giving our time to the internet and 24/7 media, we spent it thinking about how we, as a species, would like to live now and where we would like to travel in the future, I suspect that a lot of the current anxiety and pessimism would evaporate. In other words, we should worry less about what we think is happening now or might happen next and start talking about what we want to happen now and what we want to occur next.
I thought that ‘executive summary’ nursery rhymes, for time-pressed parents to read to their children, were bad enough. I think this might be worse. Not only are we removing human interaction, we are allowing Big Tech to listen in on our children.
Something I’ve written for the Future Universities Thought Book in Australia. Free e-copies and hard copies can be found at this link.
What is the purpose of a university? What should they seek to encourage? You might think that universities, of all places, would be thinking more about this, but alas no. This exam question has largely fallen off the curriculum. Nor is much study time being given to whom an Australian university should serve, how it might be funded, or what should be taught and why.
There was a time when a university was a place where people were taught to think. They were communities of open debate. They were places where people went to be educated into the ways of the world using grammar, rhetoric and logic. They were spaces where people went to explore and understand things, not least themselves. There were no ‘safe spaces’ or no-platform policies.
Universities nowadays are becoming places where you go to further your career and earn more money. Following the Dawkins reforms of the late 1980s, students are now customers, with all the biases and baggage this word entails. With notable exceptions, universities have become brands that churn out qualifications in much the same way that fast-food establishments flip burgers, although it’s sometimes difficult to tell which might be more damaging over the longer term.
Time to upgrade the system
A thriving university sector is essential in a hyper-competitive world where problems are becoming trickier and constraints are becoming stickier. But Australia is still stuck on a system created more than a century ago to produce muscle or memory workers for business. These are people taught physical dexterity, precision and endurance or taught to process and apply information according to sets of rules.
This output has suited us up to now, but developments in machine learning and artificial intelligence mean that we are teaching a generation to compete head-on with computers, and it’s a no-brainer who will win if the contest is about muscle, precision, memory, data processing or logic. With the exception of teaching people how to create or collaborate with these machines, we should be teaching precisely the opposite: how to think in ways that computers can’t.
Ultimately, there’s not a lot that machines can’t do if we allow them, but there could be a few domains that will remain the preserve of primitive carbon-based bipeds such as ourselves. The first is creativity. We should be teaching students how to think more imaginatively, whether the application be art, science or the art and science of innovation. We need tall buildings that don’t fall down, but also ones that make the human heart soar.
Similarly, logical machines, no matter how smart, will continue to struggle with the faults and foibles of human beings, which, to my mind, means it matters that we teach people about other people and what motivates them.
Machines can be alluring, but I can’t see them ever being inspiring, so teaching leadership should be paramount, whatever the discipline you are attempting to impart. One of the issues I hear about regularly is that of students entering the workforce who are technically brilliant but incapable of managing themselves, let alone anyone else. Recognising and rewarding EQ alongside IQ might be a good way not only to create a functioning civil society and workforce, but also to create the next generation of effective leaders. But don’t suppose for a moment that this can be achieved online. We already have a problem with asocial students virtually incapable of human interaction. Let’s not make this worse by deleting the human interface.
A further thing that’s missing from the current system is ethics.
Historically, many universities were linked to the church and the moral component was bedrock. Nowadays, the moral component of a degree is akin to a slippery slope of scree sliding down a hillside. Indeed, it’s quite possible to use your head to pass through university with flying colours while remaining, at heart, an egocentric, narcissistic psychopath.
One thing I stumbled upon recently was the 4Cs (Critical thinking, Communication, Collaboration, Creativity). I propose that we build upon this list and set off toward the distant horizon of 2040 on the 7Cs: Critical thinking, Creativity, Collaboration, Communication, Curiosity, Character and Compassion.
The first 4Cs are self-explanatory. We need people to think critically and creatively about the world’s problems and communicate and collaborate across communities to come up with solutions. But the last 3Cs are especially important.
The aim of education generally, and of universities in particular, should be to instil a lifelong love of learning and this is becoming especially vital in a world where new knowledge is being created at an exponential rate. But how can we expect people to continually re-learn things without first instilling a sense of Curiosity about how things work or might be changed for the better?
Character is important for two reasons. First, as machines become more adept at doing the things that were once thought the preserve of humans, the value of emotionally based work should come to the fore. Most jobs feature people at some level, and if you are trying to persuade people to do something, you’re more likely to be successful if you are liked. An attractive personality cannot be taught, but it can be encouraged. Moral character is equally important. We don’t just want smart graduates, we want ethically grounded graduates too.
The final C, Compassion, is linked to moral character. Compassion is the resource the world is running out of faster than any other.
Without compassion the world is an ugly place. Universities have long focussed on IQ. Indeed, it’s hard to get into a university without it. But at the risk of repeating myself, EQ might prove more valuable over the longer term, especially if IQ becomes the preserve of artificially intelligent machines.
It’s hard to predict the future, and foolish to try in many instances, but I believe that equipping individuals with a better understanding of the human operating system would leave them better prepared for whatever 2040 throws at them.
----
As I’ve said many times before, trends tend to bend. Counter-forces and negative feedback loops build up and send things off in a direction opposite to the one most people expect. Case in point: too little information. Everyone, more or less, knows about Too Much Information (TMI), but much of this ‘information’ is shallow, trivial and meaningless (see Clay Shirky on ‘filter failure’, 2008). More importantly, because people are starting to feel that nothing digital is ever totally secure, they are becoming very cautious about writing things down. The result might well be that very important documentation ceases to exist. Think about political decisions taken during periods of crisis. Historically you’d have diaries, minutes of meetings and so on. But what if, in future, we don’t? What if the real issue isn’t Too Much Information but Too Little Information (TLI)?
An idea proposed by the Canadian historian Professor Margaret MacMillan.
Listen here (around 20 minutes 40 seconds in).
Seen this morning….