Robots Vs. People

I attended a conference on AI at Cambridge University last week and one of the most interesting questions raised was why we are developing robots to look after old people. Why not just use people? The answer could be that we are running out of people, especially younger people, as is the case in Japan, but this simply raises another question: why don't we either educate younger people about the importance of looking after older people, especially their own relatives, or simply conscript younger people into the NHS for short periods (a wonderfully provocative idea proposed by Prof. Ian Maconochie at an Imperial College London lecture the other week)?

What can’t computers do?

Are you worried that your job could be ravaged by a robot or consumed by a computer? You probably should be, judging by a study by Oxford University, which found that up to half of jobs in some countries could disappear by the year 2030.

Similarly, a Pew Research Center study found that two-thirds of Americans believe that in 50 years' time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe. I will come back to the Oxford study and the apparent inconsistency between these two Pew findings later, but in the meantime, how might we ensure that our jobs, and those of our children, are safe in the future?

Are there any particular skills or behaviours that will ensure that people are employed until they decide they no longer want to be? Indeed, is there anything we currently do that artificial intelligence will never be able to do no matter how clever we get at designing AI?

To answer these questions, we should perhaps first consider what it is that AI, automated systems, robots and computers do today and then speculate in an informed manner about what they might be capable of in the future.

A robot is often defined as an artificial or automated agent that's programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly being used to replace people when people are too inefficient or too valuable. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as manufacturing, or for beating humans at rule-based games. A logical AI can be taught to drive cars, another rule-based activity, although self-driving cars do run into one rather messy problem: people, who can be emotional, irrational, or liable to ignore the rules.

The key word here is logic. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. If X then Y and so on. At least that’s been true historically. But machine learning now means that machines can be left to invent their own rules and logic based on patterns they recognise in large sets of data. In other words, machines can learn from experience much as humans do.
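To make that contrast concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the spam example, the numbers, and the brute-force "training" loop, which stands in for real machine learning rather than any particular system): the first rule is written by a human, the second is discovered from data.

```python
# Hand-coded rule: a human fixes the logic in advance ("if X then Y").
def is_spam_by_rule(num_links):
    return num_links > 5  # threshold chosen by a programmer

# Learned rule: the machine infers its own threshold from labelled examples.
# Toy data: (number of links in an email, was it spam?)
examples = [(1, False), (2, False), (8, True), (12, True), (7, True), (3, False)]

def learn_threshold(examples):
    best_t, best_correct = 0, -1
    for t in range(15):  # try every candidate rule...
        correct = sum((links > t) == label for links, label in examples)
        if correct > best_correct:  # ...and keep the one that best fits the data
            best_t, best_correct = t, correct
    return best_t

t = learn_threshold(examples)
print(f"learned rule: spam if links > {t}")  # a rule no human wrote down
```

The point is simply that the second rule was inferred from experience rather than programmed; scale the idea up and you have machine learning.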

Having said this, at the moment it’s tricky for a robot or any other type of computer to make me a good cup of coffee, let alone bring it upstairs and persuade me to get out of bed and drink it. That’s the good news. A small child has more general intelligence and dexterity than even the most sophisticated AI and no robot is about to eat your job anytime soon, especially if it’s a skilled or professional job that involves fluid problem solving, lateral thinking, common sense, and most of all Emotional Intelligence or EQ.

Moreover, we are moving into a world where creating products or services is not enough. Companies must now tell stories and produce good human experiences and this is hard for machines because it involves appealing to human hearts as well as human heads.

The future will be about motivating people using stories, not just numbers. Companies will need to be warm, tolerant, personable, persuasive and above all ethical, and this involves EQ alongside IQ. Equally, the world of work is becoming less about command-and-control pyramidal hierarchies where bosses sit at the top and shout commands at people. Leadership is becoming more about informal networks in which inspiration, motivation, collaboration and alliance building come together for mutual benefit. People work best when they work with people they like, and it's hard to see how AIs can help when so much depends on passion, personality and pervasiveness.

I’m not for a moment saying that AI won’t have a big impact, but in many cases I think it’s more a case of AI plus human not AI minus human. AI is a tool and we will use AI as we’ve always used tools – to enhance and extend human capabilities.

You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be taken or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy but they aren't generally capable of original thought, especially thoughts that speak to what it means to be human. We've already seen a computer that studied Rembrandt and created a 'new' Rembrandt painting that could fool an expert. But that's not the same as studying art history and subsequently inventing abstract expressionism. That requires rule breaking. I'm not saying that AI-created creativity that means something or strikes a chord can't happen, simply that AI is pretty much stuck because of a focus on the human head, not the human heart. Furthermore, without consciousness I cannot see how anything truly beautiful or remotely threatening can ever evolve from code that's ignorant of broad context. It's one thing to teach a machine to write poetry, but it's another thing entirely for that poetry to connect with and move people on an emotional level, and to speak to the irrational, emotional and rather messy matter of being a human being. There's no 'I' in AI. There's no sense of self, no me, no you and no us.

There is some bad news though. An enormous number of current jobs, especially low-skilled jobs, require none of the things I've just talked about. If your job is rigidly rule-based, or depends upon the acquisition or application of knowledge according to fixed conventions, then it's ripe for digital disintermediation. This probably sounds like low-level data-entry work, clerks and cashiers, and it is, but it also includes some accountants, financial planners, farmers, paralegals, pilots and medics, and aspects of both law enforcement and the military.

But I think we’re missing something here and it’s something we don’t hear enough about. The technology writer Nicholas Carr has said that the real danger with AI isn’t simply AI throwing people out of work, it’s the way that AI is de-skilling various occupations and making jobs and the world of work less satisfying.

De-skilling almost sounds like fun. It sounds like something that might make things easier or more democratic. But removing difficulty from work doesn't just make work less interesting, it makes it potentially more dangerous.

Remoteness and ease, in various forms, can remove situational awareness, for example, which opens up a cornucopia of risks. The example Carr uses is airline pilots, who through increasing amounts of automation are becoming passengers in their own planes. We are removing not only the pilot's skill, but the pilot's confidence to use that skill and judgement in an emergency.

Demographic trends also suggest that workforces around the world are shrinking, due to declining fertility, so unless the level of workplace automation is significant, the biggest problem most countries could face is finding and retaining enough talented workers, which is where the robots might come in. Robots won't be replacing anyone directly; they will simply take up the slack where humans are absent or otherwise unavailable. AIs and robots will also be companions, not adversaries, especially when we grow old or live alone. This is already happening in Japan.

One thing I do think we need to focus on, not only in work, but in education too, is what AI can’t do. This remains uncertain, but my best guess is that skills like abstract thinking, empathy, compassion, common sense, morality, creativity, and matters of the human heart will remain the domain of humans unless we decide that these things don’t matter and let them go. Consequently, our education systems must urgently move away from simply teaching people to acquire information (data) and apply it according to rules, because this is exactly what computers are so good at. To go forwards we must go backwards to the foundations of education and teach people how to think and how to live a good life.

Finally, circling back to where I started, the Oxford study I mentioned at the beginning is flawed in my view. Jobs either disappear or they don't. Perhaps the problem is that the study, which used an algorithm to assess probabilities, was too binary and failed to distinguish between individual tasks being automated and whole jobs disappearing.

As to how people can believe that robots and computers will “probably” or “definitely” perform most of the jobs in the future, while simultaneously believing that their own jobs will “probably” or “definitely” be safe, I think the answer is that humans have hope and humans adapt. For these two reasons alone I think we'll be fine in the future.



AI 101

There’s a huge amount of nonsense out there about AI. This is a great intro on what’s going on and what’s not. I’ll post some more if I find more worth watching.

What can’t AI do?

Oh, my goodness. I've started looking at what AI can't do and it's turning into a real can of worms. I thought it might be simple: AI is fairly useless at creativity, not great with empathy (AI isn't a “people person”), and maybe throw in common sense and perhaps leadership. But everything I dig into throws up more questions than it answers, and everything is more or less contestable.


Monday statistic

Last year the number of students taking a creative arts exam in the UK fell by 51,000. Arts subjects, including design, drama and art, now account for only 1 in 12 GCSEs. Four years ago it was 1 in 8. A national scandal.

If we want our children – and our children's children – to compete with machines that can think, I agree with Lucy Noble, Artistic & Commercial Director of the Royal Albert Hall, that an arts subject should be compulsory at GCSE, although I'd add philosophy to the list of compulsory subjects too.

Explainable AI

I sometimes get asked how I look at things, especially in the sense of how do I know what to notice and what to ignore. My glib answer is often the rule of 3. If 3 people mention the same thing, or I see 3 examples of something in different contexts, I tend to pay attention.
A good example is Explainable AI. Earlier this year a coder mentioned an idea for what he called 'software that rusts'. For some unexplainable reason this instantly grabbed my attention. It was somewhat illogical and possibly contradictory, but there was something in the idea. Digital is pristine and identical, but humans like imperfection and uniqueness.

Last week I was talking with some students at the Dyson Lab at Imperial College and we got onto the subject of AI-to-AI interactions, and I came up with the idea of Digital Provenance. This would be a bit like Blockchain, in the sense that you could see the history of something digital, but it would have a far richer and more human storyline. Digital products would be able to reveal not only where they were coded, but also when and by whom. In other words, the idea of provenance, or 'farm to fork' eating, transferred to software code or anything else that's digital.
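As a rough sketch of how such a record might work (every field name here is invented; this is one possible shape for the idea, not a spec), each release of a piece of software could append a hash-linked provenance record, so anyone could later ask where, when and by whom it was written:

```python
import hashlib
import json
import time

def provenance_record(prev_hash, author, location, description):
    """One link in a code-provenance chain: who wrote what, where and when."""
    record = {
        "author": author,
        "location": location,
        "description": description,
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # ties this record to the whole history
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Hypothetical history for one piece of software.
genesis = provenance_record("0" * 64, "A. Coder", "London", "initial release")
update = provenance_record(genesis["hash"], "B. Coder", "Tokyo", "v1.1 bug fix")

# 'Farm to fork' for code: the history is inspectable and tamper-evident.
print(update["author"], update["location"], update["prev_hash"] == genesis["hash"])
```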

Then the day before yesterday I was with some people and the concept of Explainable AI came up. The best way of thinking about this might be to think in terms of a black box that can be opened up. I think this will become increasingly important as and when accidents happen with AI and fully autonomous systems. These machines need to explain themselves to us. They need to be able to argue with us over what they did and why, and to reveal their biases if asked. At the moment most of these AI systems are secret and neither users, regulators nor governments can look inside. But if we start trusting these systems with our lives then this has to change.
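As a toy illustration of what 'opening the black box' could look like (a sketch only, with an invented loan-approval example; scikit-learn's export_text is just one convenient way to print a model's internal rules, not the only route to explainability):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: [income, existing_debt] -> was the loan approved?
X = [[20_000, 9_000], [60_000, 5_000], [25_000, 20_000], [80_000, 10_000]]
y = [0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# An interpretable model can print the rules behind its decisions --
# the kind of account we might one day demand from autonomous systems.
print(export_text(model, feature_names=["income", "existing_debt"]))
```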

BTW, since I’m getting into AI, I’d like to highlight a problem that’s been around for centuries – human stupidity. In a sense, the issue going forward isn’t artificial intelligence, it’s real human stupidity. In particular, the human stupidity caused by an overreliance on machines. As Sherry Turkle once said, “what if one of the consequences of machines that think, is people that don’t?” There is a real danger of a culture of learned incompetence and human de-skilling arising from our use of smart machines.

Silly example: I was at London Bridge Station earlier in the week trying to get on the Jubilee Line. The escalators were broken. The queues were horrific. So, I asked why we couldn’t use the escalators. “Because they’re broken” was the response. “But they are steps” I replied. “They still work.” OMG.

Human Made

I was having a coffee with the editor of Wired magazine a few days ago and we got onto the subject of 'human' being a trend in the future. He said that the founder of Net-a-Porter had said that in the future we will buy clothes with labels saying “Human-Made”. I agree.

As AI, robotics and automation grow, the balancing trend (or countertrend, if you prefer that term) will be an emphasis on human hands, human production, human craft, human imperfection, human versus machine intelligence and so on. We might even get to a situation where people buy human-made versus AI-made in the same way we currently buy organic versus non-organic food.

So, for instance, we might see burger bars with signage saying 'human made food', books saying 'human written' or computers with labels saying 'human designed on the outside, AI on the inside'. At the very extreme we might eventually witness an artificial-authentic divide in the human population, whereby we have the original, organic human species and a genetically augmented, machine-enhanced hybrid species.

As an aside, I was talking with some students at the Dyson Lab today about AI-to-AI interactions and one idea that came up was that of 'digital provenance'. So, for example, we might be able to ask where the code for a product has come from (when it was written, by whom and where), much in the same way that food products currently display information about who, where and when they were made. Maybe code will have a 'best before' date and maybe we'll ask algorithms to reveal their biases (a bit like Blockchain, but with more narrative quality).

BTW, two spin-offs of possible importance here. Firstly, the difference between artificial and synthetic and, secondly, how do you know when something is a synthetic construct that mimics a human behaviour or human production? Which is to say, how do we really know something is real? (Thanks Nik.)