What Can’t AI Do?

Earlier this year I gave a friend some young onion plants. He sent me a picture of them recently. Instead of the monsters they were supposed to grow into, they had ended up no larger than sprightly spring onions. He asked me how mine had done. Rather than telling him that they had turned into show-stopping giants, I said mine had hardly grown at all either. I lied to him to spare his feelings. Could an AI do that? Is that a calculation a computer might make?

I don’t think an AI can demonstrate compassion, not unless it had been told to, or had learned from experience that this was an effective response in a particular situation, in which case the action would be insincere. It would not be authentic. It would come from the head, not the heart, and therefore would not be a compassionate act.

You might argue that people might not care. Or perhaps people might be incapable of telling the difference, but if that were the case where would that leave human autonomy? What might such simulated compassion, or automated kindness, say about individual identity, empowerment or free choice? If we start to trust machines with our feelings more than people where does that leave us?

As is so often the case, parts of the future have already been distributed. Kindergarten robots in Japan are currently socialising young children, while at the other end of the age spectrum, aged-care robots are displaying synthetic affection to older people with dementia in care homes. The fact that neither of these groups has a fully functioning mind adds an interesting overlay to ethical debates concerning the limits of automation (as do differing cultural interpretations of what constitutes ethical behaviour).

Personally, I think that most able-minded adults could tell the difference between true affection and affection simulated by an avatar or a robotically embodied AI.

But if human affection were for some reason missing, or unavailable, perhaps it’s better than nothing.  After all, we create strong bonds with our pets, so is there much of a difference? I’d say yes, because pets are living creatures with feelings of their own. AIs don’t have true feelings and never will.

I’ll leave kindness and compassion sitting rather uncomfortably on the fence in terms of their potential for automation and turn instead to another human trait that overlaps emotional intelligence: common sense. Currently, a five-year-old child has far more common sense than the most advanced AI ever built. A child doesn’t need massive amounts of data in order to learn either. A child simply wanders around, exploring and interacting with its surroundings, and soon develops general knowledge, language and complex skills. Eventually, they combine broad intelligence with imagination and, if you’re lucky, a quirky curiosity about the world they inhabit. AIs do not. Even the most advanced AI today doesn’t come close.

If you define intelligence (as opposed to intelligent behaviour) as the ability to understand the world to the point where you can make accurate predictions about various aspects of it, and then to use such knowledge to work out what else might be true, then AI still isn’t very intelligent. Human brains endlessly update and refine themselves based upon physical interactions with the world, using inputs from sensory organs. An AI might come close to doing this in a narrow area, but not to the extent that humans are able.

Despite recent developments, AI is still ruled by a mathematical logic that is devoid of any broad contextual understanding or flexibility. For example, a surgical robot might know how to do what it does, but not why it’s doing it. Frankly, the AI just doesn’t care. It would also be unaware of, and therefore unconcerned by, whatever exists outside its immediate operating environment. It is perfectly possible to extend its situational awareness and navigational ability, but not, in my view, to the extent that humans can navigate the world. It’s one thing to design an autonomous vehicle that can ‘understand’ a complex road system, but it’s another thing entirely to design a machine that can travel around any human-built (or natural) environment interacting with the almost infinite number of objects, people and ideas that may come its way. This isn’t simply because the world is highly complex, it’s because much of what happens in the world can be subtle, nuanced and confusing. The world is an interconnected system containing random elements and feedback loops, with the result that it can and does constantly change.

For humans, this isn’t too much of a problem. This is because even babies come fully equipped with a highly sophisticated sensory perception system and a reflexive learning mechanism that allows them to quickly react to changed circumstances. In fact, our ability to adapt to changed circumstances and survive in wildly different environments is possibly what marks us out from every other living species. Human intelligence is highly fluid, with the result that we are hugely adaptable and resilient. One reason for this is that our learning occurs even when we have very limited experience (what an AI might regard as limited data) or even no experience at all.

Something that’s strongly related to this point is abstraction. AIs do not possess the ability to distil past experience and apply it in new situations or to radically different concepts. AIs cannot think metaphorically, in terms of something being like something else. AIs cannot think in terms of abstract ideas and beliefs, which is the basis of so much human insight and invention. It’s also the source of much merriment and humour. An AI capable of writing a really funny joke? Don’t make me laugh out loud. As Viktor Frankl observed in his book Man’s Search for Meaning, “it is well known that humour, more than anything else in the human make-up, can afford an aloofness and an ability to rise above any situation, if only for a few seconds”. If an AI is being funny, it is no more than a ventriloquist’s dummy.

Creativity, or original thinking, is often held up as an example of something we possess that AIs do not. I don’t think this is completely true. It’s perfectly possible for a machine to be creative. AlphaGo’s highly unusual 37th move during its second match against Lee Sedol in 2016 is proof enough of that. Perhaps AlphaGo’s human challenger should have recalled the words of the Dutch chess grandmaster Jan Hein Donner, who, when asked how he’d prepare for a chess match against artificial intelligence, replied: “I would bring a hammer.” DeepMind, the company behind AlphaGo, might respond that this wouldn’t be fair, but it is an example of how humans can think laterally as well as logically. Such originality and unpredictability is another human trait that AIs will have to deal with.

One thing I can foresee (eventually) is an AI creating an artwork that resonates with people, not because it was created by an AI, which is frankly irrelevant, but because of the beauty of the work produced. Perhaps this could be achieved by an AI studying great artworks throughout history, working out some rules for what humans find aesthetically pleasing and synthesising some original content.

A formulaic approach works for Hollywood films and popular music, so why not art? I’m not suggesting that an AI could write Citizen Kane or Gorecki’s Symphony No.3, but I’m sure an AI could generate a passable script outline or some nice melodies. This would be especially true with human help, or collaboration, which I suspect is how things will develop. The future of AI is not binary. It is not humans vs AI, but humans + AI.

It may even be possible, though I highly doubt it, for an AI to propose a new artistic paradigm, such as Cubism, which would rely on pattern breaking rather than pattern recognition. But to what end? Such a move would only matter if we, as humans, decided it did. Art exists within a cultural construct in which humans collectively agree what has meaning or consequence, but I don’t expect an AI would be able to understand that anytime soon either.

For me, great art can be a number of things. It can bring joy, by being a beautiful representation of something we agree collectively is important; it can be provocative, or revealing, in terms of addressing a deep issue or an important question; or it can be something that speaks to the human condition. How can an AI speak to the human condition when it is not, and never will be, human? An AI can never have any real understanding of human realities such as birth and death, nor of the hunger, lust, joy, fear and jealousy that surround so many human impulses and colour so much human activity.

Again, you might argue that this simply isn’t true. All it would require for an AI to address such philosophical questions would be enough data and a program. But what would the data consist of and what might the program be designed to do? What exactly is the problem when it comes to the human condition?

Human existence is not a logical problem. Moreover, the question would surely be different each time it was asked, because we are all different versions of the same thing. Every human mind is personalised based upon individual experience. Not only are no two human minds the same, no two individuals are even changed in the same way by the same experience. How do you code for that beyond reducing everything to averages and shallow approximations?

In short, an AI can never know what it feels like to be me because it isn’t, and never will be, me. AIs are cold calculating machines incapable of summing up the human experience, let alone my own experience. There is no ‘I’ in AI and never will be. Not unless someone works out what consciousness is and is able to replicate it artificially, and I doubt that this will happen this side of eternity.

For me, the fundamental flaw in the argument that an AI can do pretty much anything a human can do (eventually) stems from the idea that the human brain is similar to, or just like, a computer. It’s not. It’s not even close. Let’s be really clear about this. Computers store, retrieve and process information based upon pattern recognition and rules. We do not. We directly interact with the real world. Computers react to symbolic representations of it, which is an important difference. This comes back to computers not really knowing about anything, including themselves. Does this matter? I think it matters a great deal.

It’s true that humans make conscious calculations about things all the time, but while such calculations are often rational, or logical, many times they are not. Moreover, our irrationality can extend well beyond the real world. Humans have Gods. AIs do not. AIs do not seek meaning beyond mathematical logic, patterns or rules, while we seek ideologies and ideas to help us explain our short existence. We even anthropomorphise inanimate objects, giving them inner spirits and ghosts. No AI would think of that. No AI would give consideration to spiritual matters.

We are more than heads too. We have whole-body intelligence. We sense and react to things in many different ways, often in ways that are unknown to us. One of the issues with AI to date has been a focus on logical-mathematical intelligence. This has now broadened to include linguistic, spatial, body-kinaesthetic and intrapersonal intelligence, but these are much harder domains to crack. People can and do talk to each other in ways that a machine can easily copy. But real conversation requires an understanding of whom one is talking to, and constant assessments of that person’s interests, feelings, experience and intentions. Real conversation, critically, involves some level of genuine empathy and curiosity about that person too. How can you teach an AI to be curious beyond coding it to ask a series of rather simplistic questions? Not all communication is verbal either, so you need to account for that too, which AI can do up to a point. But curiosity is based upon something else: a burning desire to know and understand the world, and I’ve no idea how you’d code that. Yes, there’s reinforcement learning, which was used by AlphaGo, but how such learning might work alongside some human behaviours is unclear.

In short, human intelligence is a complex, nuanced and multi-faceted thing and to be of equal stature an AI would have to develop deep capability across a multitude of areas. Copying, or reverse engineering, the human brain is also easier said than done given that we know so little about how the human brain, let alone the human mind, works. Saying that in a decade or two we’ll have machines surpassing human levels of intelligence displays a profound ignorance concerning the complexity of the human mind.

Take a simple thing like human memory. We have next to no idea how this works and it could take another century for us to even come close. This isn’t to say that brains can only be made biologically, but one suspects that making one otherwise could be a lot harder than some over-caffeinated commentators think.

Then, of course, there’s the idea that many of the things that are hugely important to humans cannot be measured in terms of numbers or data alone. How, for example, do you weigh up the relative importance of a poem vs a grapefruit? Where would you even start?

It’s also worth remembering that not all human knowledge is on the web or is even digitalised. We don’t even know what’s missing.  Some things are not binary either. Many important questions, issues or dilemmas do not have simple yes or no answers. Some are fluid and others depend deeply on context and circumstance.

Take love. AIs are proving rather good at selecting potential partners for people, but is AI really doing anything more than increasing the search or shortening the odds?  An AI can never love.

An AI can never understand what love feels like any more than it can translate the ‘knowledge’ that it is snowing outside into anything meaningful or meaningfully relate such knowledge to a remembrance of things past. And what of lost love? An AI can be punished, or rewarded, but ultimately an AI can never understand the fragility of our existence or what loss can feel like.

An AI can recognise or sense pain, but cannot physically feel it in the way that we do. An AI cannot regret either, any more than it can understand and accurately assess other human emotions such as courage, justice, faith, hope, greed or envy. How, for example, might an AI respond to a question like “should I leave Sarah for Jane?” Answering questions like this involves more than pattern recognition or if-then decision trees. It involves emotions, fears and dreams. Again, how do you code for that? Sure, you can make an AI aware of the emotional state of a human (affective computing), but, in my view, any understanding will be no more than skin deep. Again, AIs can’t do broad context or understand deep history. This is another reason why I believe there will be huge developments in narrow AI, but very little progress in broad or general AI for decades.
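To make that concrete, here is a deliberately crude sketch of what an if-then, keyword-scoring approach to the Sarah-and-Jane question amounts to. The sentiment weights and example sentences are invented purely for illustration; the point is that code like this can score words, but it has no access to the emotions, fears and dreams the question is really about.

```python
# A deliberately naive keyword scorer. The weights are invented for
# illustration: it "answers" without understanding anything about the
# people, the history, or the stakes involved.
SENTIMENT = {"love": 2, "happy": 1, "bored": -1, "argue": -2}

def naive_advice(statement):
    # Sum the weights of any known keywords; everything else scores zero.
    score = sum(SENTIMENT.get(word, 0) for word in statement.lower().split())
    return "stay" if score >= 0 else "leave"

print(naive_advice("we argue constantly and I am bored"))  # "leave"
print(naive_advice("I still love her"))                    # "stay"
```

The output looks like advice, but it is pattern matching all the way down, which is precisely the gap between narrow AI and the broad, contextual understanding the question demands.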

For instance, you can program an autonomous vehicle to predict the behaviour of pedestrians, recognising if someone is drunk or about to carelessly cross the road while texting. But how do you anticipate a sudden, irrational and unprecedented desire by someone to throw themselves in front of a car driving at 50 mph, maybe because they think that it might be funny? How do you combine self-driving cars that follow rules with people that don’t? Perhaps the hardest problem for artificial intelligence will be real human stupidity.

Finally, two points. The first concerns the popular idea that an AI might one day be capable of doing more or less anything a human can do, which might include stealing human jobs in vast numbers.

Such a belief supposes no human reaction.

Humans may rebel in an anti-technological fashion or enact laws restricting what an AI is allowed to do. It wouldn’t be the first time that we’ve invented something that we collectively agree not to use or decide to limit. Or governments might decide to tax AI, with the result that further developments are constrained. 

A spin-off from this thought might be another, which is that in the future we may decide that while true AI is desirable, it is neither urgent nor important relative to other matters, and is therefore a wasteful use of human imagination and resources.

We have many pressing problems in the world today, but machine intelligence isn’t one of them. We have climate change, poverty, wars, economic inequality, a lack of clean water and poor sanitation.

All these might be addressed using AI, but AI alone cannot solve any of them, because such issues involve politics. They involve not only the prioritisation of competing economic resources, but competing, and ever changing, policies and ideologies.

My last point simply concerns why. Why are we doing this? What purpose does radical automation, or true AI, ultimately serve? Is it related to simple arguments about economic efficiency and profit maximisation that benefit 1% of the world’s population? In general terms, that seems to be the case at the moment. Or might it be to do with ageing societies and a shortage of humans in some areas?

I’m not sure what the why is with AI. What I do think is that many of our current concerns about AI might not be about AI at all. AI is a focal point for other, much deeper, concerns about where we, as both a society and a species, are heading. The worry that AI will somehow ‘awaken’ and take over is similarly misplaced. Even if an AI did awaken, why would it occur to it to do such a thing? Without anger, fear, greed or jealousy, what might be the motive? I could be wrong, but even if I am, this isn’t happening anytime soon and it’s far more likely that our fears surrounding AI are merely the latest in a long historical wave of hype and hysteria.

What I do think is possible, and I guess this is inspirational in a sense, is that if AI even gets close to doing some of the remarkable things that a handful of people say is possible, this might shine a very strong spotlight on what other things we humans should focus our attention upon. Ultimately, true AI could create a conversation about what we, as a species, are for and might ignite a revolution in terms of how we view our own intelligence and how we understand each other.  Counter-intuitively, a truly intelligent AI might spark an intelligence explosion in humans rather than computers, which could well be one of the best things ever to happen to humanity.

Richard Watson is Futurist-in-Residence at the Centre for Languages, Culture and Communication at Imperial College London.

AI Ethics

I’ve almost finished writing my piece on what AI cannot do. In the meantime, I’ve had a thought. The real problem, surely, isn’t making machines behave ethically. The bigger problem is making humans do so.

Addressing the AI Hype

I’m quite close to completing my piece on what AI cannot do (and possibly never will), which is partly intended to address the hype and reduce the anxiety about what AI is capable of, especially with regard to future employment. In the meantime, this is worth a read. It’s a very sensible piece by Roger Schank. Bits of it I disagree with, but it’s on the money in many places. BTW, one thing to watch out for generally: if someone says AI will or won’t do something, ask them by when. By 2030? By 2100? Never?

Here’s Roger’s piece and here’s the link to the original. Thanks to Dr. Afric Campbell for spotting this btw.

So, to make clear what AI is really about I propose the ten question test. Here are ten questions that any person could easily answer and that no computer can answer well. Since I consider myself an AI person I would not say that no computer will ever be able to answer these. I hope we can figure it out. But AI isn’t here yet, despite what venture capitalists, giant companies, and the media keep saying. 

1. What would be the first question you would ask Bob Dylan if you were to meet him?

I am using this one because IBM’s Watson “met” Bob Dylan and told him that his songs were about love fading. First, I might point out that that conversation is insane. If you were to meet Bob Dylan you might have some things you’d want to know. I’d like to know if he feels that his songs are “literature.” I’d also like to know if he thinks he helped a generation feel stronger about protesting injustice and war. I would not count all the words he used and tell him which word appears most often. Watson does not behave as intelligent entities do. Intelligent entities are curious. They have things they want to know and can recognize who can answer questions that come to their minds about different arenas of life.
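The word-counting “analysis” being mocked here is, more or less, the following trick. A toy sketch (the lyric snippet is invented for illustration):

```python
from collections import Counter
import re

def most_common_words(text, n=3):
    """Word-frequency counting: the kind of 'analysis' that reports
    which word an author uses most, without understanding any of it."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

lyrics = "love is all you need love is real real is love"
print(most_common_words(lyrics))  # [('love', 3), ('is', 3), ('real', 2)]
```

Counting can tell you that “love” dominates; it cannot tell you whether the songs are literature, or what they meant to a generation.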

Here is another: 

2. Your friend told you, after you invited him for dinner, that he had just ordered pizza. What will he eat? Will he use a knife and fork? Why won’t he change his plans?

You will notice that eating is not mentioned in question 2. Neither are utensils. So how could an “AI” understand these questions? It would have to know about how people function in daily life. It would have to know that we eat what we order, and that when we say we ordered food it means that we intend to eat it, and it also means that we don’t want to waste it. It would also have to know that pizza is typically eaten with one’s hands. It might also know that Donald Trump famously eats pizza with a knife and fork and might mention that when asked.

3. I am thinking of driving to New York from my home in Florida next week. What do you think?

In order to answer the above question, one would need a model of why people ask questions like that one. It is hard to answer if you don’t know the person who is asking. If you do know that person you would also know something about what he is really asking. Does he have a car that is too old to make the trip? Maybe he has a brand new car and he is asking your advice about whether a long trip is a good way to break it in. Maybe he knows you live in New York and might have an idea whether the roads are icy there. Real conversation involves people who make assessments about each other and know what to say to whom based on their previous relationship and what they know about each other. Maybe the asker is really asking about a place to stay along the way (if, say, the person being asked lives in Virginia). Sorry, but no “AI” is anywhere near being able to have such a conversation because modern AI is not building complex models of what we know about each other.

4. Who do you love more, your parents, your spouse, or your dog?

What does this question mean and why would anyone ask it? Maybe the person being asked is hugging their dog all the time. Maybe the person being asked is constantly talking about his or her parents. People ask questions as well as answer them. Is there an “AI” that is observing the world and getting curious enough to ask a question about the inner feelings of someone with whom it is interacting? People do this all the time. “AIs” do not.

5. My friend’s son wants to drop out of high school and learn car repair. I told her to send him over. What advice do you think I gave him?

If you know me, you would know how I feel about kids being able to follow their own interests despite what school wants to teach. So an intelligent entity that I told this to would probably be able to guess what I said. Can you? No “AI” could.

6. I just saw an ad for IBM’s Watson. It says it can help me make smarter decisions. Can it?

Here is the ad: https://www.ispot.tv/ad/7Fip/ibm-watson-analytics-make-smarter-decisions-feat-dominic-cooper

My guess is that this is something Watson can do. It can analyze data, and with more information a person can make better decisions. Could Watson make the decision? Of course not. Decision making involves prioritizing goals and being able to anticipate the consequences of actions. Watson can do none of that.

7. Suppose you wanted to write a novel and you met Stephen King. What would you ask him?

Here is another Watson ad: https://www.ispot.tv/ad/A6k6/ibm-stephen-king-ibm-watson-on-storytelling

I have no idea what IBM is trying to say to the general public here. Apparently IBM is very proud that it can count how many times an author says the word “love.” If I wanted advice on writing a novel I doubt I would ask Stephen King, but here is one thing that is sure. Watson wouldn’t understand anything he said about writing a novel and Watson won’t be writing any novels any time soon. Now as it happens my AI group frequently worked on getting computers to write stories of one sort or another. We learned a lot from doing that. I am quite sure that IBM hasn’t even thought about what is involved in getting a computer to write novels. Having something the computer wants to say? Having had an experience that the computer is bursting to describe to people? That would be AI.

8. Is there anything else I need to know?

When might you ask such a question? You might have had a conversation with a chat bot and found out how to get somewhere you were trying to go. Then you might (if you were talking to a person) ask if there is anything else you needed to know. Answering that question involves knowing whom you are talking to. (Oh, yeah, there is a great Ethiopian restaurant nearby, and watch out for speed traps.) Let’s see the chat bot that can answer that question.

9. I can’t figure out how to grow my business. Got any ideas?

It is obvious why this is a difficult question. But, in business, people have conversations like that all the time. They use their prior experiences to predict future experiences. They make suggestions based on stuff they themselves have done. They give advice based on cases in their own lives and they usually tell personal stories to illustrate their points. That is what intelligent conversation sounds like. Can AI do that? Not today, but it is possible. Unfortunately there is no one that I know of who is working on that. Instead they are working on counting words and matching syntactic phrases.

They are also working on AI document checkers that will help Word with spell check, or grammar check. “NeuroGrammar™ uses its advanced neural-network artificial intelligence algorithms in order to analyse every noun phrase and verb phrase in every sentence for syntactic and semantic errors.”

How marvelous. So here is my last question:

10. Does what I am writing make sense?

Amazingly, this is hard. Why? Because in order to understand my points you need to match them to things you already think and see if I have helped you think about things better or decide that you disagree with what I am saying here based on your own beliefs. You already have an opinion on whether my writing style was comprehensible and whether the points I made made sense to you. You can do that. AI cannot. Do I think we could do that someday in AI? Maybe. We would have to have a complete model of the world and an understanding of what kinds of ideas people argue for and what counterarguments are reasonable. Intelligent people all do this. “AIs” do not. An “AI” that understood documents would not be a grammar checker.

It would be nice if people stopped pushing AI that is based on statistics and word counts and “AI people” tried to do the hard work that making AI happen would require. 

Thoughts for the day

In Cambridge, again, yesterday for a workshop on autonomous flight. We somehow got onto autonomous vehicles on the ground and there was a comment from one participant that I think is worth sharing.

Essentially, it would be OK to hail an autonomous (self-driving) taxi/pod/whatever (with or without a driver/attendant), but NOT OK if it were to contain another passenger in the back (someone they didn’t know). So, trust in an autonomous vehicle was 100%, but trust in an anonymous person was 0%. This was said by someone in their twenties and might be representative of generational attitudes towards trust. It could also be related to the recent scandals surrounding Uber.

Reminds me of a comment by an elderly man who lived in a huge house in the middle of a vast estate. He didn’t like walking in his own grounds, which were open to the public, because he might bump into someone that he hadn’t been introduced to.

The other thing that caught my attention yesterday was a sign in the ‘window’ of Microsoft Research that said simply “Creating new realities”. Can you have more than one reality? Is reality fixed? A mountain, for example, is there whether you want it to be or not. Or can you create new realities by overlaying virtual data, for instance?

My initial reaction was that there is only one authentic reality, but then I realised this was nonsense in a sense. How I perceive things will be different to how you perceive things but, more fundamentally, how a dog, or a bee, sees things is different to how humans see things. Butterflies, for instance, can see in the ultraviolet wavelength. Humans see using the visible light spectrum (I think that’s correct, correct me if it’s not!).

Just because we see a flower as yellow, doesn’t necessarily mean the flower is yellow for other animals. And there’s smell of course, which can vary significantly between animals.

And this all relates to AI ethics…because what I think is ethical isn’t necessarily the same as what you think is ethical and differences can be magnified when you start dealing with countries and cultures. For instance, what China sees as ethical behaviour, for a drone or autonomous passenger vehicle, will be different to how the US sees it. And you thought Brexit was complicated.

What Can’t AI Do?

Image: All rights reserved, Richard Watson/Zeljko Zoricic

This was supposed to be so easy. A quick post on what AI is incapable of doing. Things we might congratulate ourselves about, especially those with an arts degree. Things that might underpin future-proof human employment perhaps. But the more I dug into this the more things became complicated. The first problem was when? You mean incapable of ever? Or incapable of in 20 years? Define your terms! Ever is a tough one, so I’m leaving ever alone, forever. But even if you narrow it down to, say, the year 2050, things remain muddled, largely because I keep meeting people who disagree with me. People who know a heck of a lot more than I do about this.

Regardless, here’s where I’m at with my list currently. An initial (draft) list of things AI will not be capable of doing well or at all by the year 2050. I encourage you to disagree with me or add something I’ve not thought of.

1. Common sense

This could well be the hardest thing for AI to crack, because common sense requires broad AI, not narrow AI and that’s nowhere in sight. I mean common sense in the broadest possible sense. Obviously some humans struggle with common sense too, but that’s another matter.

2. Abstract thinking

The ability to distil past experience and apply it in totally new domains or to novel concepts would appear to be a human domain. The ability to think of something in terms of something else. Perhaps the ability to think about your own thinking too. The obvious implication here is around invention.

3. Navigation

This is similar to common sense, but specifically refers to the ability to move around and understand our ever-changing and highly complex world of objects and environments. AIs can understand one thing, but generally not another and certainly not the whole. There is no deep context. A surgical robot understands surgery, but doesn’t understand anything else and much less why it is doing what it is doing. A strong link here is with robotics (embodied AI). A 5-year-old kid has better navigational skills than most AIs.

4. Emotional intelligence

IQ can be replicated (someone, please tell our schools), but EQ should remain a human domain. I am more than aware of affective computing and the various machines that can judge and respond to human emotions (and machines with compelling, even alluring, characters are coming soon), but at the end of the day all this is fake, and I suspect that we might see through it. I think AIs might struggle not only with the complexity and nuance of human emotion, but with the fact that humans aren’t very logical some of the time. For AIs to deal effectively with human emotions, they would have to tap into our unconscious selves. Not impossible, but very hard. Perhaps a true test of general AI is the day that a computer gives the wrong answer to a question to spare someone’s feelings.

5. Creativity

I know AIs can write and compose music. They can think originally and creatively too, as AlphaGo recently demonstrated. But high-end creativity? The example I thought of was that while an AI can paint, it doesn’t understand the history of art and couldn’t invent something like cubism, partly because of a lack of context, and partly because cubism involves rule breaking. Cubism was to some extent illogical. However, I’m not convinced by my own argument. I think it’s possible that AIs could develop radically new forms of art. But, then again, would it matter? Would it mean anything? Would it touch on the human condition? If it neither matters nor means anything to people, then I’m not sure it could be called art. Although if we decided it was art, then perhaps it would be. One further thought here: creativity stems, to a degree, from making mistakes and from curiosity. How do you code that?

6. Humour

Could an AI ever write a truly funny joke? I suspect not, because jokes generally require a lateral leap or an unexpected change of direction that is to some extent nonsensical. Example: Joseph says to the innkeeper in Bethlehem, “No, I said I want to see the manager!” (Better example: me in a supermarket to the overweight check-out guy behind the till: “How are you today?” Him to me: “Oh, you know, living the dream.”)

7. Compassion

I think this one is safe. OK, you can programme an AI to follow ethical rules, but compassion often involves rule breaking or weighing up two factors that are both true but in conflict with each other. The difference between the letter and spirit of the law. Broad context is part of this again. This links to another thought, perhaps, which is that AIs will never be people persons (good with people). Do humans care? Possibly not.

8. Mortality/fear of death

I can’t see how an AI can be afraid of death without consciousness, and as far as I can see that’s nowhere in sight. The fact that humans are fragile and afraid of dying is hard to replicate (although there is that bit with HAL in 2001).

9. Learning from very small data sets

Can an AI learn from limited experience in the same way that humans do? I’m not sure; maybe. There might also be a link here to what might be termed a sixth sense: the ability of humans to infer or predict that something will happen, going beyond labelled data. What if there is no data, but you need to make a decision or act?
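For contrast with big-data machine learning, here is a minimal sketch in Python (all the points and labels are invented for illustration) of a nearest-neighbour classifier making plausible guesses from just four labelled examples. Unlike a human, though, it still needs some data and a hand-picked notion of similarity.

```python
import math

# Toy illustration of learning from a tiny data set: a 1-nearest-neighbour
# classifier generalising from only four labelled examples.
# All points and labels are invented for illustration.
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(point):
    """Return the label of the closest known example."""
    return min(examples, key=lambda ex: math.dist(point, ex[0]))[1]

print(classify((1.1, 0.9)))  # "cat": nearest to the cat examples
print(classify((5.1, 4.9)))  # "dog": nearest to the dog examples
```

The interesting question is the one the sketch dodges: humans supply the notion of similarity themselves, and can act sensibly even when the examples list is empty.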

10. Love

Again, without consciousness? (And don’t give me that nonsense about AIs suddenly waking up. How?) I can’t see it. The same might apply to being kind, unless the need for kindness can somehow be deduced from a set of rules. But if that’s true, such kindness would be neither genuine nor sincere. Again, do people care?

Robots Vs. People

I attended a conference on AI at Cambridge University last week, and one of the most interesting points raised was why we are developing robots to look after old people. Why not just use people? The answer could be that we are running out of people, especially younger people, as is the case in Japan, but this simply raises another question. Why don’t we either educate younger people on the importance of looking after older people, especially their own relatives, or simply conscript younger people into the NHS for short periods (a wonderfully provocative idea proposed by Prof. Ian Maconochie at an Imperial College London lecture the other week)?

What can’t computers do?

Are you worried that your job could be ravaged by a robot or consumed by a computer? You probably should be, judging by a study by Oxford University, which said that up to half of jobs in some countries could disappear by the year 2030.

Similarly, a Pew Research Center study found that two-thirds of Americans believe that in 50 years’ time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe. I will come back to the Oxford study and the inconsistency of the two Pew statements later, but in the meantime, how might we ensure that our jobs, and those of our children, are safe in the future?

Are there any particular skills or behaviours that will ensure that people are employed until they decide they no longer want to be? Indeed, is there anything we currently do that artificial intelligence will never be able to do no matter how clever we get at designing AI?

To answer these questions, we should perhaps first consider what it is that AI, automated systems, robots and computers do today and then speculate in an informed manner about what they might be capable of in the future.

A robot is often defined as an artificial or automated agent that’s programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly being used to replace people when people are too inefficient or too valuable. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as manufacturing, or for beating humans at rule-based games. A logical AI can be taught to drive cars, another rule-based activity, although self-driving cars do run into one rather messy problem, which is people, who can be emotional, irrational or prone to ignoring rules.

The key word here is logic. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. If X then Y and so on. At least that’s been true historically. But machine learning now means that machines can be left to invent their own rules and logic based on patterns they recognise in large sets of data. In other words, machines can learn from experience much as humans do.
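The difference can be caricatured in a few lines of Python (the spam-filter example and all its numbers are invented for illustration): one rule is written by a human up front, the other is derived from labelled data.

```python
# Hand-coded rule: a human writes the logic up front ("if X then Y").
def spam_rule(message):
    return "win a prize" in message.lower()

# "Learned" rule: the threshold comes from labelled data rather than being
# written by hand. This is a deliberately tiny caricature of machine
# learning; real systems fit millions of parameters, but in the same spirit.
training = [(2, False), (3, False), (9, True), (11, True)]  # (exclamation marks, is_spam)

def learn_threshold(data):
    most_ham = max(count for count, is_spam in data if not is_spam)
    least_spam = min(count for count, is_spam in data if is_spam)
    return (most_ham + least_spam) / 2  # split the difference between classes

threshold = learn_threshold(training)
print(threshold)       # 6.0
print(10 > threshold)  # True: a message with 10 exclamation marks is flagged
```

In the first case a programmer decided the endpoint; in the second, the data did, which is the shift the paragraph above describes.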

Having said this, at the moment it’s tricky for a robot or any other type of computer to make me a good cup of coffee, let alone bring it upstairs and persuade me to get out of bed and drink it. That’s the good news. A small child has more general intelligence and dexterity than even the most sophisticated AI and no robot is about to eat your job anytime soon, especially if it’s a skilled or professional job that involves fluid problem solving, lateral thinking, common sense, and most of all Emotional Intelligence or EQ.

Moreover, we are moving into a world where creating products or services is not enough. Companies must now tell stories and produce good human experiences and this is hard for machines because it involves appealing to human hearts as well as human heads.

The future will be about motivating people using stories, not just numbers. Companies will need to be warm, tolerant, personable, persuasive and, above all, ethical, and this involves EQ alongside IQ. Equally, the world of work is moving away from command-and-control pyramidal hierarchies where bosses sit at the top and shout commands at people. Leadership is becoming more about informal networks in which inspiration, motivation, collaboration and alliance building come together for mutual benefit. People work best when they work with people they like, and it’s hard to see how AIs can help when so much depends on passion, personality and persuasiveness.

I’m not for a moment saying that AI won’t have a big impact, but in many cases I think it’s more a case of AI plus human not AI minus human. AI is a tool and we will use AI as we’ve always used tools – to enhance and extend human capabilities.

You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be taken or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy, but they aren’t generally capable of original thought, especially thoughts that speak to what it means to be human. We’ve already seen a computer that studied Rembrandt and created a ‘new’ Rembrandt painting that could fool an expert. But that’s not the same as studying art history and subsequently inventing abstract expressionism. This requires rule breaking. I’m not saying that AI-created creativity that means something or strikes a chord can’t happen, simply that AI is pretty much stuck because of a focus on the human head, not the human heart. Furthermore, without consciousness I cannot see how anything truly beautiful or remotely threatening can ever evolve from code that’s ignorant of broad context. It’s one thing to teach a machine to write poetry, but it’s entirely another to write poetry that connects with and moves people on an emotional level and speaks to the irrational, emotional and rather messy business of being a human being. There’s no ‘I’ in AI. There’s no sense of self, no me, no you and no us.

There is some bad news though. An enormous number of current jobs, especially low-skilled jobs, require none of the things I’ve just talked about. If your job is rigidly rule-based or depends upon the acquisition or application of knowledge based upon fixed conventions, then it’s ripe for digital disintermediation. This probably sounds like low-level data-entry jobs such as clerks and cashiers, which it is, but it also covers some accountants, financial planners, farmers, paralegals, pilots, medics, and aspects of both law enforcement and the military.

But I think we’re missing something here and it’s something we don’t hear enough about. The technology writer Nicholas Carr has said that the real danger with AI isn’t simply AI throwing people out of work, it’s the way that AI is de-skilling various occupations and making jobs and the world of work less satisfying.

De-skilling almost sounds like fun. It sounds like something that might make things easier or more democratic. But removing difficulty from work doesn’t only make work less interesting, it makes it potentially more dangerous.

Remoteness and ease, in various forms, can remove situational awareness, for example, which opens up a cornucopia of risks. The example Carr uses is airline pilots, who, through increasing amounts of automation, are becoming passengers in their own planes. We are removing not only the pilot’s skill, but the pilot’s confidence to use that skill and judgement in an emergency.

Demographic trends also suggest that workforces around the world are shrinking, due to declining fertility, so unless the level of workplace automation is significant, the biggest problem most countries could face is finding and retaining enough talented workers, which is where the robots might come in. Robots won’t be replacing anyone directly, they will just take up the slack where humans are absent or otherwise unobtainable. AIs and robots will also be companions, not adversaries, especially when we grow old or live alone. This is happening in Japan already.

One thing I do think we need to focus on, not only in work, but in education too, is what AI can’t do. This remains uncertain, but my best guess is that skills like abstract thinking, empathy, compassion, common sense, morality, creativity, and matters of the human heart will remain the domain of humans unless we decide that these things don’t matter and let them go. Consequently, our education systems must urgently move away from simply teaching people to acquire information (data) and apply it according to rules, because this is exactly what computers are so good at. To go forwards we must go backwards to the foundations of education and teach people how to think and how to live a good life.

Finally, circling back to where I started, the Oxford study I mentioned at the beginning is flawed in my view. Jobs either disappear or they don’t. Perhaps the problem is that the study, which used an algorithm to assess probabilities, was too binary and failed to make the distinction between tasks being automated and jobs disappearing.

As to how people can believe that robots and computers will “probably” or “definitely” perform most of the jobs in the future, while simultaneously believing that their own jobs will “probably” or “definitely” be safe: I think the answer is that humans have hope and humans adapt. For these two reasons alone, I think we’ll be fine in the future.
