What Can’t AI Do?

Image: All rights reserved, Richard Watson/Zeljko Zoricic

This was supposed to be so easy. A quick post on what AI is incapable of doing. Things we might congratulate ourselves about, especially those of us with an arts degree. Things that might, perhaps, underpin future-proof human employment. But the more I dug into this, the more complicated things became. The first problem was when? You mean incapable of ever? Or incapable of in 20 years? Define your terms! Ever is a tough one, so I’m leaving ever alone, forever. But even if you narrow it down to, say, the year 2050, things remain muddled, largely because I keep meeting people who disagree with me. People who know a heck of a lot more than I do about this.

Regardless, here’s where my list currently stands: an initial (draft) list of things AI will not be capable of doing well, or at all, by the year 2050. I encourage you to disagree with me or add something I’ve not thought of.

1. Common sense

This could well be the hardest thing for AI to crack, because common sense requires broad AI, not narrow AI, and broad AI is nowhere in sight. I mean common sense in the broadest possible sense. Obviously some humans struggle with common sense too, but that’s another matter.

2. Abstract thinking

The ability to distil past experience and apply it to totally new domains or novel concepts would appear to be a human domain. The ability to think of something in terms of something else. Perhaps the ability to think about your own thinking too. The obvious implication here is around invention.

3. Navigation

This is similar to common sense, but specifically refers to the ability to move around and understand our ever-changing and highly complex world of objects and environments. AIs can understand one thing, but generally not another, and certainly not the whole. There is no deep context. A surgical robot understands surgery, but doesn’t understand anything else, much less why it is doing what it is doing. A strong link here is with robotics (embodied AI). A 5-year-old kid has better navigational skills than most AIs.

4. Emotional intelligence

IQ can be replicated (someone, please tell our schools), but EQ should remain a human domain. I am more than aware of affective computing and the various machines that can judge and respond to human emotions (and machines with compelling and even alluring characters are coming soon), but at the end of the day all this is fake, and I suspect that we might see through it. I think AIs might struggle not only with the complexity and nuance of human emotion, but with the fact that humans aren’t very logical some of the time. For AIs to deal effectively with humans, they would need to deal with human emotions, and to do that effectively they would have to tap into our unconscious selves. Not impossible, but very hard. Perhaps a true test of general AI is the day that a computer gives the wrong answer to a question to spare someone’s feelings.

5. Creativity

I know AIs can write and compose music. They can think originally and creatively too, as AlphaGo recently demonstrated. But high-end creativity? The example I thought of was that while an AI can paint, it doesn’t understand the history of art and couldn’t invent something like cubism, partly because of a lack of context, and partly because cubism involves rule breaking. Cubism was to some extent illogical. However, I’m not convinced by my own argument. I think it’s possible that AIs could develop radically new forms of art. But, then again, would it matter? Would it mean anything? Would it touch on the human condition? If it neither matters nor means anything to people, then I’m not sure it could be called art. Although if we decided it was art, then perhaps it would be. One further thought here: creativity stems, to a degree, from making mistakes and from curiosity. How do you code that?

6. Humour

Could an AI ever write a truly funny joke? I suspect not, because jokes generally require a lateral leap or unexpected change of direction that is to some extent nonsensical. Example: Joseph says to the innkeeper in Bethlehem, “No, I said I want to see the manager!” (Better example: me in a supermarket to the overweight check-out guy behind the till: “How are you today?” Him to me: “Oh, you know, living the dream.”) See here for more.

7. Compassion

I think this one is safe. OK, you can program an AI to follow ethical rules, but compassion often involves rule breaking, or weighing up two factors that are both true but in conflict with each other. The difference between the letter and the spirit of the law. Broad context is part of this again. This links to another thought, perhaps, which is that AIs will never be people persons (good with people). Do humans care? Possibly not.

8. Mortality / fear of death

I can’t see how an AI can be afraid of death without consciousness, and consciousness is nowhere in sight. The fact that humans are fragile and afraid of dying is hard to replicate (although there is that bit with HAL in 2001).

9. Learning from very small data sets

Can an AI learn from limited experience in the same way that humans do? I’m not sure, maybe. There might be a link here to what might be termed a sixth sense too – the ability of humans to infer or predict that something will happen in a way that goes beyond labelled data. What if there is no data, but you need to make a decision or act?

10. Love

Again, without consciousness? (And don’t give me that nonsense about AIs suddenly waking up. How?) I can’t see it. The same might apply to being kind, unless the need for kindness can somehow be deduced from a set of rules. But if that’s true, such kindness would be neither genuine nor sincere. Again, do people care?

2 thoughts on “What Can’t AI Do?”

  1. Something just sent to me…

    Dear Richard,

    Sorry for the late reply, I was out of town for a family issue. In the meantime, I kept your question in my mind. After my research, I found a couple of things that AI can’t do. As a tech-woman, it really took a long time to find out. Here is my list. I hope it helps you. Sixth Sense / Presence (it is an unexplainable thing; sometimes we know something will happen and it happens) (it is different from prediction: a machine predicts by using labelled data, but what if there is no data?)

    Fear of death 

    Feeling through touch (maybe it’s chemical, maybe emotional, but when we feel alone or sad, as human beings we touch and share things we don’t even speak about, and then feel comforted. Maybe you could call it aura)

  2. I almost deleted this, it’s weak. But I’ll leave it up and do something with more weight. An essay, not a list. Apologies.
