I’m suffering from a total lack of motivation, possibly because neither of the two books I’m writing has a deadline. It could also be the school holidays – too many people around, too much noise, too much movement. I’d write more about this if I could, but obviously I can’t. Here’s something from the New Yorker instead.
I’m just starting a new timeline going out a few million years to speculate about the future of space/space travel/space colonisation. If anyone has any outlandish ideas I’m all ears. The only proviso is the ideas must be technically/theoretically possible.
This was supposed to be so easy. A quick post on what AI is incapable of doing. Things we might congratulate ourselves about, especially those with an arts degree. Things that might underpin future-proof human employment, perhaps. But the more I dug into this the more complicated things became. The first problem was when? You mean incapable of ever? Or incapable of in 20 years? Define your terms! Ever is a tough one, so I’m leaving ever alone, forever. But even if you narrow it down to, say, the year 2050, things remain muddled, largely because I keep meeting people who disagree with me. People who know a heck of a lot more than I do about this.
Regardless, here’s where I’m at with my list currently. An initial (draft) list of things AI will not be capable of doing well or at all by the year 2050. I encourage you to disagree with me or add something I’ve not thought of.
1. Common sense
This could well be the hardest thing for AI to crack, because common sense requires broad AI, not narrow AI, and that’s nowhere in sight. I mean common sense in the broadest possible sense. Obviously some humans struggle with common sense too, but that’s another matter.
2. Abstract thinking
The ability to distil past experience and apply it in totally new domains or to novel concepts would appear to be a human domain. The ability to think of something in terms of something else. Perhaps the ability to think about your own thinking too. The obvious implication here is around invention.
3. Navigating the physical world
This is similar to common sense, but specifically refers to the ability to move around and understand our ever-changing and highly complex world of objects and environments. AIs can understand one thing, but generally not another, and certainly not the whole. There is no deep context. A surgical robot understands surgery, but doesn’t understand anything else, much less why it is doing what it is doing. A strong link here is with robotics (embodied AI). A 5-year-old kid has better navigational skills than most AIs.
4. Emotional intelligence
IQ can be replicated (someone, please tell our schools), but EQ should remain a human domain. I am more than aware of affective computing and various machines that can judge and respond to human emotions (and machines with compelling and even alluring characters are coming soon), but all this is fake at the end of the day, and I suspect that we might see through it. I think AIs might struggle not only with the complexity and nuance of human emotion, but with the fact that humans aren’t very logical some of the time. For AIs to deal effectively with humans they would need to deal with human emotions, and to do that effectively they would have to tap into our unconscious selves. Not impossible, but very hard. Perhaps a true test of general AI is the day that a computer gives the wrong answer to a question to spare someone’s feelings.
5. Creativity
I know AIs can write and compose music. They can think originally and creatively too, as AlphaGo recently demonstrated. But high-end creativity? The example I thought of was that while an AI can paint, it doesn’t understand the history of art and couldn’t invent something like cubism, partly because of a lack of context, and partly because cubism involves rule breaking. Cubism was to some extent illogical. However, I’m not convinced by my own argument. I think it’s possible that AIs could develop radically new forms of art. But, then again, would it matter? Would it mean anything? Would it touch on the human condition? If it neither matters nor means anything to people then I’m not sure it could be called art. Although, if we decided it was art, then perhaps it would be. One further thought here: creativity stems, to a degree, from making mistakes and curiosity. How do you code that?
6. Humour
Could an AI ever write a truly funny joke? I suspect not, because jokes generally require a lateral leap or unexpected change of direction that is to some extent nonsensical. Example: Joseph says to the innkeeper in Bethlehem, “No, I said I want to see the manager!” (Better example: me in a supermarket to the overweight check-out guy behind the till: “How are you today?” Him to me: “Oh, you know, living the dream.”) See here for more.
7. Compassion
I think this one is safe. OK, you can programme an AI to follow ethical rules, but compassion often involves rule breaking, or weighing up two factors that are both true but in conflict with each other. The difference between the letter and the spirit of the law. Broad context is part of this again. This links to another thought, perhaps, which is that AIs will never be people persons (good with people). Do humans care? Possibly not.
8. Mortality/fear of death
I can’t see how an AI can be afraid of death without consciousness, and as far as I can see that’s nowhere in sight. The fact that humans are fragile and afraid of dying is hard to replicate (although there is that bit with HAL in 2001).
9. Learning from very small data sets
Can an AI learn from limited experience in the same way that humans do? I’m not sure, maybe. There might be a link towards what might be termed a sixth sense here too – the ability of humans to infer or predict that something will happen that goes beyond labelled data. What if there is no data, but you need to make a decision or act?
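The gap the post is pointing at can be sketched crudely in code. The example below is my own illustration, not from the post: it “learns” each class from a single labelled example and classifies by nearest-neighbour distance. That this counts as machine learning from tiny data at all, while remaining nothing like human inference, is rather the point.

```python
import math

# One-shot classification sketch: each class is represented by a single
# example vector, a crude stand-in for human learning from tiny data sets.
# All names and data here are hypothetical.

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_shot_classify(query, examples):
    """examples maps label -> one feature vector. Returns the nearest label."""
    return min(examples, key=lambda label: distance(query, examples[label]))

# Hypothetical 2-D features (say, size and weight) for two animals
examples = {"cat": (0.3, 0.4), "dog": (0.8, 0.9)}
print(one_shot_classify((0.35, 0.45), examples))  # prints "cat"
```

Real “few-shot” systems are far more sophisticated, of course, but they still depend on heavy pre-training on large data sets, which is exactly the dependence humans don’t seem to have.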
10. Love
Again, without consciousness? (And don’t give me that nonsense about AIs suddenly waking up. How?) I can’t see it. The same might apply to being kind, unless the need for kindness can somehow be deduced from a set of rules. But if that’s true, such kindness would not be genuine, not sincere. Again, do people care?
“One small town, Williamson (West Virginia), with a population of just 3,000, shipped in more than 20 million opioid pills, mostly oxycodone and hydrocodone, in a seven year period.” (The New Redneck Rebellion, FT Life & Arts, 29-30 June 2019).
I’m looking out of a window at 39,000 feet thinking about what the lack of oxygen outside might be doing to my brain. I’m wondering why my mind is wandering and puzzling how it’s possible that the altitude, or perhaps it’s the expansive horizon, is elevating my thinking. I’m also questioning why I never think like this when I’m frantically searching for a parking space at the Heston Motorway Services (Eastbound), on a modestly miserable Monday.
Astronauts have reported similar feelings of wonderment and even bewilderment looking back at the earth from higher up in space. Indeed, there’s a phrase for this shift in perception – it’s called the overview effect and describes how daily distractions disappear when viewed from such an elevated perspective. One can rise above any inconsequential thoughts and see everything as being connected to everything else, at which point one can glimpse the faint reflection of human continuity. The vastness of space somehow makes people feel simultaneously special and totally irrelevant. This can induce feelings of serenity or absolute panic. Some astronauts have even found God floating 500 kilometres above the earth’s surface.
I spend a lot of time on planes. Given the right combination of seat number, seat incline and flight time, thoughts like these occur with scheduled regularity. My best guess is that it’s because there’s a certain level of disconnection at 39,000 feet. I am often alone too, and while other passengers could use digital devices to make calls or send emails on planes, most generally don’t. Planes are among the last sacred spaces, places where people instinctively feel that any lingering silence or mental privacy should be preserved.
There is also the thought that you cannot get off. Once aboard, you have to tightly fasten your seat belt and surrender yourself to this truth. This constraint can result in a certain calmness, which some people cite as being a prerequisite for fresh thinking. There is simply less we can do at 39,000 feet. This means we can think more about where we’ve been or where we are going in a physical and metaphysical sense. “If you want to change where you’re at, you have to change where you’re at” as a friend of mine once said.
I’m out of the office. Well, I’m away from my usual office at any rate. I’m writing, and my mind has wandered off into the distance to consider how a room, or a view at least, can affect how one thinks or what one writes. I just searched ‘writers’ rooms’ and got the load of clutter below. I’m not sure I could write surrounded by that much distraction. On the other hand, clutter can result in accidental combinations of information, so perhaps what you really need is both: clutter to input random information, and then emptiness to start connecting it in novel ways.
Something I wrote for Fast Company 14 years ago (!!!) about the relationship between spaces and ideas here.
As AI becomes more critical to the inner workings of the world, more attention is being paid to the inner workings of computer code. AIs already make millions of decisions about who gets a job, who gets a home loan or even who goes to jail, so being able to check whether or not an algorithm is biased, or just plain wrong, is increasingly important.
Some errors are simply that: accidents. But others are the result of what’s been called ‘the white guy problem.’ Most coders, especially in the US, are male. 88 per cent of all patents driving big tech are developed by all-male teams; all-female teams generate just 2 per cent of patents. Hence, conscious or unconscious biases can creep into any code, with the result that facial recognition software doesn’t recognise dark skin, or facial expression software thinks most Asian people are blinking. The situation gets even more serious when it comes to predicting criminality.
Algorithms designed to predict the likelihood of defendants committing further crimes have been shown to flag black defendants as twice as likely to re-offend, which has no foundation in fact. In a less serious, but nevertheless shocking, instance, algorithms made black residents in some areas pay 50 per cent more for their car insurance than white customers, even after factoring in the effects of low incomes and actual crime. It’s seriously unlikely that subconscious bias built into the code played no part in this.
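For what it’s worth, checking an algorithm’s decisions for this kind of skew can start very simply. The sketch below is my own illustration (the function name and data are hypothetical, not from any of the cases above): it computes a “disparate impact” ratio, where a protected group receiving favourable outcomes at below roughly 80 per cent of another group’s rate is conventionally treated as a red flag.

```python
# Disparate-impact sketch: compare the favourable-outcome rate of a
# protected group against the highest rate among other groups.
# A ratio below ~0.8 is commonly treated as evidence of possible bias.

def disparate_impact(outcomes, groups, protected, favourable=1):
    """outcomes and groups are parallel lists; returns the ratio of the
    protected group's favourable rate to the best other group's rate."""
    def rate(group):
        subset = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in subset if o == favourable) / len(subset)
    other_rate = max(rate(g) for g in set(groups) if g != protected)
    return rate(protected) / other_rate

# Hypothetical loan decisions: 1 = approved, 0 = declined
outcomes = [1, 1, 0, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(disparate_impact(outcomes, groups, protected="A"), 2))  # prints 0.67
```

A ratio like this is only a crude screen, not proof of bias either way, but the fact that the check is a dozen lines of code underlines how little excuse there is for not running one.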
You might think that Silicon Valley in California would be the last place to suffer from equality issues, but that’s simply not the case and not just with code. Several of the founders of high profile tech companies have been forced to resign due to what amounts to sexist conduct. According to many observers, men working in big tech either suffer from ‘on the spectrum’ awkwardness around women or they are hostile towards women and minorities. A study by the Center for Talent Innovation, for example, found that 52 per cent of women had quit their jobs in tech because of a “hostile environment” while a staggering 62 per cent had suffered sexual harassment. There has been progress, but no government really wants to tackle this head on while these companies are so powerful.