I sometimes get asked how I look at things, especially how I know what to notice and what to ignore. My glib answer is often the rule of 3: if three people mention the same thing, or I see three examples of something in different contexts, I tend to pay attention.
A good example is Explainable AI. Early this year a coder mentioned an idea he called ‘software that rusts’. For some unexplainable reason this instantly grabbed my attention. It was somewhat illogical, possibly even contradictory, but there was something in it. Digital is pristine and identical, yet humans like imperfection and uniqueness.
Last week I was talking with some students at the Dyson Lab at Imperial College and we got onto AI-to-AI interactions, and I came up with the idea of Digital Provenance. This would be a bit like blockchain, in the sense that you could see the history of something digital, but with a far richer and more human storyline. Digital products would be able to reveal not only where they were coded, but also when and by whom. In other words, the idea of provenance, or ‘farm to fork’ eating, transferred to software code or anything else that is digital.
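To make the idea a little more concrete, here is a minimal sketch of what one entry in such a provenance trail might look like. Everything here is illustrative: the field names, the chaining-by-hash (borrowed from how blockchains link records), and the example values are assumptions, not any real system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """One human-readable entry in a digital artifact's history."""
    artifact: str   # what was made, e.g. a source file (illustrative name)
    where: str      # where it was coded
    when: str       # when (ISO date)
    author: str     # by whom
    prev_hash: str  # hash of the previous record, blockchain-style chaining

    def digest(self) -> str:
        # Hash the whole record so the next entry can chain to it;
        # altering any field here would break every later link.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Two chained records: the second points back at the first's digest.
genesis = ProvenanceRecord("parser.py", "London", "2018-03-01", "Ada", "")
update = ProvenanceRecord("parser.py", "Sydney", "2018-06-12", "Grace",
                          genesis.digest())
print(update.prev_hash == genesis.digest())  # True while the history is intact
```

The point of the hash chain is simply that the ‘farm to fork’ story cannot be quietly rewritten: change where or by whom something was made and every subsequent link stops matching.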
Then, the day before yesterday, I was with some people and the concept of Explainable AI came up. The best way of thinking about this might be to think of a black box that can be opened up. I think this will become increasingly important as and when accidents happen with AI and fully autonomous systems. These machines need to explain themselves to us. They need to be able to argue with us over what they did and why, and to reveal their biases if asked. At the moment most of these AI systems are secret, and neither users, regulators nor governments can look inside. But if we start trusting our lives to these systems, then this has to change.
BTW, since I’m getting into AI, I’d like to highlight a problem that’s been around for centuries – human stupidity. In a sense, the issue going forward isn’t artificial intelligence, it’s real human stupidity. In particular, the human stupidity caused by an overreliance on machines. As Sherry Turkle once asked, “What if one of the consequences of machines that think is people that don’t?” There is a real danger of a culture of learned incompetence and human de-skilling arising from our use of smart machines.
Silly example: I was at London Bridge Station earlier in the week trying to get on the Jubilee Line. The escalators were broken and the queues were horrific. So I asked why we couldn’t use the escalators. “Because they’re broken,” was the response. “But they’re steps,” I replied. “They still work.” OMG.