What can’t computers do?

Are you worried that your job could be ravaged by a robot or consumed by a computer? You probably should be, judging by a study from Oxford University, which said that up to half of jobs in some countries could disappear by the year 2030.

Similarly, a Pew Research Center study found that two-thirds of Americans believe that in 50 years’ time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe. I will come back to the Oxford study and the inconsistency between the two Pew statements later, but in the meantime, how might we ensure that our jobs, and those of our children, are safe in the future?

Are there any particular skills or behaviours that will ensure that people are employed until they decide they no longer want to be? Indeed, is there anything we currently do that artificial intelligence will never be able to do no matter how clever we get at designing AI?

To answer these questions, we should perhaps first consider what it is that AI, automated systems, robots and computers do today and then speculate in an informed manner about what they might be capable of in the future.

A robot is often defined as an artificial or automated agent that’s programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly being used to replace people when people are too inefficient or too valuable. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as manufacturing or for beating humans at rule-based games. A logical AI can be taught to drive cars, another rule-based activity, although self-driving cars do run into one rather messy problem, which is people, who can be emotional, irrational or inclined to ignore the rules.

The key word here is logic. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. If X then Y and so on. At least that’s been true historically. But machine learning now means that machines can be left to invent their own rules and logic based on patterns they recognise in large sets of data. In other words, machines can learn from experience much as humans do.
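To make the distinction concrete, here is a minimal, purely illustrative Python sketch (my own, not drawn from any study mentioned here). The first function is the classic hand-written “if X then Y” rule; the second derives its own rule, a simple threshold, from a handful of invented example records. Every name and number in it is hypothetical.

```python
# Illustrative sketch only: hand-written rules versus a rule learned from data.

# 1. Classic automation: a human encodes the rule up front (if X then Y).
def approve_loan_rule_based(income: float, debts: float) -> bool:
    # If income is at least three times debts, then approve.
    return income >= 3 * debts

# 2. Machine learning (in miniature): the machine derives its own threshold
#    from past examples. The records below are invented for illustration.
history = [
    (90_000, 20_000, True),   # (income, debts, loan_was_repaid)
    (40_000, 30_000, False),
    (70_000, 15_000, True),
    (30_000, 25_000, False),
]

def learn_ratio_threshold(examples):
    """Pick the income-to-debt ratio that best separates repaid from defaulted loans."""
    ratios = sorted(income / debts for income, debts, _ in examples)
    best_threshold, best_correct = ratios[0], -1
    for threshold in ratios:
        correct = sum(
            (income / debts >= threshold) == repaid
            for income, debts, repaid in examples
        )
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

threshold = learn_ratio_threshold(history)
print(f"Learned ratio threshold: {threshold:.2f}")
print("Approve a new applicant?", 65_000 / 18_000 >= threshold)
```

The point is not the toy loan example but where the rule comes from: in the first case a human writes it down in advance; in the second the machine infers it from patterns in the data it is given.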

Having said this, at the moment it’s tricky for a robot or any other type of computer to make me a good cup of coffee, let alone bring it upstairs and persuade me to get out of bed and drink it. That’s the good news. A small child has more general intelligence and dexterity than even the most sophisticated AI, and no robot is about to eat your job anytime soon, especially if it’s a skilled or professional job that involves fluid problem solving, lateral thinking, common sense and, most of all, Emotional Intelligence, or EQ.

Moreover, we are moving into a world where creating products or services is not enough. Companies must now tell stories and produce good human experiences, and this is hard for machines because it involves appealing to human hearts as well as human heads.

The future will be about motivating people using stories, not just numbers. Companies will need to be warm, tolerant, personable, persuasive and above all ethical, and this involves EQ alongside IQ. Equally, the world of work is becoming less about command-and-control pyramidal hierarchies where bosses sit at the top and shout commands at people. Leadership is becoming more about informal networks in which inspiration, motivation, collaboration and alliance building come together for mutual benefit. People work best when they work with people they like, and it’s hard to see how AIs can help when so much depends on passion, personality and persuasiveness.

I’m not for a moment saying that AI won’t have a big impact, but in many cases I think it’s more a case of AI plus human, not AI minus human. AI is a tool, and we will use AI as we’ve always used tools – to enhance and extend human capabilities.

You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be taken or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy but they aren’t generally capable of original thought, especially thoughts that speak to what it means to be human. We’ve already seen a computer that studied Rembrandt and created a ‘new’ Rembrandt painting that could fool an expert. But that’s not the same as studying art history and subsequently inventing abstract expressionism. This requires rule breaking. I’m not saying that AI-created creativity that means something or strikes a chord can’t happen, simply that AI is pretty much stuck because of a focus on the human head, not the human heart. Furthermore, without consciousness I cannot see how anything truly beautiful or remotely threatening can ever evolve from code that’s ignorant of broad context. It’s one thing to teach a machine to write poetry, but it’s entirely another thing to write poetry that connects with and moves people on an emotional level and speaks to the irrational, emotional and rather messy matter of being a human being. There’s no ‘I’ in AI. There’s no sense of self, no me, no you and no us.

There is some bad news though. An enormous number of current jobs, especially low-skilled jobs, require none of the things I’ve just talked about. If your job is rigidly rule-based or depends upon the acquisition or application of knowledge according to fixed conventions, then it’s ripe for digital disintermediation. This probably sounds like low-level data-entry roles such as clerks and cashiers, which it is, but it also includes some accountants, financial planners, farmers, paralegals, pilots, medics, and aspects of both law enforcement and the military.

But I think we’re missing something here and it’s something we don’t hear enough about. The technology writer Nicholas Carr has said that the real danger with AI isn’t simply that AI throws people out of work; it’s the way that AI is de-skilling various occupations and making jobs and the world of work less satisfying.

De-skilling almost sounds like fun. It sounds like something that might make things easier or more democratic. But removing difficulty from work doesn’t only make work less interesting, it makes it potentially more dangerous.

Remoteness and ease, in various forms, can remove situational awareness, for example, which opens up a cornucopia of risks. The example Carr uses is airline pilots, who through increasing amounts of automation are becoming passengers in their own planes. We are removing not only the pilot’s skill, but also the pilot’s confidence to use that skill and judgment in an emergency.

Demographic trends also suggest that workforces around the world are shrinking due to declining fertility, so unless the level of workplace automation is significant, the biggest problem most countries could face is finding and retaining enough talented workers, which is where the robots might come in. Robots won’t be replacing anyone directly; they will just take up the slack where humans are absent or otherwise unobtainable. AIs and robots will also be companions, not adversaries, especially when we grow old or live alone. This is already happening in Japan.

One thing I do think we need to focus on, not only in work, but in education too, is what AI can’t do. This remains uncertain, but my best guess is that skills like abstract thinking, empathy, compassion, common sense, morality, creativity, and matters of the human heart will remain the domain of humans unless we decide that these things don’t matter and let them go. Consequently, our education systems must urgently move away from simply teaching people to acquire information (data) and apply it according to rules, because this is exactly what computers are so good at. To go forwards we must go backwards to the foundations of education and teach people how to think and how to live a good life.

Finally, circling back to where I started, the Oxford study I mentioned at the beginning is flawed in my view. Jobs either disappear or they don’t. Perhaps the problem is that the study, which used an algorithm to assess probabilities, was too binary and failed to make the distinction between tasks being automated and jobs disappearing.

As to how it’s possible that people can believe that robots and computers will “probably” or “definitely” perform most of the jobs in the future, while simultaneously believing that their own jobs will “probably” or “definitely” be safe, I think the answer is that humans have hope and humans adapt. For these two reasons alone I think we’ll be fine in the future.



Thought for the day

Here’s what I hope is an uplifting thought about the future. There is a chance that as we increasingly fuse our physical, biological and digital worlds, we will slowly start to see that we are not just individuals, but part of something so much greater. We will start to see that everything connects and that we are part of, and in our own way responsible for, a whole that’s not only very big, but totally unique within the known universe.

As the Buddhist said to the hot dog vendor: “Make me one with everything.”

Will a robot eat your job?

A recent Bank of England study said that 15 million of the UK’s 30 million jobs could be at risk from automation over the coming years. Meanwhile, a US Pew study found that two-thirds of Americans believe that in 50 years’ time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe.

Putting to one side the obvious inconsistency, how might we ensure that our jobs, and those of our children, are safe in the future? Are there any particular professions, skills or behaviours that will ensure that we remain gainfully employed until we decide that we no longer want to be? Indeed, is there anything of significance that we do that artificial intelligence can’t? And how might education be upgraded to ensure peak employability in the future? Finally, how realistic are doomsday forecasts that robotics and automation will destroy countless millions of jobs?

To answer all of these questions we should first delve into what it is that robots, artificial intelligence, automated systems and computers generally do today and then speculate in an informed manner about what they might be capable of in distant tomorrows.

A robot is generally defined as an artificial or automated agent that’s programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly used to replace people when people are absent, expensive or in short supply. This is broadly true of automated systems as well. They are programmed to acquire data, make decisions, complete tasks or solve problems based upon pre-set rules. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as industrial manufacturing or for beating humans at rule-based games such as chess or Go. They can be taught to drive cars, which is another rule-based activity, although self-driving cars do run into one big problem, which is humans who don’t follow the same logical rules.

The key phrase here is rule-based. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. At least that’s been true historically. Machine learning now means that machines can be left to invent their own rules based on observed patterns. In other words, machines can now learn from their own experience much like humans do. In the future it’s even possible that robots, AIs and other technologies will be released into ‘the wild’ and begin to learn for themselves through human interaction and direct experience of their environment, much in the same way that we do.
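As a purely hypothetical illustration of learning from experience rather than from pre-programmed rules, the short Python sketch below (my own, not anyone’s product) plays rock-paper-scissors by counting an opponent’s past moves and countering the most frequent one. Its “strategy” emerges from what it observes, not from a rule written for that particular opponent.

```python
# Illustrative sketch only: a player whose behaviour comes from observation.
from collections import Counter
import random

COUNTER_MOVE = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class LearningPlayer:
    def __init__(self):
        self.observed = Counter()  # running record of the opponent's moves

    def observe(self, opponent_move: str) -> None:
        self.observed[opponent_move] += 1  # experience accumulates over time

    def choose(self) -> str:
        if not self.observed:
            return random.choice(list(COUNTER_MOVE))  # no experience yet
        most_common_move, _ = self.observed.most_common(1)[0]
        return COUNTER_MOVE[most_common_move]  # counter the favourite move

# 'Released into the wild': the more it sees, the better its guesses become.
bot = LearningPlayer()
for move in ["rock", "rock", "paper", "rock"]:
    bot.observe(move)
print(bot.choose())  # prints "paper", the counter to the opponent's favourite
```

Swap the simple counting for statistical pattern recognition over very large data sets and you have, in crude outline, the kind of machine learning described above.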

Having said this, at the moment it’s tricky for a robot to make you a cup of tea, let alone bring it upstairs and persuade you to drink it if you’re not thirsty. Specialist (niche) robots are one thing, but a universally useful (general) robot that has an engaging personality, one which humans feel happy to converse with, is something else.

And let’s not forget that much of this ‘future’ technology, especially automation, is already here but that we choose not to use it. For example, airplanes don’t need pilots. They already fly themselves, but would you get on a crewless airplane? Or how about robotic surgery? This too exists and it’s very good. But how would you feel about being put to sleep and then being operated on with no human oversight whatsoever? It’s not going to happen. We have a strong psychological need to deal with humans in some situations and technology should always be considered alongside psychology.

That’s the good news. No robot is about to steal your job, especially if it’s a skilled or professional job that’s people-centric. Much the same can be said of AI and automated systems generally. Despite what you might read in the newspapers (still largely written by humans despite the appearance of story-writing robots), many jobs are highly nuanced and intuitive, which makes coding them difficult. Many, if not most, jobs also involve dealing with people on some level, and people can be illogical, emotional and just plain stupid.

This means that a great many jobs are hard to fully automate or digitalise. Any job that can be subtly different each time it’s done (e.g. plumbing) or requires a certain level of aesthetic appreciation or lateral thought is hard to automate too. Robots, AIs and automated systems can’t invent and don’t empathise either. This probably means that most nurses, doctors, teachers, lawyers and detectives are not about to be made redundant. You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be made or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy but they aren’t capable of original thought, especially thoughts that speak to what it means to be human. We’ve already had a computer that studied Rembrandt and created a ‘new’ Rembrandt painting that could fool an expert. But that’s not the same as studying art history and subsequently inventing cubism. That requires rule breaking, not pattern recognition.

Similarly, it’s one thing to teach a machine to write poetry or compose music, but it’s entirely another thing to write poetry or music that connects with people on an emotional level and speaks to their experience of being human. Machines can know nothing about the human experience, and while they can be taught to know what something is, they can’t be taught to know what something feels like from a human perspective.

There is some bad news though. Quite a bit in fact. An enormous number of current jobs, especially low-skilled jobs, require none of these things. If your job is rigidly rule-based, repetitive or depends upon the application of knowledge according to fixed conventions, then it’s ripe for digital disruption. So is any job that consists of inputting data into a computer. This probably sounds like low-level data-entry jobs, which it is, but it’s also potentially vast numbers of administration, clerical and production jobs.

Whether you should be optimistic or pessimistic about all this really depends upon two things. Firstly, do you like dealing with people? And secondly, do you believe that people should be in charge? Last time I looked, humans were still designing these systems and were still in charge. So it’s still our decision whether or not technology displaces millions of people.

Some further bad news is that while machines, and especially robots, avatars and chatbots, are not especially empathetic or personable right now, they could become much more so in the future. Such empathy would, of course, be simulated, but perhaps this won’t matter to people. Paro, a furry seal cub that’s actually a robot used in place of human carers for people with dementia in aged-care homes, appears to work rather well, as does PaPeRo, a childcare and early-learning robot used to teach language in kindergartens. You might argue that elderly people with dementia and kindergarten kids aren’t the most discerning of users, but maybe not. Maybe humans really will prefer the company of machines to other people in the not too distant future.

Of course, this is all a little bit linear. Yes, robots are reliable and relatively inexpensive compared to people, but people can go on strike and governments can intervene to ensure that certain industries or professions are protected. An idle population, especially an educated one, can cause trouble too, and surely no government would allow such large-scale disruption unless it knew that new jobs (hopefully less boring ones) would be created.

Another point that’s widely overlooked is that demographic trends suggest workforces around the world will shrink due to declining fertility. So unless there is a significant degree of workplace automation, the biggest problem we might face in the future is finding and retaining enough talented people, not mass unemployment. In this scenario robots won’t be replacing or competing with anyone directly, but will simply be taking up the slack where humans are absent or otherwise unobtainable.

But back to the exam question. Let’s assume, for a moment, that AI and robotics really do disrupt employment on a vast scale and vast numbers of people are thrown out of work. In this case, which professions are the safest and how might you ensure that it’s someone else’s job that’s eaten by robots and not yours?

The science fiction writer Isaac Asimov said: “The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.” This sounds like poets, painters and musicians, but it’s also scientists, engineers, lawyers, doctors, architects, designers and anyone else who works with fluid problems and seeks original solutions. Equally safe should be anyone working with ideas that need to be sold to other people. People who are personable and persuasive will remain highly sought after, as will those with the ability to motivate people using narratives instead of numbers. This means managers, but also dreamers, makers and inventors.

In terms of professions, this is a much harder question to answer, not least because some of the safest jobs could be new ones yet to be thrown up by developments in computing, robotics, digitalisation and virtualisation. Nevertheless, it’s highly unlikely that humans will stop having interpersonal and social needs, and even more unlikely that meeting all of these needs will be within the reach of even the most flexible robots or autonomous systems. Obviously the designers of robots, computers and other machines should be safe, although it’s not impossible that these machines will one day start to design themselves. The most desirable outcome, and it’s also the most likely in my view, is that we will learn to work alongside these machines, not against them. We will design machines that find it easy to do the things we find tiresome, hard or repetitive, and they will rely on us to invent the questions that we want our machines to answer.

As for how people can simultaneously believe that robots and computers will “probably” or “definitely” be performing most of the jobs in the future, while believing that their own jobs will remain safe, this is probably because humans have two other things that machines do not. Humans have hope and they can hustle too. Most importantly though, at the moment we humans are still in charge and it is up to us what happens next.

The Future of High Performance Computing

Just FYI, anyone who’s interested in HPC, super-computing, advanced modelling & simulation, problems, prediction, cyber-security or any associated field might be interested in this. It’s on Thursday 23 February in London. Event link here.

Beginning of a new Current & Future uses of HPC map below….

Current & Future Applications of HPC

Modelling & Simulation
Preventing the invention of unnecessary
Prediction of technology breakthroughs
Modelling specific species against climate change
Dynamic longevity prediction
Predicting M&A activity/hostile takeovers
Lifelike recreation of dead actors in movies
Volcano modelling
Real time national mood modelling
Hyper-local personal weather forecasts
Complete human brain simulations
Prediction of social unrest using global social media feeds
Finding holes in existing research
Finding new knowledge in Big Data
Automation of scientific research
Radiation shield modelling
Molecular dynamics modelling
Space weather forecasting
Trawling scientific data to find genetically applicable treatments
Molecular dynamics forecasting
Aesthetics prediction
Seismic mapping of planets
Hurricane forecasting
Modelling of tornado trajectory & speed
Galaxy simulations
Oil well forecasting
Movie special effects
Simulation of fluid dynamics
Virtual crash testing
Re-creation of the origin of the universe
Earthquake prediction
Population growth simulations
Climate change modelling
Aerodynamics design
Whole city simulations
Pollution forecasting
Modelling impacts of bio-diversity loss
Power grid simulation & testing
Modelling of organizational behaviour
Optimization of citywide traffic flows
Emergency room simulation
Major incident modelling & simulation

Healthcare & Medicine
Dynamic real-time individual longevity forecasts
Mapping blood flow
Prediction of strokes, brain injury & vascular brain disease
Pandemic modelling
Unravelling protein folding
Curing Alzheimer’s disease
Virtual neural circuits
Bio-tech research for SMEs
Acceleration of drug discovery & testing
Decoding of genetic data
Whole body imaging at scale
Remote medical triage
Foreign aid & disaster relief allocation
Dynamic simulations of muscle & joint interactions
Bone implant modelling
Modelling of the nervous system
Longevity prediction at birth
Design of super efficient water filters

Fintech
Pre-trade risk analysis
Bond pricing
Real-time hedging
Fraud detection
Self-writing financial reports
Automatic regulatory control & compliance
Pre and post-trade analysis
Dynamic allocation of government tax revenues
News prediction
Flash crash prediction
Optimisation of investment strategies
Automated hiring & firing of employees
Automated due diligence for M&A
Whole economy simulation

Software & data
Software that writes itself
Holographic data storage
Coding for ultra-low energy use
Data that generates its own models

Engineering, materials & manufacturing
Space station design
Space colony design
Design of new aeronautics materials
Zero gravity manufacturing & design
Predicting properties of undiscovered materials
Design of smart cities
Identification of redundant assets
Optimization of just in time manufacturing
Optimization of crowd-sourced delivery networks
Design of ‘impossible’ buildings & structures

Security
Recording of every individual human conversation on earth
Modelling of factors likely to lead to a revolution
Deliberate cyber-facilitation of revolutions
Breaking 512-bit encryption ciphers
War forecasting algorithms
Virtual nuclear weapon testing
Modelling behaviour of terrorist suspects
Crime prediction down to individual streets
Identification of terrorist suspects
Forecasting of geo-political upheavals
Hyper-realistic war gaming
Simulation of large scale cyber attacks
Missile trajectory simulation
Screening of data from multiple spectra & media in real time
Threat detection
Crisis management decision support

Note: This is just me going off on a bit of a jazz riff at the moment. All subject to change!

People as Pets

A Korn Ferry study of 800 business leaders across the globe has found that they think tech will create more value than people in the future. 44 per cent of bosses go as far as saying that automation, AI and robotics (let’s create a new acronym here and call it AAIR – as in void, vacuum, full of hot…) will make staff “largely irrelevant” in the future.

Reminds me of a boss I heard about not so long ago who referred to his people as “pets” (the only reason the management team employed people at all was that regulation made the company do so).

Ref: CityAM 17.11.16 (P09)

Richard Watson on The Future, Automation and AI

I did a talk at the University of Northampton Business School last week, but before I started I spoke to John Griff at BBC Radio Northampton. The funny thing was that while I’d been told about this well in advance, I’d totally forgotten. Hence zero preparation on my part. But guess what: because I didn’t prepare anything I didn’t obsess about what I was going to say and therefore didn’t screw it up (also thanks to an excellent interviewer who asked some good questions and put me at ease, btw).

One of my more intelligent interviews with a great ending…

BBC iPlayer….(spool on to 1 minute 15 seconds)

Reasons to be Cheerful

Who are we? Why are we here and where are we going? If anxiety about climate change, financial meltdowns, terrorism, immigration, pandemics, and the robot that recently went rogue on Twitter isn’t enough to worry about, we’ve got existential questions to contend with. Mostly we haven’t, though, largely because a daily deluge of digital distractions means we rarely get much further than worrying about why BBC Three is now only available online or whether answering an instant message during sex is less rude than not responding until tomorrow.

The best bit of advice for such complex conundrums probably comes from The Hitchhiker’s Guide to the Galaxy by Douglas Adams. The advice is simply: “Don’t Panic!” This is wise counsel, especially if, instead of looking forwards, you look backwards over a few thousand years and notice that the human race is more or less still here. We’ve survived ice ages, financial collapses, religiously inspired terrorism, immigration, pandemics and the threat of new technology before. We’ve even endured Paris Hilton, Ed Miliband and Paul Daniels, at least one of whom turned out to be all right really.

None of this means we should become complacent. As the great economist JK Galbraith once said: “There will be no harm in making mild preparations for our destiny.” Furthermore, while the thought of robots stealing our jobs and perhaps even our children’s souls is a concern, it could be that the pace of developments in computing means that it isn’t the clumsy idiot-savant software that we have today that we need to worry about, but what its silicon descendants might be capable of doing in 20 or 30 years’ time. If minds as sharp as those of Stephen Hawking, Bill Gates and Elon Musk believe that true artificial intelligence (broad or general AI) might be the last invention that the human race ever makes, perhaps we should stop looking at Rich Cats of Instagram and pay proper attention to what’s really going on.

I really have no idea how the distant future will pan out, but my best guess is that we’ll more or less be alright, especially if we, the human race, can somehow remain humane and coalesce around a common vision of what our new technology is actually for. Technology is a tool. It is a way. It is a means. It has never been, and never should be, an end in itself. Technology should only be used to enhance or complement human thinking and relationships, never to replace them. Any wealth that accumulates from the application of such technology should also be gently persuaded to trickle towards those individuals who find themselves on the wrong side of many of these marvels.

I am hopeful that as AI and robotics evolve and start to behave more like us, this will shine an especially strong spotlight on what it is that humans actually are and what they do best. Nobody expects a second renaissance as a result of artificial intelligence, but it would be nice if real human stupidity were diluted and if we congratulated ourselves on the fact that it’s only humans, not machines, who are capable of creating real joy, and that imagination, along with empathy, does not appear to be computable.

In the meantime, rejoice in the knowledge that even the smartest robot in the room still struggles to make a decent cup of tea and is totally bewildered as to why humans might want their tea, along with their pizza, delivered by drone when they appear to be pondering the nature of their own existence through a poem by Shelley. Illogical.

Digital vs Human (final fiddle)

Back page text

It’s hard to believe, but the book goes on. Days away from printing now, but still trying to get the cover right and also re-writing the back cover text at the last minute. The key thing here is obviously to convey what the book is about for people who haven’t read it (and who, one supposes, don’t have more than about ten seconds to do so before they move on to another book). The current words are below, with the first pass below that. The key point, for me at least, is not that the book is about digital systems, robotics or artificial intelligence, but the looming battle between human and digital minds. More specifically, it’s about what a small group of people is arguably imposing on the rest of the human race. This has shades of the 1% (or the 99%), but also the banking crisis. This is an observation that’s been picked up by one of my favourite columnists at the Financial Times, Gillian Tett. Prior to 2007/8, banking was run by a tiny group of people and nobody else really understood what they were doing. IT at the cutting edge is much the same: a small group of experts, and almost nobody else has a clue about what they are doing or what the longer-term consequences might be.

Current re-write.
From the author of the international bestseller Future Files comes the one book you need to prepare for tomorrow.

Life has never been better. By most measures our physical lives have improved greatly in recent years. So why do we feel that all is not well? As technologies developed by a tiny handful of designers and developers are changing our lives, we are beginning to question whose interests are being served. Are they here for our benefit? Or are we here for theirs? Richard Watson hereby extends an exuberant invitation to look more closely at the world we’re creating and think more deeply about who it is that we want to be.

Original text including my edit comments (in bold)
The blurb may require cuts to fit into the back cover design. Our current proposal is the following: Surely there is room at the front or more likely the back to run or repeat all the quotes?

From the author of the international bestseller Future Files comes the one book to help you prepare for tomorrow.

On most measures that matter, we’ve never had it so good. Physically, life for humankind has improved immeasurably over the last fifty years. Yet, spreading across the world, there is a crisis of confidence in progress. Jumps….
(Do we even need most of the above? How about simply starting with the below and creating room for more testimonials? For example…

From the author of the international bestseller Future Files comes the one book you need to read to prepare for the world of tomorrow. (Still jumps?) To a large degree, the history of the next fifty years will be about the relationship between people and technologies created by a tiny handful of designers and developers. These (their?) inventions will undoubtedly change our lives, but just what are they capable of, and — as they transform the media, the economy, healthcare, education, work, and the home — what kind of lives do we want to lead?

Richard Watson, the author of the international bestseller Future Files, hereby extends an exuberant invitation for us to think deeply about the world of today and envision what kind of world we wish to create in the future. (for tomorrow?).