What can’t computers do?

Are you worried that your job could be ravaged by a robot or consumed by a computer? You probably should be, judging by a study by Oxford University, which suggested that up to half of jobs in some countries could disappear by the year 2030.

Similarly, a Pew Research Center study found that two-thirds of Americans believe that in 50 years’ time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe. I will come back to the Oxford study and the inconsistency of the two Pew findings later, but in the meantime how might we ensure that our jobs, and those of our children, are safe in the future?

Are there any particular skills or behaviours that will ensure that people are employed until they decide they no longer want to be? Indeed, is there anything we currently do that artificial intelligence will never be able to do no matter how clever we get at designing AI?

To answer these questions, we should perhaps first consider what it is that AI, automated systems, robots and computers do today and then speculate in an informed manner about what they might be capable of in the future.

A robot is often defined as an artificial or automated agent that’s programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly being used to replace people when people are too inefficient or too valuable. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as manufacturing or for beating humans at rule-based games. A logical AI can be taught to drive cars, another rule-based activity, although self-driving cars do run into one rather messy problem: people, who can be emotional, irrational or prone to ignoring the rules.

The key word here is logic. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. If X then Y and so on. At least that’s been true historically. But machine learning now means that machines can be left to invent their own rules and logic based on patterns they recognise in large sets of data. In other words, machines can learn from experience much as humans do.
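The contrast above can be made concrete with a toy sketch. The example below is entirely hypothetical (a spam filter deciding on the number of flagged words in a message): the first function is classic “if X then Y” programming, where a human picks the rule; the second derives its own threshold from labelled examples, which is machine learning in miniature.

```python
# Hypothetical task: decide whether a message is spam from the number
# of flagged words it contains.

# 1. Classic programming: a human writes the rule ("if X then Y").
def spam_by_hand(flagged_word_count):
    return flagged_word_count >= 3  # threshold chosen by a programmer


# 2. Machine learning: the machine invents its own rule from patterns
#    in labelled data, instead of being told the rule directly.
def learn_threshold(examples):
    """examples: list of (flagged_word_count, is_spam) pairs."""
    counts = [c for c, _ in examples]
    best_t, best_correct = 0, -1
    # Try every candidate threshold; keep the one that classifies the
    # training data most accurately.
    for t in range(min(counts), max(counts) + 2):
        correct = sum((c >= t) == label for c, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t


training_data = [(0, False), (1, False), (2, False),
                 (4, True), (5, True), (6, True)]
t = learn_threshold(training_data)  # the "learned" rule


def spam_learned(count, threshold=t):
    return count >= threshold
```

Real systems learn far richer rules than a single threshold, but the principle is the same: the endpoint is specified by humans, while the rule itself is extracted from data.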

Having said this, at the moment it’s tricky for a robot or any other type of computer to make me a good cup of coffee, let alone bring it upstairs and persuade me to get out of bed and drink it. That’s the good news. A small child has more general intelligence and dexterity than even the most sophisticated AI and no robot is about to eat your job anytime soon, especially if it’s a skilled or professional job that involves fluid problem solving, lateral thinking, common sense, and most of all Emotional Intelligence or EQ.

Moreover, we are moving into a world where creating products or services is not enough. Companies must now tell stories and produce good human experiences and this is hard for machines because it involves appealing to human hearts as well as human heads.

The future will be about motivating people using stories not just numbers. Companies will need to be warm, tolerant, personable, persuasive and above all ethical and this involves EQ alongside IQ. Equally, the world of work is less about command and control pyramidal hierarchies where bosses sit at the top and shout commands at people. Leadership is becoming more about informal networks in which inspiration, motivation, collaboration and alliance building come together for mutual benefit. People work best when they work with people they like and it’s hard to see how AIs can help when so much depends on passion, personality and pervasiveness.

I’m not for a moment saying that AI won’t have a big impact, but in many cases I think it’s more a case of AI plus human not AI minus human. AI is a tool and we will use AI as we’ve always used tools – to enhance and extend human capabilities.

You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be taken or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy but they aren’t generally capable of original thought, especially thoughts that speak to what it means to be human. We’ve already seen a computer that studied Rembrandt and created a ‘new’ Rembrandt painting that could fool an expert. But that’s not the same as studying art history and subsequently inventing abstract expressionism. That requires rule breaking. I’m not saying that AI-created creativity that means something or strikes a chord can’t happen, simply that AI is pretty much stuck because of a focus on the human head, not the human heart. Furthermore, without consciousness I cannot see how anything truly beautiful or remotely threatening can ever evolve from code that’s ignorant of broad context. It’s one thing to teach a machine to write poetry, but it’s entirely another thing to write poetry that connects with and moves people on an emotional level and speaks to the irrational, emotional and rather messy matter of being a human being. There’s no ‘I’ in AI. There’s no sense of self, no me, no you and no us.

There is some bad news though. An enormous number of current jobs, especially low-skilled jobs, require none of the things I’ve just talked about. If your job is rigidly rule-based or depends upon the acquisition or application of knowledge based upon fixed conventions then it’s ripe for digital disintermediation. This probably sounds like low-level data-entry jobs such as clerks and cashiers, which it is, but it’s also some accountants, financial planners, farmers, paralegals, pilots, medics, and aspects of both law enforcement and the military.

But I think we’re missing something here and it’s something we don’t hear enough about. The technology writer Nicholas Carr has said that the real danger with AI isn’t simply AI throwing people out of work, it’s the way that AI is de-skilling various occupations and making jobs and the world of work less satisfying.

De-skilling almost sounds like fun. It sounds like something that might make things easier or more democratic. But removing difficulty from work doesn’t only make work less interesting, it makes it potentially more dangerous.

Remoteness and ease, in various forms, can remove situational awareness, for example, which opens up a cornucopia of risks. The example Carr uses is airline pilots, who through increasing amounts of automation are becoming passengers in their own planes. We are removing not only the pilot’s skill, but the pilot’s confidence to use that skill and judgment in an emergency.

Demographic trends also suggest that workforces around the world are shrinking, due to declining fertility, so unless the level of workplace automation is significant, the biggest problem most countries could face is finding and retaining enough talented workers, which is where the robots might come in. Robots won’t be replacing anyone directly, they will just take up the slack where humans are absent or otherwise unobtainable. AIs and robots will also be companions, not adversaries, especially when we grow old or live alone. This is happening in Japan already.

One thing I do think we need to focus on, not only in work, but in education too, is what AI can’t do. This remains uncertain, but my best guess is that skills like abstract thinking, empathy, compassion, common sense, morality, creativity, and matters of the human heart will remain the domain of humans unless we decide that these things don’t matter and let them go. Consequently, our education systems must urgently move away from simply teaching people to acquire information (data) and apply it according to rules, because this is exactly what computers are so good at. To go forwards we must go backwards to the foundations of education and teach people how to think and how to live a good life.

Finally, circling back to where I started, the Oxford study I mentioned at the beginning is flawed in my view. Jobs either disappear or they don’t. Perhaps the problem is that the study, which used an algorithm to assess probabilities, was too binary and failed to make the distinction between tasks being automated and jobs disappearing.

As to how people can believe that robots and computers will “probably” or “definitely” perform most of the jobs in the future, while simultaneously believing that their own jobs will “probably” or “definitely” be safe, I think the answer is that humans have hope and humans adapt. For these two reasons alone I think we’ll be fine in the future.



4 Reasons Why We Shouldn’t Worry About AI

Now I’m aware that putting up an image like this does potentially risk an outbreak of panic. If you aren’t already concerned about globalisation, ageing workforces, declining productivity, the war for talent, millennials, managing virtual teams, too much connectivity and too much distraction then perhaps you are now.

I’m fairly sure that artificial intelligence is creating more anxiety than excitement in most quarters too. But relax. Don’t panic.

I’m not saying that AI isn’t an issue. It is. A big one. But we are inventing this technology and we, as individuals, organisations and society as a whole, remain in control of it. If we don’t like where AI is going, we should do something about it.

But that’s not why we should relax. We, and especially HR Directors, should relax because AI will never do what most of us do.

AI can do almost anything humans do, but with four critical exceptions in my view.

1. AIs can’t invent. They never will. Not at a fundamental level. AIs can paint, but they’ll never invent Cubism. AIs can write music and plays, but they’ll never be Mozart or Shakespeare. They’ll never be Steve Jobs or Elon Musk either.

2. AIs can never be truly empathetic. They can never have true emotional intelligence. They can fake it: the film Her is a good example, as is Paro, a robotic seal used in care homes. But AIs know nothing of the human heart and never will. And without empathy you cannot effectively lead other people.

3. AIs can’t inspire humans. We can fully automate hiring and firing if we choose, and we can use AI to spit out an endless deluge of data and workplace analytics, but I cannot see a future where humans will willingly follow AIs with a smile on their face and a spring in their step.

4. AIs contain computer code, not moral code. Morality has to be programmed by humans. AIs know nothing of fairness either. I could talk about the moral bankruptcy of Silicon Valley at this point, but I’ll spare you the sermon.

So that’s AI. Don’t panic. But I have something else on my mind.

Back in 2014, a Gallup global poll found that almost 90 per cent (I’ll say that again – almost 90 per cent) of employees were doing jobs that they didn’t really like. Almost 90 per cent were either “not engaged” or “actively disengaged” from their work. Given that work is where we spend most of our time, most of our lives in fact, the mind truly boggles.

Why might this be so and what might we do to fix this?

Here are three more things to think about.

1. Management. There’s too much of it. We don’t trust people enough. Witness our ongoing obsession with offices. I’m a big fan of the physical office. Work is social and so are people, and I don’t think you can create a winning culture 100 per cent virtually. But offices are a physical manifestation of a command-and-control mentality that’s past its sell-by date in some instances. They are remnants of a feudal system. We need to knock down the enclosures. We need to relax the rules.

I understand why factory workers need to go to a factory to get their work done, but why do knowledge workers? Why do people have to go to a fixed place of work five days a week, 9am to 5pm, inside a building? Can’t we be a bit more flexible about how and when we allow people to work? We need much more personalisation of work contracts and conditions.

2. Disconnection. We are too connected to our work. Thanks to laptops, smart mobiles and cloud computing, work has invaded every area of our lives. You can’t even lie by a pool on holiday nowadays without someone being on the phone to the office. As Frankie once said, “Relax.” “Don’t do it.” “When you want to go to it.”

Employers have bought a slice of people’s time. Not all of it. If people are being paid to think, or to solve problems involving other people, as they increasingly are, they need to recharge themselves as well as their devices. Constant connection to the office is impacting physical and mental health and destroying relationships. It has to stop. Holidays should be compulsory. People shouldn’t be allowed to email or call people out of hours unless it’s a matter of survival.

3. Human intelligence. Are we not smart enough to see that we are being stupid?
I’m referring to how we educate people and integrate them into the workforce. We are obsessed with tests that measure logic and memory. Why, when this is precisely what computers are so clever at? We are also obsessed with STEM. Is it because we feel it’s how you future-proof a workforce or a career? Similar mistake.

Coding skills? I’ve met people who are inventing software that can write itself. But to invent things like this you need some level of creative intelligence alongside scientific skills. You need the science, but you also need the art and, as an aside, we should put these two disciplines back together where they started and where they belong.

We should insist that art, music and design are key components of the national curriculum and are given the same degree of funding and respect as all other subjects. The bedrock is the 3Rs, but above this should sit subjects that teach people how to think critically and creatively. Organisations have a big role to play here too, by providing lifelong learning opportunities across all these areas. If AI really is where things are going, I’d speculate that the future actually lies in the humanities.

Most importantly of all, we need to value all forms of intelligence, especially emotional intelligence. There are arguably eight forms of intelligence, but we tend only to teach, measure or value one or two.

We are obsessed with logical and perhaps linguistic intelligence, but tend to ignore the rest. This is troubling, partly because logical intelligence is the type most likely to be impacted by AI and partly because most jobs nowadays involve dealing with people, and people can be emotional and irrational to put it mildly. Machines, in my view, can’t navigate in this landscape. Only people can. AI can help here, hugely, but it cannot and should not replace people.

If we fix all this, and especially if we start valuing what I’d term emotional work, then we will have a good chance to include the people that currently feel undervalued or valueless to organisations and to society at large.

If we can show people that we care about this and are attempting to rebalance the thing we currently call work then there’s a good chance that the stress, anxiety and dissatisfaction that surrounds so much work will dissipate or disappear.

If we give people the time and space to work with machines, not against them, we can invent a future in which people are paid for being human, not penalised for it.

Will a robot eat your job?

A recent Bank of England study has said that 15 million of the UK’s 30 million jobs could be at risk from automation over the coming years. Meanwhile, a US Pew study found that two-thirds of Americans believe that in 50 years’ time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe.

Putting to one side the obvious inconsistency, how might we ensure that our jobs, and those of our children, are safe in the future? Are there any particular professions, skills or behaviours that will ensure that we remain gainfully employed until we decide that we no longer want to be? Indeed, is there anything of significance that we do that artificial intelligence can’t? And how might education be upgraded to ensure peak employability in the future? Finally, how realistic are doomsday forecasts that robotics and automation will destroy countless millions of jobs?

To answer all of these questions we should first delve into what it is that robots, artificial intelligence, automated systems and computers generally do today and then speculate in an informed manner about what they might be capable of in distant tomorrows.

A robot is generally defined as an artificial or automated agent that’s programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly used to replace people when people are absent, expensive or in short supply. This is broadly true with automated systems as well. They are programmed to acquire data, make decisions, complete tasks or solve problems based upon pre-set rules. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as industrial manufacturing or for beating humans at rule-based games such as chess or Go. They can be taught to drive cars, which is another rule-based activity, although self-driving cars do run into one big problem: humans, who don’t follow the same logical rules.

The key phrase here is rule-based. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. At least that’s been true historically. Machine learning now means that machines can be left to invent their own rules based on observed patterns. In other words, machines can now learn from their own experience much like humans do. In the future it’s even possible that robots, AIs and other technologies will be released into ‘the wild’ and begin to learn for themselves through human interaction and direct experience of their environment, much in the same way that we do.

Having said this, at the moment it’s tricky for a robot to make you a cup of tea, let alone bring it upstairs and persuade you to drink it if you’re not thirsty. Specialist (niche) robots are one thing, but a universally useful (general) robot with an engaging personality that humans feel happy to converse with is something else.

And let’s not forget that much of this ‘future’ technology, especially automation, is already here but that we choose not to use it. For example, airplanes don’t need pilots. They already fly themselves, but would you get on a crewless airplane? Or how about robotic surgery? This too exists and it’s very good. But how would you feel about being put to sleep and then being operated on with no human oversight whatsoever? It’s not going to happen. We have a strong psychological need to deal with humans in some situations and technology should always be considered alongside psychology.

That’s the good news. No robot is about to steal your job, especially if it’s a skilled or professional job that’s people-centric. Much the same can be said of AI and automated systems generally. Despite what you might read in the newspapers (still largely written by humans despite the appearance of story-writing robots) many jobs are highly nuanced and intuitive, which makes coding difficult. Many if not most jobs also involve dealing with people on some level and people can be illogical, emotional and just plain stupid.

This means that a great many jobs are hard to fully automate or digitalise. Any job that is subtly different each time it’s done (e.g. plumbing) or requires a certain level of aesthetic appreciation or lateral thought is hard to automate too. Robots, AIs and automated systems can’t invent and don’t empathise either. This probably means that most nurses, doctors, teachers, lawyers and detectives are not about to be made redundant. You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be made or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy but they aren’t capable of original thought, especially thoughts that speak to what it means to be human. We’ve already had a computer that studied Rembrandt and created a ‘new’ Rembrandt painting that could fool an expert. But that’s not the same as studying art history and subsequently inventing Cubism. This requires rule breaking, not pattern recognition.

Similarly, it’s one thing to teach a machine to write poetry or compose music, but it’s entirely another thing to write poetry or music that connects with people on an emotional level and speaks to their experience of being human. Machines can know nothing about the human experience and while they can be taught to know what something is they can’t be taught to know what something feels like from a human perspective.

There is some bad news though. Quite a bit in fact. An enormous number of current jobs, especially low-skilled jobs, require none of these things. If your job is rigidly rule based, repetitive or depends upon the application of knowledge based upon fixed conventions then it’s ripe for digital disruption. So is any job that consists of inputting data into a computer. This probably sounds like low-level data entry jobs, which it is, but it’s also potentially vast numbers of administration, clerical and production jobs.

Whether you should be optimistic or pessimistic about all this really depends upon two things. Firstly, do you like dealing with people? And secondly, do you believe that people should be in charge? Last time I looked, humans were still designing these systems and were still in charge. So it’s still our decision whether or not technology will displace millions of people.

Some further bad news is that while machines, and especially robots, avatars and chatbots, are not especially empathetic or personable right now, they could become much more so in the future. Such empathy would, of course, be simulated, but perhaps this won’t matter to people. Paro, a furry seal cub that’s actually a robot, used in place of human carers for people with dementia in aged-care homes, appears to work rather well, as does PaPeRo, a childcare and early-learning robot used to teach language in kindergartens. You might argue that elderly people with dementia and kindergarten kids aren’t the most discerning of users. Then again, maybe humans really will prefer the company of machines to other people in the not-too-distant future.

Of course, this is all a little bit linear. Yes, robots are reliable and relatively inexpensive compared to people, but people can go on strike and governments can intervene to ensure that certain industries or professions are protected. An idle population, especially an educated one, can cause trouble too and no government would surely allow such large-scale disruption unless they knew that new jobs (hopefully less boring jobs) would be created.

Another point that’s widely overlooked is that demographic trends suggest that workforces around the world will shrink, due to declining fertility. So unless there is a significant degree of workplace automation the biggest problem we might face in the future is finding and retaining enough talented people, not worrying about mass unemployment. In this scenario robots won’t be replacing or competing with anyone directly, but will simply be taking up the slack where humans are absent or otherwise unobtainable.

But back to the exam question. Let’s assume, for a moment, that AI and robotics really do disrupt employment on a vast scale and vast numbers of people are thrown out of work. In this case, which professions are the safest and how might you ensure that it’s someone else’s job that’s eaten by robots and not yours?

The science fiction writer Isaac Asimov said that: “The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.” This sounds like poets, painters and musicians, but it’s also scientists, engineers, lawyers, doctors, architects, designers and anyone else who works with fluid problems and seeks original solutions. Equally safe should be anyone working with ideas that need to be sold to other people. People who are personable and persuasive will remain highly sought after, as will those with the ability to motivate people using narratives instead of numbers. This means managers, but also dreamers, makers and inventors.

In terms of professions, this is a much harder question to answer, not least because some of the safest jobs could be the new ones yet to be thrown up by developments in computing, robotics, digitalisation and virtualisation. Nevertheless, it’s highly unlikely that humans will stop having interpersonal and social needs, and even more unlikely that the delivery of all these needs will be within the reach of even the most flexible robots or autonomous systems. Obviously the designers of robots, computers and other machines should be safe, although it’s not impossible that these machines will one day start to design themselves. The most desirable outcome, and it’s also the most likely in my view, is that we will learn to work alongside these machines, not against them. We will design machines that find it easy to do the things we find tiresome, hard or repetitive, and we will invent the questions that we want our machines to answer.

As for how people can simultaneously believe that robots and computers will “probably” or “definitely” be performing most of the jobs in the future, while believing that their own jobs will remain safe, this is probably because humans have two other things that machines do not. Humans have hope and they can hustle too. Most importantly, though, at the moment we humans are still in charge and it is up to us what happens next.

People as Pets

A Korn Ferry study of 800 business leaders across the globe has found that they think technology will create more value than people in the future. 44 per cent of bosses go as far as saying that automation, AI and robotics (let’s create a new acronym here and call it AAIR, as in void, vacuum, full of hot…) will make staff “largely irrelevant” in the future.

This reminds me of a boss I heard about not so long ago who referred to his people as “pets” (the only reason the management team employed people at all was that regulation made the company do so).

Ref: CityAM 17.11.16 (P09)

Staying human in an age of automation


I thought this, via Aeon, would be interesting, although when I watched the video (7 minutes) I wasn’t so sure. Is he saying that we need to ‘gamify’ all aspects of work so that people are more engaged and have more fun? Why does work have to be fun all the time? Surely some aspects of work need a level of deep thinking and attention that screens, constant movement and distraction destroy. Moreover, shouldn’t humans focus on what humans do best and use machines to amplify this?

Personally I think that some aspects of gaming could be useful to apply to real world situations, including work in some instances, but overall I think the negatives far outweigh the positives and that the argument is weak. Surely, this is yet another example of digital solutionism. Screens and games are fine, but they shouldn’t remove us, distract us or distance us from physical human contact or thinking that is deep, sustained and reflective.

This is worth a read too if you are pro, although again I don’t agree with it.

All is not well…


A study by the NASUWT union in the UK found that 83 per cent of teachers reported workplace stress. Meanwhile, the Public and Commercial Services Union claims that two-thirds of civil servants have suffered ill health due to workplace stress. To cap things off, the NHS in the UK says that prescriptions of heavy-duty antidepressants increased by 29 per cent last year. What is going on?