Delivery robots

So I saw one of these last night wandering around near London Bridge. I got talking to someone who said that research had shown that some customers didn’t like dealing with people when ordering delivery food. Apparently things can be a little awkward. Strangers and all that.

Two thoughts. First, this, and things like it, have almost certainly been invented by people who do indeed have problems dealing with people. I’m sure such things work for them. But does this mean such things have to be imposed on the rest of us?

Second, what kind of world are we creating where people prefer interactions with machines to other people? A world, perhaps, where people live alone, work alone and don’t even go out to shop or eat. I’m sure that might work for a while, but I suspect that a long-term consequence might be emotional fragility and instability.

Remember not to forget to be human.

Will a robot eat your job?

A recent Bank of England study has said that 15 million of the UK’s 30 million jobs could be at risk from automation over the coming years. Meanwhile, a US Pew study found that two-thirds of Americans believe that in 50 years’ time, robots and computers will “probably” or “definitely” be performing most of the work currently done by humans. However, 80 per cent also believe that their own profession will “definitely” or “probably” be safe.

Putting to one side the obvious inconsistency, how might we ensure that our jobs, and those of our children, are safe in the future? Are there any particular professions, skills or behaviours that will ensure that we remain gainfully employed until we decide that we no longer want to be? Indeed, is there anything of significance that we do that artificial intelligence can’t? And how might education be upgraded to ensure peak employability in the future? Finally, how realistic are doomsday forecasts that robotics and automation will destroy countless millions of jobs?

To answer all of these questions we should first delve into what it is that robots, artificial intelligence, automated systems and computers generally do today and then speculate in an informed manner about what they might be capable of in distant tomorrows.

A robot is generally defined as an artificial or automated agent that’s programmed to complete a series of rule-based tasks. Originally robots were used for dangerous or unskilled jobs, but they are increasingly used to replace people when people are absent, expensive or in short supply. This is broadly true of automated systems as well. They are programmed to acquire data, make decisions, complete tasks or solve problems based upon pre-set rules. Not surprisingly, machines such as these are tailor-made for repetitive tasks such as industrial manufacturing or for beating humans at rule-based games such as chess or Go. They can be taught to drive cars, which is another rule-based activity, although self-driving cars do run into one big problem: humans who don’t follow the same logical rules.

The key phrase here is rule-based. Robots, computers and automated systems have to be programmed by humans with certain endpoints or outcomes in mind. At least that’s been true historically. Machine learning now means that machines can be left to invent their own rules based on observed patterns. In other words, machines can now learn from their own experience much like humans do. In the future it’s even possible that robots, AIs and other technologies will be released into ‘the wild’ and begin to learn for themselves through human interaction and direct experience of their environment, much in the same way that we do.
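To make the distinction concrete, here is a minimal sketch (in Python, using scikit-learn) of the difference between a rule written by a programmer in advance and a rule a machine derives for itself from observed examples. The data, threshold and function names are invented purely for illustration.

```python
# Minimal sketch: a hand-coded rule vs. a rule learned from examples.
# All data, names and thresholds here are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Rule-based approach: a human writes the rule in advance.
def accept_order(order_value):
    return order_value >= 10  # fixed threshold chosen by a programmer

# Machine-learning approach: the machine infers its own rule from past data.
past_order_values = [[3], [6], [8], [12], [15], [20]]  # orders seen before
was_accepted = [0, 0, 0, 1, 1, 1]                      # what actually happened

model = DecisionTreeClassifier(max_depth=1)
model.fit(past_order_values, was_accepted)

print(accept_order(9))          # the human-written rule says: False
print(model.predict([[9]])[0])  # the learned rule gives its own answer
print(export_text(model))       # the 'rule' the machine invented, as text
```

The point of the sketch is simply that in the second case no human ever typed the threshold in; the machine found a pattern in the data and turned it into a rule of its own.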

Having said this, at the moment it’s tricky for a robot to make you a cup of tea, let alone bring it upstairs and persuade you to drink it if you’re not thirsty. Specialist (niche) robots are one thing, but a universally useful (general) robot that has an engaging personality, one that humans feel happy to converse with, is something else.

And let’s not forget that much of this ‘future’ technology, especially automation, is already here but that we choose not to use it. For example, airplanes don’t need pilots. They already fly themselves, but would you get on a crewless airplane? Or how about robotic surgery? This too exists and it’s very good. But how would you feel about being put to sleep and then being operated on with no human oversight whatsoever? It’s not going to happen. We have a strong psychological need to deal with humans in some situations and technology should always be considered alongside psychology.

That’s the good news. No robot is about to steal your job, especially if it’s a skilled or professional job that’s people-centric. Much the same can be said of AI and automated systems generally. Despite what you might read in the newspapers (still largely written by humans despite the appearance of story-writing robots), many jobs are highly nuanced and intuitive, which makes coding difficult. Many if not most jobs also involve dealing with people on some level, and people can be illogical, emotional and just plain stupid.

This means that a great many jobs are hard to fully automate or digitalise. Any job that can be subtly different each time it’s done (e.g. plumbing) or requires a certain level of aesthetic appreciation or lateral thought is hard to automate too. Robots, AIs and automated systems can’t invent and don’t empathise either. This probably means that most nurses, doctors, teachers, lawyers and detectives are not about to be made redundant. You can program machines to diagnose illness or conduct surgery. You can get robots to teach kids maths. You can even create algorithms to review case law or predict crime. But it’s far more difficult for machines to persuade people that certain decisions need to be made or that certain courses of action should be followed.

Being rule-based, pattern-based or at least logic-based, machines can copy but they aren’t capable of original thought, especially thoughts that speak to what it means to be human. We’ve already had a computer that’s studied Rembrandt and created a ‘new’ Rembrandt painting that could fool an expert. But that’s not the same as studying art history and subsequently inventing cubism. This requires rule breaking, not pattern recognition.

Similarly, it’s one thing to teach a machine to write poetry or compose music, but it’s another thing entirely to write poetry or music that connects with people on an emotional level and speaks to their experience of being human. Machines can know nothing about the human experience and, while they can be taught to know what something is, they can’t be taught to know what something feels like from a human perspective.

There is some bad news though. Quite a bit, in fact. An enormous number of current jobs, especially low-skilled jobs, require none of these things. If your job is rigidly rule-based, repetitive or depends upon the application of knowledge based upon fixed conventions, then it’s ripe for digital disruption. So is any job that consists of inputting data into a computer. This probably sounds like low-level data entry jobs, which it is, but it’s also potentially vast numbers of administrative, clerical and production jobs.

Whether you should be optimistic or pessimistic about all this really depends upon two things. Firstly, do you like dealing with people? And secondly, do you believe that people should be in charge? Last time I looked, humans were still designing these systems and were still in charge. So it’s still our decision whether or not technology will displace millions of people.

Some further bad news, though, is that while machines, especially robots, avatars and chatbots, are not especially empathetic or personable right now, they could become much more so in the future. Such empathy would, of course, be simulated, but perhaps this won’t matter to people. Paro, a furry seal cub that’s actually a robot, used in place of human carers in aged-care homes for people with dementia, appears to work rather well, as does PaPeRo, a childcare and early learning robot used to teach language in kindergartens. You might argue that elderly people with dementia and kindergarten kids aren’t the most discerning of users, but maybe not. Maybe humans really will prefer the company of machines to other people in the not too distant future.

Of course, this is all a little bit linear. Yes, robots are reliable and relatively inexpensive compared to people, but people can go on strike and governments can intervene to ensure that certain industries or professions are protected. An idle population, especially an educated one, can cause trouble too, and surely no government would allow such large-scale disruption unless it knew that new jobs (hopefully less boring jobs) would be created.

Another point that’s widely overlooked is that demographic trends suggest that workforces around the world will shrink, due to declining fertility. So unless there is a significant degree of workplace automation the biggest problem we might face in the future is finding and retaining enough talented people, not worrying about mass unemployment. In this scenario robots won’t be replacing or competing with anyone directly, but will simply be taking up the slack where humans are absent or otherwise unobtainable.

But back to the exam question. Let’s assume, for a moment, that AI and robotics really do disrupt employment on a vast scale and vast numbers of people are thrown out of work. In this case, which professions are the safest and how might you ensure that it’s someone else’s job that’s eaten by robots and not yours?

The science fiction writer Isaac Asimov said: “The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.” This sounds like poets, painters and musicians, but it’s also scientists, engineers, lawyers, doctors, architects, designers and anyone else who works with fluid problems and seeks original solutions. Equally safe should be anyone working with ideas that need to be sold to other people. People who are personable and persuasive will remain highly sought after, as will those with the ability to motivate people using narratives instead of numbers. This means managers, but also dreamers, makers and inventors.

In terms of professions, this is a much harder question to answer, not least because some of the safest jobs could be the new ones yet to be thrown up by developments in computing, robotics, digitalisation and virtualisation. Nevertheless, it’s highly unlikely that humans will stop having interpersonal and social needs, and even more unlikely that the delivery of all these needs will be within the reach of even the most flexible robots or autonomous systems. Obviously the designers of robots, computers and other machines should be safe, although it’s not impossible that these machines will one day start to design themselves. The most desirable outcome, and it’s also the most likely in my view, is that we will learn to work alongside these machines, not against them. We will design machines that find it easy to do the things we find tiresome, hard or repetitive, and they will rely on us to invent the questions that we want our machines to answer.

As for how people can simultaneously believe that robots and computers will “probably” or “definitely” be performing most of the jobs in the future, while believing that their own jobs will remain safe, this is probably because humans have two other things that machines do not. Humans have hope and they can hustle too. Most importantly though, at the moment we humans are still in charge and it is up to us what happens next.

Automation Angst

According to some techno-evangelists, humanity is on the verge of huge breakthroughs in computing, robotics, genetics, automation and artificial intelligence that will dwarf many of the inventions of the past two centuries. They might be right, but at what price? What might the cost be of these breakthroughs in terms of unemployment and inequality? Moreover, are these breakthroughs really as close as the evangelists claim and will they be as fundamental as those in the past?

It can be argued, for example, that compared to clean water or the invention of the motorcar, Facebook and Uber are trivial inventions. Most of our recent innovations are incremental improvements of innovations created years ago, and much of the most significant change over the last 100 years has been social, not technological.

Time will tell who’s right, but it does seem a fair bet to suggest that the search for economic efficiency and convenience will continue to displace workers on a significant scale and may concentrate wealth in a handful of places and professions. Fully autonomous farms, factories, warehouses, logistics and transport networks are probably not that far off, and it’s possible that the development of digital and virtual products and services, many of them delivered for ‘free’, will result in mass consumption being decoupled from mass employment, which could be catastrophic.

None of this has to be a bad thing, of course, if new jobs are created and perhaps if the spoils of efficiency are fairly shared, although remember that while the Industrial Revolution created new jobs, wages in England were stagnant or declined for almost 40 years, and working conditions associated with many of the new jobs were appalling.

Having said this, it’s almost impossible that all old jobs will disappear. Many of the developments that are nervously anticipated are still years away and many of the things that humans do will remain out of reach for robots and autonomous systems. Humans are far better than machines at abstraction, generalisation and creative thinking. We have vastly better common sense than machines, and we’re agile, nimble and energy efficient too. And don’t forget that it’s humans who have rights and vote – and we can revolt too. Humans are also a deeply social species and physical connection is likely to remain important. Many of our economic needs are also explicitly interpersonal or social.

The bad news is that how robots and automated systems interact with human beings depends a great deal upon how much their designers know or care about human beings. In the same way that there’s a fine line between genetics and eugenics, there’s a thin line between technologies that enhance humanity and those that diminish it. Moreover, developments in robotics, information technology, neuro-technology and genetics all have the potential to vastly widen the gaps that already exist in health, intelligence, opportunity and achievement.

The amount of data that spills from these technologies also seriously threatens privacy and freedom of choice. There’s a very real possibility too that one day these technologies will advance to such a point that the owners of the technologies will be able to predict and control almost everything an individual does, thereby reducing humans to mere automatons.

This is unlikely, although an even worse scenario might involve the widespread adoption of mediocre artificial intelligence and predictive systems, which, little by little, lead to a train wreck of momentous proportions in the form of a decline in human agency or a crisis in human identity.

Governments and giant corporations are pouring billions into robotics and automation projects with almost no external oversight whatsoever. We urgently need the inclusion of an ethical code alongside any computer code and should be able to quiz the technologists, and perhaps one day the technologies themselves, about their knowledge, their skills, and their intentions. Ultimately, any AI singularity is a choice, not a destiny.

Humans

The zeitgeist seems to be moving from zombies to another form of the undead – robots. Don’t know if you’ve seen the TV series Humans, but it touches wonderfully on the key theme of what’s real (human) and what isn’t. Reminds me of something I wrote for Future 50 a while back. Oh, the image above? Just an early view of the shape of things to come!

‘Uncanny resemblance’ is a term often used to describe something, or more usually someone, that looks strangely or spookily familiar. In robotics, the term ‘Uncanny Valley’ is used to describe how people instinctively reject robots that look too much like human beings, the valley in question being a trough in a graph showing robot rejection and acceptance.

The word ‘robot’ comes from the Czech word robota, meaning ‘servitude’, although some translations use the terms ‘obligatory work’, ‘forced labour’ or ‘drudgery’. Most popular visions of the future include robots, often with human-like forms and with other features mimicking human height, eyes, limbs, movement and even human conversation. But this is precisely where the trouble starts. We have become accustomed to the idea of robots making other machines, cars, for example, and we are now getting used to robots in the form of cuddly toys, lawnmowers, vacuum cleaners and bomb-disposal machines. The Japanese are apparently even getting used to R2-D2-like nursery assistants and aged-care robots. But what happens when someone builds a humanoid-like bot that looks and acts like, well, one of us? To some extent we already know.

In Japan, for example, Dr Hiroshi Ishiguro has created a robot that looks like … Dr Ishiguro. The resemblance is uncanny, prompting an uncomfortable reaction from observers, especially as ‘he’ sports the same glasses and wears the same clothes. From a distance you can hardly tell the difference. Interestingly, the idea of such robots tends to be rejected by adults, but is often accepted by young children. Not all young children, though. Before he made a robotic copy of himself, Dr Ishiguro made a lifelike copy of his four-year-old daughter. She was so upset after seeing it that she refused point blank to enter her father’s laboratory in case she encountered it again. As to what will happen if robots become so lifelike in appearance and mannerisms that you really cannot tell the difference, that’s anyone’s guess.

Psychologically speaking, we recognise certain types of robot as lifelike – meaning human lifelike – and then we suddenly notice various non-human features or characteristics, which leads to feelings of unease, alienation and even disgust. Perhaps the same could be said of our reactions to dead bodies. Cartoon characters, cartoon-like avatars and cuddly toys, in contrast, do not present the same level of threat because they are not trying to trick us into believing they’re human. Perhaps this is linked to some kind of ancient species preservation or protection instinct. Or maybe we’ve all just been watching too much tech-noir science fiction? Some people totally reject the whole hypothesis, arguing that it’s ridiculous to reduce human authenticity to a single measurement on a graph, but as robotics, virtual reality, artificial intelligence, computer animation and synthetic biology all converge, this debate will get quite complex. This fact has not escaped the attention of artists, such as Patricia Piccinini, who has created human-hybrid sculptures and other controversial artworks. And if you think Patricia’s work is a little disturbing, have a look at the reborn ‘dolls’ created by the photographer Rebecca Martinez for a project called ‘Pretenders’ or the ‘lifelike’ artworks of Ron Mueck.

But apart from these controversies, what else can we expect from robots in the near future? At the moment the robotics industry is fragmented, with a plethora of standards and platforms, much like the computer industry was in the 1970s. Currently, most robots are also low-volume niche products, ranging from bomb-disposal and surveillance robots used by the military to domestic robots that cut the lawn or sweep the floor. But this will change due to the convergence of a handful of trends. First, the cost of computing power (processing and storage) is dropping fast. Second, distributed computing, voice and visual recognition technologies and wireless broadband connectivity are similarly dropping in price and increasing in availability.

Personal robots could soon be dispensing medicine, folding laundry, teaching kids and keeping an eye open for intruders. There could also be some less obvious uses for robots, especially in customer service roles. For instance, robots could carry your shopping bags in a supermarket or your suitcases in a hotel. They could replace guide dogs for visually impaired people or take over from care workers in nursing homes. Whether a machine will ever fully replace human or animal contact is a big question and will in large part depend on what these robots look like. However, attitudes may shift, especially if humanoid robots start to display synchronous behaviour (e.g. they mimic human gestures) and can learn to be emotional. As for whether people could form strong relationships with robots, that’s an open question, although our experience with animals might suggest that we will.
There is little evidence of any inner consciousness in domesticated animals, but we often treat them almost like human companions. Perhaps, by 2050, we will regularly have relationships with robots and even end up, in some cases, marrying them.

South Korea Recruits Prison Robots

Good grief. I’m trying to write an essay on quantum and DNA computing. It’s making my head hurt. So again (with apologies) something silly. Apparently a prison in the South Korean city of Pohang is testing 1.5m-tall robot wardens on inmates. There’s a joke in there somewhere, but my head is too full of superposition and entanglement to function.

BTW, thanks to Matt who spotted this on the BBC News website.

Reindeer at 35,000 feet

Finns are getting weird. Back on Finnair and this time it’s reindeer salad and cloudberries at 35,000 feet. The highlight of this trip was Gate 37a. There was a woman dressed in white standing in a corner looking at the wall. To start with I thought perhaps she had done something very naughty and had been asked by airport officials to stand in the corner for ten minutes (oh, the memories!). Then I realised she wasn’t moving.

Why would you stand facing the wall? Surely you’d look outwards. I was fascinated. I was also interested by the fact that nobody else seemed to have noticed her or, if they had, they were ignoring her.

Then it dawned on me. This was either a brilliant statue or someone was doing performance art (yawn). It turned out it was neither. She finally moved when a small girl ran up to her and looked at her up close. Turns out she was making a phone call and, I presume, was trying to block out the surrounding distractions. However, my imagination had been stirred and my eyes soon settled on a Japanese woman with perfect white skin. She looked a bit like the female android created by Professor Hiroshi Ishiguro and she wasn’t moving either. A real-life robot as hand luggage? No, just someone else good at not moving.

Other news to report. First, the Finnair spa has five different types of sauna, not four as previously stated (see June 14 post). Furthermore, there’s a bathing pool filled with mineral water (there might also be one filled with asses’ milk, but I couldn’t find it). Anyway, the interesting thing was that there was a window from the pool (also observable from one of the saunas) that looked directly at a concourse, through which people were passing, dragging suitcases and unruly children. But the glass is one-way. You can see them but they can’t see you.

Is this a glimpse of the future? A privileged few frolicking in a giant bath, while everyone else is stressing out about finding something to drink or somewhere quiet to sit. The one-way glass interests me immensely. Was it there so that the few could gawp at the many and think how lucky they were? Was it there simply to add to the experience (sitting in a pool looking at aircraft taxiing on the tarmac does have a certain appeal)? Or was it there to somehow emphasise schadenfreude? I think that’s it.

It’s the same reason that business-class-only flights don’t work. Psychologically, part of the reason business-class seating works is that there are economy seats close by. The cost of a business ticket includes the perverted thrill of seeing people turn right when you turn left or, better still, having people file past you while you are sipping champagne.

Please don’t get me wrong. I am not some kind of status junkie. Far from it. I didn’t buy my ticket and I’m certainly not paying for it. I have some Yorkshire/Scottish/no-money heritage and my greatest thrill is spending the least possible amount of money on anything. Nevertheless, it’s interesting to observe this spectacle and speculate as to whether a similar dynamic might operate across other swathes of society in the future – a polarisation where, if you can afford it, you are silently whisked from one place to another and generally treated like a king, and where, if you can’t, you are stuck on hold, forced to talk to machines, made to wait in line, rounded up and treated like cattle and generally fed to the lions, red in tooth and claw, of capitalism, extreme individualism and free-market economics (i.e. my trip on easyJet to Munich last week).

One more thing. The Future of the Internet by Jonathan Zittrain is worth reading, especially the chapter about what we can learn from Wikipedia. It’s especially interesting if you read it in a sauna with fresh herbs hanging from a roof made of roughly sawn pinewood with pine needles on the floor.