The Paradox of Killer Robots

The key argument in favour of autonomous vehicles, such as driverless cars, is generally that they will be safer. In short, computers make better decisions than people and fewer people will be killed on the world’s roads if we remove human control. Meanwhile, armed robots on the battlefield are widely seen as a very bad idea, especially if these robots are autonomous and make life or death decisions unaided by human intervention. Isn’t this a double standard? Why can we delegate life or death decisions to a car, but not to a robot in a conflict zone?

You might argue that killer robots are designed to kill people, whereas driverless cars are not, but should such a distinction matter? In reality it might be that driverless cars kill far more people by accident than killer robots, because there are so many more of these machines. If we allow driverless vehicles to make instant life or death decisions, surely we must allow the same for military robots? And why not extend the idea to armed police robots too? Same logic.

My own view is that no machine should be given the capacity to make life or death decisions involving humans. AI is smart, and getting smarter, but no AI is even close to being able to understand the complexities, nuances or contradictions that can arise in any given situation.

The right not to be seen

If you are wondering why privacy matters, here’s a great summary by Daniel Solove, a professor at GW Law School. I hope he won’t mind me copying this here.

1. Limit on Power

Privacy is a limit on government power, as well as the power of private sector companies. The more someone knows about us, the more power they can have over us. Personal data is used to make very important decisions in our lives. Personal data can be used to affect our reputations; and it can be used to influence our decisions and shape our behavior. It can be used as a tool to exercise control over us. And in the wrong hands, personal data can be used to cause us great harm.

2. Respect for Individuals

Privacy is about respecting individuals. If a person has a reasonable desire to keep something private, it is disrespectful to ignore that person’s wishes without a compelling reason to do so. Of course, the desire for privacy can conflict with important values, so privacy may not always win out in the balance. Sometimes people’s desires for privacy are just brushed aside because of a view that the harm in doing so is trivial. Even if this doesn’t cause major injury, it demonstrates a lack of respect for that person. In a sense it is saying: “I care about my interests, but I don’t care about yours.”

3. Reputation Management

Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being. Although we can’t have complete control over our reputations, we must have some ability to protect our reputations from being unfairly harmed. Protecting reputation depends on protecting against not only falsehoods but also certain truths. Knowing private details about people’s lives doesn’t necessarily lead to more accurate judgment about people. People judge badly, they judge in haste, they judge out of context, they judge without hearing the whole story, and they judge with hypocrisy. Privacy helps people protect themselves from these troublesome judgments.

4. Maintaining Appropriate Social Boundaries

People establish boundaries from others in society. These boundaries are both physical and informational. We need places of solitude to retreat to, places where we are free of the gaze of others in order to relax and feel at ease. We also establish informational boundaries, and we have an elaborate set of these boundaries for the many different relationships we have. Privacy helps people manage these boundaries. Breaches of these boundaries can create awkward social situations and damage our relationships. Privacy is also helpful to reduce the social friction we encounter in life. Most people don’t want everybody to know everything about them – hence the phrase “none of your business.” And sometimes we don’t want to know everything about other people — hence the phrase “too much information.”

5. Trust

In relationships, whether personal, professional, governmental, or commercial, we depend upon trusting the other party. Breaches of confidentiality are breaches of that trust. In professional relationships such as our relationships with doctors and lawyers, this trust is key to maintaining candor in the relationship. Likewise, we trust other people we interact with as well as the companies we do business with. When trust is breached in one relationship, that could make us more reluctant to trust in other relationships.

6. Control Over One’s Life

Personal data is essential to so many decisions made about us, from whether we get a loan, a license or a job to our personal and professional reputations. Personal data is used to determine whether we are investigated by the government, or searched at the airport, or denied the ability to fly. Indeed, personal data affects nearly everything, including what messages and content we see on the Internet. Without having knowledge of what data is being used, how it is being used, the ability to correct and amend it, we are virtually helpless in today’s world. Moreover, we are helpless without the ability to have a say in how our data is used or the ability to object and have legitimate grievances be heard when data uses can harm us. One of the hallmarks of freedom is having autonomy and control over our lives, and we can’t have that if so many important decisions about us are being made in secret without our awareness or participation.

7. Freedom of Thought and Speech

Privacy is key to freedom of thought. A watchful eye over everything we read or watch can chill us from exploring ideas outside the mainstream. Privacy is also key to protecting speaking unpopular messages. And privacy doesn’t just protect fringe activities. We may want to criticize people we know to others yet not share that criticism with the world. A person might want to explore ideas that their family or friends or colleagues dislike.

8. Freedom of Social and Political Activities

Privacy helps protect our ability to associate with other people and engage in political activity. A key component of freedom of political association is the ability to do so with privacy if one chooses. We protect privacy at the ballot because of the concern that failing to do so would chill people’s voting their true conscience. Privacy of the associations and activities that lead up to going to the voting booth matters as well, because this is how we form and discuss our political beliefs. The watchful eye can disrupt and unduly influence these activities.

9. Ability to Change and Have Second Chances

Many people are not static; they change and grow throughout their lives. There is a great value in the ability to have a second chance, to be able to move beyond a mistake, to be able to reinvent oneself. Privacy nurtures this ability. It allows people to grow and mature without being shackled with all the foolish things they might have done in the past. Certainly, not all misdeeds should be shielded, but some should be, because we want to encourage and facilitate growth and improvement.

10. Not Having to Explain or Justify Oneself

An important reason why privacy matters is not having to explain or justify oneself. We may do a lot of things which, if judged from afar by others lacking complete knowledge or understanding, may seem odd or embarrassing or worse. It can be a heavy burden if we constantly have to wonder how everything we do will be perceived by others and have to be at the ready to explain ourselves.

Daniel J. Solove is the John Marshall Harlan Research Professor of Law at George Washington University Law School, the founder of TeachPrivacy, a privacy/data security training company, and a Senior Policy Advisor at Hogan Lovells.

The World Waits to Wobble

My joke 3 years ago that the EU might fall apart before Britain had left is starting to look semi-serious. I do believe that the world, especially Europe and the US, is one shock away from a major meltdown. This could be triggered by a major political or economic event, or by irrational emotional contagion. Fasten your seat belts folks, the ride is about to get bumpy.

Addressing the AI Hype

I’m quite close to completing my piece on what AI cannot do (and possibly never will), which is partly intended to address the hype and reduce the anxiety about what AI is capable of, especially with regard to future employment. In the meantime, this is worth a read. It’s a very sensible piece by Roger Schank. Bits of it I disagree with, but it’s on the money in many places. BTW, one thing to watch out for generally: if someone says AI will or won’t do something, ask them by when. By 2030? By 2100? Never?

Here’s Roger’s piece and here’s the link to the original. Thanks to Dr. Afric Campbell for spotting this btw.

So, to make clear what AI is really about I propose the ten question test. Here are ten questions that any person could easily answer and that no computer can answer well. Since I consider myself an AI person I would not say that no computer will ever be able to answer these. I hope we can figure it out. But AI isn’t here yet, despite what venture capitalists, giant companies, and the media keep saying. 

1. What would be the first question you would ask Bob Dylan if you were to meet him?

I am using this one because IBM’s Watson “met” Bob Dylan and told him that his songs were about love fading. First, I might point out that that conversation is insane. If you were to meet Bob Dylan you might have some things you’d want to know. I’d like to know if he feels that his songs are “literature.” I’d also like to know if he thinks he helped a generation feel stronger about protesting injustice and war. I would not count all the words he used and tell him which word appears most often. Watson does not behave as intelligent entities do. Intelligent entities are curious. They have things they want to know and can recognize who can answer questions that come to their minds about different arenas of life.

Here is another: 

2. Your friend told you, after you invited him for dinner, that he had just ordered pizza. What will he eat? Will he use a knife and fork? Why won’t he change his plans?

You will notice that eating is not mentioned in question 2. Neither are utensils. So how could an “AI” understand these questions? It would have to know about how people function in daily life. It would have to know that we eat what we order, and that when we say we ordered food it means that we intend to eat it, and it also means that we don’t want to waste it. It would also have to know that pizza is typically eaten with one’s hands. It might also know that Donald Trump famously eats pizza with a knife and fork and might mention that when asked.

3. I am thinking of driving to New York from my home in Florida next week. What do you think?

In order to answer the above question, one would need a model of why people ask questions like that one. It is hard to answer if you don’t know the person who is asking. If you do know that person you would also know something about what he is really asking. Does he have a car that is too old to make the trip? Maybe he has a brand new car and he is asking your advice about whether a long trip is a good way to break it in. Maybe he knows you live in New York and might have an idea whether the roads are icy there. Real conversation involves people who make assessments about each other and know what to say to whom based on their previous relationship and what they know about each other. Maybe the asker is really asking about a place to stay along the way (if the person being asked lives in Virginia, say). Sorry, but no “AI” is anywhere near being able to have such a conversation because modern AI is not building complex models of what we know about each other.

4. Who do you love more, your parents, your spouse, or your dog?

What does this question mean and why would anyone ask it? Maybe the person being asked is hugging their dog all the time. Maybe the person being asked is constantly talking about his or her parents. People ask questions as well as answer them. Is there an “AI” that is observing the world and getting curious enough to ask a question about the inner feelings of someone with whom it is interacting? People do this all the time. “AI’s” do not.

5. My friend’s son wants to drop out of high school and learn car repair. I told her to send him over. What advice do you think I gave him?

If you know me, you would know how I feel about kids being able to follow their own interests despite what school wants to teach. So an intelligent entity that I told this to would probably be able to guess what I said. Can you? No “AI” could.

6. I just saw an ad for IBM’s Watson. It says it can help me make smarter decisions. Can it?

Here is the ad: https://www.ispot.tv/ad/7Fip/ibm-watson-analytics-make-smarter-decisions-feat-dominic-cooper

My guess is that this is something Watson can do. It can analyze data, and with more information a person can make better decisions. Could Watson make the decision? Of course not. Decision making involves prioritizing goals and being able to anticipate the consequences of actions. Watson can do none of that.

7. Suppose you wanted to write a novel and you met Stephen King. What would you ask him?

Here is another Watson ad: https://www.ispot.tv/ad/A6k6/ibm-stephen-king-ibm-watson-on-storytelling

I have no idea what IBM is trying to say to the general public here. Apparently IBM is very proud that it can count how many times an author says the word “love.” If I wanted advice on writing a novel I doubt I would ask Stephen King, but here is one thing that is sure: Watson wouldn’t understand anything he said about writing a novel, and Watson won’t be writing any novels any time soon. Now as it happens my AI group frequently worked on getting computers to write stories of one sort or another. We learned a lot from doing that. I am quite sure that IBM hasn’t even thought about what is involved in getting a computer to write novels. Having something the computer wants to say? Having had an experience that the computer is bursting to describe to people? That would be AI.

8. Is there anything else I need to know?

When might you ask such a question? You might have had a conversation with a chat bot and found out how to get somewhere you were trying to go. Then you might (if you were talking to a person) ask if there is anything else you needed to know. Answering that question involves knowing whom you are talking to. (Oh, yeah, there is a great Ethiopian restaurant nearby and watch out for speed traps.) Let’s see the chat bot that can answer that question.

9. I can’t figure out how to grow my business. Got any ideas?

It is obvious why this is a difficult question. But, in business, people have conversations like that all the time. They use their prior experiences to predict future experiences. They make suggestions based on stuff they themselves have done. They give advice based on cases in their own lives and they usually tell personal stories to illustrate their points. That is what intelligent conversation sounds like. Can AI do that? Not today, but it is possible. Unfortunately there is no one that I know of who is working on that. Instead they are working on counting words and matching syntactic phrases.

They are also working on AI document checkers that will help Word with spell check or grammar check. “NeuroGrammar™ uses its advanced neural-network artificial intelligence algorithms in order to analyse every noun phrase and verb phrase in every sentence for syntactic and semantic errors.”

How marvelous. So here is my last question:

10. Does what I am writing make sense?

Amazingly, this is hard. Why? Because in order to understand my points you need to match them to things you already think and see if I have helped you think about things better or decide that you disagree with what I am saying here based on your own beliefs. You already have an opinion on whether my writing style was comprehensible and whether the points I made made sense to you. You can do that. AI cannot. Do I think we could do that someday in AI? Maybe. We would have to have a complete model of the world and an understanding of what kinds of ideas people argue for and what counterarguments are reasonable. Intelligent people all do this. “AI’s” do not. An “AI” that understood documents would not be a grammar checker.

It would be nice if people stopped pushing AI that is based on statistics and word counts and “AI people” tried to do the hard work that making AI happen would require. 

We’ll meat again…

…don’t know where, don’t know when.

So Michael Mansfield QC, a vegetarian, has stated that eating meat should be a crime against humanity in the future. So instead of being handed a menu in a pub with a vegetarian option, in the future you might be handed a vegan menu with a single meat option. Or perhaps we’ll see underground meat-eating dens, like the drinking dens of Prohibition. Maybe criminals will shift their focus from cyber-crime to the production of mince.

A few thoughts. If we suddenly stopped eating meat, what might happen? We all know about the environmental benefits. People would be healthier too, but what happens to the animals themselves, the rural communities, cultural identity and tradition? There’s a strong case that it would be the poorest communities that would suffer the most too.

What interests me most though is the tone with which such arguments are made nowadays. It is all so angry. People seem to feel slighted if someone else does something they don’t agree with.

“Keep smiling through, just like you always do, till the blue skies drive the dark clouds far away”

Digital Afterlives


“The first time I texted James I was, frankly a little nervous. “How are you doing?” I typed, for want of a better question. “I’m doing alright, thanks for asking.” That was last month. By then James had been dead for almost eight months.” *

Once, you died and you were gone. There was no in-between, no netherworld, no underworld. There could be a gravestone or an inscription on a park bench. Perhaps some fading photographs, a few letters or physical mementoes. In rare instances, you might leave behind a time capsule for future generations to discover.

That was yesterday. Today, more and more, your dead-self inhabits a technological twilight zone – a world that is neither fully virtual nor totally artificial. The dead, in short, are coming back to life and there could be hordes of them living in our houses and following us wherever we go in the future. The only question is whether or not we will choose to communicate with them.

Nowadays, if you’ve ever been online, you will likely leave a collection of tweets, posts, timelines, photographs, videos and perhaps voice recordings. But even these digital footprints may appear quaint in the more distant future. Why might this be so? The answer is a combination of demographic trends and technological advances. Let’s start with the demographics.

The children of the revolution are starting to die. The baby boomers that grew up in the shadows of the Second World War are fading fast and next up it’s the turn of those who grew up in the 1950s and 60s. These were the children that challenged authority and tore down barriers and norms. This generation is numerically large, and what they did in life they are starting to do in death: they are challenging what happens to them and how they are remembered.

Traditional funerals, with all their cost, formality and morbidity, are therefore being replaced with low-cost funerals, direct cremations, woodland burials and colourful parties. We are also starting to see experiments concerning what is left behind, instances of which can be a little ‘trippy’.

If you die now, and especially if you’ve been a heavy user of social media, a vast legacy remains – or at least it does while the tech companies are still interested in you. Facebook pages persist after death. In fact, a few decades from now, there could be more dead people on Facebook than living ones. Already, memorial pages can be set up (depending on privacy settings and legacy contacts) allowing friends and family to continue to post. Dead people even get birthday wishes and in some instances a form of competitive mourning kicks in. Interestingly, some posts to dead people even become quite confessional, presumably because some people think conversations with the dead are private. In the future, we might even see a kind of YouTube of the dead.

But things have started to get much weirder. James, cited earlier, is indeed departed, but his legacy has been a computer program that’s woven together countless hours of recordings made by James and turned into a ‘bot – but a ‘bot you can natter to as though James were still alive. This is not as unusual as you might think.

When 32-year-old Roman Mazurenko was killed by a car, his friend Eugenia Kuyda memorialised him as a chatbot. She asked friends and family to share old messages and fed them into a neural network built by developers at her AI start-up called Replika. You can buy him – or at least what his digital approximation has become – on Apple’s app store. Similarly, Eter9 is a social network that uses AI to learn from its users and create virtual selves, called “counterparts”, that mimic the user and live on after they die. Or there’s Eterni.me, which scrapes interactions on social media to build up a digital approximation that knows what you “liked” on Facebook and perhaps knows what you’d still like if you weren’t dead.
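To make the idea a little more concrete, here is a purely illustrative sketch of the simplest possible version of such a bot: one that answers each prompt by retrieving the most similar message from the deceased's archive. This is not how Replika or Eterni.me actually work (they used trained neural networks, not bag-of-words retrieval), and the names and messages below are invented.

```python
# Toy "memorial chatbot": replies with the archived message most
# similar to the prompt, scored by bag-of-words cosine similarity.
# Illustrative only -- real systems trained neural networks on the
# message corpus. All messages here are invented.
import math
import re
from collections import Counter

def vectorise(text: str) -> Counter:
    """Bag-of-words vector: lowercase word counts, punctuation stripped."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(n * n for n in a.values())) * \
           math.sqrt(sum(n * n for n in b.values()))
    return dot / norm if norm else 0.0

def reply(prompt: str, archive: list[str]) -> str:
    """Return the archived message closest to the prompt."""
    query = vectorise(prompt)
    return max(archive, key=lambda message: cosine(query, vectorise(message)))

archive = [
    "I'm doing alright, thanks for asking.",
    "See you at the pub on Friday?",
    "That gig last night was brilliant.",
]
print(reply("How are you doing?", archive))  # "I'm doing alright, thanks for asking."
```

Even this crude retrieval scheme can produce an eerily plausible exchange like the one quoted at the top of this piece, which is part of what makes the real, far more capable systems so unsettling.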

This might make you think twice about leaving Alexa and other virtual assistants permanently on for the rest of your life. What exactly might the likes of Amazon, Apple and Google be doing with all that data? Life enhancing? Maybe. But maybe death defying too. More ambitious still are attempts to extract our daily thoughts directly from our brains, rather than scavenging our digital footprints. So far, brain-computer interfaces (BCIs) have been used to restore motor control in paralysed patients through surgically implanted electrodes, but one day BCIs may be used alongside non-invasive techniques to literally record and store what’s in our heads and, by implication, what’s inside the heads of others. Still not Sci-Fi enough for you? Well how about never dying in the first place?

We’ve seen significant progress in extending human lifespans over the last couple of centuries, although longevity has plateaued of late and may even fall in the future due to diet and sedentary lifestyles. Enter regenerative medicine, which has a quasi-philosophical and semi-religious activist wing called Transhumanism. Transhumanism seeks to end death altogether. One way to do this might be via Nano-bots injected into the blood (reminiscent of the 1966 sci-fi movie Fantastic Voyage). Or we might genetically engineer future generations or ourselves, possibly adding ‘repair patches’ that reverse the molecular and cellular damage much in the same way that we ‘patch’ buggy computer code.

Maybe we should leave Transhumanism on the slab for the time being. Nevertheless, we do urgently need to decide how the digital afterlife industry is regulated. For example, should digital remains be treated with the same level of respect as physical remains? Should there be laws relating to digital exhumation, and what of the legal status of replicants? For instance, if our voices are being preserved, who, if anyone, should be allowed access to our voice files, and could commercial use of an auditory likeness ever be allowed?

At the Oxford Internet Institute, Carl Öhman studies the ethics of such situations. He points out that over the next 30 years, around 3 billion people will die. Most of these people will leave their digital remains in the hands of technology companies, who may be tempted to monetise these ‘assets.’ Given the recent history of privacy and security ‘outages’ from the likes of Facebook we should be concerned.

One of the threads running through the hit TV series Black Mirror is the idea of people living on after they’re dead. There’s also the idea that in the future we may be able to digitally share and store physical sensations. In one episode called ‘Black Museum’, for example, a prisoner on death row signs over the rights to his digital self, and is resurrected after his execution as a fully conscious hologram that visitors to the museum can torture. Or there’s an episode called ‘Be Right Back’ where a woman subscribes to a service that uses the online history of her dead fiancé to create a ‘bot that echoes his personality. But what starts off as a simple text-messaging app evolves into a sophisticated voicebot and is eventually embodied in a fully lifelike, look-a-like, robot replica.

Pure fantasy? We should perhaps be careful what we wish for. The terms and conditions of the Replika app mentioned earlier contain a somewhat chilling passage: people signing up to the service agree to “a perpetual, irrevocable, licence to copy, display, upload, perform, distribute, store, modify and otherwise use your user content.” That’s a future you they are talking about. Sleep well.


* The Telegraph magazine (UK) 19 January 2019. ‘This young man died in April. So how did our writer have a conversation with him last month?’