Memory & Understanding: Paper versus Pixels

A study by Pam A. Mueller of Princeton and Daniel M. Oppenheimer of UCLA has found that US college students who take notes on laptop computers are more likely to record lecturers’ words verbatim. That sounds like a good thing, but the study goes on to say that because their notes are verbatim, these students are LESS LIKELY to mentally absorb what’s being said.

In one experiment, laptop-using students recorded 65% more of the lecture verbatim than those who used longhand; 30 minutes later, the laptop users performed significantly worse on conceptual questions. According to the researchers, longhand note-takers learn by reframing lecturers’ ideas in their own words.

This chimes with anecdotal evidence in the UK that some students aged around 16-18 are going back to index cards for exam revision because, as one said to me quite recently:  “stuff on screens doesn’t seem to sink in.”

Source: ‘The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking’, Psychological Science (SAGE Journals). See also ‘Why the Brain Prefers Paper’, Scientific American, November 2013.

Wearable computing that isn’t

It’s not just me then. I bought a Nike FuelBand a while back to see if it worked. It did. I walked a bit more. The dog often went out twice rather than once. But then it started to feel like another thing to worry about. And those “Go Richard!” messages can get really annoying. According to Mike Merrill, writing for Big Think, half of American adults who have owned an activity tracker no longer use it, and a third of those who have owned a wearable product stopped using it within six months.

Paper versus screens

Does the technology that we use to read change how we read? Since as far back as the 1980s, researchers have been looking at the differences between reading on paper and reading on screens. Prior to 1992, most studies concluded that people using screens read things more slowly and remember less about what they’ve read. Since 1992, a more mixed picture has emerged.

The most recent research suggests that people prefer to use paper when they need to concentrate, but this may be changing. In the US, 20% of all books sold are now e-books and digital reading devices have developed significantly over the last 5-10 years. Nevertheless, it appears that digital devices prevent people from navigating long texts effectively, and this may inhibit comprehension. Screens, it seems, drain more of our mental resources and make it harder to remember what we’ve read. This is not to say that screens aren’t useful – far from it – but more needs to be done to appreciate the advantages of paper and to limit the downsides of screens.

One of the issues is topography. A paper book presents two domains – a right-hand and a left-hand page – from which readers orientate themselves. There is also a sense of physical progression with paper books, which allows the reader to get some sense of overall place and form a coherent mental picture of the whole text. With screens things are different.

Digital pages are more ephemeral. They vanish once they have been read, and it is difficult to see a page or a passage in the context of the larger text. Some research (e.g. a 2013 study by Anne Mangen at the University of Stavanger) suggests that this is precisely why screens often impair comprehension. It has even been suggested that operating a digital device is more mentally taxing than handling a book, partly because screens shine light directly into the reader’s face, causing eyestrain. A study by Erik Wästlund at Karlstad University, for example, found that people taking a reading comprehension test on a screen reported higher levels of stress and tiredness than people taking the same test on paper.

There is also the idea, rarely recognised, that people bring less mental effort to screens in the first place. A study by Ziming Liu at San Jose State University found that people reading on screens use a lot of shortcuts and spend time browsing or scanning for things not directly linked to the text. Another piece of research (Kate Garland, University of Leicester) makes the key point that people reading on a screen rely much more on remembering the text, whereas people reading on paper rely much more on understanding what the text means. This distinction between remembering and knowing is especially critical in education.

Research by Julia Parish-Morris and colleagues (now at the University of Pennsylvania) found that three- to five-year-old children reading stories from interactive books spent much of their time being distracted by buttons and easily lost track of the narrative and what it meant.

Clearly screens have considerable advantages. Convenience and fast access to information is one. For older or visually impaired readers, the ability to change font size is another. But it is precisely the simple, uncomplicated nature of paper that makes it so special. Paper does not draw attention to itself. It does not contain hyperlinks or other easy distractions, and its tactile and sensory nature is not only pleasing but actually helps us to navigate and understand the text.

Screens vs paper (and comprehension)

I’ve just (almost) completed some scenarios for the future of gaming so I’m back in the office scribbling like a demon. The latest scribble is a map of emerging technologies and it occurs to me that I am never happier than when I’ve got a sharp pencil in my hand and a large sheet of white paper stretching out in front of me.

Thinking of this, there was an excellent piece this time last year (22/29 December 2012) in the New Scientist on the power of doodles. Freud, apparently, thought that doodles were a back door into the psyche (of course he did – a carrot was never just a carrot, right?). Meanwhile, a study from Capital University suggests that the complexity of a doodle is not correlated in any way with how distracted a person is. Indeed, doodling can support concentration and improve memory and understanding. Phew.

While I’m on the subject of paper, by the way, there’s an excellent article on why the brain prefers paper in Scientific American (November 2013). Here are a few choice quotes:

“Whether they realise it or not, people often approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.”

“In recent research by Karin James of Indiana University Bloomington, the reading circuits of five-year-old children crackled with activity when they practiced writing letters by hand, but not when they typed letters on a keyboard.”

“Screens sometimes impair comprehension precisely because they distort people’s sense of place in a text.”

“Students who had read study material on a screen relied much more on remembering than knowing.”

The Future of Privacy

We are currently living in the Technolithic, an age that forms part of the most significant revolution since the agricultural and industrial eras. The Technolithic is part of the information age, but what we are now creating is perhaps not what the early internet pioneers envisaged. In the early days, the internet was about finding information. It is currently largely about finding other people. One day, I hope, it will be about finding ourselves.

But before this can happen we have to deal with a very powerful force, a force that wants to know absolutely everything about us. This force can act for the common good, but can also operate to profit the few.

Who we are, and what we are allowed to be, is at the very heart of this. Please don’t get me wrong. I am not calling on you to smash your computers or stop using LinkedIn. All I am asking you to do is to raise your gaze from your freshly picked Apples and Blackberries and to pay closer attention to some of the possible consequences of using these devices, especially the way in which machines and the people that control them appear to be profiting from something that not only belongs to us but defines us.

Digital connectivity has given us many wonderful things and improved our lives immeasurably. At the moment the balance is positive, but that doesn’t mean that we shouldn’t remain vigilant.

I was in Poland last year doing a TEDx talk and met one of the developers behind an app called Life Circle. This is an app that makes blood donation more effective by opening up the communication lines between blood banks and blood donors. This might sound mundane, but it isn’t. It’s a matter of life and death in some instances. Simply allowing a blood centre to have a user’s smartphone number means that they can see where their donors are in near real time.

This allows the blood bank to call in certain blood groups if there’s an emergency. It can work anywhere – from Warsaw to Washington – and users can link with social networks and potentially recruit more donors. Moreover, once linked you could potentially ‘game’ blood donation, although the idea of competing with others to see who can give the most blood would obviously be a very bad idea.

Another medical marvel is Google Flu Trends. If you don’t know about it already, it’s an early example of Big Data and near real-time prediction. The story is that some people had a feeling that there must be a correlation between outbreaks of flu in particular regions and the search terms used in the same locations. If you could identify the right words, you could catch an outbreak sooner.

Around 200,000 people are hospitalised annually in the US alone due to flu and between 20,000 and 30,000 die, but until recently it took the Centers for Disease Control around a week to publish flu data.

The Big Data connection here, by the way, is that Google did not know which handful of search terms would correlate, so it simply ran half a billion calculations to find out, and it turned out that 45 search terms were indeed related. What is going on here, and is starting to occur elsewhere too, is that rather than sampling small data sets we are able to look at huge amounts of data – sometimes all of the data – in near real time, which can reveal correlations that were previously deeply hidden or totally unobservable.
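To make that idea concrete, here is a minimal sketch (in Python, with invented data) of the kind of screening exercise described above: score each candidate search term against weekly flu case counts and keep the terms whose query volume tracks the outbreak most closely. The function name and the toy figures are my own illustration, not Google’s actual pipeline, which tested hundreds of millions of candidate models.

```python
# Illustrative sketch only: rank candidate search terms by how closely their
# weekly query volume tracks weekly flu case counts. All data below is invented.
import numpy as np

def top_correlated_terms(term_volumes, flu_cases, top_n=45):
    """Return the top_n search terms ranked by Pearson correlation with flu cases."""
    scores = []
    for term, volume in term_volumes.items():
        r = np.corrcoef(volume, flu_cases)[0, 1]  # Pearson correlation coefficient
        if not np.isnan(r):
            scores.append((term, r))
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_n]

# Eight weeks of hypothetical case counts and query volumes.
flu_cases = np.array([120, 180, 260, 400, 390, 310, 200, 150])
term_volumes = {
    "flu symptoms":   np.array([90, 140, 210, 330, 320, 250, 160, 120]),
    "chicken recipe": np.array([200, 195, 210, 190, 205, 200, 198, 202]),
}

print(top_correlated_terms(term_volumes, flu_cases, top_n=1))
# [('flu symptoms', 0.99...)] - the flu-related term rises and falls with the outbreak
```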

In other words, many aspects of our daily existence that were previously closed, hidden or private are becoming much more open, transparent and public and much of this data has huge value and forms a wholly new asset class.

The website 23andme.com recently got into trouble in the US with the FDA because some felt that the site, and the results of the tests being offered, were carrying too much weight and that users were acting in ways that were not necessarily in their best interests. In other words, users were treating probabilities or predictions as certainties.

Again, if you don’t know about this, the site essentially offers to analyse your genome quickly for around $99. A decade or so ago, this would have cost you around a billion dollars. The results might strongly suggest, for example, that a 25-year-old man will have heart problems in his 50s, or that a 15-year-old girl has a significant chance of developing breast cancer.

There are clearly privacy issues galore around new technologies such as these – should a new employer have access to information suggesting that you are 70% likely to die within 20 years, for example?

Another, more mundane, example of companies looking at people and predicting future outcomes is McDonald’s. They, along with many other fast food chains in the US, have started to use technology to predict what you are about to order – and start to prepare your meal before you have actually ordered anything.

How and why do they do this? In the US, about 50% of fast-food turnover goes through the drive-thru window, and customers can become stressed if the queue moves too slowly. CCTV cameras are therefore pointed at cars in the queue, and these cameras are connected to software that works out what each car is – not from its number plate, but from its silhouette.

This knowledge is then married to millions of bits of historical data about what the drivers of such cars tend to order and, hey presto, predictive sales and marketing. The general idea, I guess, is that if you are driving a 10-year-old Volvo station wagon you’ve probably got a mother and at least one Happy Meal coming up, whereas if a brand-new Hummer rolls up you are not about to sell a small salad and a bottle of water.
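As a rough illustration of the mechanism (and emphatically not McDonald’s actual system), the prediction step amounts to little more than a frequency lookup over historical orders keyed by vehicle class. Everything below – the vehicle classes, the order history and the function name – is invented for the sketch.

```python
# Hypothetical sketch: a camera classifies the car's silhouette upstream, then we
# look up what drivers of that vehicle class have historically ordered and
# pre-stage the most likely items. All data here is invented.
from collections import Counter

ORDER_HISTORY = {
    "station_wagon": ["happy_meal", "happy_meal", "big_mac", "happy_meal"],
    "suv":           ["double_quarter_pounder", "large_fries", "big_mac"],
}

def predict_order(vehicle_class, top_n=2):
    """Return the most likely items and their historical share for a vehicle class."""
    history = ORDER_HISTORY.get(vehicle_class, [])
    counts = Counter(history)
    total = sum(counts.values()) or 1
    return [(item, n / total) for item, n in counts.most_common(top_n)]

# A ten-year-old station wagon is spotted in the queue: start the likely order early.
print(predict_order("station_wagon"))  # [('happy_meal', 0.75), ('big_mac', 0.25)]
```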

Is McDonald’s technology intrusive? Does it invade privacy? I don’t think so. If they are stealing anything it isn’t anything of great value. Moreover, you can mess with their minds by riding a bicycle into the drive-thru and ordering two Big Macs and three cokes.

There are many other examples of machines attempting to know us and predict our behaviour. One is called the Malicious Intent Detector and it’s used primarily in airports in the US. This machine also uses cameras connected to software.

The idea here is that body language can tell us quite a lot about what people are thinking or, more usefully, what they are thinking of doing. Our facial expressions, our eye movements, our clothes and what we are doing with our hands all betray certain things about us.

Indeed, as much as 90% of communication is believed to be non-verbal. Combine this thought with skin-temperature analysis (sensed remotely), facial recognition, x-rays and software that looks at how our clothes fit, and you have a fairly good way of finding out whether someone is carrying something they shouldn’t or is intent on doing something that they shouldn’t.

But we can take things further still. Predictive policing is a direct result of better data and better analysis of crime figures. What it is able to do, with astonishing accuracy, is predict not only where, but when, crime is likely to take place. If this sounds like the PreCrime department in the film Minority Report, that’s more or less the idea. It doesn’t identify criminals directly, but it does pinpoint potential targets, in some cases down to areas of roughly 150 metres square on specific days.
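In outline, the grid-based approach can be sketched as below: bucket past incidents into small map cells and days of the week, then flag the cells with the most historical activity on a given day. This is a toy illustration of the general technique, not any vendor’s algorithm; real systems use far richer models, and the incident data and cell size here are invented.

```python
# Toy sketch of grid-based crime forecasting: count past incidents per map cell
# and weekday, then surface the highest-risk cells for a given day. Invented data.
from collections import defaultdict

# (latitude, longitude, day_of_week) of past incidents - invented.
incidents = [
    (51.5074, -0.1278, "Fri"),
    (51.5076, -0.1280, "Fri"),
    (51.5200, -0.1000, "Mon"),
]

CELL_SIZE = 0.00135  # about 150 metres of latitude per grid cell

def risky_cells(day, top_n=3):
    """Rank grid cells by the number of past incidents on the given weekday."""
    counts = defaultdict(int)
    for lat, lon, dow in incidents:
        if dow == day:
            cell = (int(lat // CELL_SIZE), int(lon // CELL_SIZE))
            counts[cell] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(risky_cells("Fri"))  # the two nearby Friday incidents fall into the same cell
```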

But that’s just the beginning potentially. If one adds developments in remote brain reading we could possibly have a situation where even our inner thoughts are intercepted. The asymmetry of this situation – and indeed of Big Data generally – shifts the balance of power between the state and the individual so we should keep a careful watch on this.

I’ve had my identity stolen twice, but the benefits of digital transactions still outweigh any negatives. However, if someone were to steal not only my date of birth, address and bank account details, but everything about me, I’d view this rather differently.

Let’s put it like this. If someone came up to you on the street and asked you for personal information would you give it to them? And what if they asked about your daily schedule, your friends, your work, your favourite shops, restaurants and holiday spots?

How about if they wanted to know which books you read, what kinds of meals you like, how much sleep you get and what you searched for online in the privacy of your own home? Would you find that a little unsettling? Would you at least ask why this person wanted the information? And what if they said that they wanted to sell this information, your information, on to someone else you’d never met? Would you allow it?

This is essentially what’s going on right now with social networks, although I believe it’s about to get far worse. Part of the problem is the mobile phone, although the word ‘phone’ is rather misleading. After all, using a phone to speak to someone is dying out globally. Voice traffic is falling through the floor, while text-based communication is going through the roof.

These phones – and there are more than 6 billion of them now – are broadcasting information about us all of the time, especially if they are smartphones, which increasingly they are. In fact, smartphones have been outselling PCs globally since about 2012. In the UK, almost 10% of five-year-olds now own a mobile phone and by ten years of age it’s 75%. Eventually, all of these will be smartphones.

Our mobile phones are actually a form of wearable computing, and I’d expect wearables to explode over the coming years. I don’t simply mean more people owning more phones, but more people carrying devices that continually broadcast information about us; this would include clothing with embedded computers, shoes containing computers, digital wallets and even toothbrushes containing computers. This is broadly the internet of things, and it is where the problems will really start.

An internet-connected toothbrush might seem trivial, ridiculous even, but trust me, they are coming. To begin with they will be seen as expensive toys. You’ll be able to download your brushing history or compete with your friends in various dental games. They will form part of the self-tracking or quantified-self movement and will be bought alongside Nike FuelBands and sleep monitors.

Nothing wrong with this, unless your toothbrush data finds its way into the wrong hands. For example, what if dental care were to be refused – or made vastly more expensive – because you had not reached level 3 of the tooth fairy game?

Currently there are roughly 12 billion things connected by the internet. By 2045 some people think this number will be 7 trillion. This means computers and wireless connectivity in every man-made object on earth and a few natural objects too.
Trees with their own IP address? It’s totally possible. And don’t forget that we put ID chips in our cats and dogs, so it’s probably only a matter of time before we start chipping our children too.

Anyway, the point here is that almost everything we do and almost everything we own in the future, will emit data and this data will be very valuable to someone. I rather hope that this someone is us and that we can opt in and out at will, earning micro-payments for the data relating to our activities if that’s what we want.

And this brings me to why privacy is one of the biggest problems of our new electronic age. At the heart of internet culture is a force that wants to find out everything about you. And once it has found out everything about you and 7 billion others, that is a remarkable asset, and other people will be tempted to trade in that asset.

Does this matter?

I think it matters for three key reasons.

First, people can be harmed if there is no restriction on access to personal information. Medical records, psychological tests, school records, financial details and sites visited on the internet all hold intimate details of a person’s life, and the public revelation or sharing of such information can leave a person vulnerable to abuse.

Second, privacy is fundamental to human identity. Personal information is, on one level, the basis of the person. To lose control of one’s personal information is in some measure to lose control of one’s life and one’s dignity. Without some degree of privacy, for example, friendship, intimacy and trust are all lost or, at the very least, meaningless.

Third, and most importantly of all perhaps, privacy is linked to freedom, especially the freedom to think and act as we like so long as our activities do not harm others.

If individuals know that their actions and inclinations are constantly being observed, commented upon and potentially criticized, they will find it much harder to do things that deviate from accepted norms. There does not even have to be an explicit threat. Visibility itself is a powerful way of enforcing norms.

This, to some extent, is what’s starting to happen already, with every intimate photograph and every indelicate tweet being attributable to its source, whether the source wants it to be or not.

As Viktor Mayer-Schönberger has pointed out, the possession of data used to mean an understanding of the past. But, increasingly, the possession of data is starting to mean an ability to predict and control the future.

In the right hands this knowledge will be a tool for great good. But we should remain vigilant, because in the wrong hands this knowledge will be used against us, either to control us or to profit from us in a manner that destroys us as autonomous human beings.

Don’t just do something, sit there

While I’m on the subject of digital detox (previous post), a few of you might have school age kids on holiday at the moment. Chances are you are frantic trying to organise things for the little darlings to do. Don’t. Read this instead.

Boredom is beautiful. Rumination is the prelude to creation. Not only is doing nothing one of life’s few remaining luxuries, it is also a state of mind that allows us to let go of the external world and explore what’s deep inside our head. But you can’t do this if ten people keep sending you messages about what they are eating for lunch or commenting on the cut of your new suit. Reflection creates clarity. It is a “prelude to engagement of the imagination,” according to Dr. Edward Hallowell, author of Crazy Busy. It is a useful human emotion and one that has historically driven deep insight.

Boredom hurts at first, but once you get through the mental anguish you can see things in their proper context or sometimes in a new light. Digital technology, and mobile technology in particular, appears to negate this. If you are trying to solve a problem it is now far too easy to become digitally distracted and move on. But if you persist, you might just find what you’ve been looking for. So don’t just do something after you’ve read this chapter, sit and think for a while.

Faced with nothing, you invent new ways of doing something. This is how most artists think when faced with a blank canvas. Historically, children have operated like this too. They moan and groan that they are bored, but eventually they find something to do—by themselves. Boredom is a catalyst for creative thought. Only these days it mostly isn’t. We don’t allow our children the time or the space to drift and dream. According to the UK Office of National Statistics, 45 percent of children under 16 spend just 2 percent of their time alone. Moreover, the amount of free time available to schoolchildren (after going to school, doing homework, sleeping, and eating) has declined from 45 to 25 percent. Children are scheduled, organised, and outsourced to the point where they never have what New York University Professor Jerome Wakefield calls a chance to “know themselves.” It’s the same with adults. Our minds are rarely scrubbed and dust builds up to the point where we can’t see things properly.

Not only is it difficult to become bored, we can’t even keep still long enough to do one thing properly. Multitasking is killing deep thinking. Leo Chalupa, an ophthalmologist and neurobiologist at the University of California (Davis), claims that the demands of multitasking and the barrage of aural and visual information (and disinformation) are producing long-lasting and potentially permanent damage to our brains. A related idea is constant partial attention (CPA). Linda Stone, who has worked at both Apple and Microsoft Research Labs, knows about how high-tech devices influence human behaviour. She coined the term CPA to describe how individuals continually scan the digital environment for opportunities and threats. Keeping up with the latest information becomes addictive and people get bored in its absence.

In a sense this isn’t anything new. We were all doing this 40,000 years ago on the savannah, tucking into freshly killed meat while keeping a lookout for predators. But digitisation plus connectivity has increased the amount of information it’s now possible to consume to the extent that our attention is now fragmented all of the time. This isn’t always a bad thing, as Stone points out. It’s merely a strategy to deal with certain kinds of activity or information.

However, our attention is finite and we can’t be in hyper-alert, “fight-or-flight” mode 24/7. Constant alertness is stressful to body and mind and it is important to switch off, or at least reduce, some of the incoming information from time to time. As Carl Honoré, author of In Praise of Slow, says: “Instead of thinking deeply, or letting an idea simmer in the back of the mind, our instinct is now to reach for the nearest sound bite.” We relax by cramming even more information into our heads.

Chalupa’s radical idea is that every year people should be encouraged to spend a whole day doing absolutely nothing. No human contact whatsoever. No conversation, no telephone calls, no email, no instant messaging, no books, no newspapers, no magazines, no television, no radio, and no music. No contact with people or the products of other human minds, be it written, spoken, or recorded.

Have you ever done nothing for 24 hours? Try it. It will do your head in for a while. Total solitude, silence, or lack of mental distraction destroys your sense of self. Time becomes meaningless and recent memories start to disappear. There is a feeling of being removed from everything while being deeply connected to everything in the universe. It is fantastic and frightening all at the same time. But don’t worry, you soon feel normal again. Return to sensory overload and deep questions about a unifying principle for the universe soon disappear, to be replaced by important questions about what you’re going to eat for dinner tonight or how you’re going to find that missing Word file.

Consider what Bill Gates used to do. Twice a year for 15 years the world’s then richest man would take himself off to a secret waterfront hideaway for a seven-day stretch of seclusion. The ritual and the agenda of Bill’s think weeks were always the same: to ponder the future and to come up with a few ideas to shake up Microsoft. In his case this involved reading matter but no people. Given that Gates has been instrumental in the design of modern office life, it’s interesting that he felt the need to get away physically; one would expect him to inhabit a virtual world instead.

I once received a brief from the strategy director of a FTSE 100 company who wanted to take his team away to do some thinking. When I suggested that we should do just that—go away for a few days, read some books, think, and then discuss what we’d read— he thought I’d lost my mind. Why? Because there was “no process.” There were no milestones, stage gates, or concrete deliverables against which he could measure his investment.

The point of the exercise is this. Solitude (like boredom) stimulates the mind in ways that you cannot imagine unless you’ve experienced it. Solitude reveals the real you, which is perhaps why so many people are so afraid of it. Empty spaces terrify people, especially those with nothing between their ears.

But being alone and having nothing to think about allows your mind to refresh itself. Why not discover the benefits of boredom for yourself?