Can robots write?

Machine learning produces dazzling results, but some assembly is still required


Alexandra Louise Uitdenbogerd, RMIT University

You might have seen a recent article from The Guardian written by “a robot”. Here’s a sample:

I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

Read the whole thing and you may be astonished at how coherent and stylistically consistent it is. The software used to produce it is called a “generative model”, and such models have come a long way in the past year or two.

But exactly how was the article created? And is it really true that software “wrote this entire article”?

How machines learn to write

The text was generated using the latest neural network model for language, called GPT-3, released by the American artificial intelligence research company OpenAI. (GPT stands for Generative Pre-trained Transformer.)

OpenAI’s previous model, GPT-2, made waves last year. It produced a fairly plausible article about the discovery of a herd of unicorns, and the researchers initially withheld the release of the underlying code for fear it would be abused.

But let’s step back and look at what text generation software actually does.

Machine learning approaches fall into three main categories: heuristic models, statistical models, and models inspired by biology (such as neural networks and evolutionary algorithms).

Heuristic approaches are based on “rules of thumb”. For example, we learn rules about how to conjugate verbs: I run, you run, he runs, and so on. These approaches aren’t used much nowadays because they are inflexible.






Writing by numbers

Statistical approaches were the state of the art for language-related tasks for many years. At the most basic level, they involve counting words and guessing what comes next.

As a simple exercise, you could generate text by randomly selecting words based on how often they normally occur. About 7% of your words would be “the” – it’s the most common word in English. But if you did it without considering context, you might get nonsense like “the the is night aware”.

More sophisticated approaches use “bigrams”, which are pairs of consecutive words, and “trigrams”, which are three-word sequences. This allows a bit of context and lets the current piece of text inform the next. For example, if you have the words “out of”, the next guessed word might be “time”.
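
If you want to see the idea in miniature, here is a sketch in Python of a bigram generator. The toy corpus and the sample output are mine, purely for illustration; a real model would be trained on millions of words.

```python
import random
from collections import defaultdict, Counter

# Toy corpus; a real model would be trained on millions of words.
corpus = "out of time out of office out of time running out of ideas".split()

# Count how often each word follows each other word (the bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling the next word from the bigram counts."""
    word, words = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break  # no observed continuation for this word
        candidates, weights = zip(*counts.items())
        word = random.choices(candidates, weights=weights)[0]
        words.append(word)
    return " ".join(words)

print(generate("out"))  # e.g. "out of time out of office out of ideas"
```

Swap the counts of single words for counts of word pairs and you have a trigram model.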

This happens with the auto-complete and auto-suggest features when we write text messages or emails. Based on what we have just typed, what we tend to type and a pre-trained background model, the system predicts what’s next.

While bigram- and trigram-based statistical models can produce good results in simple situations, the best recent models go to another level of sophistication: deep learning neural networks.

Imitating the brain

Neural networks work a bit like tiny brains made of several layers of virtual neurons.

A neuron receives some input and may or may not “fire” (produce an output) based on that input. The output feeds into neurons in the next layer, cascading through the network.

The first artificial neuron was proposed in 1943 by US researchers Warren McCulloch and Walter Pitts, but artificial neural networks have only become useful for complex problems like generating text in the past five years.

To use neural networks for text, you put words into a kind of numbered index. You can use the number to represent a word, so for example 23,342 might represent “time”.

Neural networks do a series of calculations to go from sequences of numbers at the input layer, through the interconnected “hidden layers” inside, to the output layer. The output might be numbers representing the odds for each word in the index to be the next word of the text.

In our “out of” example, number 23,342 representing “time” would probably have much better odds than the number representing “do”.
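
To make that more concrete, here is a toy sketch in Python, with made-up sizes and words and nothing like GPT-3’s real architecture, of how words become index numbers, numbers become vectors, and a layer of virtual neurons turns them into odds for the next word. With untrained, random weights the odds are arbitrary; training is what nudges the weights until “time” wins after “out of”.

```python
import numpy as np

# Toy vocabulary: each word gets a position in a numbered index.
vocab = ["out", "of", "time", "do", "the", "office"]
word_to_index = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
embedding = rng.normal(size=(len(vocab), 8))   # word index -> vector of numbers
hidden_w = rng.normal(size=(16, 16))           # one small "hidden layer"
output_w = rng.normal(size=(len(vocab), 16))   # one score per word in the index

def next_word_odds(context):
    """Odds for each word in the vocabulary following the two context words."""
    x = np.concatenate([embedding[word_to_index[w]] for w in context])
    h = np.tanh(hidden_w @ x)                      # neurons "fire" via a non-linearity
    scores = output_w @ h
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax: scores -> probabilities
    return dict(zip(vocab, probs))

# Untrained weights give arbitrary odds; training would push "time" to the top here.
print(next_word_odds(["out", "of"]))
```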






What’s so special about GPT-3?

GPT-3 is the latest and best of the text modelling systems, and it’s huge. The authors say it has 175 billion parameters, which makes it at least ten times larger than the previous biggest model. The neural network has 96 layers and, instead of mere trigrams, it keeps track of sequences of 2,048 words.

The most expensive and time-consuming part of making a model like this is training it – updating the weights on the connections between neurons and layers. Training GPT-3 would have used about 262 megawatt-hours of energy, or enough to run my house for 35 years.
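
That “35 years” figure is easy to sanity-check if you assume a household uses roughly 20 kilowatt-hours a day (my assumption, not a figure from the article):

```python
training_energy_kwh = 262 * 1000      # ~262 MWh reported for training GPT-3
household_kwh_per_day = 20            # assumed typical household usage
years = training_energy_kwh / (household_kwh_per_day * 365)
print(round(years))                   # ~36 years, in line with the "35 years" claim
```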

GPT-3 can be applied to multiple tasks such as machine translation, auto-completion, answering general questions, and writing articles. When people are asked to judge whether one of its articles was written by a human or a machine, they now get it right only about half the time.

The robot writer

But back to how the article in The Guardian was created. GPT-3 needs a prompt of some kind to start it off. The Guardian’s staff gave the model instructions and some opening sentences.

This was done eight times, generating eight different articles. The Guardian’s editors then combined pieces from the eight generated articles, and “cut lines and paragraphs, and rearranged the order of them in some places”, saying “editing GPT-3’s op-ed was no different to editing a human op-ed”.

This sounds about right to me, based on my own experience with text-generating software. Earlier this year, my colleagues and I used GPT-2 to write the lyrics for a song we entered in the AI Song Contest, a kind of artificial intelligence Eurovision.

AI song Beautiful the World, by Uncanny Valley.

We fine-tuned the GPT-2 model using lyrics from Eurovision songs, provided it with seed words and phrases, then selected the final lyrics from the generated output.

For example, we gave Euro-GPT-2 the seed word “flying”, and then chose the output “flying from this world that has gone apart”, but not “flying like a trumpet”. By automatically matching the lyrics to generated melodies, generating synth sounds based on koala noises, and applying some great, very human, production work, we got a good result: our song, Beautiful the World, was voted the winner of the contest.
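
For the curious, the generate-then-curate workflow looks something like the sketch below, using the Hugging Face transformers library. The stock “gpt2” model stands in for our fine-tuned Euro-GPT-2 checkpoint, and the settings are illustrative; the important part is the human choosing which lines to keep.

```python
from transformers import pipeline

# "gpt2" stands in for a checkpoint fine-tuned on Eurovision lyrics (our "Euro-GPT-2").
generator = pipeline("text-generation", model="gpt2")

# Give the model a seed word and sample several candidate lines.
candidates = generator("flying", max_length=20, do_sample=True, num_return_sequences=5)

for c in candidates:
    print(c["generated_text"])  # a human curator keeps the good lines, discards the rest
```

Fine-tuning on Eurovision lyrics would simply replace “gpt2” with the path to that fine-tuned checkpoint.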

Co-creativity: humans and AI together

So can we really say an AI is an author? Is it the AI, the developers, the users or a combination?

A useful idea for thinking about this is “co-creativity”. This means using generative tools to spark new ideas, or to generate some components for our creative work.

Where an AI creates complete works, such as a complete article, the human becomes the curator or editor. We roll our very sophisticated dice until we get a result we’re happy with.








Alexandra Louise Uitdenbogerd, Senior Lecturer in Computer Science, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Importing books onto your iPad now that iTunes doesn’t support ebooks

I have been a devoted Kindle user for my e-book reading addiction. Unfortunately, my Kindle has become lethargic and prone to wasting valuable reading time. I could buy a new Kindle or one of the competing e-reading devices, or I could turn to the devices I already have.

No, I’m not talking about the stack of unread paper books on my shelf, I’m talking about my phone and tablet. Although, I probably should reduce the size of that stack before they feel unloved and attempt to move house without me.

So, having decided to utilise my iPad, I plugged it into my Windows* machine, booted up iTunes, and discovered a decided lack of Books to transfer.


A tad hard to transfer books to my iPad when the Book category has been removed. Now, if I were still using a Mac the problem would be easily remedied, as iBooks has taken over the role of syncing books to the iPad. But on a Windows machine, I can only really transfer my books from the store to my computer.

Try as I might, I couldn’t find an article explaining how to transfer books. So I did what every computer user does when something doesn’t work: I kept clicking until something happened. After replacing the CPU, I tried the sideloading technique some Apps utilise. You open iTunes to the place you want the file (Books) and then open a File Explorer window to the directory that contains the books you want.


As you can see in the screenshot, I’m transferring some Project Gutenberg books to my iPad. Just click and drag the files into the Books frame of iTunes and they will be copied. The transfer will begin once you hit sync.

Hopefully, this helps some other readers who want to transfer their e-books to their iPhone or iPad using a Windows machine. The same process works for adding books to other reading Apps, or other files to other Apps – I’ve added beta-manuscripts to the Kindle app this way.

*Yes, yes, deride Windows all you want. It works better than my broken Mac and Linux machines.

Book Review: Humans Need Not Apply by Jerry Kaplan

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan

My rating: 4 of 5 stars

ABS brakes were the first step. The last will be us humans in observation cages next to the monkeys.

Jerry Kaplan is an expert in Artificial Intelligence and Computational Linguistics and attempts to guide the reader through what impacts AI and Robots will have on our future. In doing so, he raises many of the economic, ethical, and societal problems we are going to have to start addressing.

I first became aware of this book via CGP Grey’s short documentary of the same name (see below). To say there is a storm coming is an understatement. Kaplan guides us through the technological aspects of this topic with knowledge and skill. Where this book falls down is in his blind adherence to free-market solutions – ironically whilst pointing out several examples of where the free-market has failed in the past.

For example, some of his ideas about education are problematic. What he proposes with “job mortgages” is essentially the traineeships and cadetships* that in my parents’ and grandparents’ generations were paid for by employers, with his modern twist being that employees should take out a job mortgage to pay for them. In other words, all of the cost and risk is moved from employers to employees.** How can anyone suggest that sort of thing as though they aren’t talking about slavery or indentured servitude?*** Sci-fi has been imagining that sort of scenario for decades and they weren’t calling it a good idea.

Then there are his comments about how rich people being in charge isn’t all bad, like back in ancient Egypt… Because monarchies worked so well for everyone who was a monarch.

Another gem was the idea that the free market could be in charge of wealth redistribution… Because it does such a great job of that right now. Now, in fairness, his plan was actually pretty good, but there were built-in assumptions he didn’t really question, despite laying out the framework with his discussion of automation taking our jobs.

Kaplan spent most of his book outlining what amounts to a post-scarcity world, a world where human “work” would essentially cease to exist, and thus cost, value and products become meaningless. How can you maintain our current economic system in that world? Don’t we need to be thinking about what utopia we wish to design and the machines that will make that happen?

The final chapter has some interesting questions and ideas about what role humans can play in a world that the robots run and own. Whilst the ideas aren’t new, since science fiction has been prodding that topic for the best part of 70 years, he has grounded them in reality. If there is one takeaway from this book, it is that we all need to start planning the future now.

Overall, this was a fascinating book that is well worth reading.

* A point he acknowledges he is updating to be free-market and more “beneficial”
** It could be argued that this has already happened and Kaplan is just taking it one step further.
*** Again, a point he acknowledges with reference to AIs becoming free of ownership.

https://www.reddit.com/r/Futurology/c…
https://www.theguardian.com/culture/2…


How Is Tech Changing the Way We Read?


With the rise of social media and smartphone use, we are all reading fewer books than we once did. All, not just those pesky millennials. Some people are worried about what this means for the future of literature and, well, our brains. But is it true that we are really reading less? And should I care?

Above The Noise recently did a video in which Myles covers some of the research on reading.

I always appreciate it when a Youtuber or Journalist manages to discuss a topic without devolving into head-shaking admonishment, especially when it comes to the topic of reading and books. Too often these sorts of videos and articles cite bad research or buy into industry propaganda.

I’ve previously discussed the misrepresentations made about reading ebooks, the overstating of the benefits of reading – when there are some well-researched benefits documented –  and even the way we write. And the Pew Research into reading was one of several references I’ve used in my discussion of Who Reads, something I cover quite a bit here.

And yet, there were still some things in the video that I hadn’t been aware of. So I think it is worth sharing. Enjoy.

From the video:

Reading has been an important part of the human experience for thousands of years, but believe it or not, that’s not a long time on the evolutionary timescale. Before the internet, it made sense to read long texts in a linear fashion, but that’s now changing as people are adapting to skimming shorter texts on their computers or phones. But what does this mean for the future of books?

What is literary reading?

Literary reading is, quite simply, the reading of any literature. This includes novels, short stories, poetry, and plays.

Are we reading less?

The rate at which Americans are reading literature for fun is down around 14% from the early 1980s. This doesn’t necessarily mean we are reading less, however. Many people still have to read for school or work. Then there are all the words, sentences, and messages we read on the internet from emails to texts to tweets. Some people believe that this means we are possibly reading more individual words than ever. It’s just being done in a different way. I’ve also discussed the decline of literature.

And this is changing our brains?

Some neuroscientists believe that scanning shorter texts the way we do on the internet, often jumping from hyperlink to hyperlink, is actually changing the wiring in our brains. We are becoming better at searching for key terms and scanning for information, but this means it can become more difficult to read a longer text all the way through without missing major points.

SOURCES:
Children, Teens, and Reading
The long, steady decline of literary reading
Who doesn’t read books in America?
Serious reading takes a hit from online scanning and skimming, researchers say

Are liberals more tech-friendly than conservatives?

Tech and science acceptance isn’t really a political thing; it is more about your ideology. Ideology creates idiots out of everyone, no matter their political leanings. For example, if tech were solely the domain of, or even dominated by, liberals, then you wouldn’t have Donald Trump using his smartphone to tweet this on Twitter:


It is quite interesting that, whilst disagreeing with 97% of experts on climate change, Trump has managed to propose a xenophobic conspiracy whilst preaching nationalism and conservative ideology on an iPhone.* He really is a master of manipulative language. Of course, that isn’t the only brain dropping of anti-science nonsense from the Republican Presidential nominee. It is probably easier to list the science Trump and his supporters do believe** than cover all of the topics he has tweeted denial of. I will now list the science Trump has endorsed:

NB: he probably doesn’t support plant biology either.

We’d be mistaken to assume that science and technology denial or rejection are the sole domain of conservatives. On the liberal side the Greens presidential nominee, Jill Stein, has taken several anti-science stances, such as supporting not-medicine, and opposing genetic engineering (e.g. GMOs) and pesticides in agriculture. Often people like to divide science denial into conservatives denying climate change and evolution, whilst liberals deny vaccines and GMOs. But, as with most things, it isn’t quite that cut-and-dried. Take for example the topic of GMOs:

This really highlights that anti-science numpties are across the political spectrum and deny the scientific consensus for very different reasons. Some deny it because they find corporations scary (Greenpeace), some deny it because they are selling something (Joseph Mercola), some deny it because they are arrogant bloviators (Nicholas Taleb).

On the topic of climate change this spectrum also exists. We keep hearing about how liberals all accept climate change and how conservatives are all climate change deniers… Except that isn’t true.

You can see that there isn’t 100% agreement or disagreement from either side of US politics. You don’t even get 100% agreement from climate scientists (97% consensus), despite the overwhelming body of evidence. The Pew Research Center has similar figures for other countries. Politics isn’t the real predictor because it is too simple. At the hard end of conservatism, the above chart suggests you would be wrong half of the time if you were to call a conservative a climate denier. Even if you call fence sitters deniers as well, you are still going to be wrong over a third of the time. And that’s with all the misinformation that the conservative media pumps out (USA, Australia).

If we were to look at a proper political compass that didn’t oversimplify into left vs right, or were to take into account some other factors, then politics could be a better predictor. For example, free market ideology can be a good predictor of climate change denial (67% confidence). The ideology of the free market isn’t going to allow people to admit the market’s failure to account for the externality of carbon emissions. Similarly, the ideology of anti-corporatism isn’t going to allow people to admit that companies might make life-saving vaccines or develop safe biotechnology food.

The only thing political affiliation can really do is give you a general idea of why or how someone will be biased toward/away from certain technologies. It is definitely not the whole story.

A version of this post originally appeared on Quora.
*interestingly Trump may actually be anti-technology despite having embraced social media. Although, his ego probably doesn’t allow him to not use social media, so of course he has a work-around.
**not that science is about belief.

Pen vs Keyboard: FIGHT!!


For some reason the world of writers is filled with technophobic troglodytes intent on proving that their old-fashioned way of doing things is better. I’ve written previously about how older people’s favourite hobby since the dawn of time has been complaining about kids these days. This is also true of changes in technology, with people intent on justifying not learning to use a computer or e-reader. Because cutting down trees is the future of communication!

Once again I’ve stumbled across another article that misrepresents scientific studies to try and convince people that we need to clear forests, pulp them, flatten them into paper, cover them in ink, and act as snooty as possible. This time they – the nebulous they: my nemesis!! – are trying to pretend that taking notes with a pen is better than using a keyboard.

Ugh.

When will people learn that paper isn’t the medium we should be promoting? We need to be going back to scratching on rocks and cave walls. When was the last time a paper book lasted more than a hundred years out in the rain, snow, and blazing sun? That doesn’t even begin to compete with the longevity of the 50,000 year old cave paintings. Data retention for rock far surpasses the much inferior paper.

This isn’t the first article I’ve seen on The Literacy Site misrepresenting science. Hopefully they will acquire some scientific literacy soon and overcome their biases. If I turn blue and pass out, try to act concerned. Let’s dive in.

New Research Explains How The Pen Is Mightier Than The Keyboard

It’s great when articles improve on the titles of science papers. I mean, who wants to read the science paper The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking? Pity that both titles misrepresent the actual findings. Also, is 2014 still regarded as new?

In her graduate assistant days, psychological scientist Pam Mueller of Princeton University used to take notes just like everyone else in the modern age: with a computer. One day, Mueller forgot her laptop and had to take notes the old-fashioned way. Rather than being held back by pen and paper, Mueller left class feeling as if she’d retained far more information than usual on that day. She decided to create a case study that could prove her hunch that writing longhand was actually better for comprehension than typing.

This is actually a good little story and illustrates how a lot of hypotheses are formed in science. This is the anecdote or observation that scientists want to turn into a hypothesis to create actual knowledge. But remember, this is an anecdote, which has as much value as used Easter egg wrappers that have been stuffed between the couch cushions. Putting anecdotal stories at the start of an article can set the audience up to not think too hard about the rest of the article, as you have given them the conclusion in a nice little story.

The study she created, published in Psychological Science, indicated that taking notes by hand is a more effective method than typing them on a laptop when it comes to processing information conceptually.

And here we jump straight off the rails, over the side of the bridge, and careen into the waiting river below. Sure, The Literacy Site is just quoting the press release, but that is lazy. The study itself has this line in the abstract that shows how this claim is a misrepresentation of the findings:

We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.

In other words, the findings were that people spend all their time typing and no time actually listening to and comprehending the lectures. Because the pen is an archaic device that is unwieldy and slow compared to the keyboard, students using a pen only write down notes after they have listened, picked out the key points, and conceptualised that information into a note. But don’t take my word for it: the press release on the University of Michigan website has a few recommendations, including:

  • To interrupt verbatim note-taking on laptops, break up your lectures with short activities that encourage deeper processing of information.
  • Have students use laptops or other technologies to process–not just record–information.

Now it is time to discuss the study details a little bit, because someone might be interested in the methods section. I’m sure those people exist. Somewhere. Interested is probably the wrong word.

In the first of a series of studies led by Mueller, 65 college students watched various TED Talks in small groups, and were provided with either pens and paper or laptops for taking notes. When the students were tested afterward, the results were overwhelming. While the groups performed equally on questions that involved recalling facts, those who had taken longhand notes did significantly better when it came to answering conceptual questions.

Sorry, I need to catch my breath. I’m so shocked at the massive sample size. This is definitely enough people to represent the rest of society. Conclude away I say!

Anyway, these overwhelming results are just a tad whelming.

Whelming error bars.

As you can see, the performance on retaining facts was the same, with error bars that suggest 65 people is probably not enough to draw conclusions from. Not that anyone would be trying to claim this study is proof of anything, right? The next thing you see is the benefits of using a pen… as long as you ignore those error bars and just accept that the p-value tells us something of value. Given that those error bars overlap for the two groups, I wouldn’t be drawing conclusions from a p-value. Also, I’m not exactly sure why an ANOVA was used when there were only two groups to compare. The KISS principle applies to statistics as well.
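
As an aside, with only two groups a one-way ANOVA and a two-sample t-test are the same test (the F statistic is just t squared), which is easy to confirm with some made-up scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
longhand = rng.normal(0.3, 1.0, 33)   # made-up standardised conceptual scores
laptop = rng.normal(-0.2, 1.0, 32)

t, p_t = stats.ttest_ind(longhand, laptop)
f, p_f = stats.f_oneway(longhand, laptop)

print(round(t**2, 6), round(f, 6))    # identical: F = t^2 when there are two groups
print(round(p_t, 6), round(p_f, 6))   # and the p-values match
```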

Now, the researchers realised that 65 people wasn’t enough, so they repeated the study with a few variations twice more. The second and third tests had 151 and 109 people take notes. In each test the typists wrote between 250 and 550 words, whilst the pen wielders wrote roughly 150 to 400 words. Interestingly, the laptop note takers transcribed 12–14% of the lecture verbatim, but the pen users only managed 4–9% verbatim. This shows why the conclusions I’ve quoted above were drawn.

Out of interest, here are the results from the other two tests that were more convincing for that conceptual finding.

Okay, this is more like it.

In the second test, the 151 people were split between pen, laptop, and laptop plus a lecture from the tester about how they really should pay attention. With roughly 50 people per group you’d hardly jump up and down about the significance of this test, but clearly telling people to pay attention doesn’t… hey look a squirrel.


Methinks that possibly the greater number of treatments has lessened this test’s significance.

The third test with 109 people again tested for pen vs keyboard, but this time they allowed revision of notes before being questioned. This makes the groups even smaller, and again I’d question the significance of such a small sample. But the researchers summed up the results with this erudite paragraph:

However, a more nuanced story can be told; the indirect effects differ for conceptual and factual questions. For conceptual questions, there were significant indirect effects on performance via both word count and verbatim overlap. The indirect effect of word count for factual questions was similar, but there was no significant indirect effect of verbatim overlap. Indeed, for factual questions, there was no significant direct effect of overlap on performance. As in Studies 1 and 2, the detriments caused by verbatim overlap occurred primarily for conceptual rather than for factual information, which aligns with previous literature showing that verbatim note taking is more problematic for conceptual items.

In other words, doing lots of writing, particularly just copying what was said verbatim, makes you suck at understanding what the hell is going on. Oh, and study before the test. Apparently it helps too. Made that mistake at university.

So back at The Literacy Site they are skipping the other tests and just heading to the conclusions:

Mueller found that this was the result of laptop users trying too hard to transcribe the lecture rather than listening for the most important information and writing it down by hand. It may be an era where computers have made handwriting seem useless, but Mueller isn’t the only believer in the importance of longhand.

Notice the nuanced difference that seeing all three tests provides? We could be led to believe that there was overwhelming evidence for the pen, but what we see is that note takers need to readdress their methods of taking notes. Or they could just wing it.

An article in TIME discusses Karin James, an Indiana University psychologist, who published a 2012 study indicating writing is particularly important in the cognitive development of pre-literate children five and under. While using a computer for note-taking in some situations makes sense, it’s important not to overlook the longhand method.

It’s great that the article tries to incorporate some extra research. Citing one study with a small sample size is hardly compelling, certainly not worth writing an article about. But again the research is being misrepresented:

…the benefits of writing: increased creativity, better critical thinking, boosted self confidence, and a correlated improvement in reading capability with writing prowess.

But are these benefits real? The short answer: Mostly not. “There’s lot of caveats in handwriting research,” says Karin James, a psychologist at Indiana University.

Curse those damn caveats! Why can’t we have a control group of kids we don’t teach to read and write?!

Which brings me to a final point about these old technologies vs new technologies articles: stop jumping the gun! We’re in a transition phase. This isn’t 1970s velvet suits with platforms versus 2010s hipster atrocities. This is a typewriter hipster texting on his phone. Technology is changing and we’re still learning how to use it properly. The studies that are cited in many of these articles have very limited scope, test very few people, and are comparing new and established things. Has anyone taught laptop users to take notes effectively for the new medium? Do you actually need to take written notes at all in this modern age? We need to see more science done on the changes taking place, and we need the articles discussing the science to do more than discuss (one study from) one paper, and to highlight the limitations. Well, unless you have already made up your mind about a topic and just want some links to throw at people in an argument. Screw being right!

This blog post is being shared online, in print, and carved into a cave wall. Comment below which format you preferred receiving it in.

 

The continued war on e-books

Stupid meme is stupid.

Do you love the smell of books?
Do you prefer the feel of paper?
Do you feel slightly superior to others because you paid for the hardcover?
Do you grasp at any excuse to deride e-books and the people who read them?
Well, I have found the article for you!

Recently on Mental Floss an article entitled “5 Reasons Physical Books Might Be Better Than E-Books” sought to comfort snooty readers who wanted ammunition to fling at e-book readers. In the proud tradition of deriding any new technology as bad (see e-books, e-cars, driverless cars, etc), this article introduces us to some research that is wonderfully out of context for the intent of the article’s argument. Let’s dig in.

Though e-book readers have become a more common sight around town, traditional books still have their evangelists. According to The New York Times, e-book sales have been falling in 2015. Print definitely isn’t dead. In fact, according to some research, it may actually be a better choice for some readers. While scientists are still trying to tease out exactly how digital reading affects us differently, here are five ways e-books might be inferior to their dead-tree cousins.

When deriding things it is always best to reference another article that derides the same thing. In this case the article references the wonderfully misleading NYT piece on e-book sales slipping. Pity that the sales didn’t slip… That’s right, the NYT misrepresented a slowing in e-book sales growth as a drop in sales. And did they mention why readers were stating a preference for paper? Yes. Hidden in the article is a little quote about how publishers had been protecting their paper sales by inflating e-book prices. Now, my economics is a tad rusty, but I’m pretty sure making something more expensive when there are direct substitutes on offer results in a decrease in sales of that item and an increase in the sales of the substitution item. At least, that’s what I’ve heard…

1. E-BOOKS CAN REDUCE READING COMPREHENSION.
In a study of middle schoolers, West Chester University researchers found that students who read on iPads had lower reading comprehension than when they read traditional printed books. They discovered that the kids sometimes skipped text in favor of interactive features in the e-books, suggesting that certain multimedia in children’s e-books can be detrimental to the practice of reading itself. However, the researchers noted that some interactive features in e-books are designed to enhance comprehension, and that those might be more helpful than game-type interactive graphics.

This is a fantastic study in how multitasking is terrible for concentration and thus impacts reading comprehension. iPads have all sorts of cool stuff on them, including little notifications telling you that your friend just liked your latest picture of your meal. And building those distractions into the book being read: sounds like a great idea! What this study doesn’t do is support the idea that e-books reduce reading comprehension.

2. YOUNG KIDS CAN GET DISTRACTED BY E-BOOKS.
Similar results were found by a small study by the Joan Ganz Cooney Center that consisted of 32 kids reading e-books and print books with their parents. It found that “enhanced” e-books might be distracting. Kids who read enhanced e-books—ones with interactive, multimedia experiences—were more engaged with them physically, but in the end they remembered fewer narrative details than those who read print books or basic e-books [PDF].

Don’t read the link. Don’t read the link. You read the link, didn’t you? Leaving aside the tiny study size for a moment (a point the study authors acknowledge), the study itself supports the points I made above about being distracted whilst reading. And if you look through the study you see a great little chart showing that reading comprehension – expressed as story details recalled – was actually superior for basic e-books compared with print books or enhanced e-books.


The findings of the study were literally stated as:

The enhanced e-book was less effective than the print and basic e-book in supporting the benefits of co-reading because it prompted more non-content related interactions.

Odd that the “e-books are bad” article failed to highlight this finding…

3. YOU REMEMBER LESS ABOUT A BOOK’S TIMELINE.
Another study of adults also found that e-books can be hard to absorb. The researchers asked 25 people read a 28-page story on a Kindle and 25 to read the story in paperback, then asked the readers to put 14 events from the story in chronological order. Those who read the story on a Kindle performed worse on the chronology test than the book readers, though they performed about the same as print readers in other tests. Earlier research by the same scholars, from Stavanger University in Norway, found that Norwegian 10th graders also remembered more about texts if they read them in print rather than on a computer screen [PDF].

Finally we come to a study on actual e-books on an actual e-reader versus their dead tree counterparts. Of course I’m again blown away by the sample size of the study, a massive 50 people. That should easily extrapolate to the rest of humankind. The linked article doesn’t give us much information, but I found a better one, and it has this summary:

In most respects, there was no significant difference between the Kindle readers and the paper readers: the emotional measures were roughly the same, and both groups of readers responded almost equally to questions dealing with the setting of the story, the characters and other plot details. But, the Kindle readers scored significantly lower on questions about when events in the story occurred. They also performed almost twice as poorly when asked to arrange 14 plot points in the correct sequence.

I’d link to the original paper, but it is behind a paywall. Suffice to say that the error margins were pretty big (even the paper readers got 34% of the plot points in the wrong order). And this was a short story, something that shouldn’t be that difficult for any reader. So this probably says as much about the story as anything. They’d need far more stories and participants to get a good idea of what is going on. But I will concede that reading on paper vs e-reader vs screen is definitely a different experience and has an influence. What that influence is, positive, negative, or just different, needs more research.

Interestingly the study of reading PDF texts on a screen vs paper texts in high school students showed why scrolling is a terrible way to read anything. Scroll down to read more about PDFs sucking.

4. THEY’RE NOT GREAT AS TEXTBOOKS.
While e-book textbooks are often cheaper (and easier to carry) than traditional door-stop textbooks, college students often don’t prefer them. In some surveys of college kids, the majority of students have reported preferring print books. However, a 2012 study from the UK’s National Literacy Trust of kids ages 8 to 16 found that more than 50 percent of children reported preferring screen reading [PDF].

It is odd to start a point and then go on to disprove it. E-book textbooks being cheaper, easier to carry, and in some surveys preferred by the majority of respondents seems to me to be the opposite of “not great”. The claim that students prefer paper textbooks comes from a survey of 527 students, yet is immediately refuted by the UK survey of 34,910 students. I wonder which one is more representative of how students feel about textbooks?

In the comments of the Mental Floss article, someone made a good point in regard to the format of textbooks. Oftentimes the textbooks are PDFs, which brings us back to the point about scrolling, and adds the problem with taking notes. Clearly the format of the e-book plays a big part in how people feel about them.

5. THEY’RE TIRING.
Staring at a lit screen can be tiring for the eyes and the brain. A 2005 study from Sweden found that reading digitally required a higher cognitive workload than reading on paper. Furthermore, staring at LED screens at night can disrupt sleep patterns. A 2014 Harvard study found that people who used e-readers with LED screens at night slept worse and were more tired the next day. So, if you’re going to go for an e-book, go for one without the backlight.

Now let us talk about how bad e-books are for your brain… Sorry, did I say e-books when I meant LED screens like your iPad and computer? Silly me. Having bright light, especially from white-background screens, shining in your eyes at night isn’t a good thing. But that is about as related to e-books as X-Factor is to talented singers. So the message about changing your screen setup for night viewing only really applies to readers who utilise a backlit screen for reading.

And now that we are at the end of the article, let’s throw in some information for the pretence of balance in the hopes you will ignore the headline and main article points:

BUT DON’T THROW AWAY YOUR E-READER JUST YET.
However, all this may not mean that reading on a Kindle is really going to melt your brain. For instance, reading an e-book on a computer is a much different experience than reading on a Kindle, which is specifically designed for consuming books. So, too, is playing with an interactive e-book on an iPad, compared to using a simpler e-book device that only presents the text, with no opportunities to click away into digital distractions.

This really does appear to be information that would have been better presented in the context of the “e-books are evil” points above, doesn’t it? Throwing in this sort of context at the end rather than in the discussion of the study findings is a cheap tactic, a ploy that sees important information left until after you have already formed your opinion on a subject, or just plain stopped reading the article. This information has far less chance of being retained than the other points made earlier in the article, thus the article has created the bias they were after (deliberately or otherwise).

And some studies have found that part of the difference between the way people absorb information from e-books versus paper might be due to approaching e-books differently—in one test, participants didn’t regulate their study time with digital books like they did with paper texts, leading to worse performances. It’s possible that our expectations of e-book reading—as well as the different designs of the digital reading experience on a computer or iPad or Kindle—might affect how we approach the text and how much effort we put into studying them. As generations of e-book readers evolve, and people become more accustomed to the idea of sitting down with a digital textbook, these factors could change—for better or for worse.

These are all good points, again made at the end of the article rather than at least being hinted at throughout. And unlike the main points in the article, these are unreferenced. Are these points from the studies already referenced (some are) or from other studies that aren’t worth mentioning? If the former, you would expect these points to have been raised earlier in the article in the proper context; if the latter, this feels like an attempt to downplay the statements as less important than the referenced points above. Either way we are left with the sentiment “change is scary” rather than “change is change”.

Hopefully this breakdown of the Mental Floss article shows just how disingenuous many of these anti-technology articles are, especially the “e-books are evil” articles. I’m not trying to say that e-books are what everyone should be reading, or that our forests are now saved from Dan Brown. There is clear evidence that our changing technology is changing the way we read and absorb information, and this transition period is still a learning phase as to how and if we will change our reading preferences. But negative preconceived ideas about e-books (or technology) don’t help in communicating about the change that is happening.

Update: This study compared reading on paper and screens and found stark differences. The sample size was again small, but the study appears to have been better conducted than the others I’ve discussed above. The conclusions from the paper suggest, as I have, that we need to look at teaching/learning how to read e-books and utilise e-readers.

To sum up, the consistent screen inferiority in performance and overconfidence can be overcome by simple methods, such as experience with task and guidance for in-depth processing, to the extent of being as good as learning on paper.

Further reading – https://theconversation.com/do-students-lose-depth-in-digital-reading-61897

I love to hate tech articles: internet speed

I love technology. So many cool things have been made during my lifetime that it is hard to believe that as a 30-something I can remember a time before mobile phones, laptops, CDs, DVDs, tablets, and even personal computers. Since I love technology and love to read, it is a no-brainer for me to keep abreast of the latest developments via tech articles, especially since these articles are so woefully out of touch with the average person that they are comedy gold.

The article that tickled my funny-bone this time was all about Google’s new internet service. As part of Google’s plans for world domination, whilst not being evil of course, they are entering the market with an optical fibre broadband rollout in the USA. This video explains the deal with Google Fibre (video a little old now):

The funny part is the article author lamenting his current internet pricing and speeds relative to the service offered by Google:

My $52-per-month plan bestows me with the unheralded power of 30Mbps down and 5Mbps up, a depressing far cry from the (Google Fibre) $70 (and 1 Gbps up and down).

Sorry, not funny ha-ha, but funny sad. According to the reports into global internet speeds, the average US internet user is downloading at 10Mbps, with only 34% of users getting speeds above that (see figure below), ranking them top 10 in the world for speed.

Akamai State of the Internet Report Q4 2013

Meanwhile my Aussie internet is ranked 44th in the world, with average speeds of 5.8Mbps and 9.7% of users having average speeds higher than that.

Akamai State of the Internet Report Q4 2013 – Asia

So the tech author was lamenting internet that not only ranks as some of the best in the world, but is also some of the cheapest. In Australia we have some of the most expensive broadband in the world (although not as a % of income) and the service is quite possibly far worse than our average speeds would suggest. Just a few kilometres from where I’m sitting, deep in rural Australia, there are people who can’t get the internet. Whilst I have above average internet at work and at home – although that average bar is low enough for an asthmatic 2-year-old with no coordination to jump over – the copper network is antiquated and slows the speed of internet down the further you get from the hub. Once you are 20-30 kilometres out of town, the internet is so slow that you are running at dialup speeds, which is ridiculous and annoying when most of the web assumes broadband speeds. These slow speeds mean that most people not in town have satellite broadband, which is speed limited by the number of users at any one time, is really expensive, and even the top speeds are capped at 800kbps. No, I’m not kidding.

Australia isn’t even in a position to climb the internet rankings in any great hurry either. There were two broadband plans taken to the last Australian election: Fibre to the Home and Fibre to the Node. Since optic fibre (that stuff Google is rolling out) is only going to the node, that means people in cities will have to pay to get their connection upgraded, while we can now expect to see most rural areas of Australia covered by satellite plans (the ones I just told you sucked). Both of these options are more expensive regionally or where the “node” is a long way from the “home”. See the outline below (source):

Access to the internet is a wonderful thing: information at your fingertips, streaming news and entertainment, commerce on a global scale, and lots of porn. Australia is not really in the digital age, limping along with second-rate connections, political plans for second-rate upgrades, and monopolies charging big $$ for second-rate services. Which is why the tech articles are so interesting in the way they show the massive disconnect between the writers, tech services, and the rest of the world.

See other articles: http://spectrum.ieee.org/telecom/internet/the-rise-and-fall-of-australias-44-billion-broadband-project
http://www.politicalscience.com.au/2010/04/why-we-need-national-broadband-network.html
http://en.wikipedia.org/wiki/Internet_access

We think we’re smart

XKCD nails it again.

Within science fiction and the wider society there is this idea that we’ll find aliens. I always find it funny when humans talk about discovering “other” intelligent life in the universe. Just a wee bit arrogant to consider ourselves intelligent. Yes, I do realise that I’m arguing that point using technology based on quantum mechanics, probably being read on a device that weighs less than 200g and fits in your pocket, linked by a distributed network, connected by orbital satellites. Science: it works….. bitches.

But I would continue my argument by saying that to some people that amazing interface of technology, that is allowing this blog post to be read around the world, might as well be explained as “magic, magic, magic, magic, magic, god did it.” I certainly couldn’t explain how quantum mechanics works, nor how it applies to communications technologies, let alone how it manages to stream all of my favourite porn media to my phone. Thus Arthur C Clarke’s third law – Any sufficiently advanced technology is indistinguishable from magic – holds true for the vast majority of people on this planet.

Now the argument against Clarke’s third law is that technology isn’t magic. In fact, in the entire history of human civilisation, with all the things that have been attributed to magic, all the great mysteries of the universe, once investigated, have turned out to be not magic. But I’m talking about the knowledge gap between the average person and the specialist in the field who develops all this cutting-edge stuff that allows other specialists to do cool stuff, like making a hoverboard. We are surrounded by everyday items that most of us would struggle to explain the concept of how they work – magnets, how do they work? – let alone understand the complexities involved – magnets, this is how they work.

Douglas Adams brilliantly satirised this idea in his novel Mostly Harmless. Arthur Dent crash lands on an alien planet where the local humanoid populace are rather backward in comparison to us humans. Arthur comes from a planet of television, cars, planes, computers: all sorts of neat stuff. But he doesn’t know how any of it works, nor how to go about reverse engineering any of it. So he becomes the sandwich maker.

Essentially, we point to all of our human achievements to show how smart we are, but in reality most of us haven’t the first clue about any of those achievements. We just aren’t as smart as we would like to think.

Now compare this to aliens. Humans are pretty proud of having gone to the moon, cashing in on all 12 of us who have done so, but to be visited by aliens requires interstellar travel. That requires technology we probably haven’t even dreamed of yet (or possibly have, e.g. warp drives). An alien race that can do that is so far beyond human achievement and intelligence. Thus, I’m suggesting that even at our best, we would be babbling morons in comparison to an intelligent life-form that has managed interstellar travel.

Sure, the aliens that decide to cross interstellar space may be the Cleatus of their species. Their technology may actually have reached the point of sentience and doesn’t require anything of its “makers”. But think of how advanced such a species would be, not to mention how arrogant (rightly or wrongly). There is no reason for them to look upon Earth and see humans as intelligent (e.g. climate change and reality TV). There is also no reason to believe that we’d even notice these aliens. An intelligent life-form travels between star systems, has the technology for that not to have taken billions and billions of years, and some dude with an out of focus camera is going to be the only person to see them?

So I think that humans are rather egotistical to think of ourselves as intelligent life in the universe. I also think that it is arrogant to believe that an alien species would regard us as intelligent. I also think that we’d have little chance of encountering intelligent alien life unless they wanted to be encountered. This is just my view, but the main thing is, Neil deGrasse Tyson agrees with me (or is that I agree with him?):
