Tyson Adams

Putting the 'ill' back in thriller

Archive for the category “Science”

Reading is good for the brain…. d’uh

I may have mentioned it before, but I am a science nerd. It may also be painfully obvious that I like reading. And before you ask, yes I do wear glasses and own a lab coat. I can fancy dress as anything from a doctor to a scientist.

What I love about science is the way it goes about trying to understand the universe. In fact, science has even come up with a few studies on how reading is fantastic for you. Psychologists from Washington University used brain scans to see what happens inside our heads when we read stories. They found that “readers mentally simulate each new situation encountered in a narrative”. The brain weaves these situations together with experiences from its own life to create a mental synthesis. Reading a book leaves us with new neural pathways – although that’s hardly surprising or unique.

Nicole Speer, also from Washington University, used brain imaging to look at what happens inside participants’ heads while they read. She discovered that as people read, they construct a virtual reality inside their heads. That’s a fancy way of saying they imagined the stuff they were reading.

A reader’s brain in action.

So. The book is better… Who’d have thunk?

It is good to have some evidence that our brains get more out of reading. Without evidence, claims are not worth the air they consume. Just ask anyone who has tried to get conspiracy theorists to provide evidence for their claims.

Another study scanned readers’ brains to see how reading compared to web browsing (reading plus).*

Each volunteer underwent a brain scan while performing web searches and book-reading tasks.

Both types of task produced evidence of significant activity in regions of the brain controlling language, reading, memory and visual abilities.

However, the web search task produced significant additional activity in separate areas of the brain which control decision-making and complex reasoning – but only in those who were experienced web users. (Source)

Brain activity in web newcomers: similar for reading and internet use
Surfing the net brain in action.

The researchers said that, compared to simple reading, the internet’s wealth of choices required people to make decisions about what to click on in order to get the relevant information. So not only is reading good, but exploring and interacting with what you are reading is even better. Surfing the net, getting lost in a fictional world…. wait that is the same point twice. Anyway, it leads to even more brain activity.

Now before you all go in search of internet porn to enlarge your brain, remember that you’re meant to be reading the porn sites for the articles.


* It took me a bit of searching to find the original journal paper for this study. The BBC article and original press release were easy. A personal gripe of mine is when press releases and news articles fail to link to the original article so that we can fact check the claims. So as part of growing your brain with reading and internet browsing, please spend some time searching for and reading the original scientific papers that are reported. And if it wasn’t peer reviewed, then it could have been made up, like that rubbish about us only using 10% of our brain.


Science writing explained


Have you ever heard a scientist talk and wondered what the hell they were saying? Did they use the word theory to mean something other than “I reckon”? Well, you’re not alone.

Language is very important to scientists. Without precise language there would be no way for them to write peer reviewed papers that could send an insomniac to sleep. Communicating science is all about letting everyone in on the data and knowledge that is being accumulated in the endless march forward into the unknown. But because scientists are marching into the unknown, they prefer to make their statements as vague and non-committal as possible. This way, if they are correct they have cautiously alluded to the right answer, and if they are wrong they can pretend their statement was hinting at the correct answer all along.

In keeping with my previous explanations of music reviews and book reviews I have found a chart explaining science terms. This list has helped me, I hope it helps you too.

Credibility or Clicks: Bret Stephens and The New York Times


When The New York Times hired Bret Stephens, many supporters of sound science were concerned. Bret has a history as a climate science denier and disinformer, using his clout as a Pulitzer Prize-winning journalist to undermine climate science. With the publication of his inaugural column at The New York Times, those concerns were confirmed.

Bret’s piece attacks climate science by attempting to argue that nothing can be 100 percent certain, so it is only rational to doubt claims of that sort. Except that is nonsense.

Climate science has never claimed 100 percent certainty. The evidence for human influences on climate is overwhelming, but scientists don’t claim to know anything with 100 percent certainty. That isn’t how science works. Climate science is routinely reported with error margins and uncertainties.

This isn’t the only problem with Bret’s article. He makes many other factual errors, as covered by Dana Nuccitelli and others. So Bret’s article is either deliberately deceptive, or naively uninformed.

It is hardly the first time Bret has been a climate disinformer. In his previous role at the Wall Street Journal he wrote similar articles that sought to undermine climate science and disinform his readers. During a January 23rd, 2015 appearance on Real Time with Bill Maher, Bret used a splurge of cherry-picked historical events and reports to discredit climate science. He included the much-debunked 1970s cooling argument, and an irrelevant reference to a fisheries management conference, in his argument that the experts are probably wrong. Just ignore all the evidence. And don’t check Bret’s claims too closely. So being deceptive or uninformed is nothing new for Bret.


Charts of misinformation in opinion pieces during Bret’s time at the Wall Street Journal (source: MediaMatters.org)

Writing an opinion column at The New York Times that is either deceptive or uninformed does not speak well of the credibility of Stephens nor his new employer. Why would a respected news outlet like The New York Times publish a column that is deceptive or uninformed?

James Bennett, an editor at The New York Times, defended the hiring decision in the face of criticism. Bennett said, “The crux of the question is whether his work belongs inside our boundaries for intelligent debate, and I have no doubt that it does. I have no doubt he crosses our bar for intellectual honesty and fairness.”

Yet with his very first column, Bret has shown a lack of intellectual honesty and fairness. So exactly how low is the bar being set?

No credible news outlet could allow one of their opinion columnists to continue writing nonsense for them. Has The New York Times sold its credibility on climate science for conservative clicks? Is it chasing sensationalism? Either way, it speaks poorly of The New York Times that it would exploit an issue as important as climate change, hurting public understanding, just for attention.

Certainly many scientists have decided that The New York Times no longer deserves their subscription (e.g. 1, 2). The response from The New York Times is hardly complimentary to their new slogan “Truth is more important now than ever”. When you respond to scientists who have cancelled their subscriptions over Bret Stephens’ climate disinformation by arguing there are two sides to the debate, or that the scientists can’t stand differing opinions, you wonder if The New York Times understands what Truth actually means.

If The New York Times values truth then they shouldn’t have hired Bret Stephens to write about climate change. If they care about their credibility now they will sack him. But it seems clear that they have sold their credibility for clicks.

Now, about Michael Pollan and how he’s wrong on GMOs and farming.

NB: This post has previously appeared elsewhere.

6 Story Arcs


I’ve written before about plots and how there aren’t as many of them as you’d think – somewhere between 1 and 36, depending upon how you want to break them down. Recently some research was published that analysed 1,737 fiction novels to figure out how their story arcs are constructed. Let’s pretend there is a big difference between a plot and a story arc.

The study used Project Gutenberg – i.e. public domain works – and the results suggest that there are only really six story arcs:

Fall-rise-fall: ‘Oedipus Rex’, ‘The Wonder Book of Bible Stories’, ‘A Hero of Our Time’ and ‘The Serpent River’.

Rise-fall: ‘Stories from Hans Andersen’, ‘The Rome Express’, ‘How to Read Human Nature’ and ‘The Yoga Sutras of Patanjali’.

Fall-rise: ‘The Magic of Oz’, ‘Teddy Bears’, ‘The Autobiography of St. Ignatius’ and ‘Typhoon’.

Steady fall: ‘Romeo and Juliet’, ‘The House of the Vampire’, ‘Savrola’ and ‘The Dance’.

Steady rise: ‘Alice’s Adventures Underground’, ‘Dream’, ‘The Ballad of Reading Gaol’ and ‘The Human Comedy’.

Rise-fall-rise: ‘Cinderella’, ‘A Christmas Carol’, ‘Sophist’ and ‘The Consolation of Philosophy’.

The most popular stories have been found to follow the ‘fall-rise-fall’ and ‘rise-fall’ arcs.

Or for those that prefer to read graphs because it makes them feel intellectual:

The six story arc clusters, showing the individual arcs analysed and each cluster’s average.

For those that just saw a bunch of squiggles in those graphs, what you are looking at is the story arc plotted over time for each story analysed. They’ve broken these into similar groups and then added an average (the orange line). You can see how some of the story arcs follow the average closely, whilst others vary more. To show an individual story arc, they picked out Harry Potter as an example in the paper, but have the rest archived here (Project Gutenberg books) and here (a selection of classic and popular novels). As they note:

The entire seven book series can be classified as a “Rags to riches” and “Kill the monster” story, while the many sub plots and connections between them complicate the emotional arc of each individual book. The emotional arc shown here, captures the major highs and lows of the story, and should be familiar to any reader well acquainted with Harry Potter. Our method does not pick up emotional moments discussed briefly, perhaps in one paragraph or sentence (e.g., the first kiss of Harry and Ginny).

Harry Potter plot

This is all nice and good, but why is it interesting? Well, aside from using my favourite statistical technique – principal components analysis – this study shows that authors create, and audiences expect, structures that are familiar. The fact that two of the story arcs (rise-fall and fall-rise-fall) are the most common emphasises this point. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful and familiar to us. There is scope to play within the formula, but ultimately we desire stories that fit conventions.
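For the curious, the paper’s method can be sketched in a few lines: trace each book’s emotional arc over story time, stack the arcs from many books into a matrix, and let principal components analysis pull out the dominant shapes. Here is a toy illustration with made-up arcs (numpy only, no sentiment lexicon, nothing from the authors’ actual code):

```python
import numpy as np

# Toy stand-in for the study: 40 "books", each an emotional arc
# sampled at 10 points of story time. Half are rise-fall, half
# fall-rise, plus noise. These shapes are invented for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10)
rise_fall = np.sin(np.pi * t)        # up then down
fall_rise = -np.sin(np.pi * t)       # down then up
arcs = np.vstack(
    [rise_fall + 0.1 * rng.standard_normal(10) for _ in range(20)]
    + [fall_rise + 0.1 * rng.standard_normal(10) for _ in range(20)]
)

# PCA via SVD: centre the arcs, decompose, and measure how much
# variance each principal component explains.
centred = arcs - arcs.mean(axis=0)
_, s, components = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# One component (the rise-fall shape, sign aside) dominates, which is
# exactly the kind of result that lets the paper group arcs into a
# handful of modes.
print(f"variance explained by PC1: {explained[0]:.0%}")
```

The real study obviously works from text rather than pre-made curves, but the clustering logic is the same: a few components soak up most of the variation, so most stories are minor variations on a few arcs.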

So yes, there is no original art being made.

Pen vs Keyboard: FIGHT!!


For some reason the world of writers is filled with technophobic troglodytes intent on proving that their old-fashioned way of doing things is better. I’ve written previously about how older people’s favourite hobby since the dawn of time has been complaining about kids these days. This is also true of changes in technology, with people intent on justifying not learning to use a computer or e-reader. Because cutting down trees is the future of communication!

Once again I’ve stumbled across another article that misrepresents scientific studies to try and convince people that we need to clear forests, pulp them, flatten them into paper, cover them in ink, and act as snooty as possible. This time they – the nebulous they: my nemesis!! – are trying to pretend that taking notes with a pen is better than using a keyboard.


When will people learn that paper isn’t the medium we should be promoting? We need to be going back to scratching on rocks and cave walls. When was the last time a paper book lasted more than a hundred years out in the rain, snow, and blazing sun? That doesn’t even begin to compete with the longevity of the 50,000 year old cave paintings. Data retention for rock far surpasses the much inferior paper.

This isn’t the first article I’ve seen on The Literacy Site misrepresenting science. Hopefully they will acquire some scientific literacy soon and overcome their biases. If I turn blue and pass out, try to act concerned. Let’s dive in.

New Research Explains How The Pen Is Mightier Than The Keyboard

It’s great when articles improve on the titles of science papers. I mean, who wants to read the science paper The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking? Pity that both titles misrepresent the actual findings. Also, is 2014 still regarded as new?

In her graduate assistant days, psychological scientist Pam Mueller of Princeton University used to take notes just like everyone else in the modern age: with a computer. One day, Mueller forgot her laptop and had to take notes the old-fashioned way. Rather than being held back by pen and paper, Mueller left class feeling as if she’d retained far more information than usual on that day. She decided to create a case study that could prove her hunch that writing longhand was actually better for comprehension than typing.

This is actually a good little story and illustrates how a lot of hypotheses are formed in science. This is the anecdote or observation that scientists want to turn into a hypothesis to create actual knowledge. But remember, this is an anecdote, which has as much value as used Easter egg wrappers that have been stuffed between the couch cushions. Putting anecdotal stories at the start of an article can set the audience up to not think too hard about the rest of the article, as you have given them the conclusion in a nice little story.

The study she created, published in Psychological Science, indicated that taking notes by hand is a more effective method than typing them on a laptop when it comes to processing information conceptually.

And here we jump straight off the rails, over the side of the bridge, and careen into the waiting river below. Sure, The Literacy Site is just quoting the press release, but that is lazy. The study’s abstract has this line showing how that claim misrepresents the findings:

We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.

In other words, the finding was that laptop users spend all their time typing and no time actually listening to and comprehending the lectures. Because the pen is an archaic device, unwieldy and slow compared to the keyboard, students using a pen only write down notes after they have listened, picked out the key points, and conceptualised that information into a note. But don’t take my word for it; the press release on the University of Michigan website has a few recommendations, including:

  • To interrupt verbatim note-taking on laptops, break up your lectures with short activities that encourage deeper processing of information.
  • Have students use laptops or other technologies to process–not just record–information.

Now it is time to discuss the study details a little bit, because someone might be interested in the methods section. I’m sure those people exist. Somewhere. Interested is probably the wrong word.

In the first of a series of studies led by Mueller, 65 college students watched various TED Talks in small groups, and were provided with either pens and paper or laptops for taking notes. When the students were tested afterward, the results were overwhelming. While the groups performed equally on questions that involved recalling facts, those who had taken longhand notes did significantly better when it came to answering conceptual questions.

Sorry, I need to catch my breath. I’m so shocked at the massive sample size. This is definitely enough people to represent the rest of society. Conclude away I say!

Anyway, these overwhelming results are just a tad whelming.

Whelming error bars.

As you can see, performance on retaining facts was the same, with error bars that suggest 65 people is probably not enough to draw conclusions from. Not that anyone would be trying to claim this study is proof of anything, right? The next thing you see is the benefit of using a pen… as long as you ignore those error bars and just accept that the p-value tells us something of value. Given that those error bars overlap for the two groups, I wouldn’t be drawing conclusions from a p-value. Also, I’m not exactly sure why an ANOVA was used when there were only two groups to compare; with two groups it is equivalent to a t-test. The KISS principle applies to statistics as well.
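On that ANOVA quibble: with exactly two groups, a one-way ANOVA and an independent two-sample t-test are mathematically the same test (F = t², identical p-value), so the fancier name adds nothing. A quick check with invented scores standing in for the study’s performance data:

```python
import numpy as np
from scipy import stats

# Made-up z-scores for two note-taking groups; these are NOT the
# study's data, just numbers to demonstrate the equivalence.
rng = np.random.default_rng(42)
longhand = rng.normal(0.30, 0.9, 33)
laptop = rng.normal(-0.15, 0.9, 32)

t, p_t = stats.ttest_ind(longhand, laptop)   # pooled-variance t-test
f, p_f = stats.f_oneway(longhand, laptop)    # one-way ANOVA, 2 groups

# F equals t squared and the p-values match exactly.
print(f"t^2 = {t**2:.4f}, F = {f:.4f}")
print(f"p (t-test) = {p_t:.4f}, p (ANOVA) = {p_f:.4f}")
```

So the ANOVA isn’t wrong here, it’s just a t-test wearing a lab coat.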

Now, the researchers realised that 65 people wasn’t enough, so they repeated the study with a few variations twice more. The second and third tests had 151 and 109 people take notes. In each test the typists wrote between 250 and 550 words, whilst the pen wielders wrote roughly 150 to 400 words. Interestingly, the laptop note takers wrote 12–14% of their notes verbatim, while the pen users only managed 4–9%. This shows why the conclusions I’ve quoted above were drawn.

Out of interest, here are the results from the other two tests that were more convincing for that conceptual finding.

Okay, this is more like it.

In the second test, the 151 people were split between pen, laptop, and laptop plus a lecture from the tester about how they really should pay attention. With roughly 50 people per group you’d hardly jump up and down about the significance of this test, but clearly telling people to pay attention doesn’t… hey look, a squirrel.


Methinks that possibly the greater number of treatments has lessened this test’s significance.

The third test with 109 people again tested for pen vs keyboard, but this time they allowed revision of notes before being questioned. This makes the groups even smaller, and again I’d question the significance of such a small sample. But the researchers summed up the results with this erudite paragraph:

However, a more nuanced story can be told; the indirect effects differ for conceptual and factual questions. For conceptual questions, there were significant indirect effects on performance via both word count and verbatim overlap. The indirect effect of word count for factual questions was similar, but there was no significant indirect effect of verbatim overlap. Indeed, for factual questions, there was no significant direct effect of overlap on performance. As in Studies 1 and 2, the detriments caused by verbatim overlap occurred primarily for conceptual rather than for factual information, which aligns with previous literature showing that verbatim note taking is more problematic for conceptual items.

In other words, doing lots of writing, particularly just copying what was said verbatim, makes you suck at understanding what the hell is going on. Oh, and study before the test. Apparently it helps too. Made that mistake at university.

So back at The Literacy Site they are skipping the other tests and just heading to the conclusions:

Mueller found that this was the result of laptop users trying too hard to transcribe the lecture rather than listening for the most important information and writing it down by hand. It may be an era where computers have made handwriting seem useless, but Mueller isn’t the only believer in the importance of longhand.

Notice the nuanced difference that seeing all three tests provides? We could be led to believe that there was overwhelming evidence for the pen, but what we see is that note takers need to readdress their methods of taking notes. Or they could just wing it.

An article in TIME discusses Karin James, an Indiana University psychologist, who published a 2012 study indicating writing is particularly important in the cognitive development of pre-literate children five and under. While using a computer for note-taking in some situations makes sense, it’s important not to overlook the longhand method.

It’s great that the article tries to incorporate some extra research. Citing one study with a small sample size is hardly compelling, certainly not worth writing an article about. But again the research is being misrepresented:

…the benefits of writing: increased creativity, better critical thinking, boosted self confidence, and a correlated improvement in reading capability with writing prowess.

But are these benefits real? The short answer: Mostly not. “There’s lot of caveats in handwriting research,” says Karin James, a psychologist at Indiana University

Curse those damn caveats! Why can’t we have a control group of kids we don’t teach to read and write?!

Which brings me to a final point about these old-technology versus new-technology articles: stop jumping the gun! We’re in a transition phase. This isn’t 1970s velvet suits with platforms versus 2010s hipster atrocities. This is a typewriter hipster texting on his phone. Technology is changing and we’re still learning how to use it properly. The studies cited in many of these articles have very limited scope, test very few people, and compare something new against something established. Has anyone taught laptop users to take notes effectively in the new medium? Do you actually need to take written notes at all in this modern age? We need to see more science done on the changes taking place, and we need the articles discussing that science to do more than discuss (one study from) one paper, and to highlight the limitations. Well, unless you have already made up your mind about a topic and just want some links to throw at people in an argument. Screw being right!

This blog post is being shared online, in print, and carved into a cave wall. Comment below which format you preferred receiving it in.


When Science Fiction Became Science Fact

One of my favourite science blogs, From Quarks to Quasars, had a great post from Isabelle Turner that I needed to share. Take a look at the things from science fiction that became science fact, and wonder whether it was prediction, influence, or just wishful interpretation on our part.


One (of the many) problems of arguing with science deniers

In a recent post I discussed some points about how to spot anti-science nonsense. Pick a subject, any subject, and there will be someone – probably Alex Jones – making an outrageous claim about it. But don’t worry, they’ll solve the problem with items available from their reasonably priced store: $1440 per litre is a bargain price for something you don’t need and doesn’t do as claimed.

Credit: Jason Hymes

Obviously scammers are gonna scam, and anti-scientists are going to not-science. The thing is, once you understand that something is wrong, you have some responsibility to make sure the misinformation doesn’t spread like a leaky diaper. With great knowledge comes great responsibility. Which means you have to start discussing science with science deniers. Don’t forget to place a cushion on your desk and wear padded gloves.

Despite having the advantage of science/facts in the argument against science deniers, you have the decided disadvantage that you can’t just make stuff up (despite how tempting and financially rewarding it is). In fact you have to be better informed about not only your side of the argument but also about the science denier’s arguments.

Sounds odd, doesn’t it? You have to learn nonsense to talk about science. That makes as much sense as being pro-life and pro-death penalty. Bear with me here. Take this example of climate change denier Bret Stephens arguing against Bill Maher on Real Time:

Bret sounds convincing, doesn’t he? Bret sure thinks so. He makes some vague references to headlines from the 1930s and 1970s as dismissals of current concerns about oceans. Then he references an economic study on environmental policy priorities, all whilst looking very smug and sure of himself. These statements leave Bill at a stumbling point because he has to admit he doesn’t know what the hell Bret is talking about. The video edited out the pant-less victory lap Bret did of the studio, complete with crotch gyrations in Bill’s face, as he screamed “Take that liberal media!”

Now it isn’t a bad thing to admit you don’t know stuff. Nobody knows everything; it is arrogant to act like you do. Arrogance is of course the result of being surrounded by Knowitalls, an invisible mythical creature that looks like a cross between a unicorn and Bill O’Reilly. Anyway, I’m glad Bill Maher admitted he didn’t know about the study; if only he would do the same with his position on vaccination and GM/GMOs. But the admission did make him appear less convincing, as he couldn’t directly rebut the points made.

And here is why you need to know what the anti-science people “know”. Take the first points Bret makes about the oceans dying. The two dates he mentions actually refer to issues unrelated to climate change causing ocean acidification. The first was a reference to the 1936 Overfishing Conference about whaling and fishery management (as far as I can ascertain), issues that were addressed by introducing catch limits, fishing licences, and the phasing out of whaling. So Bret is trying to justify inaction on climate change by referencing an environmental concern that was acted upon. What a great argument!

His second date was the 1975 Newsweek and New York Times (and others) articles about global cooling. This is a well-worn climate change denier talking point/myth that has been thoroughly debunked, yet has evolved beyond a PRATT (point refuted a thousand times) and become a zombie point. Some myths just won’t die and are constantly in search of brains to infect/affect.

We then hear Bret reference a Bjorn Lomborg study on best use of resources and where climate change ranked. Very convincing, aside from the fact that it was complete and utter nonsense. See, Bjorn doesn’t accept the actual risks and actual current changes that have occurred due to climate change. So his entire analysis and argument started off from a completely flawed position and was thus doomed to fail to draw any worthwhile conclusions. Actual experts have torn apart his work, particularly his “conference”, here, here and here. But Bill didn’t know this, thus the points made stand unchallenged and as a sort of “valid” evidence.

And this is why it is important to know your enemy. If you know the arguments they are likely to raise, then you can have rebuttals ready. In the case of citing Lomborg’s work you can point out the failings before people have a chance to take it seriously. In the case of old magazine articles, you can point out you only read them for the pictures. But it means you don’t just have to know the science, you have to know the anti-science.

It is also worth noting that Bret reeled off a string of statements that were essentially nonsense dressed up as facts. That is a tried and trusted debating tactic known as the Gish Gallop, and it is very hard to argue against. It takes a lot more energy to redress the nonsense than it takes to state it, not to mention the time wasted not making your own points. It also helps that science has to have facts on its side, while anti-science can make it all up on the spot.

Of course the obvious thing to say here is that the anti-science movement often don’t see themselves as anti-science and will use similar tactics. They will familiarise themselves with the science in order to dismiss it. This is possibly the most annoying part of science communication: those embedded in anti-science positions aren’t ignorant of the facts, they are wilfully ignorant of their fact-ness.

Beware the meme!

Memes fly around the internet like quantum accelerated particles. Some are fun, some are informative, others are utterly, ridiculously wrong. Unfortunately people get caught up in pretty pictures with inspiring – or is that insipid – quotes printed on them, and so they start following someone on social media who spreads as much nonsense as inspiration.

Take for example this quote from Mark Twain:
Mark Twain on nonsense background
At face value there is a great message from Twain about not storing up emotional baggage. Let’s just ignore the scientific inaccuracy of how acids work: the material of the respective container and the acid’s Ka (acid dissociation constant) are the deciding factors in how much damage the acid does. But once you move past the quote and pretty picture you start to notice certain things about the image, namely some weird design elements. There’s some spacey looking stuff in the background, a person with no skin, and some sort of lattice work: what the hell is this stuff? The lattice is called the Flower of Life, something that has been incorporated into Sacred Geometry, a load of nonsense that would have had Mark Twain penning scathing insults; Twain loved science.

Let’s take a look at another meme:

Chakra nonsense

Again we have a bit of text implying that good relationships are much deeper than shallow, fleeting physical attraction. This one is, however, more obvious in its ridiculousness. Amongst the rainbows and the pretty city that the two outlined people are hovering above, there are glowing lights inside their bodies. Take a guess at what they are meant to be. Chakras. That’s right, we’ve gone all new-agey nonsense right out in the open. So once you spot the new-age nonsense you realise the word “soul” isn’t being used in the allegorical sense but in the “I believe all sorts of rubbish” sense.

And now we descend into health nuttery:
Milk nonsense

This is a typical health meme from these sorts of social media pages: half-truths, misconceptions, lies and nonsense.

Let’s start at the top: there are no pus cells in milk. The meme seems to be referring to the somatic cell count of milk, which is not the same thing, and that’s just part of the biology fail on display here. The 135 million figure comes from the detection levels for mastitis in cows, which say that uninfected cows will have fewer than 150,000 cells/mL (they’ve clearly scaled up to a litre of milk in that glass, which doesn’t look like a litre glass to me).
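The arithmetic behind the meme’s scary number is easy to check yourself; assuming they scaled a per-millilitre somatic cell count up to a full one-litre glass:

```python
# Back-of-the-envelope check on the "135 million pus cells" claim,
# using the mastitis-screening threshold mentioned above: healthy
# cows sit below about 150,000 somatic cells per mL.
somatic_cells_per_ml = 150_000   # upper bound for an uninfected cow
ml_per_litre = 1_000

cells_per_litre = somatic_cells_per_ml * ml_per_litre
print(f"{cells_per_litre:,} somatic cells per litre")  # 150,000,000

# So the meme's figure only works if you assume a full litre of milk
# from a cow right at the healthy threshold, and it mislabels
# ordinary somatic (immune) cells as "pus" on top of that.
```

Big numbers per litre are how you make a perfectly normal measurement sound alarming.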

Growth hormones: misleading at best. Food has hormones in it, produced by the food, be that plants or animals. Remember how soy is meant to be good for menopausal women? Yep: plant hormones. So milk will have naturally occurring hormones in it. Some countries have limited/banned the use of growth hormones in animal production, others have allowed it. And this brings us to one of the many reasons pasteurisation is used in milk production, as it breaks down most of the hormones.

Antibiotics: nope, they test every truck of milk as it leaves the farm gate to make sure there is no antibiotic contamination.

Feces: again this is misleading, and also one of the main reasons for pasteurisation. You aren’t so much going to end up with feces in the milk as the bacteria associated with it. So it is important to kill the nasties, and this is why raw milk is considered dangerous.

Cholesterol: I’m not sure where they got the figures from, but they seem to be assuming 200 mL of full fat milk. Odd, considering they were assuming 1,000 mL for the pus/somatic cells. Yes, milk has 24 mg of cholesterol per 100 mL. And that isn’t necessarily a bad thing.

Calories: I’m not sure why food having calories in it is bad… The figures are roughly correct for 200 mL of full fat cows’ milk though.

Fat: Again, I’m not sure why food having fat in it is bad.

Acidic protein: This one is quite funny because there are a lot of acidic proteins. And obviously these acidic proteins leaching calcium from bones is one of those things that “mainstream medicine is ignoring” – aka the rallying cry of purveyors of nonsense. Pity that dietary protein (which can include dairy) has actually been shown to be good for bones. The issue here is actually a couple of health myths. The first is the acid/alkaline diet, which is utter nonsense. The second is the overstating of the health benefits of milk, specifically as they relate to bone health and osteoporosis development.

Now I’m not saying that milk is bad for you, but it also isn’t the most awesome drink ever made – that would be whiskey. Milk should be like whiskey: consumed in moderation.

The point about memes is that they are only as good as their creator. The intention of the above memes is clearly to help people, inspire them to lead better lives, even if it is by showing them some pretty pictures with brain droppings written on them. But sadly it is obvious that these memes were created by someone who is not in touch with reality, which makes their health advice something to be avoided. Beware the meme: it could be nonsense!

How to spot anti-science nonsense

Just recently I was asked a question on one of my climate change posts. The question, while not about climate change or climate science, was about similar anti-science nonsense that acts to confuse and befuddle those who aren’t familiar with the field. The comment in full:

I like your writing, I wish more would understand your logic when they spout facts and relationships. If you have time please, an article (though imperfect) comments,

“Bacteria…and plants use a seven-step metabolic route known as the shikimate pathway for the biosynthesis of aromatic amino acids; glyphosate inhibits this pathway, causing the plant to die…. Monsanto says humans don’t have this shikimate pathway, so it’s… safe……however, that our gut bacteria has this pathway, and these bacteria supply our body with crucial amino acids. Roundup …kills bacteria, allowing pathogens to grow; interferes with the synthesis of amino acids including methionine, which leads to shortages in critical neurotransmitters and folate; chelates (removes) important minerals like iron, cobalt…”

I would love to know your take on that possible cause and affect.
Thank You for your Time !
Dennis Buchanan

Reference : http://healthimpactnews.com/2014/mit-researcher-glyphosate-herbicide-will-cause-half-of-all-children-to-have-autism-by-2025/

Dennis has asked how likely it is that this sciency-sounding article is correct. The short answer is that you are more likely to get this week’s lottery numbers from one of these articles than any reliable facts. How can I be so dismissive? Well, the thing is I’m not being dismissive, it just sounds like that because my skeptical science eye has spotted many holes in the quote and article. So let us go through them like a rugby player at an all-you-can-eat buffet.

The source.

The first thing to note is the source of the article and the “expert” cited within. There are some tell-tale signs that a webpage may be unreliable, such as using terms like “truth”, “natural” or “alt” as a prefix to any word, or having “health” in their name. Health Impact News isn’t the giveaway here; it could be a legitimate source of information. In this case the giveaway is the byline “News that impacts your health, that other media sources may censor.” See: it’s a conspiracy!!! (Font = sarcasm) And conspiracy claims are always reliable (/sarcasm).

If you check out Web of Trust you can see that Health Impact News perpetuates a number of dubious and fraudulent claims, such as vaccine myths from the anti-vaxxer nutters. Which means that the slant the website is running is one that doesn’t respect scientific evidence. Not that this alone is enough to dismiss the claims.

The other source is the “expert” cited, one Stephanie Seneff. To say that this computer scientist is out of her depth in the fields of health, genetics and chemistry is like suggesting Justin Bieber’s music is appealing to people with taste. She makes all sorts of wacky and unfounded claims about herbicides, GMOs and Monsanto, so calling her an expert or citing her work should get you laughed out of any room you are standing in.

The claim.

What the article claims is really the crux of the dismissal. If someone claimed to have seen bigfoot doing lines of blow with someone other than Charlie Sheen, we’d be immediately suspicious since we know that greater than 90% of all cocaine is snorted in the company of Sheen. Similarly when someone claims that the most extensively tested herbicide of all time, the safest agrichemical ever made, the most widely used agrichemical on the market, is responsible for [insert health consequence here, in this case autism] then you should be a tad suspicious.

Let’s ignore the fact about the extensive safety testing. Let’s also ignore the fact that autism seems to be the disease du jour of the alt-health fear-mongers, linked to everything from GMOs to vaccines. Let’s also ignore the fact that agrichemical safety and efficacy have virtually nothing to do with the safety and efficacy of individual GMOs (GM and GE being another kettle of fish entirely), despite what the article tries to imply. Let’s also ignore that glyphosate binds tightly to organic matter and is rapidly broken down in the environment, so actual levels consumed will be negligible, and those amounts won’t be doing anything in the digestive tract. Let’s just assume that glyphosate is getting into our bodies and causing damage at huge levels: what evidence is there to suggest it is glyphosate, and not any other agrichemical or environmental toxin that has increased over the same time period (e.g. coal pollution)? What evidence is there to suggest there has actually been any rise in maladies that isn’t a result of something else (because everyone knows that fat people got fat whilst only eating celery sticks)?

The reference material or evidence.

Big claims require even bigger evidence. Solid evidence. One thing I hate about news sites is that they so often make oblique references to a study that may or may not have been published in a reputable journal, rather than just linking straight to the journal and paper in question. In this case there is no link to a journal, reputable or not, just links to other unreliable sites such as The Mind Unleashed and The Alliance of Natural Health USA webpage, as well as a YouTube video. So far I’m underwhelmed.

Remember, this article is reporting on Seneff’s claim that half of all people will be autistic by 2025 thanks to herbicides. Half!! This is a condition that has a median occurrence of 62 cases per 10,000 people. The spectacular rise in autism that we should expect in the next decade for a herbicide that has been in wide use for many decades already would require a bit more evidence than “well, we reckon.” Seneff claimed a correlation between glyphosate use and a rise in autism. She clearly didn’t compare the rise in autism to organic food.

Damned organic food giving kids autism!!
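To put the scale of Seneff's claim in numbers, here's a rough sketch using the median prevalence quoted above:

```python
# How big a jump would "half of all people autistic by 2025" require?
baseline_prevalence = 62 / 10_000  # median occurrence: 62 cases per 10,000 people
claimed_prevalence = 0.5           # Seneff's claim: 1 in 2

increase_factor = claimed_prevalence / baseline_prevalence
print(f"Prevalence would need to rise roughly {increase_factor:.0f}-fold")
```

That's roughly an 80-fold rise in a decade, for a herbicide already in wide use for decades, asserted on the strength of a correlation. Extraordinary claims and all that.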

Well, if you dig further into the reference of the reference (seriously, how hard is it to cite your sources properly!?!) you will find an actual journal paper by Seneff and Samsel in a journal called Entropy. Have you heard of Entropy and is it recognised as a go-to journal for science on the topic of, well, anything? Nope. And what about the study itself which claims that just about every malady you can think of is linked to glyphosate, what evidence does it present? Well pretty much none. To quote this article:

The evidence for these mechanisms, and their impact on human health, is all but nonexistent. The authors base their claim about CYP enzymes on two studies, one of liver cells and one of placental cells, which report endocrine disruptions when those cells are exposed to glyphosate. Neither study is CYP-specific (The effect of pesticides on CYP enzymes, by contrast, has been studied specifically.) As for the gut bacteria, there appears to be no research at all on glyphosate’s effect on them.

Samsel and Seneff didn’t conduct any studies. They don’t seem interested in the levels at which humans are actually exposed to glyphosate. They simply speculated that, if anyone, anywhere, found that glyphosate could do anything in any organism, that thing must also be happening in humans everywhere. I’d like to meet the “peers” who “reviewed” this.

Yep. That is a rebuttal from a Huffington Post article. Let that sink in for a moment. Even Huff Post don’t want to touch Seneff’s claims with a ten-foot pole.

So far we have found that the suspicions about this article are well founded. The site is not reliable, the “expert” cited is not reliable, the sources cited are not reliable, the evidence cited is essentially non-existent, the claims made are not particularly plausible, and there is no evidence to support the claims. But this leaves us with a problem: short of hours of research on each point made, how do I confirm that these people are lying to me on the internet? Because you should be able to trust the internet, right?

The rebuttal.

The average person can’t be expected to be an expert in all topics, nor be expected to have the time to track down and read every piece of science to confirm an article is accurate. But there are people on the internet who have their favourite topics that they will write (or make videos) about. This means you just have to search for rebuttals to articles. Google can be handy for this if you are familiar with how to weed out the rubbish results. Joining forums or following experts in various fields can help as well (e.g. Skeptics Stack Exchange, Science Based Medicine). There are also webtools available to help find good information. I’ve already mentioned Web of Trust above, but there are many others.

rbutr is one such tool that can help with finding rebuttal articles (disclaimer: I am involved with rbutr on social media). In the case of the Health Impact News article there were two linked rebuttals (I’ll be adding this one as well), here and here. This really helps to figure out whether the arguments presented are valid (although in this case a basic application of logic should suffice). But there were more rebuttals linked to the Seneff journal article, 7 of them: here, here, here, here, here, here, and here. These links allow people to easily see the arguments laid bare.

Thus we can now see that the article can be dismissed as rubbish. A fair bit of work to get there, but in the end we did it (~25 references and 1,600 words later). Makes installing rbutr and Web of Trust in your web browser look like a great option, doesn’t it?

In the information age, ignorance is a choice. But informing yourself isn’t as easy as just reading articles on subjects. You have to use a critical eye, apply logic, and seek out quality information to avoid being misinformed. When all is said and done, evidence wins. And cat videos. And dog videos. In fact any video featuring a cute animal wins.

How to be creative

Couple of interesting videos I thought I’d share. The first is a recent video that refers to some fascinating research that looked at musical creativity with fMRI scans.

The second video is from the indomitable John Cleese.

Creativity is not an easy thing to achieve. I hope these two videos give others a few pointers.

Update: Another great video from Brain Craft on creativity to add to the list.

Skeptically Challenged

Skeptically Challenged 19 Oct 2014
I’ve been quite busy recently. There is the usual writing going on, but I also have a few articles in the pipeline, another rugrat in the works, and I’ve been interviewed for the Skeptically Challenged Podcast.

In the podcast, Ross, Ketan and I discuss a range of topics and try to bring the science. Ketan discusses the mythical wind turbine syndrome, I discuss a recent climate paper, and we cover the promises of fusion power from Lockheed Martin and the recent Ebola hysteria.


Consider supporting Skeptically Challenged on Patreon:

Ross Balch

Tyson Adams

Ketan Joshi


Ross: https://twitter.com/skep_challenged

Ketan: https://twitter.com/KetanJ0

Edit: Ross and I discussed a couple of other topics in the session below: Supplements and Atheists in Rehab.


Discussed topics:

New Captain Disillusion Video! – http://youtu.be/h0pIZH-W6b4?list=UUEOXxzW2vU0P-0THehuIIeg

Some links to the material I was name-dropping:

Also, stay tuned until the very end and you’ll hear just one of the bits that Ross will have for subscribers, mainly jokes. Now just imagine how we managed to work rocket powered Miley Cyrus into the discussion.

20 Proven Benefits of Being an Avid Reader

This fMRI scan reveals distinctive increases in brain activity during close reading across multiple brain regions, with strength of activation shown in red for horizontal cross sections of the brain.

If TV is the lard-developing, heart-attack-inducing form of entertainment, then reading is the brain workout. I’ve previously posted about how reading is good for the brain, but science is keen on finding out more, so there is always new research that brings up cool findings. I’m reposting an interesting article I found (here) that lists some benefits of reading, with links to the research showing that reading is good for you.


Merely reading a word reflecting a colour or a scent immediately fires up the corresponding section of the brain, which empathises with written experiences as if they actually happened to the audience. Researchers believe this might very well help sharpen the social acumen needed to forge valuable relationships with others.


Related to the previous perk, sensory stimulation makes it easier for aging brains to keep absorbing and processing new information over time. This occurs when the occipital-temporal cortex essentially overrides its own programming and adapts to better accommodate written language.


Avid readers enjoy a heightened ability to retain their cognitive skills over their peers who simply prefer other media — even when exposed to lead for extended periods, as indicated by an article in Neurology. It serves as something of a “shield” against mental decay, allowing the body to continue going through the motions even when facing temporary or permanent challenges.


When educators at Obafemi Awolowo University incorporated education-themed comics and cartoons into primary school classrooms, they noted that the welding of pictures to words, in a manner different from the usual picture books, proved unexpectedly beneficial. Exposure to these oft-marginalized mediums actually nurtured within the students a healthy sense of creativity — a quality necessary for logical and abstract problem solving.


On the whole, readers tend to display more adroit verbal skills than those who are not as fond of books, though it must be noted that this doesn’t inherently render them better communicators. Still, they do tend to sport higher vocabularies, which increase exponentially with the volume of literature consumed, and may discern context faster.


Anne E. Cunningham and Keith E. Stanovich’s “What Reading Does for the Mind” also noted that heavy readers tend to display greater knowledge of how things work and who or what people were; once again, findings were proportionate to how much the students in question devoured in their literary diets. Nonfiction obviously tends to send more facts down the hatch, though fiction certainly can hold its own in that department as well.


Some students obviously don’t perform well on tests despite their prodigious abilities, but in general, findings (such as those offered by the National Endowment for the Arts) show a link between pleasure reading and better scores. The most pronounced improvement, unsurprisingly, occurred on exams focused on analyzing reading, writing, and verbal skills.


According to a 2009 University of Sussex study, picking up a book could be one of the most effective strategies for calming down when life grows too overwhelming — great for both mental and physiological reasons. The University of Minnesota built on these findings and recommends reading some form of literature for at least half an hour every day for optimum relaxation.


Fully engaged reading sessions — not just skimming, in other words — actively engage the sections of the brain responsible for thinking critically about more than just texts. Writing, too, serves as an excellent conduit for sharpening the skills necessary for parsing bias, fact vs. fiction, effective arguments, and more.


In a British Medical Journal article, academics at the French National Institute of Medical Research showcased their findings regarding the relationship between a mind occupied by reading and a lower risk of dementia. Obviously, literature isn’t going to act as a cure, but nonreaders are 18% more likely to develop the condition and experience worsened symptoms.


Readers genetically or environmentally predisposed to MCI, Alzheimer’s, and other disorders characterized by cognitive decline won’t escape their fate if they live long enough; but not only do their literary habits push back the onset, these conditions also encroach at a more sluggish pace. More than any other way to pass the time, picking up some sort of book (no matter the medium) proves among the most effective strategies for delaying and slowing dementia.


Along with bolstering critical thinking skills, the authors of “Reading and Reasoning” in Reading Teacher noted that literary intake also positively influences logic and reasoning. Again, though, the most viable strategy for getting the most out of reading involves picking apart the words themselves, not merely flipping through pages.


Improved literacy means improved self-esteem, particularly when it involves kindergarten and middle school students whose grades will swell as a result, although high schoolers, college kids, and adults are certainly not immune to this mental health perk. Set realistic reading goals and work toward them for an easy, painless (and stress-free) way to kick up the spirits when confidence starts wavering.


Neuron published a Carnegie Mellon paper discovering how the language centers of the brain produced more white matter in participants adhering to a reading schedule over the period of six months. Seeing as how this particular tissue structure controls learning, it’s kind of sort of a good thing to be building, especially when it comes to language processing.


Brain flexibility is how the essential organ stratifies itself, delegates tasks, and compensates for damages, and Carnegie Mellon researchers believe reading might serve as a particularly excellent way to encourage this. These discoveries of how the brain organizes itself beg for further insight into the autism spectrum and other conditions that may stem from poor neurological communication.


The physiology of reading itself contributes to better memory and recall, specifically the part involving bilateral eye movement. However, it holds no influence over implicit memories: most of the benefit comes when recalling episodic memories.


Kids and parents who read aloud together enjoy tighter bonds than those who do not, which is essential to encouraging the healthiest possible psychological profile. Along with the cognitive perks, these sessions build trust and anxiety-soothing comfort needed to nurture positive behavior and outlooks.


Listening skills improve reading, and reading improves listening skills, particularly when one speaks words out loud instead of silently. When learning a primary or secondary (or beyond) language in particular, fostering interplay between the two ability sets makes it much easier to soak up vocabulary and grammar.


Once again, any bookish types hoping to claim the full benefit of this cognitive phenomenon gain it via close reading and analysis, not skimming, speed reading, and skipping. Because the activity is far from passive, it challenges the mind to focus, focus, focus: which certainly carries over into other areas of life!


Psychology professionals in the United Kingdom and United States gravitate towards bibliotherapy when treating non-critical patients, thanks to studies printed up in the journal Behaviour Research and Therapy. The practice involves prescribing a library card, which recipients use to check out one of the approved 35 self-help books for 12 weeks; as a supplement (not a replacement) to conventional therapy, it has proven extremely valuable to the clinically depressed and anxious.

Bonus: It’s Good for Authors’ Brains Too!

Yes, who’d have thought that writing could be good for the brain? Slaving away writing seems to be like practicing sports or music, stimulating the brain to be better. Dr Martin Lotze used fMRI to look at novice and experienced writers’ brains – probably to steal ideas for a new book – and how they worked during different writing activities. Some regions of the brain became active only during the creative process, i.e. not while copying, with brainstorming sessions lighting up the vision-processing regions. It’s possible that they were, in effect, seeing the scenes they wanted to write.

But the two groups differed slightly in how their brains worked whilst being creative. Novice writers activated their visual centres, whilst experienced writers showed more activity in regions involved in speech. “I think both groups are using different strategies,” Dr. Lotze said. It’s possible that the novices are watching their stories like a film inside their heads, while the experienced writers are narrating them with an inner voice. Experienced writers also had a region called the caudate nucleus become active, the part of the brain involved in skills that come with practice. In the novices, the caudate nucleus was quiet, showing that practice works the brain.

Some other articles to read:

To block or not to block: that is the question.


The internet is a wonderful place to find information on just about any topic you can imagine and few you can’t. From the latest scientific study to the grumpiest cat, from insightful commentary to rule 34: the internet has it all. The problem is that not everyone is rational, logical, nor well informed, and they still have internet connections and the ability to make webpages and comment on social media.

As someone who tries to share science and knowledge with people, I love to engage and discuss topics. If I can help someone understand or learn something about a complex topic, then I feel like I’ve accomplished something. The more science communicators out there doing the same thing, the slightly better the world becomes. This better understanding leads to better decisions, better ideas, better inventions, better cat photos.

The problem is that not everyone appreciates being told that they are mythtaken or wrong. Others are adamant that they aren’t wrong. People will argue against the overwhelming scientific evidence on topics like climate change (real, man-made, we need to do something about it), genetic modification (a breeding technique, a cool innovation that is more precise and has great potential), modern medicine (seriously!?!), evolution (as solid a theory as gravity), and even the shape of the Earth (yes, flat-Earthers still exist). This anti-science nonsense is thankfully on the losing team; they just aren’t playing with a full deck.

It is these science deniers that are the most frustrating to deal with on social media and the internet. There is no evidence you can show them that won’t be dismissed – often as a conspiracy – and there is no rationality to their arguments. But they can also be very convincing to people who don’t know enough about a topic, which is how myths get started. And that is dangerous: once myths are started, they are very hard to get rid of. So it is actually important to make sure that the science deniers aren’t existing in an echo chamber, which the internet has facilitated to some extent – I’m looking at you, Alex Jones, Mike Adams and Joseph Mercola!

These science deniers can be a menacing drain of time, effort and inner calm. The easiest way to deal with them would be to block them, excise the wound, possibly burn the evidence of their existence. But then the science deniers have won. Their echo chamber is just that little bit more echo-y. But the echo chamber is going to keep echoing regardless, as discussed above. But won’t somebody think of the children!

I really hate blocking people on social media. The science denier drivel may pollute my newsfeeds, but blocking them also leaves me open to my own echo chamber. Sure, I might think I’m good at picking good information from bad, but if my thinking is never challenged, how can I be confident I’m not falling for confirmation bias? I guess this is the Catch-22 of the modern age, but with more cats.

Is science broken?

With the rebirth of Cosmos on TV, Neil deGrasse Tyson and the team have brought science back into the mainstream. No longer is science confined to the latest puff piece on cancer research that is only in the media because a) cancer and b) the researchers are pressuring the funding bodies to give them money. The terms geek and nerd have stopped being quite the derogatory terms they once were. We even have science memes becoming as popular as Sean Bean “brace yourself” memes.

Sean dies

This attention has also cast a light on the scientific process itself, with many non-scientists and scientists passing comment on the reliability of science. Nature has recently published several articles discussing the reliability of studies’ findings. One article shows why the hard sciences laugh at the soft sciences, talking as it does about basic statistical errors. I mean, have these “scientists” never heard of selection and sample bias? Yes, there is a nerd pecking order, and it is maintained through pure snobbishness, complicated-looking equations, and how clean the lab coat remains.


As a science nerd, I feel the need to weigh in on this attack on science. So I’m going to tear apart, limb by limb, a heavy hitting article: Cracked.com’s 6 Shocking Studies That Prove Science Is Totally Broken.

To say that science is broken or somehow unreliable is nonsense. To say that peer review or statistical analysis is unreliable is also nonsense. There are exceptions to this: sometimes entire fields of study are utter crap, sometimes entire journals are just crap, sometimes scientists and reviewers suck at maths/stats. But in most instances these things are not-science, just stuff pretending to be science. Which is why I’m going to discuss this article.

A Shocking Amount of Medical Research Is Complete Bullshit
#6 – Kinda true. There are two problems here: media reporting of medical science and actual medical science. The biggest issue is the media reporting of medical science, hell, science in general. Just look at how the media have messed up the reporting of climate science for the past 40 years.

Of course, most of what is reported as medical studies are preliminary studies. You know: “we’ve found a cure for cancer, in a petri dish; just need another 20 years of research and development, and a boatload of money, and we might have something worth getting excited about.” The other kind that get attention aren’t proper medical studies but spurious claims by someone trying to peddle a new supplement. So this issue is more about the media being scientifically illiterate than anything.

Another issue is the part of medical science that Ben Goldacre has addressed in his books Bad Science and Bad Pharma. Essentially you have a bias toward positive results being reported. This isn’t good enough. Ben goes into more detail on this topic and it is worth reading his books on this topic and the Nature articles I previously referred to.

Many Scientists Still Don’t Understand Math
#5 – Kinda true. Math is hard. It has all of those funny symbols and not nearly enough pie charts. Mmmm, pie! If a reviewer in the peer review process doesn’t understand the maths, they will often reject papers, calling the results a black box. Other times the reviewers will fail to pick up the mistakes made, usually because they aren’t getting paid and that funding application won’t write itself. And that’s just the reviewers. Many researchers don’t do proper trial design and often pass off analysis to specialists who have to try and make the data work despite massive failings. And the harsh reality is that experiments are always a compromise: there is no such thing as the perfect experiment.

Essentially, scientists are fallible human beings like everyone else. Which is why science itself is iterative and includes a methods section, so that results are independently confirmed before being accepted.

And They Don’t Understand Statistics, Either
#4 – Kinda true, but misleading. How many people understand the difference between statistically significant and actually significant? Here’s a quick example:

When you test for something at the 95% confidence level you still have a 1 in 20 chance of a false positive, or of natural variability arising in the test. Some “science” has been published that exploits this by going on a statistical fishing trip (e.g. the anti-GM paper). But there is another aspect: if you get enough samples, and enough data, you can actually get a statistically significant result but not have a significant result. An example would be testing new fertiliser X and finding a p value of 0.05 (i.e. significant) that the grain yield is 50 kg/ha higher in a 3 tonne per hectare crop. Wow, statistically significant, but at 50 kg/ha, who cares?!
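Both halves of that point are easy to demonstrate with a quick simulation. This is a sketch with invented numbers: a ~3 t/ha crop, a made-up yield standard deviation of 500 kg/ha, and the usual two-sided 5% z-test cutoff of 1.96:

```python
import math
import random

random.seed(1)

def z_score(sample_a, sample_b, sigma):
    """Two-sample z statistic (known sigma, equal sample sizes)."""
    n = len(sample_a)
    diff = sum(sample_a) / n - sum(sample_b) / n
    return abs(diff) / (sigma * math.sqrt(2 / n))

SIGMA = 500  # invented yield standard deviation, kg/ha

# 1) With no real difference, a 5% test still "finds" one about 1 time in 20.
false_positives = 0
for _ in range(1000):
    control = [random.gauss(3000, SIGMA) for _ in range(30)]
    treated = [random.gauss(3000, SIGMA) for _ in range(30)]  # identical crop
    if z_score(control, treated, SIGMA) > 1.96:
        false_positives += 1
print(f"False positive rate: {false_positives / 1000:.3f}")  # roughly 0.05

# 2) A true but trivial 50 kg/ha bump becomes "significant" once n is huge.
control = [random.gauss(3000, SIGMA) for _ in range(10_000)]
treated = [random.gauss(3050, SIGMA) for _ in range(10_000)]
print(z_score(control, treated, SIGMA) > 1.96)  # True: significant, but who cares?
```

The first loop shows the 1-in-20 false positive rate when there is no real effect; the second shows that with enough samples even an agronomically trivial difference comes out “statistically significant”.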

But these results will be reported, published, and talked about. It is easy for people who haven’t read and understood the work to get over excited by these results. It is also easy for researchers to get over excited too, they are only human. But this is why we have the methods and results sections in science papers, so that calmer, more rational heads prevail. Usually after wine. Wine really helps.

Scientists Have Nearly Unlimited Room to Manipulate Data
#3 – True but misleading. Any scientist *could* make up anything they wanted. They could generate a bunch of numbers to prove that, for an example of bullshit science, the world is only 6,000 years old. But because scientists are a skeptical bunch, they’d want some confirming evidence. They’d want that iterative scientific process to come into play. And the bigger the claim, the more evidence they’d want. Hence scientists generally ignore creationists, or just pat them on the head when they show up at events: aren’t they cute, they’re trying to science!

But there is a serious issue here. The Nature article I referred to was a social sciences study, a field that is rife with sampling and selection bias. Ever wonder why you hear “scientists say X is bad for you”, then a year later it is “scientists say X is good for you”? Well, that is because two groups were sampled and correlated with X, and, as much as we’d like it to, correlation doesn’t equal causation. I wish someone would tell the media this little fact, especially since organic food causes autism.

Other fields have other issues. Take a look at health and fitness studies and spot who the participants were: generally they are university students who need the money to buy tinned beans and beer. Not the most representative group of people, and often they are mates with one of the researchers, all 4 of them. Not enough participants and a biased sample: not the way to do science. The harder sciences are better, but that isn’t to say there aren’t limitations. Again, *this is why we have the methods section, so that we can figure out the limitations of the study.*

The Science Community Still Won’t Listen to Women – Update
#2 – When I first wrote this I disagreed, but now I agree, see video below. As someone with a penis my mileage on this issue is far too limited. That is why it was only when a few prominent people spoke out about this issue that I realised science is no better than the rest of society. It hurts me to say that.

There is still a heavy bias toward men in senior positions at universities and research institutes, women get paid less, women are assumed to be less competent scientists, and apparently it is okay to ogle female scientists’ boobs… Any of these sound familiar to the rest of society? This is gradually changing, but you have to remember what age those senior people are and what that generation required of women (quit when they got married, etc). That old guard may have influence, but they’ll all be dead or retired soon, and their influence will be confined to the letters to the editor in the newspaper. After seeing the video below, especially the way the question was asked, I think it is clear that the expectations placed on women create barriers into and through careers in science (racism creates similar barriers and is one I see as a big issue). So it starts long before people get into science, then it continues through attrition.

Fast forward to 1:01:31 for the question and NGT’s answer (sorry, embed doesn’t allow time codes).

Recently there has been a spate of very public sexist science moments. Whether it be telling female scientists they should find a male co-author to improve their science, or Nobel Laureates who don’t want to be distracted by women in the lab, it is clear that women in science don’t get treated like scientists. Which is why I find the Twitter response to the Tim Hunt debacle, #distractinglysexy, to be exactly the sort of ridicule required. Recent events at least imply that repercussions are now occurring.

Scientists are meant to be thinkers, they are meant to be smart, they are meant to follow the evidence. They aren’t meant to behave like some cretin who hangs out on the men’s rights subreddits. Speaking of which, watch science communicator Emily Graslie discuss the comments section of YouTube.

It’s All About the Money
#1 – D’uh and misleading. Research costs money. *This is why we have the methods section, so that we can figure out the limitations of the study.* Money may bring in bias, but it doesn’t have to, nor does that bias have to be bad or wrong. Remember how I said above that science is an iterative process? Well, there is only so big a house of cards that can be built on a pile of bullshit before it falls down in a stinky mess. Money might fool a few people for a while (e.g. climate change denial) but science will ultimately win.

Ultimately, science is the best tool we have for finding out about our reality, making cool stuff, and blowing things up. Without it we wouldn’t be here, this article wouldn’t be possible, and we wouldn’t know what a Bill Nye smackdown looks like. Sure, there is room for improvement, especially in the peer review process and funding arrangements, and science is flawed because it is done by humans, but science is bringing the awesome every day: we have to remember that fact.

I think you’re Mythtaken: Guns #2 – The second armour-piercing round

After a recent discussion about gun myths, I realised that my last blog post hadn’t covered anywhere near enough of the myths that are floating around (this article will mainly be about US guns, but parallels from the resources and science cited can be drawn to other countries). This is obviously because stuff is much easier to make up than to research, just ask Bill “tides go in, tides go out” O’Reilly. One of the big problems with gun research in the US is that the National Rifle Association has effectively lobbied to cut off federal funding for research and to stymie data collection and sharing on gun violence. As a result there is a lack of hard numbers, and research often tends to be limited in scope. Scope: get it? So like a lost rabbit wandering onto a shooting range, or a teenager wearing a hoody, it’s time to play dodge with some of these claims.

Myth: Guns make you safer, just like drinking a bit of alcohol makes you a better driver.

The myth I hear the most often is that guns make you safer; just like the death penalty is a great deterrent, surveillance cameras stop crime, and the internet is a good source of medical advice. The problem with this myth is that people like having a safety blanket to snuggle. What they don’t realise is that guns don’t make you safer: they make you 4.5-5.5 times more likely to do something stupid to someone you know and love than to be used for protection.

I want to be clear here: there’s nothing wrong with going shooting at the range, or hunting vermin. The problem is thinking that you can use a gun for self-defence, when it actually makes the violence problem worse. That gun escalates the violence because people have it there: why not use it? Thus criminals enter an arms race and adopt a shoot-first policy.

Owning a gun has been linked to higher risks of homicide, suicide, and accidental death by gun. For every time a gun is used in self-defense in the home, there are 7 assaults or murders, 11 suicide attempts, and 4 accidents involving guns in or around a home. 43% of homes with guns and kids have at least one unlocked firearm, and one experiment found that a third of 8-to-12-year-old boys who found a handgun pulled the trigger, which is just plain unsafe.

As for carrying around a gun for self-defence, well, in 2011, nearly 10 times more people were shot and killed in arguments than by civilians trying to stop a crime. In one survey, nearly 1% of Americans reported using guns to defend themselves or their property. However, a closer look at their claims found that more than 50% involved using guns in an aggressive manner, such as escalating an argument. A Philadelphia study found that the odds of an assault victim being shot were 4.5 times greater if they carried a gun. Their odds of being killed were 4.2 times greater.

It is even worse for women. In 2010, nearly 6 times more women were shot by husbands, boyfriends, and ex-partners than murdered by male strangers. A woman’s chances of being killed by her abuser increase more than 7 times if he has access to a gun, and that access could be the woman keeping one around just in case her attacker needs it. One US study found that women in states with higher gun ownership rates were 4.9 times more likely to be murdered by a gun than women in states with lower gun ownership rates; funny that.

There is also the action hero delusion that often gets trotted out when talking about guns for self-defence. The idea is that everyone is a good guy, so give them a gun and you have a bunch of action heroes ready to fight off the forces of evil. This has worked so well that all governments are thinking of getting rid of the military….

The reality is that the average person is not an action hero and would fail miserably in a high stress situation with actual bad guys. You only have to look at the statistics:

  • Mass shootings stopped by armed civilians in the past 30 years: 0
  • Chances that a shooting at an ER involves guns taken from guards: 1 in 5

I’ve seen several examples cited of “citizens” shooting someone who looked intent on killing everyone they could (with a gun…). But in every instance the “citizen” was actually an off-duty police officer, or a person in law enforcement, or someone in the military. In other words, the people who stop mass shootings or bad-guys with guns, are trained professionals.

There have also been a few studies done that claim X million lawful crime preventions, therefore guns must be good; notably by researchers Lott and Kleck. To say that their research is flawed is like saying Stephen King has sold a few books. Lott’s work has been refuted for extrapolating flawed data, and Kleck’s research has similarly been refuted by many peer reviewed articles.

Myth: Guns don’t kill people, people kill people, quite often with a gun, because punching someone to death is hard work.

If this myth were true we wouldn’t send troops to war with weapons. I get where people are coming from with this myth, because the gun itself is an inanimate object and is only as good or bad as the person using it. Yes, I did just quote the movie Shane: thanks for noticing. But here is the thing: in a society we are more than just a bunch of individuals, we are a great big bell-curve of complexity. So when you actually study the entire population you find that people with more guns tend to kill more people—with guns. In the US, states with the highest gun ownership rates have a gun murder rate 114% higher than those with the lowest gun ownership rates. Gun death rates also tend to be higher in states with higher rates of gun ownership, and generally lower in states with restrictions such as firearm type restrictions or safe-storage requirements.

Sources: Pediatrics; Centers for Disease Control and Prevention

Gun deaths graph: The three states with the highest rate of gun ownership (MT, AK, WY) have a gun death rate of 17.8 per 100,000, over 4 times that of the three lowest-ownership states (HI, NJ, MA; 4.0 gun deaths per 100,000).

The thing is that despite guns being inanimate objects, they affect the user/owner’s psyche. It’s like waking up one morning with a larger penis or bigger boobs: you not only want to show them off, you act differently as a result. Studies confirm this change in behaviour. Drivers who carry guns are 44% more likely than unarmed drivers to make obscene gestures at other motorists, and 77% more likely to follow them aggressively. Among Texans convicted of serious crimes, those with concealed-handgun licenses were sentenced for threatening someone with a firearm 4.8 times more often than those without. In US states with Stand Your Ground and other laws making it easier to shoot in self-defence, those policies have been linked to a 7 to 10% increase in homicides.

Now people also like to try and red herring the argument against guns by pretending that video games or mental health is the problem. The NRA tried to claim video games were to blame after the Newtown shootings. Of course we’d be able to see this relationship by looking at gun ownership versus video game playing, like by comparing the USA to Japan.

United States vs Japan:

  • Per capita spending on video games: $44 (US) vs $55 (Japan)
  • Civilian firearms per 100 people: 88 vs 0.6
  • Gun homicides in 2008: 11,030 vs 11

Sources: PricewaterhouseCoopers; Small Arms Survey (PDF); UN Office on Drugs and Crime
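Turning those raw homicide counts into per-capita rates makes the comparison even starker. A quick back-of-the-envelope calculation (the 2008 population figures are my rough additions, not from the sources above):

```python
# Approximate 2008 populations (my assumption, not from the cited sources)
us_pop, jp_pop = 304_000_000, 128_000_000
us_gun_homicides, jp_gun_homicides = 11_030, 11  # 2008 figures from the table

us_rate = us_gun_homicides / us_pop * 100_000
jp_rate = jp_gun_homicides / jp_pop * 100_000
print(f"US: {us_rate:.2f} gun homicides per 100k")
print(f"Japan: {jp_rate:.3f} gun homicides per 100k")
print(f"Ratio: roughly {us_rate / jp_rate:.0f}x")
```

Despite Japan spending more per person on video games, its gun homicide rate is hundreds of times lower.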

The thing is, controlling guns has been shown to work, although there are other factors in play, and policing is still key. But when gun control has been shown to reduce firearm deaths by 1-6 per 100,000, the case is pretty much closed.

Myth: They’re coming for your guns to stop our freedom and tyranny and democide and Alex Jones said so and aliens made me do it!

As I stated above, the statistics on guns and gun violence are hazy. No one knows the exact number of guns in America, but it’s clear there’s no practical way to round them all up (never mind that no one in Washington is proposing this). Those “freedom” loving gun owners – all 80 million of them – have the evil government out-gunned by a factor of around 79 to 1. If the government were coming for the guns, you’d think they’d have done so before being this grossly out-gunned.

Sources: Congressional Research Service (PDF), Small Arms Survey

Yes, 80 million gun owners is a minority! I find it interesting that from 1989 to 2000 there was a decline in gun ownership from 46% to 32%. Since then ownership has hovered between 34 and 43% for 2000-2011 (notably the high point in 2007 came after the Virginia Tech shooting, which the NRA did a lot of campaigning around), which explains why the decline didn’t continue. Now compare those ownership rates to the recent report from the US Bureau of Justice Statistics that sums up the rates of gun violence. You can clearly see a decline in gun violence from 1993 to 2000, followed by a plateau that has pretty much held since. This is confirmed by other studies. This is the important take-home point: all the research shows violence, and gun violence, is in decline. The idea that people need a gun for protection is becoming more and more ridiculous. This fits with the global decline in violence, and trends seen in countries like Australia (more Aussie stats here). On a side note, in the last lot of statistics you see that the more female, educated, non-white, and liberal you are, the less likely you are to own a gun.

So scare campaigns may work to boost sales of guns for a while, but overall, most people don’t want or need a gun. The long term trend has nothing to do with the government coming for the guns and everything to do with people realising they don’t need one and prefer to read a good book, or watch a movie, instead of going to the range.

The simple fact is that more guns in society is the best predictor of death, thus it is time to rethink the reasons for owning a gun, especially if that reason is in case you have to John McClane a situation.


Talent, ability and being awesome


Born to write? Born to be an athlete? Born to be a rocket scientist? People love to talk about “natural” ability or talent as the be all and end all of achievement. Since I actually own a genetics text book – it props up my DVD collection on the shelf – and once watched someone do manual labour, I feel qualified to comment on the talent vs. work debate.

Genetics is a big, complicated, topic, so I’m going to provide a facile overview of it. Genetics is that thing that means some people have higher baselines, are higher responders to training/learning, and are likely to achieve more (see this and read this for sports examples). For some the opposite is true, they have low baselines, don’t respond well to training/learning, and are likely to suck no matter what they do. There isn’t much you can do about your genetics, unless you happen to have a time machine and can play matchmaker to get better parents.

But that isn’t to say that you shouldn’t try to get good at stuff. Until you are tested and start training, you don’t really know what your “ability” is. And even if you continue to suck, you will suck less than you did before, which means you will be better than those around you who didn’t even try. Take an example from sports – because people actually do science on athletes, the arts talk about their feelings too much – athletes tend to live longer than average because they are more likely to be fitter, which lowers cardiovascular mortality. You don’t get fit sitting on a couch, watching TV, snacking on corn chips, in your underwear: you have to train.

So let’s take this into the writing field. You may have been born with a massive brain, nimble fingers, and an imagination that rivals college students tripping on acid, but that doesn’t mean much if you never learn to read or write, are too poor to have access to writing materials, or lack the persistence to share that writing with the world. All that talent and ability counts for nothing if you don’t do something with it. You have to train. The difference between the talented individual and the untalented individual can often just be a lot of hard work by the untalented. I mean, who has sold more books: James Patterson or any of the Booker Prize winners?*

But let’s not get carried away. We have to acknowledge that any “talent” is a GxE interaction (genetics by environment interaction). Genetics, or that innate ability, is still a factor that we can’t dismiss, but so is the environment. So all of that skill development and training will come more easily, more quickly, and possibly progress further for some, but that isn’t an excuse for not doing the hard work.

See also: http://emilyjeanroche.blogspot.com.au/2014/02/WritingSkills.html

* Not that I’m insinuating that winning a Booker Prize actually makes you a talented or good writer. I actually use those prize lists to figure out what not to read.

Sony exits ebook biz

I don’t know if you’ve heard, but there are these things called electronic books now, e-books for short. Now these are brand new (invented 1971, possibly as early as 1949) and understandably the devices to read them are even newer (first e-reader released 1998). So it may come as a shock to many of you that quite a few people read e-books on e-readers now instead of paper books. It will come as even more of a shock to you that the Sony e-reader has become a thing of the past.

That’s right my fellow book lovers – lovers in the adoration sense, not in the brace yourself, oh yeah, uh-huh, uh-huh, chikka bow-wow, sense – it appears that Sony has decided it doesn’t want a dedicated e-reader, in fact it doesn’t even want an e-book store. They have announced that they are pulling out and customers are being transferred to the Kobo store.

Of course, I don’t think anyone is particularly surprised by this decision. Raise your hand if you’ve ever actually seen a Sony e-reader. Now keep it up if you’ve actually owned one. If you can see anyone with their hand still raised, I’d question how you manage to turn people’s web cams on. Sony has been playing at the bottom end of the market for e-readers and e-books for quite a while now. The chart below from Goodreads shows Sony were picking up Kobo’s scraps in the market.

So what does this mean for us readers? Well, it means the big dedicated e-readers remain, the Kindle and Nook. It also means Kobo could pick up a bit more of the e-reader and e-book market. But that isn’t particularly interesting to me, I’ll discuss why in a moment. What is interesting is the Sony e-reader is probably the victim of the modern device market.

I read an interesting tech article that was discussing mobile phones. They pointed out that the companies making money on phones weren’t actually making money on the phone sales, especially at the mid to lower price points, but instead cashing in on the app stores and downloads. The phone is a loss leader for the software business they run. Nokia and their deal with Microsoft is a classic example of this, with Nokia battling to compete for market share and profits.

Translate that to e-readers and the same thing applies. It was even worse for Sony, as the other competitors were/are selling their Kindle, Nook, Kobo, etc, as a loss leader to get people using their store or affiliates. This meant that the big stores attract the users, who buy the associated tech, which locks them to the stores (to some extent at least), leading to e-book sales profits. Terrific! As long as you don’t think too hard about the slave labour making the devices.

The reason I don’t find the market positioning of the e-reader devices of much interest is down to a few things. The first is a little statistic that has been showing up in surveys from Goodreads and the Pew Research Center; namely that 29-37% of people read books on their phone (23% on a tablet). A dedicated reading device is only really in the book space now because the e-reader screen causes less eye fatigue. At the moment! Watch this bubble burst as phones and tablets eat away at the readability advantage, such that e-reader screens become redundant. Mobile devices also don’t have to be linked to any one e-book store, so interesting times are on the horizon.

Another view on e-readers future: http://techland.time.com/2013/01/04/dont-call-the-e-reader-doomed/

Nye vs. Ham: science vs. nonsense

nye vs ham

There is a general rule in arguments: don’t argue with stupid people, they drag you down to their level and beat you with experience. That is pretty much the problem scientists and experts have when debating anti-science proponents – such as creationists, anti-vaccinators, anti-GM campaigners, climate change deniers, etc. Yet Bill Nye the Science Guy decided that, in the interest of science and education, he would debate a creationist.

The debate started with Bill Nye and Ken Ham each giving a 5-minute opening statement. Then Ken went into his 50-minute argument, which is when my cushion really started to earn its keep protecting my desk from damage.

I really find it hard to fathom how anyone can be credulous of Ham’s statements. In his 50 minutes he used all sorts of logical fallacies, most notably his videos of “creationist scientists” as arguments from authority. But it wasn’t this that really got the lump on my forehead rising, it was his use of “evidence” that simultaneously refuted his own argument. One example was the phylogenetic tree for dogs. Ham argued that the rise of Canis lupus familiaris from a wolf (yeah, just one, let’s just let that one go through to the keeper) was what you would see from biblical predictions of dogs speciating after the global flood 4,000 years ago. Just one problem. Teeny tiny. The figure showed dogs evolving from a group of wolf ancestors over the course of 14-15,000 years.

He didn’t just do this once, he did it repeatedly. Another example arose when he was talking up one of his creationist pals who helped design a satellite (or something, I didn’t really care because it was irrelevant). He used the example of how scientists had been debating how old the universe was: they couldn’t agree on the age. The part he left out about that particular debate was that the age of the universe was somewhere around 13.8 billion years old (± 37 million years), and they had a bunch of data and were trying to make sure the errors were accounted for. The debate was about the difference in the confidence range (or error margin) between the Planck satellite measures and the Wilkinson Microwave Anisotropy Probe measures. That error margin alone is roughly 6,000 times greater than the age of the Earth that Ham claims. The universe’s age is still over 2 million times older than Ham’s claim, yet he uses this example as if to give credence to his claims.
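The scale mismatch is easy to check with a few lines of arithmetic, using the figures above:

```python
universe_age = 13.8e9  # years, from the Planck/WMAP debate
error_margin = 37e6    # the +/- on that estimate, in years
ham_age = 6_000        # Ham's claimed age, in years

# The uncertainty alone is thousands of times larger than Ham's figure
print(error_margin / ham_age)    # ~6,167
# And the estimate itself is millions of times larger
print(universe_age / ham_age)    # 2,300,000
```

The disagreement Ham pointed to lives entirely inside the error bar, and that error bar is itself vastly bigger than his entire proposed history of the universe.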

Now Nye did his best in his 50 minutes to show that Ham’s claims were flawed, but also how evidence and scientific observation and prediction work. Others have claimed, and I agree to an extent, that Nye’s mistake was to try and cover too much ground. If he was talking to a receptive audience he would have destroyed Ham and had the crowd eating out of his hand. But at a creationist museum, with a bunch of science deniers, it would come across as too much information and too confusing. Although Nye’s last couple of minutes pretty much killed the entire debate: trees, rocks, the size of the universe, and the distance from stars combined with the limit of how fast light can travel, all showing that the Earth and Universe are much, much older.

The first rebuttal saw Ham carrying on about “you weren’t there so you don’t know.” Brian Dunning had a great take on this particular argument:

There is a rumor that Bill Nye @TheScienceGuy debated evolution with Ken Ham. Not true. It did not happen, because you weren’t there.

In this first rebuttal, Ham again used evidence that rebutted his own claims, especially when talking about radio-carbon dating. Showing that measurements have error margins, or can be somewhat imprecise, doesn’t negate the fact that the measurements are still many orders of magnitude outside of the age of the Earth claimed by Ham. Then he moved onto saying that the bible is right, everything else is wrong (let’s just ignore that the bible isn’t even consistent with itself, let alone the fact that it is a translation of a translation, thus literal interpretation isn’t supported by biblical scholars).

Nye then rebutted Ham’s statements. His classic put-down was for the claim that every animal and human was vegetarian until they got off the ark: a lion’s teeth aren’t really made for broccoli.* Ba-zing!

Next Ham tried to point out that creationism isn’t his model (then he blames secularists for scientists). This is true, there are other nutters who came up with this crap. But Ham tried to pretend that “scientists” came up with the various creation models (NB: just because a scientist said something, doesn’t make it science or scientific). Then he talks about species and kinds and how Nye was confusing what a kind was. Easy to do when the idea of a kind is bullshit and unsupported by any actual science.

Nye then tore apart the claims about the rise of species from kinds using the basic math involved. He also called bullshit on the ship building skills of ancient desert people. The main point in this rebuttal was that Ham hadn’t addressed Nye’s point adequately, and that Ham’s claims aren’t supported by the majority of religious people, let alone scientists.

My desk and forehead had had enough by this stage, so I didn’t watch the Q&A section, but it can be viewed here.

The point I wanted to make from this was that Ham had a huge advantage in this discussion. I’m not talking about the home team venue, nor the credulous crowd, I’m talking about the lack of need for evidence. All Ham had to do, and pretty much what he did, was seed doubt in science and then declare “creationism wins” (which might as well be “God did it”). This is the problem with any debate with anti-science: the scientists have to prove their case with evidence and logical reasoning; the anti-science side only has to sow some doubt. And that doubt can vary between legitimate claims through to flat out lies, it doesn’t matter. So Nye shouldn’t have taken the debate.

But Nye was right to take the debate.

Hang-on. Have you hit your head against your desk a few too many times during that debate?

No. Bill Nye is a well known and respected science communicator. He went into the belly of the beast to stand in the echo chamber and sow some doubt (how’s that for a metaphor-fest?). As he stated himself, Nye knows that America (and the world, but let’s allow him his patriotism) needs science and innovation for the future of society. Creationism and other anti-science nonsense undermine this. If no-one challenges the group-think and echo chamber of the creationists (et al.) then they will continue to be misled and misinformed by people like Ken Ham. You can’t have someone reject evolution yet rely on germ theory for modern medicine. You can’t have someone reject radio-carbon dating yet use medical imaging. That is incompatible, that is a rejection of reality, and it leads to stupid stuff happening that curbs the development of new technologies and advancements to society.

Other opinions on who won:
Shane proposes that Nye needed to pick a couple of points to hammer home. This feeds into science communication research that shows you can get distracted from the main narrative with too many points. 

Christian Nation have Bill Nye winning the debate 92% to 8%. 

Update: Richard Lenski has blogged about the debate and Ham’s use of his E. coli evolution work. Not surprisingly, Ham completely misrepresented the work. As I said above, Ham did this with many examples in his presentation. It is important that people realise just how deceptive Ham’s statements and claims are.

Update: It is clear that many of Ham’s supporters were not listening to Bill Nye and are wilfully ignorant. This Buzzfeed article (yeah, I know, Buzzfeed) brings up a lot of the points that Nye addressed, explained clearly and simply, showing they didn’t listen to Nye, and slept through school.

Update: This article makes a nice statement that ties into some of my points about why Nye took the debate. To quote:

It brought new attention to YEC (Young Earth Creationism) to exactly the people we need to see it – the large swath of Christian and other religious parents who think of Intelligent Design or Guided Evolution or some other pseudo-scientific concept when they imagine “teaching the controversy”. These people are embarrassed by people like Ken Ham. They know the earth isn’t 6000 years old, and they understand just how impossible it is to square that belief with observable phenomena.

Update: I quoted Brian Dunning above and he wrote an article for Skeptoid about not debating anti-science people. I agree and disagree with his points as you will see from what I’ve written here and what Brian has written in his article. We can’t just preach to the choir, but we can’t provide legitimacy to nonsense either.

Update: The ever awesome Potholer54 just posted a video on one point about evolution and Ken Ham’s rebuttal of his own arguments. Worth watching.

* Okay, not the best point to make, as teeth aren’t definitive of diet, but if the comment is viewed as being representative of animal physiology overall, then it is a very valid putdown of the vegetarian claims.

Mythtaken: Shark Attack Deaths

Ever since Spielberg made us scared of seeing any more Indiana Jones films, people have felt better about blaming him for the hysteria around sharks.

shark tears

Recently in my home state of Western Australia there has been a decision made to cull sharks because some people have been killed by them. Clearly we should blame sharks for just wanting a hug and not humans for dressing up like shark food. This is a stupid decision and I’m about to outline why we can’t even tell if there have been more shark deaths, let alone whether a cull would actually work, let alone whether you’d know if the cull does anything. It all comes down to statistics. Well, that and media beat-ups to sell advertising space.

You’d honestly think that there had been a change in the number of people dying in Australia from shark attacks to justify a shark cull. Well, the official stats show there hasn’t been an increase in deaths from shark attacks. In fact the deaths are so few, and the noise around the long-term average of 1.38 deaths per annum (2000-2012) so large, that any increase or decrease in deaths is impossible to assign any significance to (see chart below). Three deaths in a year (2000): could be an anomaly. Zero deaths the year after (2001): likely regression to the mean. Number of deaths from the most ferocious animal on the planet, the bee: 10 per year.
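To see why single-year swings mean nothing at an average of 1.38 deaths a year, here’s a sketch assuming deaths follow a Poisson process (a common model for rare, independent events; the modelling assumption is mine, not from the official stats):

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events when the long-term average is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 1.38  # long-term average shark deaths per year (2000-2012)
for k in range(4):
    print(f"P({k} deaths in a year) = {poisson_pmf(k, lam):.2f}")

p3plus = 1 - sum(poisson_pmf(k, lam) for k in range(3))
print(f"P(3 or more deaths) = {p3plus:.2f}")
```

Under this model roughly one year in six would see 3 or more deaths purely by chance, so a “bad year” tells you nothing about a trend.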

Graph of Aussie shark attacks 2000-2012. Blue is total encounters, yellow is non-fatal, red is fatal. Trend lines for total and fatal.


What you do see in the data is a slight increase in the number of attacks. If you look at the number of attacks and fatalities since 1900, there has been a general increase in the number of shark attacks, but a decrease in the fatalities from shark attacks. It’s almost as though there are more people in the world and more of them bobbing up and down in the ocean in seal costumes, possibly on a tasty cracker.

Graph: shark attacks since 1900, by decade.

International Shark Attack File data, Florida Museum of Natural History

Now this is interesting for the world and Australia, as it appears that despite our best efforts as humans, sharks aren’t taking revenge for the 100 million of them we kill each year. But this is about a shark cull in Western Australia: what’s happening there? Well, these tables say it all really:

Unprovoked Cases Since 1791:

State # Cases Fatal Injured Uninjured Last Fatality
NSW 243 68 (27.9%) 120 55 2013 Coffs Harbour
QLD 251 82 (32.7%) 151 18 2011 Fantome Island
WA 92 20 (21.7%) 57 15 2013 Gracetown
SA 48 18 (37.5%) 23 7 2011 Coffin Bay
VIC 45 9 (20%) 27 9 1987 Mornington Peninsula
TAS 15 3 (20%) 8 4 1993 Tenth Is, Georgetown
NT 10 2 (Duh) 6 2 1938 Bathurst Island
Total 704 202 (28.7%) 392 110 (Revised  28/1/2014)

Provoked Cases Since 1832:

# Cases Fatal Injured Uninjured
Total 190 15 129 46

Western Australia accounts for ~13% of unprovoked shark attacks (92 of 704) and ~10% of the deaths (20 of 202). When we look at the 2012 data, we see that WA is having a greater share of the Australian attacks and accounted for all of the fatalities in Australia. The terms “bigger population”, “longer coastline”, and “more cashed-up bogans” come to mind.
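As a quick sanity check, the shares implied by the “Unprovoked Cases Since 1791” table above can be recomputed from its own numbers:

```python
# Cases and fatalities per state, copied from the table above
# (unprovoked cases since 1791; totals 704 and 202).
cases = {"NSW": 243, "QLD": 251, "WA": 92, "SA": 48,
         "VIC": 45, "TAS": 15, "NT": 10}
fatal = {"NSW": 68, "QLD": 82, "WA": 20, "SA": 18,
         "VIC": 9, "TAS": 3, "NT": 2}

total_cases = sum(cases.values())  # 704
total_fatal = sum(fatal.values())  # 202

wa_case_share = cases["WA"] / total_cases    # ~13% of attacks
wa_fatal_share = fatal["WA"] / total_fatal   # ~10% of deaths

print(f"WA share of attacks: {wa_case_share:.1%}")  # 13.1%
print(f"WA share of deaths:  {wa_fatal_share:.1%}")  # 9.9%
```

So WA’s share of fatalities is actually a touch below its share of attacks.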

Australian Shark Encounter Statistics for 2012:

State Cases Recorded Fatal Injured Uninjured
NSW 5 0 3 2
QLD 1 0 1 0
SA 1 0 1 0
WA 5 2 2 1
VIC 1 0 1 0
TAS 1 0 1 0
NT 0 0 0 0
TOTAL – Unprovoked 14 2 9 3
TOTAL – Provoked 8 0 5 3
All Cases 22 2 14 6

So there is no actual proof that more deaths are occurring from shark attacks, and definitely no trend toward more deaths, but there is a significant increase in the number of media reports on those deaths (citation needed). Even on a state-by-state basis there isn’t any trend in deaths. But there is a trend towards more shark incidents. What we are actually seeing is an increase in the number of people dressing up like seals/shark food (scuba divers and surfers).

Circumstances affecting shark / human interactions:
The number of shark-human interactions occurring in a given year correlates with human population increases and the amount of time humans spend in the shark’s environment. As Australia’s population continues to increase and interest in aquatic recreation rises, it would realistically be expected that there will be an increase in the number of shark encounters.

Let’s put that in perspective: Australians have a 1 in 3,362 chance of drowning at the beach and a 1 in 292,525 chance of being killed by a shark in their entire lifetime. In Australia there are 1.38 deaths per year from sharks, 121 deaths per year from drowning at the beach, and 1,193 deaths per year from driving. We’re more likely to die from all the stupid shit we do than from sharks. So why have a shark cull?
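Using just the annual figures quoted above, the relative risks work out like this:

```python
# Annual Australian death counts, as quoted in the post.
shark_deaths_per_year = 1.38
drowning_deaths_per_year = 121
driving_deaths_per_year = 1193

drowning_ratio = drowning_deaths_per_year / shark_deaths_per_year
driving_ratio = driving_deaths_per_year / shark_deaths_per_year

print(f"Drowning at the beach kills ~{drowning_ratio:.0f}x "
      f"more Australians than sharks do")  # ~88x
print(f"Driving kills ~{driving_ratio:.0f}x "
      f"more Australians than sharks do")  # ~864x
```

Roughly 88 times as many people drown at the beach, and roughly 864 times as many die on the roads, as are killed by sharks.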

There is no real reason to have a shark cull. We already kill 100 million of the things annually anyway. What we actually need to do is look at where the sharks are hunting, whether their prey has moved (and if so, why), and whether less prey is available such that sharks are seeking alternative food. A shark cull with drum lines and nets is actually likely to kill off dolphins, turtles, rays, and endangered shark species, which is why fisheries researchers don’t support the cull.

Update: I neglected to mention that other states in Australia have been using baiting and nets for decades: Queensland since 1962 and New South Wales since 1937. Reports are not complimentary of either the Queensland or the New South Wales program. To quote:

…the Fisheries Scientific Committee is of the opinion that the current shark meshing program in New South Wales waters adversely affects two or more threatened species, populations or ecological communities and could cause species, populations or ecological communities that are not threatened to become threatened.

And (okay, I’ve cherry-picked this a bit; read the whole report on how we are overfishing, killing shitloads of sharks, destroying the fisheries, and adding baiting on top of this):

The main pressures on grey nurse sharks appear to be fishing activities and shark control programs… The biological susceptibility of sharks to over fishing, evidence for increasing fishing pressure and lack of information have given rise to increasing concern about the sharks and rays of the Reef.

Essentially, shark baiting, whilst paling in comparison to the 100 million sharks killed for their fins annually, is another pressure that endangered species don’t need, especially when the baiting is still killing other endangered animals, not just sharks.

For more, read these articles:

Can’t we all just get along: DTB vs E-book

Print vs ebook infographic
The big take-home from this infographic is that readers are more interested in reading, not in the format it comes in. I also found it interesting that people read more slowly on an e-reader (which I’d guess is because the screen is smaller and requires more ‘page turns’, breaking reading flow), yet those using e-readers read an average of 9 more books per year (24 vs 15).

In summary: reading is good, go and enjoy a good book.
