Have you ever heard a scientist talk and wondered what the hell they were saying? Did they use the word theory to mean something other than “I reckon”? Well, you’re not alone.
Language is very important to scientists. Without precise language there would be no way for them to write peer-reviewed papers that could send an insomniac to sleep. Communicating science is all about letting everyone in on the data and knowledge that is being accumulated in the endless march forward into the unknown. But because scientists are marching into the unknown, they prefer to make their statements as vague and non-committal as possible. This way, if they are correct they have cautiously alluded to the right answer, and if they are wrong they can pretend their statement was hinting at the correct answer all along.
In keeping with my previous explanations of music reviews and book reviews I have found a chart explaining science terms. This list has helped me, I hope it helps you too.
There is something about music that we all love. By “music” I mean I’m going to discuss the popular stuff that people love to criticise. By “we all” I mean some people, since not everyone likes music, and even music lovers have tastes that differ from the norm. And by “love” I don’t mean the squishy kind. As a music fan, I feel the need to defend modern music, since I quite like some of it.
Recently there have been a number of people disparaging modern music. E.g.:
This isn’t a new argument. Much like the kids these days argument – wave your Zimmer Frames at the sky now – the modern music sucks argument is based around a number of cognitive biases. Survivorship bias is one part, in that we only remember the music that lasts, and we certainly don’t remember the bad stuff. One of the more interesting parts of our biases is how our musical tastes are formed in our teens and early twenties (14-24). In part, this is when our brains are developing and we are creating our identity. Another part is that everything is still new and exciting, so we get a rush from experiences that we won’t get later in life. So everything after that short time period seems strange and against the natural order of things.*
Pubertal growth hormones make everything we’re experiencing, including music, seem very important. We’re just reaching a point in our cognitive development when we’re developing our own tastes. And musical tastes become a badge of identity. – Professor Daniel J. Levitin (Source)
But of course, rather than discuss the interesting dynamics at play, the discussion has instead latched onto a study that provides “objective proof” that modern music sucks. Rather than directly cite the study, the vitriol-spewers have found a YouTube video that misrepresents the study to suit their preconceived ideas.
So what does the “objective proof” study actually say? Well, after a quick search – seriously, how hard is it for these whiners to link and read the damn study – I found the original study. But rather than provide proof that music has gotten worse since the 1960s, it instead directly states:
Much of the gathered evidence points towards an important degree of conventionalism, in the sense of blockage or no-evolution, in the creation and production of contemporary western popular music. Thus, from a global perspective, popular music would have no clear trends and show no considerable changes in more than fifty years. (Source)
Kinda the opposite of the claim, huh! As a general statement, music hasn’t gotten better or worse, it has pretty much stayed the same over the last 50 years. Nobody has ever noticed that…
Other studies have looked into changes in music over time. A more recent study found that styles of music have changed, often becoming more complex over time. But it isn’t quite that simple. The more popular a style of music becomes, the blander it gets.
We show that changes in the instrumentational complexity of a style are related to its number of sales and to the number of artists contributing to that style. As a style attracts a growing number of artists, its instrumentational variety usually increases. At the same time the instrumentational uniformity of a style decreases, i.e. a unique stylistic and increasingly complex expression pattern emerges. In contrast, album sales of a given style typically increase with decreasing instrumentational complexity. This can be interpreted as music becoming increasingly formulaic in terms of instrumentation once commercial or mainstream success sets in. (Source)
In other words, music sucks because it tries to be popular. And it works.
So saying that modern music sucks is nonsense. What is bland and generic is popular music. Always has been, probably always will be. There is good music being made all the time, you just aren’t going to find it without looking.
I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.
– Douglas Adams, The Salmon of Doubt
I’ve been a fan of martial arts for as long as I can remember. While I’m not a fighter (I’m a pussy) I have great respect for the athletes that beat the crap out of each other for our entertainment. I also love a bit of choreographed hijinx in films as well.
But for some reason there are people who don’t share my love and respect for people punching each other in the face until someone carts them off on stretchers. They decry boxing and MMA as bloody and violent sports that should be banned – won’t somebody please think of the children! At the same time they blithely ignore the injury and deaths from good old harmless football et al.
So I thought that I would run through a few of the statistics and studies on those violent sports to see if the claims stack up. Yeah, you know what’s going to happen, don’t you?
Let’s start by looking at boxers and MMA fighters: just how likely are injuries and knockouts? Well, a study of 1181 MMA competitors and 550 boxers found that boxers were less likely to suffer the cuts and bruises of MMA fighters, but they were more likely to be knocked out.
Boxers were significantly more likely not to experience injury (49.8% vs 59.4%, P < 0.001), whereas MMA fighters were significantly more likely to experience 1 injury (typically contusion/bruising, P < 0.001). Boxers were more likely to experience loss of consciousness (7.1% vs 4.2%, P = 0.01) and serious eye injury (1.1% vs 0.3%, P = 0.02).
This makes sense given that there are more ways to win an MMA bout than by points, KO, or bookmaker-arranged dive. Also, the overall injury rate in MMA fights of 8.5% is surprisingly low for two people beating the crap out of one another.
The overall injury rate was 8.5% of fight participations (121 injuries/1422 fight participations) or 5.6% of rounds (121/2178 rounds). Injury rates were similar between men and women, but a greater percentage of the injuries caused an altered mental state in men. Fighters also were more likely to be referred to the ER if they participated in longer bouts ending in a KO/TKO.
Other studies have found higher rates of injury, 28.6%, but have similar conclusions regarding the types of injuries – facial cuts and bruises – being higher than boxing, but knockouts being lower.
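For those who like to check the arithmetic, the headline rates fall straight out of the raw counts quoted above. A quick sketch (the counts of 121 injuries across 1422 fight participations and 2178 rounds are from the study; the helper function and rounding are mine):

```python
# Back-of-envelope check of the injury rates quoted above.
# The raw counts come from the MMA study; the function is illustrative.

def injury_rate(injuries: int, exposures: int) -> float:
    """Injuries as a percentage of exposures (fights or rounds)."""
    return 100 * injuries / exposures

per_fight = injury_rate(121, 1422)
per_round = injury_rate(121, 2178)

print(f"Per fight participation: {per_fight:.1f}%")  # 8.5%
print(f"Per round: {per_round:.1f}%")                # 5.6%
```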
Part of this is down to the small, fingerless gloves used in MMA. Less padding, that is mainly there to protect the hands from breaking with every punch, leads to a different force being applied to the opponent’s face.
All padding conditions reduced linear impact dosage. Other parameters significantly decreased, significantly increased, or were unaffected depending on padding condition. Of real-world conditions (MMA glove–bare head, boxing glove–bare head, and boxing glove–headgear), the boxing glove–headgear condition showed the most meaningful reduction in most of the parameters. In equivalent impacts, the MMA glove–bare head condition induced higher rotational dosage than the boxing glove–bare head condition. Finite element analysis indicated a risk of brain strain injury in spite of significant reduction of linear impact dosage.
Okay, so how do these nasty violent sport stats compare to less violent sports? What is the chance of dying in MMA or boxing compared to, I don’t know, horse riding? Well, a 2012 study from Victoria found motor sports, fishing, equestrian activities, and swimming all led to more deaths in a year than boxing. That’s right, riding a horse or going fishing is deadlier than standing in a ring getting punched in the face. That brutal and nasty boxing didn’t even make it into the top ten. Hell, even real life is more dangerous, as another study found motor vehicle accidents and falls were far more likely to kill people than boxing or any other sport. It’s almost as though the controlled forum of a boxing ring or MMA octagon somehow stops things getting out of hand.
The Victorian study is only looking at one state in Australia, so hardly representative of the entire world, and only looked at 2001-2007, which isn’t a huge time span, but the results are still very interesting:
There were 1019 non-fatal major trauma cases and 218 deaths. The rate of major trauma or death from sport and active recreation injuries was 6.3 per 100,000 participants per year. There was an average annual increase of 10% per year in the major trauma rate (including deaths) across the study period, for the group as a whole (IRR 1.10, 95% CI, 1.06-1.14). There was no increase in the death rate (IRR=0.94, 95% CI, 0.87-1.02; p=0.12). Significant increases were also found for cycling (IRR 1.16, 95% CI, 1.09-1.24) off-road motor sports (IRR 1.10, 95% CI, 1.03-1.19), Australian football (IRR 1.21, 95% CI, 1.03-1.42) and swimming (IRR 1.16, 95% CI, 1.004-1.33).
Did you take that in? I’ll let the authors summarise:
The rate of major trauma inclusive of deaths, due to participation in sport and active recreation has increased over recent years, in Victoria, Australia. Much of this increase can be attributed to cycling, off-road motor sports, Australian football and to a lesser extent swimming, highlighting the need for coordinated injury prevention in these areas.
But is this representative? UFC boss Dana White likes to compare his sport to the NFL, as MMA fighters are kept sidelined after concussions for longer than their football (should be hand-egg, but let’s not quibble) counterparts. According to a report made by One Sure Insurance, the fact remains that under all that protective gear, NFL players are hitting each other with the (padded) equivalent force of a car crash. Studies of brains show that all contact sports are bad for the brain. Even Soccer (or is that Football?) players are at risk of brain injury. MMA likes to keep its fighters healthy, whilst most sports want their players back next week to go again.
I keep seeing these claims about MMA or boxing being dangerous to health. Meanwhile, football, rugby, gridiron, that skating sport that Canadians jizz over, all seem to have just as much chance of injury or death. Essentially, we can easily list a dozen sports more dangerous than fight sports (seriously, cheerleading: WTF!). But that doesn’t really matter. The main thing is to know the actual risks so that athletes (and spectators) are making a well informed decision. Because as much as horse riding is bad for your health, it is also boring to watch (NB: personal opinion and quite a snobby one at that) so people won’t really care about another death in that sport. Whereas a death in an exciting sport like MMA is much more visceral and likely to have spectators on hand. Hard to compare horse riding to MMA, unless we had Kentucky Thunder step into the octagon.
The main problem I see with the “MMA is violent and dangerous”, “Boxing is a brutal sport”, and “They should be banned” (please, think of the children!) arguments is that they assume fighters are unaware that being punched in the head is bad for their health. Do people really think that fighters enjoy being knocked out or injured? Even the UFC and boxing bodies acknowledge that they need to understand the risks of a career of head-butting people’s fists.
It could be argued that young athletes are unaware of the risks of being an athlete, what with the naivety of believing they are bulletproof and will be young forever – don’t worry kids: you’ll be cool your entire life. People do have a fascinating ability to ignore long term risks in favour of short term gains. UFC champion Georges St-Pierre reportedly retired from MMA due to persistent headaches (maybe). So it is important that athletes are made aware of the risks of injury and long term debilitation, with further research in this area being essential – yes, there is an echo in here. But it also has to be acknowledged that athletes aren’t exactly unaware of the issue. George Foreman was aware of the risks of eponymous naming of kitchen appliances, but the money was good. He was also aware of the risks of being a boxer, and named his kids George so he wouldn’t forget them – “You have to plan for memory loss in boxing.”
Then there are those that see fighting as entertainment for lowlifes and thugs. That somehow only the uneducated or the uncivilised enjoy seeing two people belt each other around the head. This is, of course, just more of the “I don’t like it, therefore it is bad and only poo-poo heads like it” argument that snobs like to make. Nothing like playing the moral and intellectual superiority card to denigrate something. Ignorance is always funnier when someone thinks they are superior.
Some argue, as the AMA does, that the intent of boxing and MMA is to belt each other senseless. If all you see in fighting is two people trying to kill one another, then you aren’t watching. You’re distracted by the superficial aspects of the events. Insights that shallow just show an ignorance of what is happening in the ring. In MMA and boxing there are many ways to win a fight, as already alluded to above. Take for example this famous clip (more here from my friend Stick):
Now the superficial view of the video has us watching Ali wailing on a guy against the ropes. Obvious, but not the reason this is classic boxing footage. Boxing fans would point out Ali’s footwork, the athleticism and skill involved, the amazing speed, and the fact that his opponent is seriously outclassed. Boxing isn’t just about punching your opponent. Watch what happens when someone tries to turn the tables with a flurry of punches thrown at Ali:
This is athleticism defined. This is why Ali is still regarded as such a great fighter, as it takes far more than turning your opponent’s brain to mush to win a fight. And that is what non-fight fans don’t understand. They can’t get past the superficial to see the sport. They are so caught up in being snobbish and outraged that they miss the amazing athletes doing amazing things.
I’ve written before about plots and how there aren’t as many of them as you’d think – somewhere between 1 and 36 depending upon how you want to break them down. Recently there was some research published that analysed 1,737 works of fiction to figure out how their story arcs are constructed. Let’s pretend there is a big difference between a plot and a story arc…
The study used Project Gutenberg – i.e. public domain works – and the results suggest that there are only really six story arcs:
Fall-rise-fall: ‘Oedipus Rex’, ‘The Wonder Book of Bible Stories’, ‘A Hero of Our Time’ and ‘The Serpent River’.
Rise-fall: ‘Stories from Hans Andersen’, ‘The Rome Express’, ‘How to Read Human Nature’ and ‘The Yoga Sutras of Patanjali’.
Fall-rise: ‘The Magic of Oz’, ‘Teddy Bears’, ‘The Autobiography of St. Ignatius’ and ‘Typhoon’.
Steady fall: ‘Romeo and Juliet’, ‘The House of the Vampire’, ‘Savrola’ and ‘The Dance’.
Steady rise: ‘Alice’s Adventures Underground’, ‘Dream’, ‘The Ballad of Reading Gaol’ and ‘The Human Comedy’.
Rise-fall-rise: ‘Cinderella’, ‘A Christmas Carol’, ‘Sophist’ and ‘The Consolation of Philosophy’.
The most popular stories have been found to follow the ‘fall-rise-fall’ and ‘rise-fall’ arcs.
Or for those that prefer to read graphs because it makes them feel intellectual:
For those that just saw a bunch of squiggles in those graphs, what you are looking at is the story arc plotted over time for each story analysed. They’ve broken these into similar groups then added an average (the orange line). You can see how some of the story arcs follow the average more, whilst some types vary more. To see an individual story arc, they picked out Harry Potter as an example in the paper, but have the rest archived here (Project Gutenberg books) and here (a selection of classic and popular novels). As they note:
The entire seven book series can be classified as a “Rags to riches” and “Kill the monster” story, while the many sub plots and connections between them complicate the emotional arc of each individual book. The emotional arc shown here, captures the major highs and lows of the story, and should be familiar to any reader well acquainted with Harry Potter. Our method does not pick up emotional moments discussed briefly, perhaps in one paragraph or sentence (e.g., the first kiss of Harry and Ginny).
This is all well and good, but why is it interesting? Well, aside from using my favourite statistical technique – principal components analysis – this study shows that authors create, and audiences expect, structures that are familiar. The fact that two of the story arcs (rise-fall and fall-rise-fall) are the most common emphasises this point. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful and familiar to us. There is scope to play within the formula, but ultimately we desire stories that fit conventions.
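To make the method a bit more concrete, here is a toy sketch of how an emotional arc can be extracted: score each word for happiness and average over a sliding window to get a curve across narrative time. The actual study used a large crowd-rated happiness lexicon plus principal components analysis and clustering; the six-word lexicon, window size, and example sentence below are all invented for illustration.

```python
# Toy version of the emotional-arc extraction described above: slide a
# window across the text and average a per-word happiness score.
# The mini-lexicon is invented; the real study used a ~10,000-word
# crowd-rated lexicon (scores roughly 1-9) before applying PCA.

LEXICON = {"lost": 2, "death": 1, "misery": 2, "love": 8, "joy": 8, "won": 7}
NEUTRAL = 5  # words missing from the lexicon are treated as neutral

def emotional_arc(words, window=4):
    """Mean happiness of each sliding window of `window` words."""
    scores = [LEXICON.get(w.lower(), NEUTRAL) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

story = "they lost the war and death brought misery until love and joy won out".split()
arc = emotional_arc(story)
print(arc[0], arc[-1])  # the arc climbs: gloomy opening, happy ending
```

Plot a real novel’s output from this and you get exactly the kind of squiggle shown in the paper’s figures, smoothed and grouped into the six shapes.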
So yes, there is no original art being made.
Update: Vonnegut’s take on plots is a good addition here.
Tech and science acceptance isn’t really a political thing, it is more about your ideology. Ideology creates idiots out of everyone, no matter their political leanings. For example, if tech were solely the domain of, or even dominated by, liberals, then you wouldn’t have Donald Trump using his smartphone to tweet this on Twitter:
It is quite interesting that whilst disagreeing with 97% of experts on climate change, Trump has managed to propose a xenophobic conspiracy whilst preaching nationalism and conservative ideology on an iPhone.* He really is a master of manipulative language. Of course, that isn’t the only brain dropping of anti-science nonsense from the Republican Presidential nominee. It is probably easier to list the science Trump and his supporters do believe** than cover all of the topics he has tweeted denial of. I will now list the science Trump has endorsed:
We’d be mistaken to assume that science and technology denial or rejection are the sole domain of conservatives. On the liberal side the Green Party’s presidential nominee, Jill Stein, has taken several anti-science stances, such as supporting not-medicine, and opposing genetic engineering (e.g. GMOs) and pesticides in agriculture. Often people like to divide science denial into conservatives denying climate change and evolution, whilst liberals deny vaccines and GMOs. But, as with most things, it isn’t quite that cut-and-dried. Take for example the topic of GMOs:
This really highlights that anti-science numpties are across the political spectrum and deny the scientific consensus for very different reasons. Some deny it because they find corporations scary (Greenpeace), some deny it because they are selling something (Joseph Mercola), some deny it because they are arrogant bloviators (Nicholas Taleb).
On the topic of climate change this spectrum also exists. We keep hearing about how liberals are all climate change supporters and how conservatives are all climate change deniers… Except that isn’t true.
You can see that there isn’t 100% agreement or disagreement from either side of US politics. You don’t even get 100% agreement from climate scientists (97% consensus), despite the overwhelming body of evidence. The Pew Research Center has similar figures for other countries. Politics isn’t the real predictor because it is too simple. At the hard end of conservatism, the above chart suggests you would be wrong half of the time if you were to call a conservative a climate denier. Even if you call fence sitters deniers as well you are still going to be wrong over a third of the time. And that’s with all the misinformation that the conservative media pumps out (USA, Australia).
If we were to look at a proper political compass that didn’t oversimplify into left vs right, or were to take into account some other factors, then politics could be a better predictor. For example, free market ideology can be a good predictor of climate change denial (67% confidence). The ideology of the free market isn’t going to allow people to admit the market’s failure to account for the externality of carbon emissions. Similarly the ideology of anti-corporatism isn’t going to allow people to admit that companies might make life saving vaccines or develop safe biotechnology food.
The only thing political affiliation can really do is give you a general idea of why or how someone will be biased toward/away from certain technologies. It is definitely not the whole story.
A version of this post originally appeared on Quora.
* Interestingly, Trump may actually be anti-technology despite having embraced social media. Although, his ego probably doesn’t allow him to not use social media, so of course he has a work-around.
With pork-barrelling season in full swing, we will be seeing plenty of politicians hitching their wagons to prominent sports and sporting teams. The proclamations that sports are True-blue, dinky-di, Aussie will come to win over voters, with a little somethin’ somethin’ in the budget to sweeten the deal. Because sport is king in Australia, right?
Aussies are routinely described as sports mad, sports addicts, and that we love watching and playing sports in sporty sports ways. But how many of us actually play sports? How many of us actually watch sports? Given that you could describe weekly matches of football as repeats of the same teams doing the same thing for months on end annually, it is worth taking a look at a few of our assumptions about the claims.
Let’s start with a look at how many Aussies play “sports”. Inverted commas around sports? Yes, because when people say that 60% (11.1 million) of Aussies play sports – down 5% compared to 2 years previous – what they actually mean is that we’re classifying walking and generally not sitting on the couch watching TV as sport. Let’s make it fairer on sports and subtract the walkers from being classified as sport participants. And let’s not succumb to temptation and call golf just more walking with intermittent cursing. That means that our 11.1 million “sports” participants is suddenly 7.5 million, which is 41.4% of the population (and falling with the ageing population). That figure sounds impressive until you realise it counts participation at least once in the past year and doesn’t account for the regularity of participation. How regularly someone is involved in sports is a much better indicator of our interest and love of sports – as opposed to counting that time you went to the gym because of a New Year’s resolution, or because the doctor ordered you to out of concern for being dragged into an orbit around you at your next visit. The reality is that less than half of the population engage in regular (3 times per week on average) physical activity, with roughly a third of those people being gym junkies (NB: young men are more likely to play a sport, that drops with age and isn’t replaced with other activities, whilst women are more likely to be involved in non-organised sports and remain doing so).
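As a sanity check on those numbers: the 60%, 11.1 million, and 7.5 million figures come from the participation stats above, while the back-calculated survey base is my own arithmetic (and it doesn’t reconcile perfectly with the 41.4% quoted, presumably down to rounding in the source figures):

```python
# Back-calculating the survey base implied by the participation figures
# quoted above. The ~18.5M base (roughly the 15+ population) is my own
# inference, not a number from the source report.

total_participants = 11.1e6   # "played sport" at least once, walking included
share = 0.60
survey_base = total_participants / share
print(f"Implied survey base: {survey_base / 1e6:.1f} million")  # 18.5 million

without_walkers = 7.5e6       # participants once walking is excluded
pct = 100 * without_walkers / survey_base
print(f"Without walkers: {pct:.1f}%")  # ~40.5%, close to the 41.4% quoted
```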
The Top 20 most popular physical activities are dominated by fitness activities like the already mentioned walking, aerobics/fitness, swimming, cycling, and running. One of the big name sports, AFL, ranks 16th on the list behind yoga. When yoga beats football for popularity it must only be a matter of time before the PM declares it the most exciting sport. For those wondering where rugby is on the list, the rest of Australia says ‘hi’.
Of course, this is only looking at sports. How does sports participation compare to other activities? Well, ABS figures show that we spend roughly 23 minutes a day reading, versus 21 minutes on sports and outdoor activities (NB: this varies between genders and age groups). The US figures show similar results with more time reading than playing sports, but they also spend less of their day on both activities. So at least we are still better read and fitter than Americans by these low-bar metrics.
Obviously sports aren’t all about participation and most would regard themselves as avid armchair sportspeople. It could be argued that the best way to stay injury free in sports is to participate from the comfort of the couch in front of the TV at home. The other option is to attend a sporting stadium dressed in clothes made from random assortments of gaudy colours to cheer on a team who are wearing similar clothes but are less inebriated. Or would the most appealing option be to go to a movie, concert or theme park? The correct answer is that people would prefer to attend a movie (59%), a concert (40%), or a theme park (34%). Live Comedy (31%) was more popular than Football (30%), Cricket (29%) and Rugby (25%).
Of course, someone is bound to point to spectator numbers for AFL, A-League, and NRL that look very impressive. With average match attendances in the tens of thousands, and millions annually, sports are clearly important.
At a glance the figures look mildly impressive, but much like enhancement pouch underwear, things aren’t nearly as impressive when you look at the attendance figures in the cold light of day.
Even if we disregard the doubling up and totalling of attendance occurring in the stats, it is easy to see that even the most popular sport in Australia would rank behind visiting Botanic gardens, zoos and aquariums, and libraries. They aren’t even in the same ballpark as cinema attendance. But we can go deeper on the reading, library and cinema figures, even getting frequency statistics so we can tell the difference between the people doing something “at least once” versus people doing something regularly in the past year. 47.7% of people are reading a book weekly, 70% of library attendees (mostly women) visited at least 5 times in the past year, 65% of Australians are (computer) gamers, and 65% of Aussies go to the cinema an average of 6-7 times a year. And yet sport has a segment in news broadcasts whilst reading, gaming, and parks and zoos battle to get media coverage. Technically if we wanted to be fair then the sport segment would be cut to make way for movie news and a live cross to the local library.
What about the economy? How much are households spending on sports? That’s a great question and a great segue into a discussion of how trickle-down economics doesn’t work in sports either. I mean, funding sports that way when it hasn’t worked in the economy must be a no-brainer, right? [Insert low IQ athlete joke here] Or we could stay on topic and discuss the $4.4 billion sports and physical recreation spend by households annually. Let’s not complicate things by talking about the buying of stuff like footwear, swimming pools, and camper vans. Seriously, camping is in the sport spending category? Either way, $4.4 billion sounds like a lot of money, until you realise that gaming is a $3 billion industry, and that households spend $4.1 billion on literature and $4.7 billion on TV and film.
We allow governments to spend a lot of money on big sports and big sporting events. Think that hosting the Olympics will encourage people to play sports? Nope. Actually, seriously, nope. One report described this idea as nothing more than a “deeply entrenched storyline”, sort of like a fairy tale handed down from one Minister for Sport to the next. Part of the problem is that we buy this narrative hook, line, and sinker, such that the sports themselves (and surrounding data agencies) never really bother to keep statistics to prove the claims. But they make for great announcements and ribbon cutting events on the election campaign trail, so the myth keeps on keeping on.
Ultimately the argument isn’t that sports are unpopular or bad but rather that we spend an inordinate amount of time pretending we like them far more than we actually do. And that pretence influences our elected officials more than any chance to wear a high-viz vest at a press conference. Maybe it is time to rethink what media and funding we throw at sports, and perhaps consider a gaming segment on the news.
So this pork-barrelling season look forward to the announcement of a new multi-million dollar yoga stadium in a marginal electorate near you.
Update: Charlie Pickering and The Weekly team cover some similar points for the Grand Prix events in Australia.
For some reason the world of writers is filled with technophobic troglodytes intent on proving that their old-fashioned way of doing things is better. I’ve written previously about how older people’s favourite hobby since the dawn of time has been complaining about kids these days. This is also true of changes in technology, with people intent on justifying not learning to use a computer or e-reader. Because cutting down trees is the future of communication!
Once again I’ve stumbled across another article that misrepresents scientific studies to try and convince people that we need to clear forests, pulp them, flatten them into paper, cover them in ink, and act as snooty as possible. This time they – the nebulous they: my nemesis!! – are trying to pretend that taking notes with a pen is better than using a keyboard.
When will people learn that paper isn’t the medium we should be promoting? We need to be going back to scratching on rocks and cave walls. When was the last time a paper book lasted more than a hundred years out in the rain, snow, and blazing sun? That doesn’t even begin to compete with the longevity of the 50,000 year old cave paintings. Data retention for rock far surpasses the much inferior paper.
This isn’t the first article I’ve seen on The Literacy Site misrepresenting science. Hopefully they will acquire some scientific literacy soon and overcome their biases. I’m not holding my breath, but if I turn blue and pass out, try to act concerned. Let’s dive in.
New Research Explains How The Pen Is Mightier Than The Keyboard
It’s great when articles improve on the titles of science papers. I mean, who wants to read the science paper The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking? Pity that both titles misrepresent the actual findings. Also, is 2014 still regarded as new?
In her graduate assistant days, psychological scientist Pam Mueller of Princeton University used to take notes just like everyone else in the modern age: with a computer. One day, Mueller forgot her laptop and had to take notes the old-fashioned way. Rather than being held back by pen and paper, Mueller left class feeling as if she’d retained far more information than usual on that day. She decided to create a case study that could prove her hunch that writing longhand was actually better for comprehension than typing.
This is actually a good little story and illustrates how a lot of hypotheses are formed in science. This is the anecdote or observation that scientists want to turn into a hypothesis to create actual knowledge. But remember, this is an anecdote, which has as much value as used Easter egg wrappers that have been stuffed between the couch cushions. Putting anecdotal stories at the start of an article can set the audience up to not think too hard about the rest of the article, as you have given them the conclusion in a nice little story.
The study she created, published in Psychological Science, indicated that taking notes by hand is a more effective method than typing them on a laptop when it comes to processing information conceptually.
And here we jump straight off the rails, over the side of the bridge, and careen into the waiting river below. Sure, The Literacy Site is just quoting the press release, but that is lazy. The study itself has this line in the abstract that shows how this claim is a misrepresentation of the findings:
We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.
In other words, the findings were that people spend all their time typing and no time actually listening and comprehending the lectures. Because the pen is an archaic device that is unwieldy and slow compared to the keyboard, students using a pen only write down notes after they have listened, picked out the key points, and conceptualised that information into a note. But don’t take my word for it, the press release on the University of Michigan website has a few recommendations including:
To interrupt verbatim note-taking on laptops, break up your lectures with short activities that encourage deeper processing of information.
Have students use laptops or other technologies to process–not just record–information.
Now it is time to discuss the study details a little bit, because someone might be interested in the methods section. I’m sure those people exist. Somewhere. Interested is probably the wrong word.
In the first of a series of studies led by Mueller, 65 college students watched various TED Talks in small groups, and were provided with either pens and paper or laptops for taking notes. When the students were tested afterward, the results were overwhelming. While the groups performed equally on questions that involved recalling facts, those who had taken longhand notes did significantly better when it came to answering conceptual questions.
Sorry, I need to catch my breath. I’m so shocked at the massive sample size. This is definitely enough people to represent the rest of society. Conclude away I say!
Anyway, these overwhelming results are just a tad whelming.
As you can see, the performance on retaining facts was the same, with error bars that suggest 65 people is probably not enough to draw conclusions from. Not that anyone would be trying to claim this study is proof of anything, right? The next thing you see is the benefits of using a pen… as long as you ignore those error bars and just accept that the p-value tells us something of value. Given that those error bars overlap for the two groups, I wouldn’t be drawing conclusions from a p-value. Also, I’m not exactly sure why an ANOVA was used when there were only two groups to compare. The KISS principle applies to statistics as well.
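As an aside, the reason the ANOVA seems like overkill is that with only two groups a one-way ANOVA is mathematically equivalent to a pooled two-sample t-test (F = t²), so the simpler test tells you exactly the same thing. A minimal sketch with made-up scores (the numbers below are simulated for illustration, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated comprehension scores for two hypothetical note-taking groups
pen = rng.normal(0.3, 1.0, size=33)
laptop = rng.normal(-0.2, 1.0, size=32)

t, p_t = stats.ttest_ind(pen, laptop)   # pooled-variance two-sample t-test
f, p_f = stats.f_oneway(pen, laptop)    # one-way ANOVA with two groups

# With exactly two groups the two procedures agree: F = t^2, identical p-value
print(np.isclose(f, t**2), np.isclose(p_f, p_t))  # prints: True True
```

So the choice of ANOVA here doesn’t change the result, it just dresses the same comparison up in fancier clothes.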
Now the researchers realised that 65 people wasn’t enough, so they repeated the study with a few variations twice more. The second and third tests had 151 and 109 people take notes. In each test the typists wrote between 250 and 550 words, whilst the pen wielders wrote roughly 150 to 400 words. Interestingly, the laptop note takers wrote 12–14% of their notes verbatim, but the pen users only managed 4–9%. This shows why the conclusions I’ve quoted above were drawn.
Out of interest, here are the results from the other two tests that were more convincing for that conceptual finding.
In the second test, 151 people were tested with pen, laptop, or laptop plus a lecture from the tester about how they really should pay attention. With roughly 50 people per group you’d hardly jump up and down about the significance of this test, but clearly telling people to pay attention doesn’t… hey look, a squirrel.
The third test, with 109 people, again tested pen vs keyboard, but this time they allowed revision of notes before being questioned. This makes the groups even smaller, and again I’d question the significance of such a small sample. But the researchers summed up the results with this erudite paragraph:
However, a more nuanced story can be told; the indirect effects differ for conceptual and factual questions. For conceptual questions, there were significant indirect effects on performance via both word count and verbatim overlap. The indirect effect of word count for factual questions was similar, but there was no significant indirect effect of verbatim overlap. Indeed, for factual questions, there was no significant direct effect of overlap on performance. As in Studies 1 and 2, the detriments caused by verbatim overlap occurred primarily for conceptual rather than for factual information, which aligns with previous literature showing that verbatim note taking is more problematic for conceptual items.
In other words, doing lots of writing, particularly just copying what was said verbatim, makes you suck at understanding what the hell is going on. Oh, and study before the test. Apparently it helps too. Made that mistake at university.
So back at The Literacy Site they are skipping the other tests and just heading to the conclusions:
Mueller found that this was the result of laptop users trying too hard to transcribe the lecture rather than listening for the most important information and writing it down by hand. It may be an era where computers have made handwriting seem useless, but Mueller isn’t the only believer in the importance of longhand.
Notice the nuanced difference that seeing all three tests provides? We could be led to believe that there was overwhelming evidence for the pen, but what we see is that note takers need to readdress their methods of taking notes. Or they could just wing it.
An article in TIME discusses Karin James, an Indiana University psychologist, who published a 2012 study indicating writing is particularly important in the cognitive development of pre-literate children five and under. While using a computer for note-taking in some situations makes sense, it’s important not to overlook the longhand method.
It’s great that the article tries to incorporate some extra research. Citing one study with a small sample size is hardly compelling, certainly not worth writing an article about. But again the research is being misrepresented:
…the benefits of writing: increased creativity, better critical thinking, boosted self confidence, and a correlated improvement in reading capability with writing prowess.
But are these benefits real? The short answer: Mostly not. “There’s lot of caveats in handwriting research,” says Karin James, a psychologist at Indiana University
Curse those damn caveats! Why can’t we have a control group of kids we don’t teach to read and write?!
Which brings me to a final point about these old technologies vs new technologies articles: stop jumping the gun! We’re in a transition phase. This isn’t 1970s velvet suits with platforms versus 2010s hipster atrocities. This is typewriter hipster texting on his phone. Technology is changing and we’re still learning how to use it properly. The studies that are cited in many of these articles have very limited scope, test very few people, and are comparing new and established things. Has anyone taught laptop users to take notes effectively for the new medium? Do you actually need to take written notes at all in this modern age? We need to see more science done on the changes taking place, and we need the articles discussing the science to do more than discuss (one study from) one paper, and highlight the limitations. Well, unless you have already made up your mind about a topic and just want some links to throw at people in an argument. Screw being right!
This blog post is being shared online, in print, and carved into a cave wall. Comment below which format you preferred receiving it in.
Cool infographic comparing the destructive power of sci-fi weapons from Foundation Digital and Fat Wallet. And yes, the Smart Disk is probably more accurately called a Smart Chakram (if I learnt anything from watching Xena Warrior Princess).
Do you love the smell of books?
Do you prefer the feel of paper?
Do you feel slightly superior to others because you paid for the hardcover?
Do you grasp at any excuse to deride e-books and the people who read them?
Well, I have found the article for you!
Recently on Mental Floss an article entitled “5 Reasons Physical Books Might Be Better Than E-Books” sought to comfort snooty readers who wanted ammunition to fling at e-book readers. In the proud tradition of deriding any new technology as bad (see e-books, e-cars, driverless cars, etc), this article introduces us to some research that is wonderfully out of context for the intent of the article’s argument. Let’s dig in.
Though e-book readers have become a more common sight around town, traditional books still have their evangelists. According to The New York Times, e-book sales have been falling in 2015. Print definitely isn’t dead. In fact, according to some research, it may actually be a better choice for some readers. While scientists are still trying to tease out exactly how digital reading affects us differently, here are five ways e-books might be inferior to their dead-tree cousins.
When deriding things it is always best to reference another article that derides the same thing. In this case the article references the wonderfully misleading NYT piece on e-book sales slipping. Pity that the sales didn’t slip… That’s right, the NYT misrepresented a slowing in e-book sales growth as a drop in sales. And did they mention why readers were stating a preference for paper? Yes. Hidden in the article is a little quote about how publishers had been protecting their paper sales by inflating e-book prices. Now, my economics is a tad rusty, but I’m pretty sure making something more expensive when there are direct substitutes on offer results in a decrease in sales of that item and an increase in sales of the substitute. At least, that’s what I’ve heard…
1. E-BOOKS CAN REDUCE READING COMPREHENSION.
In a study of middle schoolers, West Chester University researchers found that students who read on iPads had lower reading comprehension than when they read traditional printed books. They discovered that the kids sometimes skipped text in favor of interactive features in the e-books, suggesting that certain multimedia in children’s e-books can be detrimental to the practice of reading itself. However, the researchers noted that some interactive features in e-books are designed to enhance comprehension, and that those might be more helpful than game-type interactive graphics.
This is a fantastic study in how multitasking is terrible for concentration and thus impacts reading comprehension. iPads have all sorts of cool stuff on them, including little notifications telling you that your friend just liked your latest picture of your meal. And building those distractions into the book being read: sounds like a great idea! What this study doesn’t do is support the idea that e-books reduce reading comprehension.
2. YOUNG KIDS CAN GET DISTRACTED BY E-BOOKS.
Similar results were found by a small study by the Joan Ganz Cooney Center that consisted of 32 kids reading e-books and print books with their parents. It found that “enhanced” e-books might be distracting. Kids who read enhanced e-books—ones with interactive, multimedia experiences—were more engaged with them physically, but in the end they remembered fewer narrative details than those who read print books or basic e-books [PDF].
Don’t read the link. Don’t read the link. You read the link, didn’t you? Leaving aside the tiny study size for a moment (a point the study authors acknowledge), the study itself supports the points I made above about being distracted whilst reading. And if you look through the study you see a great little chart showing that reading comprehension – expressed as story details recalled – was actually better with basic e-books than with print books or enhanced e-books.
The findings of the study were literally stated as:
The enhanced e-book was less effective than the print and basic e-book in supporting the benefits of co-reading because it prompted more non-content related interactions.
Odd that the “e-books are bad” article failed to highlight this finding…
3. YOU REMEMBER LESS ABOUT A BOOK’S TIMELINE.
Another study of adults also found that e-books can be hard to absorb. The researchers asked 25 people to read a 28-page story on a Kindle and 25 to read the story in paperback, then asked the readers to put 14 events from the story in chronological order. Those who read the story on a Kindle performed worse on the chronology test than the book readers, though they performed about the same as print readers in other tests. Earlier research by the same scholars, from Stavanger University in Norway, found that Norwegian 10th graders also remembered more about texts if they read them in print rather than on a computer screen [PDF].
Finally we come to a study on actual e-books on an actual e-reader versus their dead tree counterparts. Of course I’m again blown away by the sample size of the study, a massive 50 people. That should easily extrapolate to the rest of humankind. The linked article doesn’t give us much information, but I found a better one, and it has this summary:
In most respects, there was no significant difference between the Kindle readers and the paper readers: the emotional measures were roughly the same, and both groups of readers responded almost equally to questions dealing with the setting of the story, the characters and other plot details. But, the Kindle readers scored significantly lower on questions about when events in the story occurred. They also performed almost twice as poorly when asked to arrange 14 plot points in the correct sequence.
I’d link to the original paper, but it is behind a paywall. Suffice to say that the error margins were pretty big (even the paper readers got 34% of the plot points in the wrong order). And this was a short story, something that shouldn’t be that difficult for any reader. So this probably says as much about the story as anything. They’d need far more stories and participants to get a good idea of what is going on. But I will concede that reading on paper vs e-reader vs screen is definitely a different experience and has an influence. What that influence is, positive, negative, or just different, needs more research.
Interestingly the study of reading PDF texts on a screen vs paper texts in high school students showed why scrolling is a terrible way to read anything. Scroll down to read more about PDFs sucking.
4. THEY’RE NOT GREAT AS TEXTBOOKS.
While e-book textbooks are often cheaper (and easier to carry) than traditional door-stop textbooks, college students often don’t prefer them. In some surveys of college kids, the majority of students have reported preferring print books. However, a 2012 study from the UK’s National Literacy Trust of kids ages 8 to 16 found that more than 50 percent of children reported preferring screen reading [PDF].
It is odd to start a point and then go on to disprove it. E-book textbooks being cheaper, easier to carry, and in some surveys preferred by the majority of respondents, seems to me to be the opposite of “not great”. The preference for paper textbooks claim comes from a survey of 527 students, yet is immediately refuted by the UK survey of 34,910 students. I wonder which one is more representative of how students feel about textbooks?
In the comments of the Mental Floss article, someone made a good point in regard to the format of textbooks. Oftentimes the textbooks are PDFs, which brings us back to the point about scrolling, and adds the problem with taking notes. Clearly the format of the e-book plays a big part in how people feel about them.
5. THEY’RE TIRING.
Staring at a lit screen can be tiring for the eyes and the brain. A 2005 study from Sweden found that reading digitally required a higher cognitive workload than reading on paper. Furthermore, staring at LED screens at night can disrupt sleep patterns. A 2014 Harvard study found that people who used e-readers with LED screens at night slept worse and were more tired the next day. So, if you’re going to go for an e-book, go for one without the backlight.
Now let us talk about how bad e-books are for your brain…. Sorry, did I say e-books when I meant LED screens like your iPad and computer? Silly me. Having bright light, especially from white background screens, shining in your eyes at night isn’t a good thing. But that is about as related to e-books as X-Factor is to talented singers. So the message about changing your screen setup for night viewing only really applies to readers if they utilise a backlit screen for reading.
And now that we are at the end of the article, let’s throw in some information for the pretence of balance in the hopes you will ignore the headline and main article points:
BUT DON’T THROW AWAY YOUR E-READER JUST YET.
However, all this may not mean that reading on a Kindle is really going to melt your brain. For instance, reading an e-book on a computer is a much different experience than reading on a Kindle, which is specifically designed for consuming books. So, too, is playing with an interactive e-book on an iPad, compared to using a simpler e-book device that only presents the text, with no opportunities to click away into digital distractions.
This really does appear to be information that would have been better presented in the context of the “e-books are evil” points above, doesn’t it? Throwing in this sort of context at the end rather than in the discussion of the study findings is a cheap tactic, a ploy that leaves important information until after you have already formed your opinion on a subject, or just plain stopped reading the article. This information has far less chance of being retained than the other points made earlier in the article, thus the article has created the bias it was after (deliberately or otherwise).
And some studies have found that part of the difference between the way people absorb information from e-books versus paper might be due to approaching e-books differently—in one test, participants didn’t regulate their study time with digital books like they did with paper texts, leading to worse performances. It’s possible that our expectations of e-book reading—as well as the different designs of the digital reading experience on a computer or iPad or Kindle—might affect how we approach the text and how much effort we put into studying them. As generations of e-book readers evolve, and people become more accustomed to the idea of sitting down with a digital textbook, these factors could change—for better or for worse.
These are all good points, again made at the end of the article rather than at least being hinted at throughout. And unlike the main points in the article, these are unreferenced. Are these points from the studies already referenced (some are) or from other studies that aren’t worth mentioning? If the former, you would expect these points to have been raised earlier in the article in the proper context; if the latter, this feels like an attempt to downplay the statements as less important than the referenced points above. Either way we are left with the sentiment “change is scary” rather than “change is change”.
Hopefully this breakdown of the Mental Floss article shows just how disingenuous many of these anti-technology articles are, especially the “e-books are evil” articles. I’m not trying to say that e-books are what everyone should be reading, or that our forests are now saved from Dan Brown. There is clear evidence that our changing technology is changing the way we read and absorb information, and this transition period is still a learning phase as to how and if we will change our reading preferences. But negative preconceived ideas about e-books (or technology) don’t help in communicating about the change that is happening.
Update: This study compared reading on paper and screens and found stark differences. The sample size was again small, but the study appears to have been better conducted than the others I’ve discussed above. The conclusions from the paper suggest, as I have, that we need to look at teaching/learning how to read e-books and utilise e-readers.
To sum up, the consistent screen inferiority in performance and overconfidence can be overcome by simple methods, such as experience with task and guidance for in-depth processing, to the extent of being as good as learning on paper.
In a recent post I discussed some points about how to spot anti-science nonsense. Pick a subject, any subject, and there will be someone – probably Alex Jones – making an outrageous claim about it. But don’t worry, they’ll solve the problem with items available from their reasonably priced store: $1440 per litre is a bargain price for something you don’t need and doesn’t do as claimed.
Obviously scammers are gonna scam, and anti-scientists are going to not-science. The thing is once you understand that something is wrong you have some responsibility to make sure the misinformation doesn’t spread like a leaky diaper. With great power knowledge comes great responsibility. Which means you have to start discussing science with science deniers. Don’t forget to place a cushion on your desk and wear padded gloves.
Despite having the advantage of science/facts in the argument against science deniers, you have the decided disadvantage that you can’t just make stuff up (despite how tempting and financially rewarding it is). In fact you have to be better informed not only about your side of the argument but also about the science denier’s arguments.
Sounds odd, doesn’t it? You have to learn nonsense to talk about science. That makes as much sense as being pro-life and pro-death penalty. Bear with me here. Take this example of climate change denier Bret Stephens arguing against Bill Maher on Real Time:
Bret sounds convincing, doesn’t he? Bret sure thinks so. He makes some vague references to headlines from the 1930s and 1970s as dismissals of current concerns about oceans. Then he references an economic study on environmental policy priorities, all whilst looking very smug and sure of himself. These statements leave Bill at a stumbling point because he has to admit he doesn’t know what the hell Bret is talking about. The video edited out the pant-less victory lap Bret did of the studio, complete with crotch gyrations in Bill’s face, as he screamed “Take that liberal media!”
Now it isn’t a bad thing to admit you don’t know stuff. Nobody knows everything; it is arrogant to act like you do. Arrogance is of course the result of being surrounded by Knowitalls, an invisible mythical creature that looks like a cross between a unicorn and Bill O’Reilly. Anyway, I’m glad Bill Maher admitted he didn’t know about the study; if only he would do the same with his position on vaccination and GM/GMOs. But the admission did make him appear less convincing as he couldn’t directly rebut the points made.
And here is why you need to know what the anti-science people “know”. Take the first points Bret makes about the oceans dying. The two dates he mentions actually refer to events unrelated to the issue of climate change causing ocean acidification. The first date was a reference to the Overfishing Conference in 1936 about whaling and fishery management (as far as I can ascertain), issues that were addressed by introducing catch sizes, fishing licenses, and the phasing out of whaling. So Bret is trying to justify inaction on climate change to save ocean damage by referencing an environmental concern that was acted upon. What a great argument!
His second date was the 1975 Newsweek and New York Times (and others) article about global cooling. This is a well-worn climate change denier talking point/myth that has been thoroughly debunked, yet has evolved beyond a PRATT (a point refuted a thousand times) and become a zombie point. Some myths just won’t die and are constantly in search of brains to infect/affect.
We then hear Bret reference a Bjorn Lomborg study on best use of resources and where climate change ranked. Very convincing, aside from the fact that it was complete and utter nonsense. See, Bjorn doesn’t accept the actual risks and actual current changes that have occurred due to climate change. So his entire analysis and argument started off from a completely flawed position and was thus doomed to fail to draw any worthwhile conclusions. Actual experts have torn apart his work, particularly his “conference”, here, here and here. But Bill didn’t know this, thus the points made stand unchallenged and as a sort of “valid” evidence.
And this is why it is important to know your enemy. If you know the arguments they are likely to raise, then you can have rebuttals ready. In the case of citing Lomborg’s work you can point out the failings before people have a chance to take it seriously. In the case of old magazine articles, you can point out you only read them for the pictures. But it means you don’t just have to know the science, you have to know the anti-science.
It is also worth noting that Bret reeled off a string of statements that were essentially nonsense dressed up as facts. That is a tried and trusted debating tactic known as the Gish Gallop, and it is very hard to argue against. It takes a lot more energy to redress the nonsense than it takes to state it, not to mention the time wasted not making your own points. It also helps that science has to have facts on its side, whilst anti-science can make it all up on the spot.
Of course the obvious thing to say here is that the anti-science movement often don’t see themselves as anti-science and will use similar tactics. They will familiarise themselves with the science in order to dismiss it. This is possibly the most annoying part of science communication: those embedded in anti-science positions aren’t ignorant of the facts, they are wilfully ignorant of their fact-ness.
Memes fly around the internet like quantum accelerated particles. Some are fun, some are informative, others are utterly ridiculously wrong. Unfortunately people get caught up in pretty pictures with inspiring – or is that insipid – quotes printed on them, so they start following someone on social media, someone who spreads as much nonsense as inspirational quotes.
Take for example this quote from Mark Twain:
At face value there is a great message from Twain about not storing up emotional baggage. Let’s just ignore the scientific inaccuracy of how acids work, and how the materials of the respective containers and the Ka (acid dissociation constant) of the acid are going to be the deciding factors in how much damage the acid does. But once you move past the quote and pretty picture you start to notice certain things, namely that there is some weird design stuff going on in it. There’s some spacey looking stuff in the background, there’s a person with no skin, and some sort of lattice work design: what the hell is this stuff? That’s called the Flower of Life, something that has been incorporated into Sacred Geometry, a load of nonsense that would have had Mark Twain penning scathing insults; Twain loved science.
Let’s take a look at another meme:
Again we have a bit of text implying that good relationships are much deeper than shallow, fleeting, physical attraction. This one is, however, more obvious in its ridiculousness. In amongst the rainbows and the pretty city that the two outlines of people are hovering above, there are glowing lights in the bodies of the people. Take a guess at what they are meant to be. Chakras. That’s right, we’ve gone all new-agey nonsense right out in the open. So once you spot the new-age nonsense you realise the word “soul” isn’t being used in the allegorical sense but in the “I believe all sorts of rubbish” sense.
And now we descend into health nuttery:
This is a typical health meme that these sorts of social media pages post: half truths, misconceptions, lies and nonsense.
Let’s start at the top: there are no pus cells in milk. The meme seems to be referring to the somatic cell count of milk, which is not the same thing, and just part of the biology fail on display here. The 135 million figure is from the detection levels for mastitis in cows, which says that uninfected cows will have less than 150,000 cells/mL (they’ve clearly scaled up to a litre of milk in that glass, which doesn’t look like a litre glass to me).
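The scaling is easy to check. A quick back-of-envelope calculation using the figures mentioned above (a sketch of the arithmetic, not data from any study):

```python
# Somatic cell count threshold below which a cow is considered uninfected
threshold_per_ml = 150_000           # cells/mL, from mastitis detection levels
ml_per_litre = 1_000

threshold_per_litre = threshold_per_ml * ml_per_litre
print(threshold_per_litre)           # prints: 150000000

# The meme's scary "135 million" figure is below the healthy-cow threshold
print(135_000_000 < threshold_per_litre)  # prints: True
```

In other words, even taking the meme’s number at face value, it describes milk from an uninfected cow.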
Growth hormones: misleading at best. Food has hormones in it, produced by the food, be that plants or animals. Remember how soy is meant to be good for menopausal women? Yep: plant hormones. So milk will have naturally occurring hormones in it. Some countries have limited/banned the use of growth hormones in animal production, others have allowed it. And this brings us to one of the many reasons pasteurisation is used in milk production, as it breaks down most of the hormones.
Feces: again this is misleading, and also one of the main reasons for pasteurisation. You aren’t so much going to end up with feces in the milk as the bacteria associated with it. So it is important to kill the nasties, which is why raw milk is considered dangerous.
Fat: Again, I’m not sure why food having fat in it is bad.
Acidic protein: This one is quite funny because there are a lot of acidic proteins. And obviously these acidic proteins leaching calcium from bones is one of those things that “mainstream medicine is ignoring” – aka the rallying cry made by purveyors of nonsense. Pity that dietary protein (which can include dairy) has actually been shown to be good for bones. The issue here is actually a couple of health myths. The first is the acid/alkaline diet, which is utter nonsense. The second is the overstating of health benefits of milk, specifically as they relate to bone health and osteoporosis development.
Now I’m not saying that milk is bad for you, but it also isn’t the most awesome drink ever made – that would be whiskey. Milk should be like whiskey: consumed in moderation.
The point about memes is that they are only as good as their creator. The intention of the above memes is clearly to help people, inspire them to lead better lives, even if it is by showing them some pretty pictures with brain droppings written on them. But sadly it is obvious that these memes were created by someone who is not in touch with reality, which makes their health advice something to be avoided. Beware the meme: it could be nonsense!
Just recently I was asked a question on one of my climate change posts. The question, whilst not about climate change nor climate science, was about similar anti-science nonsense that acts to confuse and befuddle those who aren’t familiar with the field. The comment in full:
I like your writing, I wish more would understand your logic when they spout facts and relationships. If you have time please, an article (though imperfect) comments,
“Bacteria…and plants use a seven-step metabolic route known as the shikimate pathway for the biosynthesis of aromatic amino acids; glyphosate inhibits this pathway, causing the plant to die…. Monsanto says humans don’t have this shikimate pathway, so it’s… safe……however, that our gut bacteria has this pathway, and these bacteria supply our body with crucial amino acids. Roundup …kills bacteria, allowing pathogens to grow; interferes with the synthesis of amino acids including methionine, which leads to shortages in critical neurotransmitters and folate; chelates (removes) important minerals like iron, cobalt…”
I would love to know your take on that possible cause and affect.
Thank You for your Time !
Dennis has asked how likely it is that this sciency sounding article is correct. The short answer is that you are more likely to get this week’s lottery numbers from one of these articles than any reliable facts. How can I be so dismissive? Well, the thing is I’m not being dismissive, it just sounds like that because my skeptical science eye has spotted many holes in the quote and article. So let us go through them like a rugby player at an all you can eat buffet.
The first thing to note is the source of the article and the “expert” cited within. There are some tell-tale signs that a webpage may be unreliable, such as using “truth”, “natural”, or “alt” as a prefix to any word, or “health” in its name. Health Impact News isn’t the giveaway here, it could be a legitimate source of information. In this case, the giveaway is the byline “News that impacts your health, that other media sources may censor.” See: it’s a conspiracy!!! (Font = sarcasm) And conspiracy claims are always reliable (/sarcasm).
If you check out Web of Trust you can see that Health Impact News perpetuates a number of dubious and fraudulent claims, such as vaccine myths from the anti-vaxxer nutters, which means the slant the website runs is one that doesn’t respect scientific evidence. Not that this alone is enough to dismiss the claims.
The other source is the “expert” cited, one Stephanie Seneff. To say that this computer scientist is out of her depth in the field of health, genetics and chemistry is like suggesting Justin Bieber’s music is appealing to people with taste. She makes all sorts of wacky and unfounded claims about herbicides, GMOs and Monsanto, so calling her an expert or citing her work should get you laughed out of any room you are standing in.
What the article claims is really the crux of the dismissal. If someone claimed to have seen bigfoot doing lines of blow with someone other than Charlie Sheen, we’d be immediately suspicious since we know that greater than 90% of all cocaine is snorted in the company of Sheen. Similarly when someone claims that the most extensively tested herbicide of all time, the safest agrichemical ever made, the most widely used agrichemical on the market, is responsible for [insert health consequence here, in this case, autism] then you should be a tad suspicious.
Let’s ignore the fact about the extensive safety testing. Let’s also ignore the fact that autism seems to be the disease du jour of the alt-health fear-mongers, linked to everything from GMOs to vaccines. Let’s also ignore the fact that agrichemical safety and efficacy have virtually nothing to do with the safety and efficacy of individual GMOs (GM and GE being another kettle of fish entirely), despite what the article tries to imply. Let’s also ignore that glyphosate binds tightly to organic matter and is rapidly broken down in the environment, so actual levels consumed will be negligible, and those amounts won’t be doing anything in the digestive tract. Let’s just assume that glyphosate is getting into our bodies in huge amounts and causing damage: what evidence is there to suggest it is glyphosate and not any other agrichemical or environmental toxin that has increased during the same time period (e.g. coal pollution)? What evidence is there to suggest there has actually been any rise in maladies that isn’t a result of something else (because everyone knows that fat people got fat whilst only eating celery sticks)?
The reference material or evidence.
Big claims require even bigger evidence. Solid evidence. One thing I hate about news sites is that they so often make oblique references to a study that may or may not have been published in a reputable journal, rather than just link straight to the journal and paper in question. In this case, there is no link to a journal, reputable or not, just links to other unreliable sites such as The Mind Unleashed and The Alliance of Natural Health USA webpage, as well as a Youtube video. So far I’m underwhelmed.
Remember, this article is reporting on Seneff’s claim that half of all people will be autistic by 2025 thanks to herbicides. Half!! This is a condition that has a median occurrence of 62 cases per 10,000 people. The spectacular rise in autism that we should expect in the next decade for a herbicide that has been in wide use for many decades already would require a bit more evidence than “well, we reckon.” Seneff claimed a correlation between glyphosate use and a rise in autism. She clearly didn’t compare the rise in autism to organic food.
Well, if you dig further into the reference of the reference (seriously, how hard is it to cite your sources properly!?!) you will find an actual journal paper by Seneff and Samsel in a journal called Entropy. Have you heard of Entropy and is it recognised as a go-to journal for science on the topic of, well, anything? Nope. And what about the study itself which claims that just about every malady you can think of is linked to glyphosate, what evidence does it present? Well, pretty much none. To quote this article:
The evidence for these mechanisms, and their impact on human health, is all but nonexistent. The authors base their claim about CYP enzymes on two studies, one of liver cells and one of placental cells, which report endocrine disruptions when those cells are exposed to glyphosate. Neither study is CYP-specific (The effect of pesticides on CYP enzymes, by contrast, has been studied specifically.) As for the gut bacteria, there appears to be no research at all on glyphosate’s effect on them.
Samsel and Seneff didn’t conduct any studies. They don’t seem interested in the levels at which humans are actually exposed to glyphosate. They simply speculated that, if anyone, anywhere, found that glyphosate could do anything in any organism, that thing must also be happening in humans everywhere. I’d like to meet the “peers” who “reviewed” this.
Yep. That is a rebuttal from a Huffington Post article. Let that sink in for a moment. Even Huff Post don’t want to touch Seneff’s claims with a ten-foot pole.
So far we have found that the suspicions about this article are well-founded. The site is not reliable, the “expert” cited is not reliable, the sources cited are not reliable, the evidence cited is essentially non-existent, the claims made are not particularly plausible, and there is no evidence to support the claims. But this leaves us with a problem: short of hours of research on each point made, how do I confirm that these people are lying to me on the internet? Because you should be able to trust the internet, right?
The average person can’t be expected to be an expert in all topics, nor be expected to have the time to track down and read every piece of science to confirm an article is accurate. But there are people on the internet who have their favourite topics that they will write (or make videos) about. This means you just have to search for rebuttals to articles. Google can be handy for this if you are familiar with how to weed out the rubbish results. Joining forums or following experts in various fields can help as well (e.g. Skeptics Stack Exchange, Science Based Medicine). There are also webtools available to help find good information. I’ve already mentioned Web of Trust above, but there are many others.
rbutr is one such tool that can help with finding rebuttal articles (disclaimer: I am involved with rbutr on social media). In the case of the Health Impact News article there were two linked rebuttals (I’ll be adding this one as well), here and here. This really helps to figure out whether the arguments presented are valid (although in this case, a basic application of logic should suffice). But there were more rebuttals linked to the Seneff journal article, 7 of them: here, here, here, here, here, here, and here. These links allow people to easily see the arguments laid bare.
Thus we can now see that the article can be dismissed as rubbish. A fair bit of work to get there, but in the end, we did it (~25 references and 1600 words later). Makes installing rbutr and Web of Trust in your web browser look like a great option, doesn’t it!
In the information age ignorance is a choice. But informing yourself isn’t as easy as just reading articles on subjects. You have to use a critical eye, apply logic, and access quality information to avoid being misinformed. When all is said and done, evidence wins. And cat videos. And dog videos. In fact, any video featuring a cute animal wins.
I’ve been quite busy recently. There is the usual writing going on, but I also have a few articles in the works, another rugrat in the works, and I’ve also been interviewed for the Skeptically Challenged Podcast.
In the podcast, Ross, Ketan and I discuss a range of topics and try to bring the science. Ketan discusses the mythical wind turbine syndrome, I discuss a recent climate paper, and we cover the promises of fusion power from Lockheed Martin and the recent Ebola hysteria.
Also, stay tuned until the very end and you’ll hear just one of the bits that Ross will have for subscribers, mainly jokes. Now just imagine how we managed to work rocket powered Miley Cyrus into the discussion.
If TV is the lard-developing, heart-attack-inducing entertainment form, then reading is the brain workout. I’ve previously posted about how reading is good for the brain, but science is keen on finding out more, so there is always new research that brings up cool findings. I’m reposting an interesting article I found (here) that lists some benefits from reading with links to the research, proving that reading is good for you.
Merely reading a word reflecting a colour or a scent immediately fires up the corresponding section of the brain, which empathises with written experiences as if they actually happened to the audience. Researchers believe this might very well help sharpen the social acumen needed to forge valuable relationships with others.
Related to the previous perk, sensory stimulation makes it easier for aging brains to keep absorbing and processing new information over time. This occurs when the occipital-temporal cortex essentially overrides its own programming and adapts to better accommodate written language.
Avid readers enjoy a heightened ability to retain their cognitive skills over their peers who simply prefer other media — even when exposed to lead for extended periods, as indicated by an article in Neurology. It serves as something of a “shield” against mental decay, allowing the body to continue through the motions even when facing temporary or permanent challenges.
When educators at Obafemi Awolowo University incorporated education-themed comics and cartoons into primary school classrooms, they noted that the welding of pictures to words in a manner different than the usual picture books proved unexpectedly beneficial. Exposure to these oft-marginalized mediums actually nurtured within them a healthy sense of creativity — a quality necessary for logical and abstract problem solving.
On the whole, readers tend to display more adroit verbal skills than those who are not as fond of books, though it must be noted that this doesn’t inherently render them better communicators. Still, they do tend to sport higher vocabularies, which increase exponentially with the volume of literature consumed, and may discern context faster.
Anne E. Cunningham and Keith E. Stanovich’s “What Reading Does for the Mind” also noted that heavy readers tend to display greater knowledge of how things work and who or what people were; once again, findings were proportionate to how much the students in question devoured in their literary diets. Nonfiction obviously tends to send more facts down the hatch, though fiction certainly can hold its own in that department as well.
Some students obviously don’t perform well on tests despite their prodigious abilities, but in general, findings (such as those offered by the National Endowment for the Arts) show a link between pleasure reading and better scores. The most pronounced improvement, unsurprisingly, occurred on exams focused on analyzing reading, writing, and verbal skills.
According to a 2009 University of Sussex study, picking up a book could be one of the most effective strategies for calming down when life grows too overwhelming — great for both mental and physiological reasons. The University of Minnesota built on these findings and recommends reading some form of literature for at least half an hour every day for optimum relaxation.
Fully engaged reading sessions — not just skimming, in other words — activate the sections of the brain responsible for thinking critically about more than just texts. Writing, too, serves as an excellent conduit for sharpening the skills necessary for parsing bias, fact vs. fiction, effective arguments, and more.
In a British Medical Journal article, academics at the French National Institute of Medical Research showcased their findings regarding the relationship between a mind occupied by reading and a lower risk of dementia. Obviously, literature isn’t going to act as a cure, but nonreaders are 18% more likely to develop the condition and experience worsened symptoms.
Readers genetically or environmentally predisposed to MCI, Alzheimer’s, and other disorders characterized by cognitive decline won’t escape their fate if they live long enough; but not only do their literary habits push back the onset, these conditions also encroach at a more sluggish pace. More than any other way to pass the time, picking up some sort of book (no matter the medium) proves among the most effective strategies for delaying and slowing dementia.
Along with bolstering critical thinking skills, the authors of “Reading and Reasoning” in Reading Teacher noted that literary intake also positively influences logic and reasoning. Again, though, the most viable strategy for getting the most out of reading involves picking apart the words themselves, not merely flipping through pages.
Improved literacy means improved self-esteem, particularly when it involves kindergarten and middle school students whose grades will swell as a result, although high schoolers, college kids, and adults are certainly not immune to this mental health perk. Set realistic reading goals and work toward them for an easy, painless (and stress-free) way to kick up the spirits when confidence starts wavering.
Neuron published a Carnegie Mellon paper discovering how the language centers of the brain produced more white matter in participants adhering to a reading schedule over the period of six months. Seeing as how this particular tissue structure controls learning, it’s kind of sort of a good thing to be building, especially when it comes to language processing.
Brain flexibility is how the essential organ stratifies itself, delegates tasks, and compensates for damages, and Carnegie Mellon researchers believe reading might serve as a particularly excellent way to encourage this. These discoveries of how the brain organizes itself beg for further insight into the autism spectrum and other conditions that may stem from poor neurological communication.
The physiology of reading itself contributes to better memory and recall, specifically the part involving bilateral eye movement. However, it holds no influence over implicit memories: most of the benefit comes when recalling episodic memories.
Kids and parents who read aloud together enjoy tighter bonds than those who do not, which is essential to encouraging the healthiest possible psychological profile. Along with the cognitive perks, these sessions build trust and anxiety-soothing comfort needed to nurture positive behavior and outlooks.
Listening skills improve reading, and reading improves listening skills, particularly when one speaks words out loud instead of silently. When learning a primary or secondary (or beyond) language in particular, fostering interplay between the two ability sets makes it much easier to soak up vocabulary and grammar.
Once again, any bookish types hoping to claim the full benefit of this cognitive phenomenon gain it via close reading and analysis, not skimming, speed reading, and skipping. Because the activity is far from passive, it challenges the mind to focus, focus, focus: which certainly carries over into other areas of life!
Psychology professionals in the United Kingdom and United States gravitate towards bibliotherapy when treating non-critical patients, thanks to studies printed up in the journal Behaviour Research and Therapy. The practice involves prescribing a library card, which recipients use to check out one of the approved 35 self-help books for 12 weeks; as a supplement (not a replacement) to conventional therapy, it has proven extremely valuable to the clinically depressed and anxious.
Yes, who’d have thought that writing could be good for the brain? Slaving away writing seems to be like practicing sports or music, stimulating the brain to get better. Dr Martin Lotze used fMRI to look at novice and experienced writers’ brains – probably to steal ideas for a new book – and how they worked during different writing activities. Some regions of the brain became active only during the creative process, i.e. not while copying, with brainstorming sessions lighting up the vision-processing regions. It’s possible that they were, in effect, seeing the scenes they wanted to write.
But the two groups differed slightly in how their brains worked whilst being creative. Novice writers activated their visual centres, whilst experienced writers showed more activity in regions involved in speech. “I think both groups are using different strategies,” Dr. Lotze said. It’s possible that the novices are watching their stories like a film inside their heads, while the writers are narrating it with an inner voice. Experienced writers also had a region called the caudate nucleus become active, the part of the brain involved in skills that come with practice. In the novices, the caudate nucleus was quiet, showing that practice works the brain.
The internet is a wonderful place to find information on just about any topic you can imagine and few you can’t. From the latest scientific study to the grumpiest cat, from insightful commentary to rule 34: the internet has it all. The problem is that not everyone is rational, logical, nor well informed, and they still have internet connections and the ability to make webpages and comment on social media.
As someone who tries to share science and knowledge with people, I love to engage and discuss topics. If I can help someone understand or learn something about a complex topic, then I feel like I’ve accomplished something. The more science communicators out there doing the same thing, the slightly better the world becomes. This better understanding leads to better decisions, better ideas, better inventions, better cat photos.
The problem is that not everyone appreciates being told that they are mythtaken or wrong. Others are adamant that they aren’t wrong. People will argue against the overwhelming scientific evidence on topics like climate change (real, man-made, we need to do something about it), genetic modification (breeding technique, cool innovation that is more precise and has great potential), modern medicine (seriously!?!), evolution (as solid a theory as gravity), and even the shape of the Earth (yes, flat-Earthers still exist). This anti-science nonsense is thankfully on the losing team, they just aren’t playing with a full deck.
It is these science deniers that are the most frustrating to deal with on social media and the internet. There is no evidence you can show them that won’t be dismissed – often as a conspiracy – and there is no rationality to their arguments. But they can also be very convincing to people who don’t know enough about a topic, which is how myths get started. And that is dangerous: once myths are started, they are very hard to get rid of. So it is actually important to make sure that the science deniers aren’t existing in an echo chamber, which the internet has facilitated to some extent – I’m looking at you Alex Jones, Mike Adams and Joseph Mercola!
These science deniers can be a menacing drain of time, effort and inner calm. The easiest way to deal with them would be to block them, excise the wound, possibly burn the evidence of their existence. But then the science deniers have won. Their echo chamber is just that little bit more echo-y. But the echo chamber is going to keep echoing regardless, as discussed above. But won’t somebody think of the children!
I really hate blocking people on social media. The science denier drivel may pollute my newsfeeds, but blocking them also leaves me open to my own echo chamber. Sure, I might think I’m good at picking good information from bad, but if my thinking is never challenged, how can I be confident I’m not falling for confirmation bias? I guess this is the Catch 22 of the modern age, but with more cats.
With the rebirth of Cosmos on TV, Neil DeGrasse Tyson and the team have brought science back into the mainstream. No longer is science confined to the latest puff piece on cancer research that is only in the media because a) cancer and b) the researchers are pressuring the funding bodies to give them money. The terms geek and nerd have stopped being quite the derogatory terms they once were. We even have science memes becoming as popular as Sean Bean “brace yourself” memes.
This attention has also cast a light on the scientific process itself, with many non-scientists and scientists passing comment on the reliability of science. Nature has recently published several articles discussing the reliability of studies’ findings. One article shows why the hard sciences laugh at the soft sciences, with the article talking about statistical errors. I mean, have these “scientists” never heard of selection and sample bias? Yes, there is a nerd pecking order, and it is maintained through pure snobbishness, complicated looking equations, and how clean the lab-coat remains.
A Shocking Amount of Medical Research Is Complete Bullshit
#6 – Kinda true. There are two problems here: media reporting of medical science and actual medical science. The biggest issue is the media reporting of medical science, hell, science in general. Just look at how the media have messed up the reporting of climate science for the past 40 years.
Of course, most of what is reported as medical studies are often preliminary studies. You know: “we’ve found a cure for cancer, in a petri dish, just need another 20 years of research and development, and a boatload of money, and we might have something worth getting excited about.” The other kind that gets attention isn’t proper medical studies but spurious claims by someone trying to peddle a new supplement. So this issue is more about the media being scientifically illiterate than anything.
Another issue is the part of medical science that Ben Goldacre has addressed in his books Bad Science and Bad Pharma. Essentially you have a bias toward positive results being reported. This isn’t good enough. Ben goes into more detail on this topic and it is worth reading his books on this topic and the Nature articles I previously referred to.
Many Scientists Still Don’t Understand Math
#5 – Kinda true. Math is hard. It has all of those funny symbols and not nearly enough pie charts. Mmmm, pie! If a reviewer in the peer review process doesn’t understand maths, they will often reject papers, calling the results “blackbox“. Other times the reviewers will fail to pick up the mistakes made, usually because they aren’t getting paid and that funding application won’t write itself. And that’s just the reviewers. Many researchers don’t do proper trial design and often pass off analysis to specialists who have to try and make the data work despite massive failings. And the harsh reality is that experiments are always a compromise: there is no such thing as the perfect experiment.
Essentially, scientists are fallible human beings like everyone else. Which is why science itself is iterative and includes a methods section so that results are independently confirmed before being accepted.
And They Don’t Understand Statistics, Either
#4 – Kinda true, but misleading. How many people understand the difference between statistically significant and actually significant? Here’s a quick example:
This illustrates that when you test for something at the 95% confidence level you still have a 1 in 20 chance of a false positive, or of natural variability showing up in the test. Some “science” has been published that exploits this false positive rate by going on a statistical fishing trip (e.g. anti-GM paper). But there is another aspect: if you get enough samples, and enough data, you can get a statistically significant result that is not a significant result. An example would be testing new fertiliser X and finding a p-value of 0.05 (i.e. statistically significant) for a grain yield that is 50kg higher in a 3 tonne per hectare crop. Wow, statistically significant, but at 50kg/ha, who cares?!
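For the curious, here’s a minimal sketch of that fertiliser example (hypothetical numbers, plain Python, a Welch t-test with the large-sample normal approximation): with enough plots, a trivial 50kg/ha bump comes out “statistically significant” even though agronomically nobody would care.

```python
import math
import random

random.seed(42)

# Hypothetical fertiliser trial: control plots average 3000 kg/ha,
# treated plots 3050 kg/ha (a 50 kg/ha bump), both with sd ~300 kg/ha.
n = 2000  # plots per group -- an absurdly large trial
control = [random.gauss(3000, 300) for _ in range(n)]
treated = [random.gauss(3050, 300) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic; for n this large the normal approximation is fine
se = math.sqrt(var(control) / n + var(treated) / n)
t = (mean(treated) - mean(control)) / se
# two-sided p-value from the normal CDF (via math.erf)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

diff = mean(treated) - mean(control)
print(f"difference = {diff:.0f} kg/ha, p = {p:.2g}")
# p is tiny (statistically significant), yet the yield bump is under 2%
```

The p-value collapses as the sample grows, while the effect size stays trivially small. That is the whole trick: “significant” in the statistical sense says nothing about whether anyone should care.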
But these results will be reported, published, and talked about. It is easy for people who haven’t read and understood the work to get over-excited by these results. It is also easy for researchers to get over excited too, they are only human. But this is why we have the methods and results sections in science papers so that calmer, more rational heads prevail. Usually after wine. Wine really helps.
Scientists Have Nearly Unlimited Room to Manipulate Data
#3 – True but misleading. Any scientist *could* make up anything that they wanted. They could generate a bunch of numbers to prove that, for an example of bullshit science, the world is only 6000 years old. But because scientists are a skeptical bunch, they’d want some confirming evidence. They’d want that iterative scientific process to come into play. And the bigger that claim, the more evidence they’d want. Hence why scientists generally ignore creationists, or just pat them on the head when they show up at events: aren’t they cute, they’re trying to science!
But there is a serious issue here. The Nature article I referred to was a social sciences study, a field that is rife with sampling and selection bias. Ever wonder why you hear “scientists say X is bad for you” then a year later it is, “scientists say X is good for you”? Well, that is because two groups were sampled and correlated for X, and as much as we’d like it, correlation doesn’t equal causation. I wish someone would tell the media this little fact, especially since organic food causes autism.
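To see how easy it is to manufacture a “scientists say X causes Y” headline, here’s a toy sketch (entirely made-up numbers, plain Python) where two causally unrelated quantities correlate almost perfectly simply because both rise over time, the hidden confounder.

```python
import random

random.seed(0)

# Two series that both trend upward over 20 years, with no causal link:
# a stand-in for the sarcastic "organic food sales vs autism" correlation.
years = range(20)
organic_sales = [100 + 15 * t + random.gauss(0, 10) for t in years]
diagnoses = [50 + 8 * t + random.gauss(0, 5) for t in years]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(organic_sales, diagnoses)
print(f"correlation r = {r:.2f}")  # near-perfect, despite zero causation
```

Any two things that happen to trend together will correlate this way, which is exactly why sampling two groups and reporting a correlation tells you nothing about cause.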
Other fields have other issues. Take a look at health and fitness studies and spot who the participants were: generally, they are university students who need the money to buy tinned beans and beer. Not the most representative group of people and often they are mates with one of the researchers, all 4 of them. Not enough participants and a biased sample: not the way to do science. The harder sciences are better, but that isn’t to say that there aren’t limitations. Again, *this is why we have the methods section so that we can figure out the limitations of the study.*
The Science Community Still Won’t Listen to Women – Update
#2 – When I first wrote this I disagreed, but now I agree; see the video below. As someone with a penis, I have far too limited mileage on this issue. That is why it was only when a few prominent people spoke out about this issue that I realised science is no better than the rest of society. It hurts me to say that.
There is still a heavy bias toward men in senior positions at universities and research institutes, women get paid less, women are assumed to be less competent scientists, and apparently it is okay to ogle female scientists’ boobs… Any of these sound familiar to the rest of society? This is gradually changing, but you have to remember what age those senior people are and what that generation required of women (quit when they got married, etc). That old guard may have influence, but they’ll all be dead or retired soon, when their influence will be confined to the letters to the editor in the newspaper. After seeing the video below, especially the way the question was asked, I think it is clear that the expectations for women create barriers into and through careers in science (the racism issue is similar and is one I see as a big one). So it starts long before people get into science, then it continues through attrition.
Fast forward to 1:01:31 for the question and NDGT’s* answer (sorry, embed doesn’t allow time codes).
Scientists are meant to be thinkers, they are meant to be smart, they are meant to follow the evidence. They aren’t meant to behave like some cretin who hangs out on the men’s rights movement subreddit discussion. Speaking of which, watch science communicator Emily Graslie discuss the comments section of Youtube.
Here’s another from Thought Cafe and Dr. Renée Hložek.
Update: After the first photo of a black hole was published, women in STEM were back in the headlines, with people wanting to again marginalise women in STEM – not to mention how the media love to promote the “lone genius” when science is a team thing. Vox had a great article on it which included some great graphs from Pew Research.
It’s All About the Money
#1 – D’uh and misleading. Research costs money. *This is why we have the methods section, so that we can figure out the limitations of the study.* Money may bring in bias, but it doesn’t have to, nor does that bias have to be bad or wrong. Remember how I said above that science is an iterative process? Well, there is only so big a house of cards that can be built on a pile of bullshit before it falls down in a stinky mess. Money might fool a few people for a while (e.g. climate change denial) but science will ultimately win.
Ultimately, science is the best tool we have for finding out about our reality, making cool stuff, and blowing things up. Without it we wouldn’t be, this article wouldn’t be possible, we wouldn’t know what a Bill Nye smack down looks like. Sure, there is room for improvement, especially in the peer review process and funding arrangements, and science is flawed because it is done by humans, but science is bringing the awesome every day: we have to remember that fact.
After a recent discussion about gun myths, I realised that my last blog post hadn’t covered anywhere near enough of the myths that are floating around (this article will mainly be about US guns, but parallels from the resources and science cited can be drawn to other countries). This is obviously because stuff is much easier to make up than to research, just ask Bill “tides go in, tides go out” O’Reilly. One of the big problems with research in the US on guns is that the National Rifle Association has effectively lobbied to cut off federal funding for research and to stymie data collection and sharing on gun violence. As a result there is a lack of hard numbers, and research often tends to be limited in scope. Scope: get it? So like a lost rabbit wandering onto a shooting range, or a teenager wearing a hoody, it’s time to play dodge with some of these claims.
Myth: Guns make you safer, just like drinking a bit of alcohol makes you a better driver.
The myth I hear the most often is that guns make you safer; just like the death penalty is a great deterrent, surveillance cameras stop crime, and the internet is a good source of medical advice. The problem with this myth is that people like having a safety blanket to snuggle. What they don’t realise is that guns don’t make you safer: a gun is 4.5-5.5 times more likely to be used to do something stupid to someone you know and love than for protection.
I want to be clear here: there’s nothing wrong with going shooting at the range, or hunting vermin. The problem is thinking that you can use a gun for self-defence, when it actually makes the violence problem worse. That gun escalates the violence because people have it there: why not use it? Thus criminals enter an arms race and adopt a shoot-first policy.
As for carrying around a gun for self-defence, well, in 2011, nearly 10 times more people were shot and killed in arguments than by civilians trying to stop a crime. In one survey, nearly 1% of Americans reported using guns to defend themselves or their property. However, a closer look at their claims found that more than 50% involved using guns in an aggressive manner, such as escalating an argument. A Philadelphia study found that the odds of an assault victim being shot were 4.5 times greater if they carried a gun. Their odds of being killed were 4.2 times greater.
It is even worse for women. In 2010, nearly 6 times more women were shot by husbands, boyfriends, and ex-partners than murdered by male strangers. A woman’s chances of being killed by her abuser increase more than 7 times if he has access to a gun, and that access could be the woman keeping one around just in case her attacker needs it. One US study found that women in states with higher gun ownership rates were 4.9 times more likely to be murdered by a gun than women in states with lower gun ownership rates; funny that.
There is also the action hero delusion that often gets trotted out when talking about guns for self-defence. The idea is that everyone is a good guy, so give them a gun and you have a bunch of action heroes ready to fight off the forces of evil. This has worked so well that all governments are thinking of getting rid of the military….
Mass shootings stopped by armed civilians in the past 30 years: 0
Chances that a shooting at an ER involves guns taken from guards: 1 in 5
I’ve seen several examples cited of “citizens” shooting someone who looked intent on killing everyone they could (with a gun…). But in every instance the “citizen” was actually an off-duty police officer, a person in law enforcement, or someone in the military. In other words, the people who stop mass shootings or bad guys with guns are trained professionals.
There have also been a few studies claiming X million lawful crime preventions per year, therefore guns must be good; notably by researchers Lott and Kleck. To say that their research is flawed is like saying Stephen King has sold a few books. Lott’s work has been refuted for extrapolating flawed data. Kleck’s research has similarly been refuted by many peer-reviewed articles.
Myth: Guns don’t kill people, people kill people, quite often with a gun, because punching someone to death is hard work.
If this myth were true, we wouldn’t send troops to war with weapons. I get where people are coming from with this myth, because the gun itself is an inanimate object that is only as good or bad as the person using it. Yes, I did just quote the movie Shane: thanks for noticing. But here’s the thing: in a society we are more than just a bunch of individuals, we are a great big bell curve of complexity. So when you actually study the entire population you find that people with more guns tend to kill more people, with guns. In the US, states with the highest gun ownership rates have a gun murder rate 114% higher than those with the lowest gun ownership rates. Gun death rates also tend to be higher in states with higher rates of gun ownership, and generally lower in states with restrictions such as firearm-type limits or safe-storage requirements.
Gun deaths graph: The three states with the highest rate of gun ownership (MT, AK, WY) have a gun death rate of 17.8 per 100,000, over 4 times that of the three lowest-ownership states (HI, NJ, MA; 4.0 gun deaths per 100,000).
The thing is that despite guns being inanimate objects, they affect the user’s psyche. It’s like waking up one morning with a larger penis or bigger boobs: you not only want to show them off, you act differently as a result. Studies confirm this change in behaviour. Drivers who carry guns are 44% more likely than unarmed drivers to make obscene gestures at other motorists, and 77% more likely to follow them aggressively. Among Texans convicted of serious crimes, those with concealed-handgun licenses were sentenced for threatening someone with a firearm 4.8 times more often than those without licenses. In US states with Stand Your Ground and other laws making it easier to shoot in self-defence, those policies have been linked to a 7 to 10% increase in homicides.
People also like to red-herring the argument against guns by pretending that video games or mental health are the real problem. The NRA tried to blame video games after the Newtown shootings. Of course, if that relationship existed we’d be able to see it by looking at gun ownership versus video game playing, say by comparing the USA to Japan.
Myth: They’re coming for your guns to stop our freedom and tyranny and democide and Alex Jones said so and aliens made me do it!
As I stated above, the statistics on guns and gun violence are hazy. No one knows the exact number of guns in America, but it’s clear there’s no practical way to round them all up (never mind that no one in Washington is proposing this). Those “freedom” loving gun owners – all 80 million of them – have the evil government out-gunned by a factor of around 79 to 1. If the government were coming for the guns, you’d think it would have done so before being this grossly out-gunned.
Yes, 80 million gun owners is a minority! I find it interesting that from 1989 to 2000 gun ownership declined from 46% to 32%. Ownership then rebounded to hover between 34% and 43% from 2000 to 2011 (notably, the 2007 high point came after the Virginia Tech shooting, which the NRA campaigned around heavily), which explains why the decline didn’t continue. Now compare those ownership rates to the recent US Bureau of Justice Statistics report that sums up the rates of gun violence. You can clearly see a decline in gun violence from 1993 to 2000, followed by a plateau that has pretty much held since. This is confirmed by other studies. This is the important take-home point: all the research shows that violence, and gun violence, is on the decline, so the idea that people need a gun for protection is becoming more and more ridiculous. It mirrors the global decline in violence and the trends seen in countries like Australia (more Aussie stats here). On a side note, in the last lot of statistics you can see that the more female, educated, non-white, and liberal you are, the less likely you are to own a gun.
So scare campaigns may boost gun sales for a while, but overall most people don’t want or need a gun. The long-term trend has nothing to do with the government coming for the guns and everything to do with people realising they don’t need one, and would prefer to read a good book or watch a movie instead of going to the range.
The simple fact is that more guns in a society is the best predictor of gun deaths, so it is time to rethink the reasons for owning a gun, especially if that reason is in case you have to John McClane a situation.
Born to write? Born to be an athlete? Born to be a rocket scientist? People love to talk about “natural” ability or talent as the be-all and end-all of achievement. Since I actually own a genetics textbook – it props up my DVD collection on the shelf – and once watched someone do manual labour, I feel qualified to comment on the talent vs. work debate.
Genetics is a big, complicated topic, so I’m going to provide a facile overview of it. Genetics is the thing that means some people have higher baselines, respond better to training/learning, and are likely to achieve more (see this and read this for sports examples). For others the opposite is true: they have low baselines, don’t respond well to training/learning, and are likely to suck no matter what they do. There isn’t much you can do about your genetics, unless you happen to have a time machine and can play matchmaker to get better parents.
But that isn’t to say that you shouldn’t try to get good at stuff. Until you are tested and start training, you don’t really know what your “ability” is. And even if you continue to suck, you will suck less than you did before, which means you will be better than those around you who didn’t even try. Take an example from sports – because people actually do science on athletes; the arts talk about their feelings too much – athletes tend to live longer than average because they are more likely to be fitter, which lowers cardiovascular mortality. You don’t get fit sitting on a couch, watching TV, snacking on corn chips, in your underwear: you have to train.
So let’s take this into the writing field. You may have been born with a massive brain, nimble fingers, and an imagination that rivals college students tripping on acid, but that doesn’t mean much if you never learn to read or write, are too poor to have access to writing materials, or lack the persistence to share that writing with the world. All that talent and ability counts for nothing if you don’t do something with it. You have to train. The difference between the talented individual and the untalented individual can often just be a lot of hard work by the untalented. I mean, who has sold more books: James Patterson or any of the Booker Prize winners?*
But let’s not get carried away. We have to acknowledge that any “talent” is a GxE (genetics-by-environment) interaction. Genetics, or innate ability, is still a factor we can’t dismiss, but so is the environment. Skill development and training will come more easily, progress more quickly, and possibly go further for some, but that isn’t an excuse for not doing the hard work.
There is a general rule in arguments: don’t argue with stupid people, they drag you down to their level and beat you with experience. That is pretty much the problem scientists and experts have when debating anti-science proponents – such as creationists, anti-vaccinators, anti-GM campaigners, climate change deniers, etc. Yet Bill Nye the Science Guy decided that, in the interest of science and education, he would debate a creationist.
The debate started with Bill Nye and Ken Ham each giving a 5-minute opening piece. Then Ken went into his 50-minute argument, which is when my cushion really started to earn its keep protecting my desk from damage.
I really find it hard to fathom how anyone can be credulous of Ham’s statements. In his 50 minutes he used all sorts of logical fallacies, most notably his videos of “creationist scientists” as arguments from authority. But it wasn’t this that really got the lump on my forehead rising, it was his use of “evidence” that simultaneously refuted his own argument. One example was the phylogenetic tree for dogs. Ham argued that the rise of Canis lupus familiaris from a wolf (yeah, just one; let’s let that one go through to the keeper) was what you would see from biblical predictions of dogs speciating after the global flood 4,000 years ago. Just one problem. Teeny tiny. The figure showed dogs evolving from a group of wolf ancestors over the course of 14,000–15,000 years.
He didn’t just do this once, he did it repeatedly. Another example arose when he was talking up one of his creationist pals who helped design a satellite (or something; I didn’t really care because it was irrelevant). He used the example of how scientists had been debating how old the universe was: they couldn’t agree on the age. The part he left out about that particular debate was that the age of the universe was somewhere around 13.8 billion years (± 37 million years), and the scientists had a bunch of data whose error margins they were trying to account for. The debate was about the difference in the confidence range (or error margin) between the Planck satellite measurements and the Wilkinson Microwave Anisotropy Probe measurements. That error margin alone is over 6,000 times greater than the age Ham claims for the Earth, and the universe’s age is over 2 million times Ham’s figure, yet he used this example as if it gave credence to his claims.
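For a sense of scale, here is the back-of-the-envelope arithmetic behind those ratios (using the Planck figure of 13.8 billion years ± 37 million, against the roughly 6,000-year young-Earth chronology):

```latex
\frac{3.7\times10^{7}\,\text{yr (error margin)}}{6\times10^{3}\,\text{yr (Ham's age of the Earth)}} \approx 6{,}200
\qquad
\frac{1.38\times10^{10}\,\text{yr (age of the universe)}}{6\times10^{3}\,\text{yr}} \approx 2.3\times10^{6}
```

In other words, even the *uncertainty* in the measurement dwarfs Ham’s entire timeline.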
Now Nye did his best in his 50 minutes to show not only that Ham’s claims were flawed, but also how evidence and scientific observation and prediction work. Others have claimed, and I agree to an extent, that Nye’s mistake was to try to cover too much ground. Had he been talking to a receptive audience, he would have destroyed Ham and had the crowd eating out of his hand; at a creationist museum, in front of a bunch of science deniers, it would come across as too much information and too confusing. That said, Nye’s last couple of minutes pretty much killed the entire debate: trees, rocks, the size of the universe, and the distance to the stars versus the limit of how fast light can travel, all showing that the Earth and the universe are much, much older.
The first rebuttal saw Ham carrying on about “you weren’t there so you don’t know.” Brian Dunning had a great take on this particular argument:
There is a rumor that Bill Nye @TheScienceGuy debated evolution with Ken Ham. Not true. It did not happen, because you weren’t there.
In this first rebuttal, Ham again used evidence that rebutted his own claims, especially when talking about radiocarbon dating. Showing that measurements have error margins, or can be somewhat imprecise, doesn’t negate the fact that the measurements are still many orders of magnitude outside the age of the Earth claimed by Ham. Then he moved on to saying that the bible is right and everything else is wrong (let’s just ignore that the bible isn’t even consistent with itself, let alone that it is a translation of a translation, so literal interpretation isn’t supported by biblical scholars).
Nye then rebutted Ham’s statements. His classic put-down was of the claim that every animal, and humans, were vegetarian until they got off the ark: lions’ teeth aren’t really made for broccoli.* Ba-zing!
Next Ham tried to point out that creationism isn’t his model (then he blamed secularists for scientists). This is true: other nutters came up with this crap. But Ham tried to pretend that “scientists” came up with the various creation models (NB: just because a scientist said something doesn’t make it science or scientific). Then he talked about species and kinds and how Nye was confusing what a kind was. Easy to do when the idea of a kind is bullshit and unsupported by any actual science.
Nye then tore apart the claims about the rise of species from kinds using the basic math involved. He also called bullshit on the ship-building skills of ancient desert people. The main point of this rebuttal was that Ham hadn’t adequately addressed Nye’s points, and that Ham’s claims aren’t supported by the majority of religious people, let alone scientists.
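That basic math runs roughly as follows (the figures commonly quoted in the debate – about 7,000 “kinds” on the ark and roughly 16 million species today – are approximations, not exact counts):

```latex
\frac{1.6\times10^{7}\ \text{species} - 7\times10^{3}\ \text{kinds}}{4{,}000\ \text{yr} \times 365\ \text{days/yr}} \approx 11\ \text{new species per day}
```

Eleven brand-new species appearing every single day, for four thousand years straight, is a rate of speciation no one has ever observed.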
My desk and forehead had had enough by this stage, so I didn’t watch the Q&A section, but it can be viewed here.
The point I wanted to make is that Ham had a huge advantage in this discussion. I’m not talking about the home venue, nor the credulous crowd; I’m talking about the lack of need for evidence. All Ham had to do, and pretty much all he did, was seed doubt in science and then declare “creationism wins” (which might as well be “God did it”). This is the problem with any debate against anti-science: the scientists have to prove their case with evidence and logical reasoning; the anti-science side only has to sow some doubt. And that doubt can range from legitimate claims to flat-out lies; it doesn’t matter. So Nye shouldn’t have taken the debate.
But Nye was right to take the debate.
Hang-on. Have you hit your head against your desk a few too many times during that debate?
No. Bill Nye is a well-known and respected science communicator. He went into the belly of the beast to stand in the echo chamber and sow some doubt (how’s that for a metaphor-fest?). As he stated himself, Nye knows that America (and the world, but let’s allow him his patriotism) needs science and innovation for the future of society. Creationism and other anti-science nonsense undermine this. If no one challenges the group-think and echo chamber of the creationists (et al.) then they will continue to be misled and misinformed by people like Ken Ham. You can’t have someone reject evolution yet rely on germ theory for modern medicine. You can’t have someone reject radiocarbon dating yet use medical imaging. That is incompatible, that is a rejection of reality, and it leads to stupid stuff happening that curbs the development of new technologies and advancements to society.
Other opinions on who won: Shane proposes that Nye needed to pick a couple of points to hammer home. This feeds into science communication research that shows you can get distracted from the main narrative with too many points.
Update: It is clear that many of Ham’s supporters were not listening to Bill Nye and are wilfully ignorant. This Buzzfeed article (yeah, I know, Buzzfeed) brings up a lot of the points that Nye addressed, explained clearly and simply, showing they didn’t listen to Nye, and slept through school.
Update: This article makes a nice statement that ties into some of my points about why Nye took the debate. To quote:
It brought new attention to YEC (Young Earth Creationism) to exactly the people we need to see it – the large swath of Christian and other religious parents who think of Intelligent Design or Guided Evolution or some other pseudo-scientific concept when they imagine “teaching the controversy”. These people are embarrassed by people like Ken Ham. They know the earth isn’t 6000 years old, and they understand just how impossible it is to square that belief with observable phenomena.
Update: I quoted Brian Dunning above and he wrote an article for Skeptoid about not debating anti-science people. I agree and disagree with his points as you will see from what I’ve written here and what Brian has written in his article. We can’t just preach to the choir, but we can’t provide legitimacy to nonsense either.
Update: The ever awesome Potholer54 just posted a video on one point about evolution and Ken Ham’s rebuttal of his own arguments. Worth watching.
* Okay, not the best point to make, as teeth aren’t definitive of diet, but if the comment is viewed as being representative of animal physiology overall, then it is a very valid putdown of the vegetarian claims.