Cool infographic comparing the destructive power of sci-fi weapons from Foundation Digital and Fat Wallet. And yes, the Smart Disk is probably more accurately called a Smart Chakram (if I learnt anything from watching Xena Warrior Princess).
Do you love the smell of books?
Do you prefer the feel of paper?
Do you feel slightly superior to others because you paid for the hardcover?
Do you grasp at any excuse to deride e-books and the people who read them?
Well, I have found the article for you!
Recently on Mental Floss an article entitled “5 Reasons Physical Books Might Be Better Than E-Books” sought to comfort snooty readers who wanted ammunition to fling at e-book readers. In the proud tradition of deriding any new technology as bad (see e-books, e-cars, driverless cars, etc.), this article introduces us to some research that has been taken wonderfully out of context to serve the article’s argument. Let’s dig in.
Though e-book readers have become a more common sight around town, traditional books still have their evangelists. According to The New York Times, e-book sales have been falling in 2015. Print definitely isn’t dead. In fact, according to some research, it may actually be a better choice for some readers. While scientists are still trying to tease out exactly how digital reading affects us differently, here are five ways e-books might be inferior to their dead-tree cousins.
When deriding things it is always best to reference another article that derides the same thing. In this case the article references the wonderfully misleading NYT piece on e-book sales slipping. Pity that the sales didn’t slip… That’s right, the NYT misrepresented a slowing in e-book sales growth as a drop in sales. And did they mention why readers were stating a preference for paper? Yes. Hidden in the article is a little quote about how publishers had been protecting their paper sales by inflating e-book prices. Now, my economics is a tad rusty, but I’m pretty sure making something more expensive when there are direct substitutes on offer results in a decrease in sales of that item and an increase in sales of the substitute. At least, that’s what I’ve heard…
1. E-BOOKS CAN REDUCE READING COMPREHENSION.
In a study of middle schoolers, West Chester University researchers found that students who read on iPads had lower reading comprehension than when they read traditional printed books. They discovered that the kids sometimes skipped text in favor of interactive features in the e-books, suggesting that certain multimedia in children’s e-books can be detrimental to the practice of reading itself. However, the researchers noted that some interactive features in e-books are designed to enhance comprehension, and that those might be more helpful than game-type interactive graphics.
This is a fantastic study in how multitasking is terrible for concentration and thus impacts reading comprehension. iPads have all sorts of cool stuff on them, including little notifications telling you that your friend just liked your latest picture of your meal. And building those distractions into the book being read sounds like a great idea! What this study doesn’t do is support the idea that e-books reduce reading comprehension.
2. YOUNG KIDS CAN GET DISTRACTED BY E-BOOKS.
Similar results were found by a small study by the Joan Ganz Cooney Center that consisted of 32 kids reading e-books and print books with their parents. It found that “enhanced” e-books might be distracting. Kids who read enhanced e-books—ones with interactive, multimedia experiences—were more engaged with them physically, but in the end they remembered fewer narrative details than those who read print books or basic e-books [PDF].
Don’t read the link. Don’t read the link. You read the link, didn’t you? Leaving aside the tiny study size for a moment (a point the study authors acknowledge), the study itself supports the points I made above about being distracted whilst reading. And if you look through the study you’ll see a great little chart showing that reading comprehension – expressed as story details recalled – was actually better with basic e-books than with either print books or enhanced e-books.
The findings of the study were literally stated as:
The enhanced e-book was less effective than the print and basic e-book in supporting the benefits of co-reading because it prompted more non-content related interactions.
Odd that the “e-books are bad” article failed to highlight this finding…
3. YOU REMEMBER LESS ABOUT A BOOK’S TIMELINE.
Another study of adults also found that e-books can be hard to absorb. The researchers asked 25 people to read a 28-page story on a Kindle and 25 to read the story in paperback, then asked the readers to put 14 events from the story in chronological order. Those who read the story on a Kindle performed worse on the chronology test than the book readers, though they performed about the same as print readers in other tests. Earlier research by the same scholars, from Stavanger University in Norway, found that Norwegian 10th graders also remembered more about texts if they read them in print rather than on a computer screen [PDF].
Finally we come to a study on actual e-books on an actual e-reader versus their dead-tree counterparts. Of course I’m again blown away by the sample size of the study, a massive 50 people. That should easily extrapolate to the rest of humankind. The linked article doesn’t give us much information, but I found a better one, and it has this summary:
In most respects, there was no significant difference between the Kindle readers and the paper readers: the emotional measures were roughly the same, and both groups of readers responded almost equally to questions dealing with the setting of the story, the characters and other plot details. But, the Kindle readers scored significantly lower on questions about when events in the story occurred. They also performed almost twice as poorly when asked to arrange 14 plot points in the correct sequence.
I’d link to the original paper, but it is behind a paywall. Suffice it to say that the error margins were pretty big (even the paper readers got 34% of the plot points in the wrong order). And this was a short story, something that shouldn’t be that difficult for any reader. So this probably says as much about the story as anything. They’d need far more stories and participants to get a good idea of what is going on. But I will concede that reading on paper vs e-reader vs screen is definitely a different experience and has an influence. Whether that influence is positive, negative, or just different needs more research.
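To put that sample size in perspective, here is a back-of-envelope power calculation (my own sketch, not from the study): with only 25 readers per group and a worst-case baseline proportion of 0.5, a two-proportion comparison at the usual statistical thresholds can only reliably detect enormous differences.

```python
import math

# Rough minimum detectable difference between two groups of 25 readers,
# for a two-proportion comparison at alpha=0.05 with 80% power.
# The worst-case baseline proportion of 0.5 is my assumption, not a
# figure from the study.
z_alpha = 1.96   # two-sided, alpha = 0.05
z_beta = 0.84    # power = 0.80
n = 25           # readers per group
p = 0.5          # assumed baseline proportion

mde = (z_alpha + z_beta) * math.sqrt(2 * p * (1 - p) / n)
print(f"Minimum detectable difference: ~{mde * 100:.0f} percentage points")
```

In other words, groups this small can only reliably detect a difference of roughly 40 percentage points, which is exactly why a single 50-person study shouldn’t be extrapolated to the rest of humankind.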
Interestingly the study of reading PDF texts on a screen vs paper texts in high school students showed why scrolling is a terrible way to read anything. Scroll down to read more about PDFs sucking.
4. THEY’RE NOT GREAT AS TEXTBOOKS.
While e-book textbooks are often cheaper (and easier to carry) than traditional door-stop textbooks, college students often don’t prefer them. In some surveys of college kids, the majority of students have reported preferring print books. However, a 2012 study from the UK’s National Literacy Trust of kids ages 8 to 16 found that more than 50 percent of children reported preferring screen reading [PDF].
It is odd to start a point and then go on to disprove it. E-book textbooks being cheaper, easier to carry, and in some surveys preferred by the majority of respondents, seems to me to be the opposite of “not great”. The claim that students prefer paper textbooks comes from a survey of 527 students, yet it is immediately refuted by the UK survey of 34,910 students. I wonder which one is more representative of how students feel about textbooks?
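A quick sketch of why that difference in survey size matters (my own back-of-envelope numbers, assuming simple random sampling and a worst-case 50/50 split):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a survey proportion,
    assuming simple random sampling (an idealised assumption)."""
    return z * math.sqrt(p * (1 - p) / n)

# Compare the two survey sizes mentioned above.
for n in (527, 34910):
    print(f"n={n}: about ±{margin_of_error(n) * 100:.1f} percentage points")
```

The 527-student survey carries a margin of error around ±4 percentage points at best, while the 34,910-student survey sits near ±0.5, before even considering which sample better represents students in general.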
In the comments of the Mental Floss article, someone made a good point in regard to the format of textbooks. Oftentimes the textbooks are PDFs, which brings us back to the point about scrolling, and adds the problem with taking notes. Clearly the format of the e-book plays a big part in how people feel about them.
5. THEY’RE TIRING.
Staring at a lit screen can be tiring for the eyes and the brain. A 2005 study from Sweden found that reading digitally required a higher cognitive workload than reading on paper. Furthermore, staring at LED screens at night can disrupt sleep patterns. A 2014 Harvard study found that people who used e-readers with LED screens at night slept worse and were more tired the next day. So, if you’re going to go for an e-book, go for one without the backlight.
Now let us talk about how bad e-books are for your brain… Sorry, did I say e-books when I meant LED screens like your iPad and computer? Silly me. Having bright light, especially from white-background screens, shining in your eyes at night isn’t a good thing. But that is about as related to e-books as X-Factor is to talented singers. So the message about changing your screen setup for night viewing only really applies to readers using a backlit screen.
And now that we are at the end of the article, let’s throw in some information for the pretence of balance in the hopes you will ignore the headline and main article points:
BUT DON’T THROW AWAY YOUR E-READER JUST YET.
However, all this may not mean that reading on a Kindle is really going to melt your brain. For instance, reading an e-book on a computer is a much different experience than reading on a Kindle, which is specifically designed for consuming books. So, too, is playing with an interactive e-book on an iPad, compared to using a simpler e-book device that only presents the text, with no opportunities to click away into digital distractions.
This really does appear to be information that would have been better presented in the context of the “e-books are evil” points above, doesn’t it? Throwing in this sort of context at the end rather than in the discussion of the study findings is a cheap tactic, a ploy that sees important information left until after you have already formed your opinion on a subject, or just plain stopped reading the article. This information has far less chance of being retained than the other points made earlier in the article, thus the article creates the bias it was after (deliberately or otherwise).
And some studies have found that part of the difference between the way people absorb information from e-books versus paper might be due to approaching e-books differently—in one test, participants didn’t regulate their study time with digital books like they did with paper texts, leading to worse performances. It’s possible that our expectations of e-book reading—as well as the different designs of the digital reading experience on a computer or iPad or Kindle—might affect how we approach the text and how much effort we put into studying them. As generations of e-book readers evolve, and people become more accustomed to the idea of sitting down with a digital textbook, these factors could change—for better or for worse.
These are all good points, again made at the end of the article rather than at least being hinted at throughout. And unlike the main points in the article, these are unreferenced. Are these points from the studies already referenced (some are) or from other studies that weren’t worth mentioning? If the former, you would expect these points to have been raised earlier in the article in the proper context; if the latter, this feels like an attempt to downplay the statements as less important than the referenced points above. Either way we are left with the sentiment “change is scary” rather than “change is change”.
Hopefully this breakdown of the Mental Floss article shows just how disingenuous many of these anti-technology articles are, especially the “e-books are evil” articles. I’m not trying to say that e-books are what everyone should be reading, or that our forests are now saved from Dan Brown. There is clear evidence that our changing technology is changing the way we read and absorb information, and this transition period is still a learning phase as to how and if we will change our reading preferences. But negative preconceived ideas about e-books (or technology) don’t help in communicating about the change that is happening.
In a recent post I discussed some points about how to spot anti-science nonsense. Pick a subject, any subject, and there will be someone – probably Alex Jones – making an outrageous claim about it. But don’t worry, they’ll solve the problem with items available from their reasonably priced store: $1440 per litre is a bargain price for something you don’t need and doesn’t do as claimed.
Obviously scammers are gonna scam, and anti-scientists are going to not-science. The thing is, once you understand that something is wrong you have some responsibility to make sure the misinformation doesn’t spread like a leaky diaper. With great knowledge comes great responsibility. Which means you have to start discussing science with science deniers. Don’t forget to place a cushion on your desk and wear padded gloves.
Despite having the advantage of science/facts in the argument against science deniers, you have the decided disadvantage that you can’t just make stuff up (despite how tempting and financially rewarding it is). In fact you have to be better informed not only about your side of the argument but also about the science denier’s arguments.
Sounds odd, doesn’t it? You have to learn nonsense to talk about science. That makes as much sense as being pro-life and pro-death penalty. Bear with me here. Take this example of climate change denier Bret Stephens arguing against Bill Maher on Real Time:
Bret sounds convincing, doesn’t he? Bret sure thinks so. He makes some vague references to headlines from the 1930s and 1970s as dismissals of current concerns about oceans. Then he references an economic study on environmental policy priorities, all whilst looking very smug and sure of himself. These statements leave Bill at a stumbling point because he has to admit he doesn’t know what the hell Bret is talking about. The video edited out the pant-less victory lap Bret did of the studio, complete with crotch gyrations in Bill’s face, as he screamed “Take that liberal media!”
Now it isn’t a bad thing to admit you don’t know stuff. Nobody knows everything; it is arrogant to act like you do. Arrogance is of course the result of being surrounded by Knowitalls, invisible mythical creatures that look like a cross between a unicorn and Bill O’Reilly. Anyway, I’m glad Bill Maher admitted he didn’t know about the study; if only he would do the same with his position on vaccination and GM/GMOs. But the admission did make him appear less convincing as he couldn’t directly rebut the points made.
And here is why you need to know what the anti-science people “know”. Take the first points Bret makes about the oceans dying. The two dates he mentions are actually references to points unrelated to the issue of climate change causing ocean acidification. The first date was a reference to the Overfishing Conference in 1936 about whaling and fishery management (as far as I can ascertain), issues that were addressed by introducing catch sizes, fishing licenses, and the phasing out of whaling. So Bret is trying to justify inaction on climate change and its damage to the oceans by referencing an environmental concern that was acted upon. What a great argument!
His second date was a reference to the 1975 Newsweek and New York Times (and others) articles about global cooling. This is a well-worn climate change denier talking point/myth that has been thoroughly debunked yet has evolved beyond a PRATT point and become a zombie point. Some myths just won’t die and are constantly in search of brains to infect/affect.
We then hear Bret reference a Bjorn Lomborg study on best use of resources and where climate change ranked. Very convincing, aside from the fact that it was complete and utter nonsense. See, Bjorn doesn’t accept the actual risks and actual current changes that have occurred due to climate change. So his entire analysis and argument started off from a completely flawed position and was thus doomed to fail to draw any worthwhile conclusions. Actual experts have torn apart his work, particularly his “conference”, here, here and here. But Bill didn’t know this, thus the points made stand unchallenged and as a sort of “valid” evidence.
And this is why it is important to know your enemy. If you know the arguments they are likely to raise, then you can have rebuttals ready. In the case of citing Lomborg’s work you can point out the failings before people have a chance to take it seriously. In the case of old magazine articles, you can point out you only read them for the pictures. But it means you don’t just have to know the science, you have to know the anti-science.
It is also worth noting that Bret reeled off a string of statements that were essentially nonsense dressed up as facts. That is a tried and trusted debating tactic known as the Gish Gallop, and it is very hard to argue against. It takes a lot more energy to redress the nonsense than it takes to state it, not to mention the time wasted not making your own points. It also helps that while science has to have facts on its side, anti-science can make it all up on the spot.
Of course the obvious thing to say here is that the anti-science movement often don’t see themselves as anti-science and will use similar tactics. They will familiarise themselves with the science in order to dismiss it. This is possibly the most annoying part of science communication: those embedded in anti-science positions aren’t ignorant of the facts, they are wilfully ignorant of their fact-ness.
Just recently I was asked a question on one of my climate change posts. The question, whilst not about climate change nor climate science, was about similar anti-science nonsense that acts to confuse and befuddle those who aren’t familiar with the field. The comment in full:
I like your writing, I wish more would understand your logic when they spout facts and relationships. If you have time please, an article (though imperfect) comments,
“Bacteria…and plants use a seven-step metabolic route known as the shikimate pathway for the biosynthesis of aromatic amino acids; glyphosate inhibits this pathway, causing the plant to die…. Monsanto says humans don’t have this shikimate pathway, so it’s… safe……however, that our gut bacteria has this pathway, and these bacteria supply our body with crucial amino acids. Roundup …kills bacteria, allowing pathogens to grow; interferes with the synthesis of amino acids including methionine, which leads to shortages in critical neurotransmitters and folate; chelates (removes) important minerals like iron, cobalt…”
I would love to know your take on that possible cause and affect.
Thank You for your Time !
Dennis has asked how likely it is that this sciency sounding article is correct. The short answer is that you are more likely to get this week’s lottery numbers from one of these articles than any reliable facts. How can I be so dismissive? Well, the thing is I’m not being dismissive, it just sounds like that because my skeptical science eye has spotted many holes in the quote and article. So let us go through them like a rugby player at an all-you-can-eat buffet.
The first thing to note is the source of the article and the “expert” cited within. There are some tell-tale signs that a webpage may be unreliable, such as when they use terms such as “truth”, “natural”, “alt” as a prefix to any word, and “health” as their names. Health Impact News isn’t the giveaway here, it could be a legitimate source of information. In this case the giveaway is the byline “News that impacts your health, that other media sources may censor.” See: it’s a conspiracy!!! (Font = sarcasm) And conspiracy claims are always reliable (/sarcasm).
If you check out Web of Trust you can see that Health Impact News perpetuates a number of dubious and fraudulent claims, such as vaccine myths from the anti-vaxxer nutters. Which means that the slant the website is running is one that doesn’t respect scientific evidence. Not that this alone is enough to dismiss the claims.
The other source is the “expert” cited, one Stephanie Seneff. To say that this computer scientist is out of her depth in the field of health, genetics and chemistry is like suggesting Justin Bieber’s music is appealing to people with taste. She makes all sorts of wacky and unfounded claims about herbicides, GMOs and Monsanto, so calling her an expert or citing her work should get you laughed out of any room you are standing in.
What the article claims is really the crux of the dismissal. If someone claimed to have seen bigfoot doing lines of blow with someone other than Charlie Sheen, we’d be immediately suspicious since we know that greater than 90% of all cocaine is snorted in the company of Sheen. Similarly when someone claims that the most extensively tested herbicide of all time, the safest agrichemical ever made, the most widely used agrichemical on the market, is responsible for [insert health consequence here, in this case autism] then you should be a tad suspicious.
Let’s ignore the fact about the extensive safety testing. Let’s also ignore the fact that autism seems to be the disease du jour of the alt-health fear-mongers, linked to everything from GMOs to vaccines. Let’s also ignore the fact that agrichemical safety and efficacy have virtually nothing to do with the safety and efficacy of individual GMOs (GM and GE being another kettle of fish entirely), despite what the article tries to imply. Let’s also ignore that glyphosate binds tightly to organic matter and is rapidly broken down in the environment, so actual levels consumed will be negligible, and those amounts won’t be doing anything in the digestive tract. Let’s just assume that glyphosate is getting into our bodies and causing damage at huge levels: what evidence is there to suggest it is glyphosate and not any other agrichemical or environmental toxin that has increased during the same time period (e.g. coal pollution)? What evidence is there to suggest there has actually been any rise in maladies that aren’t as a result of something else (because everyone knows that fat people got fat whilst only eating celery sticks)?
The reference material or evidence.
Big claims require even bigger evidence. Solid evidence. One thing I hate about news sites is that they so often make oblique references to a study that may or may not have been published in a reputable journal, rather than just link straight to the journal and paper in question. In this case there is no link to a journal, reputable or not, just links to other unreliable sites such as The Mind Unleashed and The Alliance of Natural Health USA webpage, as well as a YouTube video. So far I’m underwhelmed.
Remember, this article is reporting on Seneff’s claim that half of all people will be autistic by 2025 thanks to herbicides. Half!! This is a condition that has a median occurrence of 62 cases per 10,000 people. The spectacular rise in autism that we should expect in the next decade for a herbicide that has been in wide use for many decades already would require a bit more evidence than “well, we reckon.” Seneff claimed a correlation between glyphosate use and a rise in autism. She clearly didn’t compare the rise in autism to organic food.
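To see just how extraordinary that claim is, here is the back-of-envelope arithmetic (the 2014 baseline year and the 62-per-10,000 starting prevalence are my assumptions for illustration):

```python
# Back-of-envelope check of the "half of all people autistic by 2025" claim.
# Assumptions (mine, for illustration): a baseline prevalence of 62 per
# 10,000 around 2014, and a claimed target of 50% of the population by 2025.
baseline = 62 / 10_000        # 0.62% prevalence
target = 0.5                  # 50% of the population
years = 2025 - 2014           # 11 years

fold_increase = target / baseline
annual_growth = fold_increase ** (1 / years)

print(f"Required fold increase: {fold_increase:.0f}x")
print(f"Implied annual growth:  {(annual_growth - 1) * 100:.0f}% per year")
```

That is roughly an 81-fold increase, or prevalence compounding at nearly 50% every single year, for a herbicide that has already been in wide use for decades. “Well, we reckon” doesn’t quite cover it.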
Well, if you dig further into the reference of the reference (seriously, how hard is it to cite your sources properly!?!) you will find an actual journal paper by Seneff and Samsel in a journal called Entropy. Have you heard of Entropy and is it recognised as a go-to journal for science on the topic of, well, anything? Nope. And what about the study itself which claims that just about every malady you can think of is linked to glyphosate, what evidence does it present? Well pretty much none. To quote this article:
The evidence for these mechanisms, and their impact on human health, is all but nonexistent. The authors base their claim about CYP enzymes on two studies, one of liver cells and one of placental cells, which report endocrine disruptions when those cells are exposed to glyphosate. Neither study is CYP-specific (The effect of pesticides on CYP enzymes, by contrast, has been studied specifically.) As for the gut bacteria, there appears to be no research at all on glyphosate’s effect on them.
Samsel and Seneff didn’t conduct any studies. They don’t seem interested in the levels at which humans are actually exposed to glyphosate. They simply speculated that, if anyone, anywhere, found that glyphosate could do anything in any organism, that thing must also be happening in humans everywhere. I’d like to meet the “peers” who “reviewed” this.
Yep. That is a rebuttal from a Huffington Post article. Let that sink in for a moment. Even Huff Post don’t want to touch Seneff’s claims with a ten foot pole.
So far we have found that the suspicions about this article are well founded. The site is not reliable, the “expert” cited is not reliable, the sources cited are not reliable, the evidence cited is essentially non-existent, the claims made are not particularly plausible, and there is no evidence to support the claims. But this leaves us with a problem: short of hours of research on each point made, how do I confirm that these people are lying to me on the internet? Because you should be able to trust the internet, right?
The average person can’t be expected to be an expert in all topics, nor be expected to have the time to track down and read every piece of science to confirm an article is accurate. But there are people on the internet who have their favourite topics that they will write (or make videos) about. This means you just have to search for rebuttals to articles. Google can be handy for this if you are familiar with how to weed out the rubbish results. Joining forums or following experts in various fields can help as well (e.g. Skeptics Stack Exchange, Science Based Medicine). There are also webtools available to help find good information. I’ve already mentioned Web of Trust above, but there are many others.
rbutr is one such tool that can help with finding rebuttal articles (disclaimer: I am involved with rbutr on social media). In the case of the Health Impact News article there were two linked rebuttals (I’ll be adding this one as well), here and here. This really helps to figure out whether the arguments presented are valid (although in this case a basic application of logic should suffice). But there were more rebuttals linked to the Seneff journal article, 7 of them: here, here, here, here, here, here, and here. These links allow people to easily see the arguments laid bare.
Thus we can now see that the article can be dismissed as rubbish. A fair bit of work to get there, but in the end we did it (~25 references and 1600 words later). Makes installing rbutr and Web of Trust in your web browser look like a great option, doesn’t it!
In the information age ignorance is a choice. But informing yourself isn’t as easy as just reading articles on subjects. Using a critical eye, applying logic, and accessing quality information has to be done to avoid being misinformed. When all is said and done, evidence wins. And cat videos. And dog videos. In fact any video featuring a cute animal wins.
I’ve been quite busy recently. There is the usual writing going on, but I also have a few articles in the works, another rugrat in the works, and I’ve been interviewed for the Skeptically Challenged Podcast.
In the podcast, Ross, Ketan and I discuss a range of topics and try to bring the science. Ketan discusses the mythical wind turbine syndrome, I discuss a recent climate paper, and we cover the promises of fusion power from Lockheed Martin and the recent Ebola hysteria.
Consider supporting Skeptically Challenged on Patreon.
Edit: Ross and I discussed a couple of other topics in the session below: Supplements and Atheists in Rehab.
New Captain Disillusion Video! – http://youtu.be/h0pIZH-W6b4?list=UUEOXxzW2vU0P-0THehuIIeg
Some links to the material I was name dropping:
Also, stay tuned until the very end and you’ll hear just one of the bits that Ross will have for subscribers, mainly jokes. Now just imagine how we managed to work rocket powered Miley Cyrus into the discussion.
If TV is the lard-developing, heart-attack-inducing entertainment form, then reading is the brain workout. I’ve previously posted about how reading is good for the brain, but science is keen on finding out more, so there is always new research that brings up cool findings. I’m reposting an interesting article I found (here) that lists some benefits from reading, with links to the research, showing that reading is good for you.
Merely reading a word reflecting a colour or a scent immediately fires up the corresponding section of the brain, which empathises with written experiences as if they actually happened to the audience. Researchers believe this might very well help sharpen the social acumen needed to forge valuable relationships with others.
In correlation with the previous perk, sensory stimulation makes it easier for aging brains to keep absorbing and processing new information over time. This occurs when the occipital-temporal cortex essentially overrides its own programming and adapts to better accommodate written language.
Avid readers enjoy a heightened ability to retain their cognitive skills over their peers who simply prefer other media — even when exposed to lead for extended periods, as indicated by an article in Neurology. It serves as something of a “shield” against mental decay, allowing the body to continue through the motions even when facing temporary or permanent challenges.
When educators at Obafemi Awolowo University incorporated education-themed comics and cartoons into primary school classrooms, they noted that the welding of pictures to words in a manner different than the usual picture books proved unexpectedly beneficial. Exposure to these oft-marginalized mediums actually nurtured within them a healthy sense of creativity — a quality necessary for logical and abstract problem solving.
On the whole, readers tend to display more adroit verbal skills than those who are not as fond of books, though it must be noted that this doesn’t inherently render them better communicators. Still, they do tend to sport higher vocabularies, which increase exponentially with the volume of literature consumed, and may discern context faster.
Anne E. Cunningham and Keith E. Stanovich’s “What Reading Does for the Mind” also noted that heavy readers tend to display greater knowledge of how things work and who or what people were; once again, findings were proportionate to how much the students in question devoured in their literary diets. Nonfiction obviously tends to send more facts down the hatch, though fiction certainly can hold its own in that department as well.
Some students obviously don’t perform well on tests despite their prodigious abilities, but in general, findings (such as those offered by the National Endowment for the Arts) show a link between pleasure reading and better scores. The most pronounced improvement, unsurprisingly, occurred on exams focused on analyzing reading, writing, and verbal skills.
According to a 2009 University of Sussex study, picking up a book could be one of the most effective strategies for calming down when life grows too overwhelming — great for both mental and physiological reasons. The University of Minnesota built on these findings and recommends reading some form of literature for at least half an hour every day for optimum relaxation.
Fully engaged reading sessions — not just skimming, in other words — activate the sections of the brain responsible for thinking critically about more than just texts. Writing, too, serves as an excellent conduit for sharpening the skills necessary for parsing bias, fact versus fiction, effective arguments, and more.
In a British Medical Journal article, academics at the French National Institute of Medical Research showcased their findings regarding the relationship between a mind occupied by reading and a lower risk of dementia. Obviously, literature isn’t going to act as a cure, but nonreaders are 18% more likely to develop the condition and experience worsened symptoms.
Readers genetically or environmentally predisposed to MCI, Alzheimer’s, and other disorders characterized by cognitive decline won’t escape their fate if they live long enough; but not only do their literary habits push back the onset, these conditions also encroach at a more sluggish pace. More than any other way to pass the time, picking up some sort of book (no matter the medium) proves among the most effective strategies for delaying and slowing dementia.
Along with bolstering critical thinking skills, the authors of “Reading and Reasoning” in Reading Teacher noted that literary intake also positively influences logic and reasoning. Again, though, the most viable strategy for getting the most out of reading involves picking apart the words themselves, not merely flipping through pages.
Improved literacy means improved self-esteem, particularly when it involves kindergarten and middle school students whose grades will swell as a result, although high schoolers, college kids, and adults are certainly not immune to this mental health perk. Set realistic reading goals and work toward them for an easy, painless (and stress-free) way to kick up the spirits when confidence starts wavering.
Neuron published a Carnegie Mellon paper finding that the language centers of the brain produced more white matter in participants adhering to a reading schedule over a period of six months. Seeing as how this tissue carries signals between brain regions and supports learning, it’s kind of sort of a good thing to be building, especially when it comes to language processing.
Brain flexibility is how the essential organ stratifies itself, delegates tasks, and compensates for damage, and Carnegie Mellon researchers believe reading might serve as a particularly excellent way to encourage this. These discoveries about how the brain organizes itself invite further research into the autism spectrum and other conditions that may stem from poor neurological communication.
The physiology of reading itself contributes to better memory and recall, specifically the part involving bilateral eye movement. However, it holds no influence over implicit memories: most of the benefit comes when recalling episodic memories.
Kids and parents who read aloud together enjoy tighter bonds than those who do not, which is essential to encouraging the healthiest possible psychological profile. Along with the cognitive perks, these sessions build trust and anxiety-soothing comfort needed to nurture positive behavior and outlooks.
Listening skills improve reading, and reading improves listening skills, particularly when one speaks words out loud instead of silently. When learning a primary or secondary (or beyond) language in particular, fostering interplay between the two ability sets makes it much easier to soak up vocabulary and grammar.
Once again, any bookish types hoping to claim the full benefit of this cognitive phenomenon gain it via close reading and analysis, not skimming, speed reading, and skipping. Because the activity is far from passive, it challenges the mind to focus, focus, focus: which certainly carries over into other areas of life!
Psychology professionals in the United Kingdom and United States gravitate towards bibliotherapy when treating non-critical patients, thanks to studies published in the journal Behaviour Research and Therapy. The practice involves prescribing a library card, which recipients use to check out one of 35 approved self-help books over 12 weeks; as a supplement (not a replacement) to conventional therapy, it has proven extremely valuable to the clinically depressed and anxious.
Yes, who’d have thought that writing could be good for the brain? Slaving away at writing seems to be like practicing sports or music, stimulating the brain to get better. Dr Martin Lotze used fMRI to look at novice and experienced writers’ brains – probably to steal ideas for a new book – and how they worked during different writing activities. Some regions of the brain became active only during the creative process, i.e. not while copying, with brainstorming sessions lighting up the vision-processing regions. It’s possible that they were, in effect, seeing the scenes they wanted to write.
But the two groups differed slightly in how their brains worked whilst being creative. Novice writers activated their visual centres, whilst experienced writers showed more activity in regions involved in speech. “I think both groups are using different strategies,” Dr. Lotze said. It’s possible that the novices are watching their stories like a film inside their heads, while the experienced writers are narrating them with an inner voice. Experienced writers also showed activity in the caudate nucleus, the part of the brain involved in skills that come with practice. In the novices, the caudate nucleus was quiet, showing that practice changes how the brain works.
With the rebirth of Cosmos on TV, Neil DeGrasse Tyson and the team have brought science back into the mainstream. No longer is science confined to the latest puff piece on cancer research that is only in the media because a) cancer and b) the researchers are pressuring the funding bodies to give them money. The terms geek and nerd have stopped being quite the derogatory terms they once were. We even have science memes becoming as popular as Sean Bean “brace yourself” memes.
This attention has also cast a light on the scientific process itself, with many non-scientists and scientists passing comment on the reliability of science. Nature has recently published several articles discussing the reliability of studies’ findings. One article shows why the hard sciences laugh at the soft sciences, with the article talking about statistical errors. I mean, have these “scientists” never heard of selection and sample bias? Yes, there is a nerd pecking order, and it is maintained through pure snobbishness, complicated looking equations, and how clean the lab-coat remains.
As a science nerd, I feel the need to weigh in on this attack on science. So I’m going to tear apart, limb by limb, a heavy hitting article: Cracked.com’s 6 Shocking Studies That Prove Science Is Totally Broken.
To say that science is broken or somehow unreliable is nonsense. To say that peer review or statistical analysis is unreliable is also nonsense. There are exceptions to this: sometimes entire fields of study are utter crap, sometimes entire journals are just crap, sometimes scientists and reviewers suck at maths/stats. But in most instances these things are not-science, just stuff pretending to be science. Which is why I’m going to discuss this article.
A Shocking Amount of Medical Research Is Complete Bullshit
#6 – Kinda true. There are two problems here: media reporting of medical science and actual medical science. The biggest issue is the media reporting of medical science, hell, science in general. Just look at how the media have messed up the reporting of climate science for the past 40 years.
Of course, most of what is reported as medical research consists of preliminary studies. You know: “we’ve found a cure for cancer, in a petri dish, just need another 20 years of research and development, and a boatload of money, and we might have something worth getting excited about.” The other kind that gets attention isn’t proper medical research at all but spurious claims by someone trying to peddle a new supplement. So this issue is more about the media being scientifically illiterate than anything.
Another issue is the part of medical science that Ben Goldacre has addressed in his books Bad Science and Bad Pharma. Essentially you have a bias toward positive results being reported. This isn’t good enough. Ben goes into more detail on this topic and it is worth reading his books on this topic and the Nature articles I previously referred to.
Many Scientists Still Don’t Understand Math
#5 – Kinda true. Math is hard. It has all of those funny symbols and not nearly enough pie charts. Mmmm, pie! If a reviewer in the peer review process doesn’t understand maths, they will often reject papers, calling the results “blackbox“. Other times the reviewers will fail to pick up the mistakes made, usually because they aren’t getting paid and that funding application won’t write itself. And that’s just the reviewers. Many researchers don’t do proper trial design and often pass off analysis to specialists who have to try and make the data work despite massive failings. And the harsh reality is that experiments are always a compromise: there is no such thing as the perfect experiment.
Essentially, scientists are fallible human beings like everyone else. Which is why science itself is iterative and includes a methods section, so that results are independently confirmed before being accepted.
This illustrates that when you test for something at the 95% confidence level you still have a 1 in 20 chance of a false positive or natural variability arising in the test. Some “science” has been published that exploits this false positive rate by going on a statistical fishing trip (e.g. the anti-GM paper). But there is another aspect: with enough samples and enough data, you can get a statistically significant result that is not a practically meaningful one. An example would be testing new fertiliser X and finding a p value of 0.05 (i.e. significant) that the grain yield is 50kg higher in a 3 tonne per hectare crop. Wow, statistically significant, but at 50kg/ha, who cares?!
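A quick simulation makes the point. This is only a sketch with made-up numbers (a hypothetical 3 t/ha crop and a hypothetical 50 kg/ha fertiliser “effect”, using Python’s standard library rather than any real trial data): with a big enough sample, even a trivial difference sails past the p < 0.05 threshold.

```python
import math
import random
import statistics

random.seed(1)
n = 10_000  # a very large trial

# Hypothetical grain yields in kg/ha: a ~3 t/ha crop, with a tiny 50 kg/ha "effect"
control = [random.gauss(3000, 400) for _ in range(n)]
treated = [random.gauss(3050, 400) for _ in range(n)]

diff = statistics.mean(treated) - statistics.mean(control)
# Standard error of the difference between two sample means
se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
z = diff / se  # two-sample z statistic; |z| > 1.96 means p < 0.05

print(f"difference: {diff:.0f} kg/ha, z = {z:.1f}")
```

With tens of thousands of plots, the z statistic lands far beyond the 1.96 cutoff, so the test screams “significant” even though the effect is under 2% of the yield.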
But these results will be reported, published, and talked about. It is easy for people who haven’t read and understood the work to get over excited by these results. It is also easy for researchers to get over excited too, they are only human. But this is why we have the methods and results sections in science papers, so that calmer, more rational heads prevail. Usually after wine. Wine really helps.
Scientists Have Nearly Unlimited Room to Manipulate Data
#3 – True but misleading. Any scientist *could* make up anything that they wanted. They could generate a bunch of numbers to prove that, for an example of bullshit science, the world is only 6000 years old. But because scientists are a skeptical bunch, they’d want some confirming evidence. They’d want that iterative scientific process to come into play. And the bigger that claim, the more evidence they’d want. Hence why scientists generally ignore creationists, or just pat them on the head when they show up at events: aren’t they cute, they’re trying to science!
But there is a serious issue here. The Nature article I referred to was a social sciences study, a field that is rife with sampling and selection bias. Ever wonder why you hear “scientists say X is bad for you” then a year later it is “scientists say X is good for you”? Well, that is because two groups were sampled and correlated for X, and as much as we’d like it, correlation doesn’t equal causation. I wish someone would tell the media this little fact, especially since organic food causes autism.
Other fields have other issues. Take a look at health and fitness studies and spot who the participants were: generally they are university students who need the money to buy tinned beans and beer. Not the most representative group of people, and often they are mates with one of the researchers, all 4 of them. Not enough participants and a biased sample: not the way to do science. The harder sciences are better, but that isn’t to say that there aren’t limitations. Again, *this is why we have the methods section, so that we can figure out the limitations of the study.*
The Science Community Still Won’t Listen to Women – Update
#2 – When I first wrote this I disagreed, but now I agree, see video below. As someone with a penis my mileage on this issue is far too limited. That is why it was only when a few prominent people spoke out about this issue that I realised science is no better than the rest of society. It hurts me to say that.
There is still a heavy bias toward men in senior positions at universities and research institutes, women get paid less, women are assumed to be less competent scientists, and apparently it is okay to ogle female scientists’ boobs… Any of these sound familiar to the rest of society? This is gradually changing, but you have to remember what age those senior people are and what that generation required of women (quit when they got married, etc). That old guard may have influence, but they’ll all be dead or retired soon, and their influence will be confined to the letters to the editor in the newspaper. After seeing the video below, especially the way the question was asked, I think it is clear that the expectations placed on women create barriers into and through careers in science (the racism is similar and is one I see as a big issue). So it starts long before people get into science, then it continues through attrition.
Fast forward to 1:01:31 for the question and NGT’s answer (sorry, embed doesn’t allow time codes).
Recently there have been a spate of very public sexist science moments. Whether it be telling female scientists they should find a male co-author to improve their science, or Nobel Laureates who don’t want to be distracted by women in the lab, it is clear that women in science don’t get treated like scientists. Which is why I find the Twitter response to the Tim Hunt debacle, #distractinglysexy, to be exactly the sort of ridicule required. Recent events suggest that at least repercussions are occurring now.
Scientists are meant to be thinkers, they are meant to be smart, they are meant to follow the evidence. They aren’t meant to behave like some cretin who hangs out on the men’s rights subreddits. Speaking of which, watch science communicator Emily Graslie discuss the comments section of YouTube.
It’s All About the Money
#1 – D’uh and misleading. Research costs money. *This is why we have the methods section, so that we can figure out the limitations of the study.* Money may bring in bias, but it doesn’t have to, nor does that bias have to be bad or wrong. Remember how I said above that science is an iterative process? Well, there is only so big a house of cards that can be built under a pile of bullshit before it falls down in a stinky mess. Money might fool a few people for a while (e.g. climate change denial) but science will ultimately win.
Ultimately, science is the best tool we have for finding out about our reality, making cool stuff, and blowing things up. Without it we wouldn’t be here, this article wouldn’t be possible, and we wouldn’t know what a Bill Nye smack down looks like. Sure, there is room for improvement, especially in the peer review process and funding arrangements, and science is flawed because it is done by humans, but science is bringing the awesome every day: we have to remember that fact.
After a recent discussion about gun myths, I realised that my last blog post hadn’t covered anywhere near enough of the myths that are floating around (this article will mainly be about US guns, but parallels from the resources and science cited can be drawn to other countries). This is obviously because stuff is much easier to make up than to research, just ask Bill “tides go in, tides go out” O’Reilly. One of the big problems with gun research in the US is that the National Rifle Association has effectively lobbied to cut off federal funding for research and to stymie data collection and sharing on gun violence. As a result there is a lack of hard numbers, and research often tends to be limited in scope. Scope: get it? So like a lost rabbit wandering onto a shooting range, or a teenager wearing a hoody, it’s time to play dodge with some of these claims.
Myth: Guns make you safer, just like drinking a bit of alcohol makes you a better driver.
The myth I hear the most often is that guns make you safer; just like the death penalty is a great deterrent, surveillance cameras stop crime, and the internet is a good source of medical advice. The problem with this myth is that people like having a safety blanket to snuggle. What they don’t realise is that guns don’t make you safer, they make you 4.5-5.5 times more likely to do something stupid to someone you know and love than be used for protection.
I want to be clear here: there’s nothing wrong with going shooting at the range, or hunting vermin. The problem is thinking that you can use a gun for self-defence, when it actually makes the violence problem worse. That gun escalates the violence because people have it there: why not use it? Thus criminals enter an arms race and adopt a shoot-first policy.
Owning a gun has been linked to higher risks of homicide, suicide, and accidental death by gun. For every time a gun is used in self-defense in the home, there are 7 assaults or murders, 11 suicide attempts, and 4 accidents involving guns in or around a home. 43% of homes with guns and kids have at least one unlocked firearm, and in one experiment it was found that one third of 8-to-12-year-old boys who found a handgun pulled the trigger, which is just plain unsafe.
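Tallying the household ratios quoted above (the figures come straight from the text; the code only adds them up) shows how lopsided “protection” really is:

```python
# For each defensive gun use in or around a home, the quoted counts of harms:
harms = {"assaults or murders": 7, "suicide attempts": 11, "accidents": 4}

total_harms_per_defensive_use = sum(harms.values())
print(total_harms_per_defensive_use)  # 22 harmful incidents per defensive use
```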
As for carrying around a gun for self-defence, well, in 2011, nearly 10 times more people were shot and killed in arguments than by civilians trying to stop a crime. In one survey, nearly 1% of Americans reported using guns to defend themselves or their property. However, a closer look at their claims found that more than 50% involved using guns in an aggressive manner, such as escalating an argument. A Philadelphia study found that the odds of an assault victim being shot were 4.5 times greater if they carried a gun. Their odds of being killed were 4.2 times greater.
It is even worse for women. In 2010, nearly 6 times more women were shot by husbands, boyfriends, and ex-partners than murdered by male strangers. A woman’s chances of being killed by her abuser increase more than 7 times if he has access to a gun, and that access could be the woman keeping one around just in case her attacker needs it. One US study found that women in states with higher gun ownership rates were 4.9 times more likely to be murdered by a gun than women in states with lower gun ownership rates; funny that.
There is also the action hero delusion that often gets trotted out when talking about guns for self-defence. The idea is that everyone is a good guy, so give them a gun and you have a bunch of action heroes ready to fight off the forces of evil. This has worked so well that all governments are thinking of getting rid of the military….
The reality is that the average person is not an action hero and would fail miserably in a high stress situation with actual bad guys. You only have to look at the evidence:
I’ve seen several examples cited of “citizens” shooting someone who looked intent on killing everyone they could (with a gun…). But in every instance the “citizen” was actually an off-duty police officer, a person in law enforcement, or someone in the military. In other words, the people who stop mass shootings or bad guys with guns are trained professionals.
There have also been a few studies claiming X million lawful crime preventions, therefore guns must be good; notably by researchers Lott and Kleck. To say that their research is flawed is like saying Stephen King has sold a few books. Lott’s work has been refuted for extrapolating flawed data. Kleck’s research has similarly been refuted by many peer-reviewed articles.
Myth: Guns don’t kill people, people kill people, quite often with a gun, because punching someone to death is hard work.
If this myth were true we wouldn’t send troops to war with weapons. I get where people are coming from with this myth, because the gun itself is an inanimate object and is only as good or bad as the person using it. Yes, I did just quote the movie Shane: thanks for noticing. But here is the thing, in a society we are more than just a bunch of individuals, we are a great big bell-curve of complexity. So when you actually study the entire population you find that people with more guns tend to kill more people—with guns. In the US, states with the highest gun ownership rates have a gun murder rate 114% higher than those with the lowest gun ownership rates. Also, gun death rates tend to be higher in states with higher rates of gun ownership. Gun death rates are generally lower in states with restrictions such as firearm type restrictions or safe-storage requirements.
The thing is that despite guns being inanimate objects, they affect the user/owner’s psyche. It’s like waking up one morning with a larger penis or bigger boobs: you not only want to show them off, you act differently as a result. Studies confirm this change in behaviour. Drivers who carry guns are 44% more likely than unarmed drivers to make obscene gestures at other motorists, and 77% more likely to follow them aggressively. Among Texans convicted of serious crimes, those with concealed-handgun licenses were sentenced for threatening someone with a firearm 4.8 times more than those without. In US states with Stand Your Ground and other laws making it easier to shoot in self-defence, those policies have been linked to a 7 to 10% increase in homicides.
Now people also like to red herring the argument against guns by pretending that video games or mental illness are the problem. The NRA tried to claim video games were to blame after the Newtown shootings. If that were true, we’d be able to see the relationship by comparing gun ownership with video game playing, say between the USA and Japan.
| | USA | Japan |
| --- | --- | --- |
| Per capita spending on video games | $44 | $55 |
| Civilian firearms per 100 people | 88 | 0.6 |
| Gun homicides in 2008 | 11,030 | 11 |
Myth: They’re coming for your guns to stop our freedom and tyranny and democide and Alex Jones said so and aliens made me do it!
As I stated above, the statistics on guns and gun violence are hazy. No one knows the exact number of guns in America, but it’s clear there’s no practical way to round them all up (never mind that no one in Washington is proposing this). Those “freedom” loving gun owners – all 80 million of them – have the evil government out-gunned by a factor of around 79 to 1. If the government were coming for the guns, you’d think they’d have done so before being this grossly out-gunned.
Yes, 80 million gun owners are a minority! I find it interesting that from 1989 to 2000 gun ownership declined from 46% to 32%. Ownership then rebounded to hover between 34% and 43% over 2000-2011 (notably, the high point in 2007 came after the Virginia Tech shooting, which the NRA campaigned heavily around), which explains why the decline didn’t continue. Now compare those rates of ownership to the recent report from the US Bureau of Justice Statistics, which sums up the rates of gun violence. You can clearly see a decline in gun violence from 1993 to 2000, followed by a plateau that has pretty much held since. This is confirmed by other studies. This is an important take home point: all the research shows violence, including gun violence, is on the decline, which makes the idea that people need a gun for protection more and more ridiculous. This fits the global decline in violence, and the trends seen in countries like Australia (more Aussie stats here). On a side note, in the last lot of statistics you see that the more female, educated, non-white, and liberal you are, the less likely you are to own a gun.
So scare campaigns may work to boost sales of guns for a while, but overall, most people don’t want or need a gun. The long term trend has nothing to do with the government coming for the guns and everything to do with people realising they don’t need one and prefer to read a good book, or watch a movie, instead of going to the range.
The simple fact is that more guns in a society are the best predictor of gun deaths, so it is time to rethink the reasons for owning a gun, especially if that reason is in case you have to John McClane a situation.
More mythbusting gun articles: http://www.latimes.com/opinion/op-ed/la-oe-hemenway-guns-20150423-story.html
Born to write? Born to be an athlete? Born to be a rocket scientist? People love to talk about “natural” ability or talent as the be all and end all of achievement. Since I actually own a genetics text book – it props up my DVD collection on the shelf – and once watched someone do manual labour, I feel qualified to comment on the talent vs. work debate.
Genetics is a big, complicated, topic, so I’m going to provide a facile overview of it. Genetics is that thing that means some people have higher baselines, are higher responders to training/learning, and are likely to achieve more (see this and read this for sports examples). For some the opposite is true, they have low baselines, don’t respond well to training/learning, and are likely to suck no matter what they do. There isn’t much you can do about your genetics, unless you happen to have a time machine and can play matchmaker to get better parents.
But that isn’t to say that you shouldn’t try to get good at stuff. Until you are tested and start training, you don’t really know what your “ability” is. And just because you might continue to suck, you will suck less than you did before, which means you will be better than those around you who didn’t even try. Take an example from sports – because people actually do science on athletes, the arts talk about their feelings too much – athletes tend to live longer than normal because they are more likely to be fitter, which lowers cardiovascular mortality. You don’t get fit sitting on a couch, watching TV, snacking on corn chips, in your underwear: you have to train.
So let’s take this into the writing field. You may have been born with a massive brain, nimble fingers, and an imagination that rivals college students tripping on acid, but that doesn’t mean much if you never learn to read or write, are too poor to have access to writing materials, or lack the persistence to share that writing with the world. All that talent and ability counts for nothing if you don’t do something with it. You have to train. The difference between the talented individual and the untalented individual can often just be a lot of hard work by the untalented. I mean, who has sold more books: James Patterson or any of the Booker Prize winners?*
But let’s not get carried away. We have to acknowledge that any “talent” is a GxE interaction (genetics by environment interaction). Genetics, or that innate ability, is still a factor that we can’t dismiss, but so is the environment. So all of that skill development and training will come more easily, more quickly, and possibly progress further for some, but that isn’t an excuse for not doing the hard work.
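The GxE idea can be sketched as a toy model. Everything here is illustrative and made up (the function, names, and coefficients are not from any study): achievement is a genetic baseline plus a genetics-scaled response to training.

```python
# Toy gene-by-environment (GxE) sketch: the product of "responsiveness"
# (genetic) and "training" (environmental) is the interaction term.
def achievement(baseline: float, training: float, responsiveness: float) -> float:
    return baseline + responsiveness * training

# A high-baseline individual who never trains...
gifted_but_idle = achievement(baseline=50, training=0, responsiveness=1.5)
# ...versus a lower-baseline individual who puts in the hours.
average_grafter = achievement(baseline=30, training=20, responsiveness=1.0)

print(gifted_but_idle, average_grafter)  # 50.0 50.0: hard work closes the gap
```

The point of the sketch: the gifted responder would pull ahead if both trained equally, but with zero training the "talent" term alone buys nothing.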
* Not that I’m insinuating that winning a Booker Prize actually makes you a talented or good writer. I actually use those prize lists to figure out what not to read.
There is a general rule in arguments: don’t argue with stupid people, they drag you down to their level and beat you with experience. That is pretty much the problem scientists and experts have when debating anti-science proponents – such as creationists, anti-vaccinators, anti-GM campaigners, climate change deniers, etc. Yet Bill Nye the Science Guy decided that, in the interest of science and education, he would debate a creationist.
The debate started with Bill Nye and Ken Ham stating a 5 minute opening piece. Then Ken went into his 50 minute argument, which is when my cushion really started to earn its keep protecting my desk from damage.
I really find it hard to fathom how anyone can be credulous of Ham’s statements. In his 50 minutes he used all sorts of logical fallacies, most notably his videos of “creationist scientists” as arguments from authority. But it wasn’t this that really got the lump on my forehead rising, it was his use of “evidence” that simultaneously refuted his own argument. One example was the phylogenetic tree for dogs. Ham argued that the rise of Canis lupus familiaris from a wolf (yeah, just one, let’s just let that one go through to the keeper) was what you would see from biblical predictions of dogs speciating after the global flood 4,000 years ago. Just one problem. Teeny tiny. The figure showed dogs evolving from a group of wolf ancestors over the course of 14-15,000 years.
He didn’t just do this once, he did it repeatedly. Another example arose when he was talking up one of his creationist pals who helped design a satellite (or something, didn’t really care because it was irrelevant). He used the example of how scientists had been debating how old the universe was: they couldn’t agree on the age. The part he left out about that particular debate was that the age of the universe was somewhere around 13.8 billion years (± 37 million years), and the researchers had a bunch of data they were checking to make sure the errors were properly accounted for. The debate was about the difference in the confidence range (or error margin) between the Planck satellite measures and the Wilkinson Microwave Anisotropy Probe measures. That error margin alone is roughly 6,000 times greater than the age of the Earth that Ham claims, and the universe’s age is over 2 million times older than Ham’s claim, yet he uses this example as if it gives credence to his claims.
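The orders of magnitude are easy to check, taking Ham’s figure as the roughly 6,000 years he argues for and the other numbers as quoted:

```python
# Sanity-checking the age-of-the-universe arithmetic.
universe_age = 13.8e9   # years, the measured age of the universe
error_margin = 37e6     # years, the uncertainty being debated
ham_age = 6_000         # years, Ham's claimed age

print(error_margin / ham_age)   # the error bar alone is ~6,000x Ham's figure
print(universe_age / ham_age)   # the universe is ~2.3 million times older
```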
Now Nye did his best in his 50 minutes to show that Ham’s claims were flawed, but also how evidence and scientific observation and prediction work. Others have claimed, and I agree to an extent, that Nye’s mistake was trying to cover too much ground. If he had been talking to a receptive audience he would have destroyed Ham and had the crowd eating out of his hand. But at a creationist museum, with a bunch of science deniers, it would come across as too much information and too confusing. That said, Nye’s last couple of minutes pretty much killed the entire debate: trees, rocks, the size of the universe, and the distance to stars versus the limit of how fast light can travel all show that the Earth and the universe are much, much older.
The first rebuttal saw Ham carrying on about “you weren’t there so you don’t know.” Brian Dunning had a great take on this particular argument:
There is a rumor that Bill Nye @TheScienceGuy debated evolution with Ken Ham. Not true. It did not happen, because you weren’t there.
In this first rebuttal, Ham again used evidence that rebutted his own claims, especially when talking about radio-carbon dating. Showing that measurements have error margins, or can be somewhat imprecise, doesn’t negate the fact that the measurements are still many orders of magnitude outside of the age of the Earth claimed by Ham. Then he moved onto saying that the bible is right, everything else is wrong (let’s just ignore that the bible isn’t even consistent with itself, let alone the fact that it is a translation of a translation, thus literal interpretation isn’t supported by biblical scholars).
Nye then rebutted Ham’s statements. His classic put down was for the claim that every animal, and humans, were vegetarian until they got off the ark: lions’ teeth aren’t really made for broccoli.* Ba-zing!
Next Ham tried to point out that creationism isn’t his model (while somehow blaming secularists and scientists). This is true: there are other nutters who came up with this crap. But Ham tried to pretend that “scientists” came up with the various creation models (NB: just because a scientist said something doesn’t make it science or scientific). Then he talked about species and kinds and how Nye was confusing what a kind was. Easy to do when the idea of a kind is bullshit and unsupported by any actual science.
Nye then tore apart the claims about the rise of species from kinds using the basic math involved. He also called bullshit on the ship building skills of ancient desert people. The main point in this rebuttal was that Ham hadn’t addressed Nye’s point adequately, and that Ham’s claims aren’t supported by the majority of religious people, let alone scientists.
My desk and forehead had had enough by this stage, so I didn’t watch the Q&A section, but it can be viewed here.
The point I wanted to make from this was that Ham had a huge advantage in this discussion. I’m not talking about the home-team venue, nor the credulous crowd, I’m talking about the lack of need for evidence. All Ham had to do, and pretty much what he did, was seed doubt in science and then declare “creationism wins” (which might as well be “God did it”). This is the problem with any debate with anti-science: the scientists have to prove their case with evidence and logical reasoning; the anti-science side only has to sow some doubt. And that doubt can range from legitimate claims to flat-out lies; it doesn’t matter. So Nye shouldn’t have taken the debate.
But Nye was right to take the debate.
Hang-on. Have you hit your head against your desk a few too many times during that debate?
No. Bill Nye is a well known and respected science communicator. He went into the belly of the beast to stand in the echo chamber and sow some doubt (how’s that for a metaphor-fest?). As he stated himself, Nye knows that America (and the world, but let’s allow him his patriotism) needs science and innovation for the future of society. Creationism and other anti-science nonsense undermine this. If no-one challenges the group-think and echo chamber of the creationists (et al.) then they will continue to be misled and misinformed by people like Ken Ham. You can’t have someone reject evolution yet rely on germ theory for modern medicine. You can’t have someone reject radio-carbon dating yet use medical imaging. That is incompatible, that is a rejection of reality, and it leads to stupid stuff happening that curbs the development of new technologies and advancements to society.
Other opinions on who won:
Shane proposes that Nye needed to pick a couple of points to hammer home. This feeds into science communication research that shows you can get distracted from the main narrative with too many points.
Update: Richard Lenski has blogged about the debate and Ham’s use of his E. coli evolution work. Not surprisingly, Ham completely misrepresented the work. As I said above, Ham did this with many examples in his presentation. It is important that people realise just how deceptive Ham’s statements and claims are.
Update: It is clear that many of Ham’s supporters were not listening to Bill Nye and are wilfully ignorant. This Buzzfeed article (yeah, I know, Buzzfeed) brings up a lot of the points that Nye addressed, explained clearly and simply, showing they didn’t listen to Nye, and slept through school.
Update: This article makes a nice statement that ties into some of my points about why Nye took the debate. To quote:
It brought new attention to YEC (Young Earth Creationism) to exactly the people we need to see it- the large swath of Christian and other religious parents who think of Intelligent Design or Guided Evolution or some other pseudo-scientific concept when they imagine “teaching the controversy“. These people are embarrassed by people like Ken Ham. They know the earth isn’t 6000 years old, and they understand just how impossible it is to square that belief with observable phenomena.
Update: The ever awesome Potholer54 just posted a video on one point about evolution and Ken Ham’s rebuttal of his own arguments. Worth watching.
* Okay, not the best point to make, as teeth aren’t definitive of diet, but if the comment is viewed as being representative of animal physiology overall, then it is a very valid putdown of the vegetarian claims.
Ever since Spielberg made us scared of seeing any more Indiana Jones films, people have felt better about blaming him for the hysteria around sharks.
Recently in my home state of Western Australia there has been a decision made to cull sharks because some people have been killed by them. Clearly we should blame sharks for just wanting a hug and not humans for dressing up like shark food. This is a stupid decision, and I’m about to outline why we can’t even tell if there have been more shark deaths, let alone whether a cull would actually work, or whether you’d even know if the cull did anything. It all comes down to statistics. Well, that and media beat-ups to sell advertising space.
You’d honestly think that there had been a change in the number of people dying in Australia from shark attacks in order to justify a shark cull. Well, the official stats show there hasn’t been an increase in deaths from shark attacks. In fact the deaths are so low, and the noise around the long-term average of 1.38 deaths per annum (2000-2012) so large, that any increase or decrease in deaths is impossible to assign any significance to (see chart below). Three deaths in a year (2000): could be an anomaly. Zero deaths the year after (2001): likely to be regression to the mean. Meanwhile the most ferocious animal on the planet, the bee, kills around 10 people per year.
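A quick way to see why single-year swings around a 1.38-deaths-per-year average carry no significance: treat deaths as a Poisson process (a standard, if simplifying, assumption for rare independent events) and ask how likely the “extreme” years are by chance alone.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events in a year given a mean rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

lam = 1.38  # long-term average shark deaths per year (2000-2012)

# How surprising is a three-death year like 2000?
p_three_plus = 1 - sum(poisson_pmf(k, lam) for k in range(3))
print(f"P(3 or more deaths in a year) = {p_three_plus:.2f}")  # about 0.16

# And a zero-death year like 2001?
print(f"P(0 deaths in a year) = {poisson_pmf(0, lam):.2f}")  # about 0.25
```

In other words, roughly one year in six should see three or more deaths, and one year in four should see none, even if nothing about shark behaviour changes at all.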
What you do see in the data is a slight increase in the number of attacks. If you look at the number of attacks and fatalities since 1900, there has been a general increase in the number of shark attacks, but a decrease in the fatalities from shark attacks. It’s almost as though there are more people in the world and more of them bobbing up and down in the ocean in seal costumes, possibly on a tasty cracker.
Now this is interesting for the world and Australia, as it appears that despite our best efforts as humans, sharks aren’t taking revenge for the 100 million of them we kill each year. But this is about a shark cull in Western Australia: what’s happening there? Well, these tables say it all really:
| State | # Cases | Fatal | Injured | Uninjured | Last Fatality |
| --- | --- | --- | --- | --- | --- |
| NSW | 243 | 68 (27.9%) | 120 | 55 | 2013 Coffs Harbour |
| QLD | 251 | 82 (32.7%) | 151 | 18 | 2011 Fantome Island |
| WA | 92 | 20 (21.7%) | 57 | 15 | 2013 Gracetown |
| SA | 48 | 18 (37.5%) | 23 | 7 | 2011 Coffin Bay |
| VIC | 45 | 9 (20%) | 27 | 9 | 1987 Mornington Peninsula |
| TAS | 15 | 3 (20%) | 8 | 4 | 1993 Tenth Is, Georgetown |
| NT | 10 | 2 (Duh) | 6 | 2 | 1938 Bathurst Island |
| Total | 704 | 202 (28.7%) | 392 | 110 | (Revised 28/1/2014) |
Western Australia accounts for ~13% of Australian shark attacks (and about 10% of the deaths). When we look at the 2012 data we see that WA is having a greater proportion of the Australian attacks and accounted for all the fatalities in Australia that year. The terms “bigger population”, “longer coastline”, “more cashed-up bogans” come to mind.
| 2012 (Australia) | # Cases | Fatal | Injured | Uninjured |
| --- | --- | --- | --- | --- |
| Total – Unprovoked | 14 | 2 | 9 | 3 |
| Total – Provoked | 8 | 0 | 5 | 3 |
So there is no actual proof that there are any more deaths occurring from shark attacks, definitely no trend toward more deaths, but a significant increase in the number of media reports on those deaths (citation needed). Even on a state by state basis there isn’t any death trend. But there is a trend towards more shark incidents. What we are actually seeing is an increase in the number of people dressing up like seals/shark food (scuba divers and surfers).
Circumstances affecting shark / human interactions:
The number of shark-human interactions occurring in a given year correlates with human population increases and the amount of time humans spend in the shark’s environment. As Australia’s population continues to increase and interest in aquatic recreation rises, it would realistically be expected that there will be an increase in the number of shark encounters.
Let’s put that in perspective: Australians have a 1 in 3,362 chance of drowning at the beach and a 1 in 292,525 chance of being killed by a shark over their lifetime. In Australia there are 1.38 deaths per year from sharks, 121 deaths per year from drowning at the beach, and 1,193 deaths per year from driving. We’re more likely to die from all the stupid shit we do than from sharks. So why have a shark cull?
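Using just the annual figures quoted above, the relative risks work out as:

```python
# Annual Australian deaths, as quoted above
shark_deaths = 1.38
beach_drownings = 121
road_deaths = 1193

print(f"Drowning at the beach: ~{beach_drownings / shark_deaths:.0f}x deadlier than sharks")
print(f"Driving: ~{road_deaths / shark_deaths:.0f}x deadlier than sharks")
```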
There is no real reason to have a shark cull. We already kill 100 million of the things annually anyway. What we actually need to do is look at where the sharks are looking for food, has their food moved, if so due to what, and are we seeing less shark food available such that sharks are looking for alternate foods. The shark cull with drum lines and nets is actually likely to kill off dolphins, turtles, rays, and endangered shark species, which is why fisheries researchers don’t support the cull.
Update: I neglected to mention that other states in Australia have been using baiting and nets for decades: Queensland since 1962 and New South Wales since 1937. Reports are not complimentary of either the Queensland or the New South Wales program. To quote:
…the Fisheries Scientific Committee is of the opinion that the current shark meshing program in New South Wales waters’ adversely affects two or more threatened species, populations or ecological communities and could cause species, populations or ecological communities that are not threatened to become threatened.
And (okay, I’ve cherry picked this a bit, read the whole report on how we are overfishing, killing shitloads of sharks, destroying the fisheries and adding baiting on top of this):
The main pressures on grey nurse sharks appear to be fishing activities and shark control programs……. The biological susceptibility of sharks to over fishing, evidence for increasing fishing pressure and lack of information have given rise to increasing concern about the sharks and rays of the Reef.
Essentially shark baiting, whilst paling in comparison to the 100 million sharks killed for their fins annually, is another pressure that endangered species don’t need. Especially when the baiting is still killing other endangered animals, not just sharks.
For more, read these articles:
I recently watched a debate between Ben Goldacre and Peter Fischer on homeopathy. During the course of the debate, an audience member asked, “If water has a memory, how come you’re not sick every time you drink water out of the tap?”
A homeopathic practitioner answered (paraphrased) that boiling the water resets the memory and that homeopathic remedies are only effective when prepared with boiled water. He made another comment implying that a remedy prepared with tap water wouldn’t be effective.
I realize the above related question (“Does water have a memory…?”) is nearly identical. I’m trying to ask it another way as it’s possible to persist with the water memory concept despite the other question’s answer. One could simply say (my hypothetical response),
“Well, we don’t know how it works and perhaps it isn’t by the known mechanism of how water behaves… but trials indicate that it works, nonetheless and that’s all I need.”
Since the audience member in the video indicated tangible predictions, I’m interested if they’ve ever been put to the test. Thus, my question is:
Has a trial ever been conducted in which homeopathic remedies prepared from both unboiled and boiled water were compared against one another in terms of patient response?
If there is another way to answer this question please go for it.
It really doesn’t make any difference whether the water is boiled or not: homeopathy doesn’t work.
The Minimum Dose and Avogadro’s Number

The second and most controversial tenet in homeopathy is that remedies retain biological activity if they are diluted in a series (usually in a 1:10 or 1:100 diluent–volume ratio) and agitated or shaken between each dilution. Hahnemann began this process to reduce toxicity, but later he claimed that this “potenization” process extracted the “vital” or “spirit-like” nature of these substances (2). The limit of molecular dilution (Avogadro’s number) was not discovered until the later part of Hahnemann’s life; by then homeopaths all over the world were reporting that even very high potencies (dilutions lower than Avogadro’s number) produced clinical effects. The implausibility of such claims has led many to dismiss any evidence of homeopathy’s effectiveness as artifact or delusion (3).

http://www.annals.org/content/138/5/393.full
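The dilution arithmetic is easy to check. A common “30C” remedy is diluted 1:100 thirty times over; even starting from a full mole of active ingredient (a generous assumption), the expected number of molecules left is effectively zero:

```python
AVOGADRO = 6.022e23  # molecules in one mole

molecules = AVOGADRO  # generously assume a whole mole of active ingredient

# A 30C preparation: thirty successive 1:100 dilutions
for _ in range(30):
    molecules /= 100

print(f"Expected molecules remaining: {molecules:.1e}")  # ~6.0e-37, i.e. none
```

That is, you would need a volume of remedy vastly larger than the oceans to expect even one molecule of the original substance.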
But let’s pretend for a moment that water does have memory. The aspect of boiling has not been researched: a search of Google Scholar nets no results for boiling and homeopathy. When referring to “how-to” guides of preparations it becomes obvious that homeopaths are merely after clean or unpolluted water to make their preparations in.
Ingredients … 1/2 or 1 litre of boiled water (distilled water may be bought at pharmacies in some countries, if you want that, and bottled, rinsed water is commonly sold in groceries too)
Preparing your own bottle: Boil the glass bottle and dropper in filtered water for 15 min. and let it cool completely. Fill it just to the neck with filtered or distilled water.
So clearly the idea that boiling is the only way to reset the water is not backed up by the practices employed by homeopaths themselves. Combined with the fraudulent claim that water retains memory in the first place, this is just another misdirection invented after the fact to justify homeopathy.
According to Vendomois et al, 2009:
these data highlight signs of hepatorenal toxicity, possibly due to the new pesticides specific to each GM corn.
Monsanto, the manufacturer of two of the studied strains of GM corn, responded, dismissing the article, particularly by criticizing the statistical methods used. Is Monsanto’s criticism valid?
Have there been additional studies that either support or refute the claim that genetically modified corn has toxic effects?
The simple answer is no.
GM corn has either the Bt gene, which allows lower use of pesticides because the plant produces its own insecticidal protein, or the RR (Roundup Ready) gene, which allows the use of glyphosate for weed control. Neither of these alterations has any impact upon the production of sugars or proteins in the plant. http://www.ca.uky.edu/entomology/entfacts/ef130.asp
The problem that can arise is from the pesticides that are now used on the crops and the timing of their application. These pesticides are known to harm mammals and if the dose is high enough can cause problems. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2793308/
Generally though, because you are removing pests and weeds the plants tend to be healthier so they are less impacted by pathogens, thus better for consumption. http://www.acsh.org/factsfears/newsID.962/news_detail.asp
There is an issue with using corn as a feed supplement in animals though. Corn is not a complete food source and is generally low in protein, especially tryptophan. This means that a feed mix is required, not just straight corn meal. http://www.ag.ndsu.edu/pubs/ansci/beef/as1238w.htm
Another issue is that corn can cause pellagra. This is due to the niacin (vitamin B3) being bound in the corn starches and not being released in normal digestion. Corn is also low in tryptophan, a niacin precursor, which compounds the risk of pellagra.
So the problems often cited with GM corn are actually just problems with corn itself. Neither is harmful if used correctly in a balanced diet, though pesticide residues are of concern. For more see this: http://onlinelibrary.wiley.com/doi/10.1046/j.0960-7412.2002.001607.x/full
Another point that should be made is that the paper cited comes from a notorious “research” group that produces shoddy science to further its biased agenda against GM technology. The big problem with the paper is that it uses the wrong sort of analysis, and the data had already been analysed by two other papers that found the opposite of what this paper claims. Essentially, if you do a statistical test at a 95% confidence level, you accept a one-in-twenty chance of being wrong because of natural variability. So if you measure 20 variables with separate tests, you are likely to get one false positive result; measure 40 with separate tests, and you expect two. This is what the research group did: set up the stats to generate lots of false positives, instead of analysing the data with tests that account for this multiple-comparisons problem. It should be noted that this is a common problem/tactic with anti-GM research papers.
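The false-positive arithmetic above is easy to demonstrate with a simulation: run many hypothetical studies in which no real effect exists anywhere, test 40 variables separately at the 95% level, and count the spurious “findings” (the study counts and seed here are mine, for illustration).

```python
import random

random.seed(1)

ALPHA = 0.05        # 5% false-positive rate per test (95% confidence)
N_VARIABLES = 40    # variables each tested separately
N_STUDIES = 10_000  # simulated studies with NO real effect anywhere

# Each test "finds" an effect with probability ALPHA purely by chance.
false_positives = [
    sum(random.random() < ALPHA for _ in range(N_VARIABLES))
    for _ in range(N_STUDIES)
]

avg = sum(false_positives) / N_STUDIES
at_least_one = sum(fp > 0 for fp in false_positives) / N_STUDIES
print(f"Average false positives per study: {avg:.2f}")       # close to 2
print(f"Studies with at least one 'finding': {at_least_one:.0%}")
```

The expected count is simply 40 × 0.05 = 2 spurious results per study, and the vast majority of such studies will report at least one “significant” effect despite there being nothing to find, which is why corrections for multiple comparisons matter.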
Additional question: The ACSH source claims Studies Indicate GM Crops Are Safer and Healthier, but last time ACSH reported their funding, they were co-funded by what are now GMO companies. Currently they are not open about their funding at all. Therefore, their independence cannot be established. Can you back up the claim by research where all funding sources are open and independent? – gerrit
Reply: Of course there is plenty of independent data. gmopundit.blogspot.com/ has an entire series devoted to the safety studies of GM crops, and the highly respected journal Nature had an entire edition devoted to the topic. But that is beside the point: the underlying mechanism of Bt is not one that works on humans (it is even sprayed on organic farms). We don’t have the alkaline gut needed to activate the protein (the same reason it doesn’t affect some insects), which means it can’t do anything to us. So the concerns are completely misplaced.
A recent article in NaturalNews claims that last year there were leaked documents exposing that the U.S. Environmental Protection Agency’s (EPA) illegitimately approved toxic pesticide clothianidin for use, while being aware it might kill bees.
Now, the article says, there’s a new study by Purdue University that confirms that clothianidin is actually killing off bees, and that its spread has become systemic in the entire food chain.
The entire report is available online via PubMed: Multiple Routes of Pesticide Exposure for Honey Bees Living Near Agricultural Fields
The article goes on to warn about the consequences of all this:
Without bees, which are now dying off at an alarming rate due to exposure to clothianidin and various other insecticides and fungicides, one third or more of the food supply will be destroyed, including at least 100 varieties of fruits and vegetables that rely on bees for pollination.
The claim is that if people in the US don’t do something to stop the bees from dying, at least one third of the food supply (in the US) will be destroyed. Because bees are dying. Which is caused mainly by clothianidin. Which is in use because of the EPA’s failure or corruption.
Is this information accurate? Or does the article misrepresent the situation somehow? Is the study legitimate?
Related: Are Bees Disappearing and Why
There is controversy over the role of neonicotinoids in relation to pesticide toxicity to bees and imidacloprid effects on bee population. Neonicotinoid use has been strictly limited in France since the 1990s, when neonicotinoids were implicated in a mass die-off of the bee population. It is believed by some to account for worker bees’ neglecting to provide food for eggs and larvae, and for a breakdown of the bees’ navigational abilities, possibly leading to what has become generally known as Colony Collapse Disorder.
Low concentrations of imidacloprid and clothianidin have impacts upon bees’ ability to forage and return to the hive.
The results show that almost all the control honey bees returned to the hive, and started again visiting the feeder between 2 to 5 hours after the release. Honey bees fed with the concentration of 100 ppb also returned to the hive, but they returned to visit the feeder only 24 hours after the release. Honey bees fed with 500 ppb and 1000 ppb completely disappeared after the release, and they were not seen during the following 24 hours, neither at the hive nor at the feeding site.
But neonicotinoid insecticides are only one of many things impacting upon bees, and most research indicates that a combination of factors is behind CCD.
The most recent report (USDA – 2010) states that “based on an initial analysis of collected bee samples (CCD- and non-CCD affected), reports have noted the high number of viruses and other pathogens, pesticides, and parasites present in CCD colonies, and lower levels in non-CCD colonies. This work suggests that a combination of environmental stressors may set off a cascade of events and contribute to a colony where weakened worker bees are more susceptible to pests and pathogens.” Applying proteomics-based pathogen screening tools in 2010, researchers announced they had identified a co-infection of invertebrate iridescent virus type 6 (IIV-6) and the fungus Nosema ceranae in all CCD colonies sampled. (Quoted from Wiki, original USDA report linked above)
So this issue is much larger than any one chemical group and is about environmental management and pesticide usage in general. Most insecticides will kill bees, especially with direct contact. Bees are only one of several pollination vectors in the world, so while they are important, this scare campaign is misguided. What is actually needed is further understanding of CCD, bee breeding programs and management strategies that will actually deal with this issue.
Also, as a general rule of thumb, just about anything that appears on Natural News is likely to be wrong.
Update: A paper published in the middle of last year has some interesting results that could indicate a/the driver of CCD in horticulture. Essentially the article shows that bees don’t just forage on one farm, instead collecting pollen from the surrounding area as they see fit. As such, they come back with all sorts of pollens and all sorts of pesticides and fungicides. It is this combination of pesticides and fungicides in the bees’ food that appears to make the bees a bit sick, so they are more likely to get lost whilst foraging or get infected with mites and fungi. Note the lack of worry about clothianidin and other neonicotinoids, but rather the fungicides being the big problem. To quote:
Our results show that beekeepers need to consider not only pesticide regimens of the fields in which they are placing their bees, but also spray programs near those fields that may contribute to pesticide drift onto weeds. The bees in our study collected pollen from diverse sources, often failing to collect any pollen from the target crop (Fig. 1). All of the non-target pollen that we were able to identify to genus or species was from wildflowers (Table S1), suggesting the honey bees were collecting significant amounts of pollen from weeds surrounding our focal fields.
This indicates that beekeepers and horticultural farmers don’t appear to be respecting withholding periods for agricultural sprays the way they should. Partly because the bees are foraging where they don’t expect them to and partly because they haven’t correctly planned sprays and pollination. It will be interesting to see if these results are backed up by more causative work, although I’m not sure it will apply to broadacre farming (does that mean CCD is mainly a horticulture and small hectare farming issue?).
Update: reasonably balanced TED talk on CCD:
Does global warming make for milder winters? What about specifically North America?
(You often hear people extol global warming for giving us mild winters. Is there evidence of a causal link?)
Two example claims of global warming causing mild winters, in New York and in Tibet, are linked in the comments. However, my question is whether this is a global phenomenon.
This is a tricky question to answer because weather, what you experience at your house right now, is not really the same thing as climate, the patterns of global air and sea movements that bring weather.
So milder winters are a possibility in certain locations, as they will be exposed to the overall warming of the atmosphere. But other locations can experience colder winters.
Since the mid 1970s, global temperatures have been warming at around 0.2 degrees Celsius per decade. However, weather imposes its own dramatic ups and downs over the long term trend. We expect to see record cold temperatures even during global warming. Nevertheless over the last decade, daily record high temperatures occurred twice as often as record lows. This tendency towards hotter days is expected to increase as global warming continues into the 21st Century.
Vladimir Petoukhov, a climate scientist at the Potsdam Institute for Climate Impact Research, has recently completed a study on the effect of climate change on winter. According to Petoukhov,
These anomalies could triple the probability of cold winter extremes in Europe and northern Asia. Recent severe winters like last year’s or the one of 2005-06 do not conflict with the global warming picture, but rather supplement it.
Weather being a local response to climatic conditions means that you have to understand what has changed in the climatic patterns in your region. What are your local weather drivers? How have they changed since the 1970s?
Thus, you could end up with some areas experiencing colder winters; due to greater moisture levels in the air, more precipitation of snow, greater heat loss at night due to clear skies, etc. Or you could have an area that will experience milder temps in winter due to warmer air currents, warmer oceans, localised heat island impacts, etc.
For further information you should investigate the weather and climate agencies publications for your area.
I was having a conversation last night with someone who was questioning why science: doesn’t it get in the way of creativity? I’ve never seen it that way, and I think Heinlein, Asimov and the like would agree with me. Zen Pencils did the comic below, which encapsulates “why science” very nicely.