Book vs Movie: Once Upon a Time in Hollywood – What’s the Difference?

Did you know that Quentin Tarantino novelised his ninth film? Neither did I. Let’s take a look with What’s the Difference?

As a Tarantino fan since the early 90s – geez, that makes me sound even older than I am – I have to come clean on Once Upon A Time In Hollywood. I didn’t like it.

I’ll even go a step further and say that his previous film, The Hateful Eight, wasn’t good either.

Unlike The Hateful Eight, which had a decisive moment when the film fell apart (Tarantino’s voice-over setting up the third act just ruined everything for me), Once Upon A Time In Hollywood was entirely pedestrian. It always felt like a film avoiding being anything other than a love letter to Hollywood films of the 60s.

In fairness to the movie, Tarantino was clearly trying to subvert many of the usual movie moments and make it more about actors making great films. For example, the scene at the ranch was set up as a fight between Pitt’s character (Cliff Booth) and the Manson acolytes. Instead, Tarantino subverts that moment and there is no fight, allowing us plenty more time for DiCaprio’s character to learn about method acting from his child co-star.

That the novelisation is quite different from the film isn’t particularly surprising. It’s pretty difficult to make Brad Pitt into a thoroughly unlikable character in a movie. Something to do with charisma and production credits. But the book is unconstrained by actor charisma, which makes it a good opportunity to throw the character under the bus.

Regardless of Tarantino’s future literary aspirations, I hope his tenth/final film is able to cement his career as one of the greats.

Once Upon a Time in Hollywood: Who is Cliff Booth anyway?

Once Upon a Time in Hollywood is a celebrated installment in writer/director Quentin Tarantino’s oeuvre. So when he came out with a book adaptation of the story, we were first in line to read it. But was the book markedly different from the film, and do those differences mean something big? We think so and we’ll explain in this Book vs. Film on Once Upon a Time in Hollywood – The New Ending.

The Unappreciated Female Writers Who Invented the Novel

This month’s It’s Lit covers Amatory Fiction.

This is an interesting video for several reasons. I’m always amused when the topic of rethinking “great authors” comes up and people without pearls start clutching them.

The literary canon excluding certain types of authors and books shouldn’t be news to people. But there always seems to be plenty of reactionary debate making excuses for why, for example, The Grapes of Wrath got published while Sanora Babb’s Whose Names Are Unknown (written the same year on the same topic, both using Babb’s notes) took 65 years to be released. Yeah, that was a thing.

I’ve covered this before when calls have been made to increase the diversity of the literary lists for students in the hopes that more diversity of texts will be taught. Getting people who don’t read much to acknowledge that “literary greats” are less about talent than luck (timing, contacts, $$, etc) is a hard task. Trying to get those same people to acknowledge that women, people of colour, and non-Americans might have written books throughout history is often a hurdle they are unwilling to even attempt jumping.

Which brings me around to one of my favourite topics here: snobbery and guilty pleasures. The It’s Lit video shows how snobbery essentially relegated an important part of literature to the unknown and unappreciated baskets of history. Combine that snobbery with a bit of the old bigotry of the pants and you will have people trying to ignore a segment of literature that broke boundaries (e.g. Behn wrote one of the earliest anti-slavery novels).

For more on Sanora Babb’s novel, it is worth watching this video:

The guy typically credited with inventing what we know as the modern novel was Miguel de Cervantes with his cumbersome 800+ page book, Don Quixote. But what if I told you that the real antecedent for the modern novel was created by… ladies.

Before the rise of what would become the modern novel, there was Amatory fiction. Amatory fiction was a genre of fiction that became popular in Britain in the late 17th century and early 18th century. As its name implies, amatory fiction is preoccupied with sexual love and romance. Most of its works were short stories, it was dominated by women, and women were the ones responsible for sharing and promoting their own work.

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres, and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

Book vs movie: Invincible – What’s the Difference?

This instalment of What’s the Difference? comes from Wisecrack and looks at Invincible.

I have to admit to having given the Invincible TV show a miss. Firstly because it is on Amazon, whose billing practices and worker treatment are terrible. Secondly because I read Irredeemable first and kinda felt I’d been down this path already.

It is interesting that several recent superhero adaptations have looked for material that explores the idea of “What would superheroes really be like?” Probably not surprising given the fact that comic book movie fatigue has started to hit.

Invincible: Does it have an attitude problem?

In a year full of comic adaptations, Invincible stands out as one of the best. But how do its 8 episodes compare to the 144 issues of the original comic? And what does attitude have to do with it? Let’s find out in this Book vs. Film: Invincible.

Jane Eyre: Why We Keep Reading It

This month’s It’s Lit! is all about Jane Eyre, even some of the fan-fic it inspired.

Having not read Jane Eyre, after watching this video I’m even less motivated to do so.

I’m fickle I guess.

Jane Eyre by Charlotte Brontë was there for the weird girls, the quiet ones who watched and listened, the ones who pined away for someone to accept them in all of their weird, dark glory.

But in the nearly 175 years since its publication, the collective definition of what it means to be “a woman on the outside of society” has changed and expanded dramatically—and yet here we are, still dissecting Charlotte Brontë’s words and gravitating towards Jane as a protagonist.

For those of you who have never read Jane Eyre or enjoyed one of the 8,000 film, television, stage, or radio adaptations (not to mention countless literary retellings), here we go.

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

Good Shows, Bad Endings

As a science nerd, I love graphs. So this post is an excuse to share the work of Bo McCready.

So who doesn’t love having their favourite show suffer under the inability of the creators to care enough about finishing it properly? Whether it be showrunners wanting to do something else, off-set controversies requiring massive changes to the show, or the writers being told they have 8 episodes to wrap up all the intricate characters and plots in a satisfactory way as they join the dole queue, show endings can suffer as a result.

If you go to the Tableau page you can interact with these graphs. The version I’ve saved includes The Simpsons and its slow decline since seasons 5 and 6, which has you questioning how it is still on air 25 seasons later. But you have the ability to add any show.

You can also look at Good Shows with Good Endings. It’s interesting to browse through the list to see several of my favourites: Banshee, Justified, The Wire, Leverage, and Person of Interest. In this graphic I’ve added The Expanse, which has the interesting trend of getting better with each season and finishing at its peak.

As with all data, it is important to take on board the limitations of this presentation.

Take for example the widely loved mediocrity that was Friends. A highly rated show with a highly rated finale whose entire run could be described as middling entertainment. You can see that largely unchallenging, unengaging, inoffensive entertainment has more of a chance to keep its audience happy.

Another example is Game of Thrones. Hey, remember that show? It was only two years ago. Remember how this pop culture phenomenon died and hasn’t been spoken about with anything other than derision since? I’ve gone into how the final season and finale didn’t manage to meet fan expectations, but was it really worse than Baywatch, How I Met Your Mother, and [insert literally any sitcom ever here]?

I think the takeaway from these graphics is that people need to watch an episode of According to Jim or Big Brother to remind themselves that even at their worst, your favourite show was a well-crafted gem.

Film genre popularity

As a science nerd, I love graphs. So this post is an excuse to share the work of Bo McCready.

The first is a graphic of film genres over time. As you can see, some genres are niche (sci-fi and fantasy), some have become less popular over time (westerns and musicals), while some have become more popular (horror and documentaries). Meanwhile, comedy has been dominating since the 1930s.

It should be noted how the films are classified. Obviously, very few films are purely one genre. Westerns would often be (hugely problematic) action movies as well. Some westerns were also romances, and there are at least a few famous musicals in that genre too. More recently, sci-fi could be more accurately termed comic book/Marvel movies. But they also tend to be comedies, action, and box-office gold.

So what does this data actually tell us?

Well, I think it shows a couple of things. The first is that no one genre ever really dominates, despite what we may think. The second is that most films are rarely able to fit neatly within one genre box, no matter how hard reductionists wish they would. And the third is that a bit of humour is always welcome.

How Manga Took Over American Bookshelves

Who likes Manga? And more importantly for the smoking jacket wearing class, is it literature? This month’s It’s Lit! discusses.

Okay, let’s just ignore the American-centric aspect of PBS videos. I’m sure one of their bylaws is about having to do cultural imperialism.

It’s quite interesting how Manga and Anime have percolated out into the mainstream. Most people will have been exposed to at least some of the Anime of various Manga. For myself, I can remember watching Astro Boy as a kid and discovering comics of it at the library. This led to questions about why they would make a comic of a perfectly watchable TV show? Wouldn’t it make more sense to write something new that could be made into a TV show? Is there some reference in this card index that will help 9 year old me understand this issue better?

At the same time, Manga still has a fringe quality to it. This is partly due to it being (scare quotes) FOREIGN (/scare quotes). But it is also related to the comic format.

You see, comics are made for kids – puffs on pipe whilst leaning against mantle next to log fire, monocle helping me peer down my nose at those Lesser Works.

This tide is slowly turning. People are now able to recognise the merits of comics and Manga. And at some stage we might even get a decent live-action movie based on a Manga.

Astro Boy, Dragon Ball, Akira, Sailor Moon, Demon Slayer, Death Note: all these interesting, iconic anime have something very much in common. They started off as manga.

Manga, by its most simplistic definition, are comics or graphic novels originating from Japan, which became extremely popular in the United States starting in the 80s and 90s. We’ve already touched on Western Graphic Novels and Comics, but you know we couldn’t just leave it at that (not with this t-shirt). So today we’re discussing manga as its own rich literature, reflecting the complicated political history of Japan.

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres, and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

Book vs Movie: Shadow & Bone – What’s the Difference?

Screen Rant have broken down the differences between book and Netflix show for Shadow & Bone.

I’m about half-way through the series on Netflix. It didn’t exactly wow me out of the gate, but I did think it had potential. A few episodes in and I’m entertained.

I really appreciate the differences between the book and the show. It elevates the show above generic YA and gives us Inej Ghafa earlier in the series.

Although it’s still generic enough to have me laughing at the super obvious love triangle.

Book vs Movie: The Queen’s Gambit – What’s the Difference?

This month’s What’s the Difference? looks at The Queen’s Gambit.

On my Netflix profile there are a couple of types of recommendation categories that keep popping up. One is “Because you watched John Wick shoot 400 people in the head” and the other is “Movies based on books”. Both are obviously bound to have good recommendations in them.

Needless to say, I became aware of The Queen’s Gambit because it was based upon a book. That I’d never heard of the book probably tells you a lot about how many chess thrillers I read.

The changes mentioned in the video above were intriguing. The one that stuck out to me was the mother being turned into a rich woman with a PhD. In the 1950s. This really feels like some lazy shorthand by the screenwriters.

Often in movies you’ll have super-smart characters described as having a number of PhDs. Because smart people obviously feel the need to have multiple PhDs rather than doing postdocs, climbing the corporate or academic hierarchy, and becoming world renowned. It used to be that these smart characters would have a large IQ, but that is falling out of favour. I’m hoping it is because authors realise that IQ has limited utility, but realistically it will be because it has become a cliche.

The idea that the mother had to be rich is something I’ve noted with a lot of films and TV shows of late. There appears to be a fetishisation of wealth happening in our media. Sure, Snowpiercer, Parasite, etc., have all been popular of late. But look at how many protagonists are billionaires (or millionaires). Think about the revision of characters like Spider-Man from working class to being wealthy or having wealthy benefactors.

Maybe I’m just having selective memory. There is a noted phenomenon of movie and TV show productions displaying an utter failure to understand what things cost or what poverty looks like. Like having the Friends cast living in apartments that none of them could afford. Maybe I’m just hung up on the Spider-Man example – since a big part of his character was struggling financially whilst being a hero. Or maybe having poor characters makes product placement – like a lime green Alfa Romeo in Michael Bay’s 6 Underground – really hard.

Netflix made chess sexy again with its limited series The Queen’s Gambit. With Anya Taylor-Joy as orphan turned chess prodigy Beth Harmon, writer director Scott Frank created a cinematic portrayal of the mind of a chess genius through substance abuse, struggles with mental health and even the Cold War. But Walter Tevis wrote the novel in a simple, unadorned style that’s a far cry from the stylish and sexy version of the story that wound up in your Netflix queue. So with no restraint on spoilers, it’s time to ask, what’s the difference?

While Anya Taylor-Joy is a more glamorous version of Beth Harmon in the Golden Globe winning awards season darling, the story follows a lot of the same beats. Following her from orphan and prodigy to eventual chess master and world champion, Thomas Brodie-Sangster of Game of Thrones and Harry Melling from the Harry Potter franchise play more complicated versions of their book counterparts. But many of the changes, while seemingly small, have sneaky wide-ranging implications for Beth’s journey from learning the game from the janitor at an orphanage, to developing an addiction to pills and ultimately her victory over a Russian Grand Master. It’s an interesting and sometimes frustrating collection of changes!

Misunderstanding George Orwell and 1984

Have you heard people refer to us as currently living in 1984? Has someone said to you that data tracking is very Big Brother? Then you might enjoy this video from Dr Tom Nicholas.

I am routinely amazed at the vacuous, superficial, and cherry picked references people make to George Orwell’s novels, particularly 1984.

In some respects, I understand. 1984 is quite a lugubrious read. It and Animal Farm are often read during high school as compulsory texts, a time people are noted for being at the peak of their intellectual prowess. So it is understandable that people remember little, if anything, about Orwell’s books.

But it is frustrating to run across many “appeals to Orwell” by commentators (like Jordan Peterson). These people will present themselves as having read and internalised Orwell’s writing, and are now helping us understand its significance. Yet even just reading the SparkNotes should have people seeing through these commentators.

If there is any one line from Orwell that can dispel the misunderstandings more thoroughly than any other, it is this one from the essay Why I Write:

“Every line of serious work that I have written since 1936 has been written, directly or indirectly, against totalitarianism and for democratic socialism, as I understand it.”

In this month’s video, we’re looking at the work of both Jordan B. Peterson (author of Maps of Meaning, 12 Rules for Life and Beyond Order) and George Orwell (author of 1984, Animal Farm and Homage to Catalonia).

Professor Peterson has a video on his YouTube channel titled “On Free Thought and Speech in London” in which, inspired by seeing a statue of Orwell, he suggests that one of the aspects which separated the capitalist west from the communist east during the Cold War was an ability for journalists to “say what they think”.

Taking this as a starting point, I seek to dig into uses (and abuses) of George Orwell’s work by Peterson and the political right more broadly. Through contextualising Nineteen Eighty-Four and Animal Farm within Orwell’s own life, I seek to draw out the deep critiques of Peterson’s beloved “Western culture” which are contained within those books.

Towards the end, I also consider whether 1984 might provide an interesting lens for unpacking Peterson’s own work and the Cold War view of the world which underlies it.

What’s in a (Pen) Name?

This month’s It’s Lit! discusses author names and why they are often pseudonyms.

One of the things not discussed in the video is just how ancient the idea of pen names is, and how they have been used to denote multiple authors.

Homer’s works were probably written by many people, making Homer a pen name. Lao Tzu was also likely an attribution for the Taoist collection Tao Te Ching, based upon a semi-mythical founder of the philosophy and religion. The author favoured by business and military leaders, and people with aspirations to being serial killers, Sun Tzu, is believed to have never existed. Instead, decades or centuries of knowledge were collected under the name.

More recently, we see pen names being taken for collaborations. James SA Corey of The Expanse fame is the pen name of Daniel Abraham and Ty Franck. Grant Naylor of Red Dwarf fame is actually Rob Grant and Doug Naylor. Ilona Gordon and Andrew Gordon write under the name Ilona Andrews for their adventures in urban fantasy. And Tom Clancy is famously half-a-dozen homicidal monkeys taped together.

In all these instances, you can see why a pen name was utilised. Having two authors on the cover of a book is reserved for franchises, like James Patterson and Clive Cussler. The name that will sell the book is at the top in bold, and the flunky who actually wrote it is attributed somewhere they’ll be missed. So writing teams need a pen name. The more historical examples appear to be about attribution to a progenitor or (semi) mythical figure, either to honour the inspiration for later works (particularly from more oral traditions), or to collect work under one banner.

This makes pen names very interesting. Particularly as knowledge of the author/s fades into history. The art outlives the artist. Yet we still try to figure out who they were and how they came to entertain and influence us long after their passing.

Maybe one day our descendants will be arguing whether James Patterson was a real author or just a marketing brand for blooding new authors. Maybe by then the Clancy monkeys might have mellowed out a bit.

To some people, the idea of a pen name seems kind of weird. If I, a writer, am going to put countless hours of hard work and thought into my masterpiece, why wouldn’t I want to put my own name underneath the title?

But from Stephen King to Ben Franklin to …, who had their own secret aliases, to Mark Twain and Dr. Seuss, whose pseudonyms became so famous that they are remembered by their pen names and not their actual names, the nom de plume has a long and proud history in the literary world.

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres, and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

Book vs Movie: How Themes Changed in Lord of the Rings

It’s been 4 years since the last post about Lord of the Rings. Let’s do this!

This is a slightly different take on the differences between the book and the film. Wisecrack have looked at the major themes rather than diving into all of the changes made in the adaptation.

When I last discussed the Lord of the Rings series (see here, here, and here), I heaped praise upon the adaptations. They were able to trim down the waffle and create possibly one of the best trilogies in film history.

I hadn’t previously given a lot of thought to the changes in the themes between the book and the films. Now that it has been mentioned, the character arcs in the film should have been more obvious to me. I also find it interesting that the movies were (accidentally??) made to be more secular. Perhaps that change is as much to do with when the book was written versus when the films were made.

Given the desire to reboot and remake every intellectual property in the cupboard these days, maybe the next LOTR movies will be given an Avengers style makeover. Lots of quippy dialogue, everyone has lots of money, several of the lead characters spend time with their shirts off and arms bulging, and somehow there will be product placement everywhere.

Covid Writer’s Blockdown

Something I’ve been musing about for – checks calendar – YOU’VE GOT TO BE KIDDING ME, IT’S ONLY BEEN A YEAR???

Sorry, anyway, something I’ve been musing about is writing during a pandemic. At the start of lockdowns, I remember hearing the buzz about how there would be a rush of book submissions to agents and publishers with everyone having lots of time to write. All those people who believed they had a novel stuck inside them now had enough time to pull out the scalpel and repeatedly stab themselves until they realised they should have taken their doctor more seriously.

But now we’re coming out the other side of that sunny optimism. Articles are starting to appear discussing how lockdown has equalled blockdown.

Punny terms aside, the article in The Guardian touches on much of what I’ve been thinking about without really understanding the issue: it hints at the problem but never spells it out.

It all comes down to how the creative space works. You need to be able to let your mind wander off to the plains of [insert metaphor here, something really wankery that fits with us creative types] where your story can take shape. To let your mind wander requires a lack of interruptions, a level and type of noise that isn’t distracting, and you have to not be stressed (see my posts on these topics).

Now, what could possibly be getting in the way of creativity during Covid-19?

This is why the original articles talking about how lockdowns would lead to a splurge of novels always seemed optimistic to me. There were only superficial conditions for creativity, not the actual conditions for it. Just having kids in the house all day would be distracting enough to turn the best of times into the blurst of times. Add in working from home and the noted work creep that has come with it. Add in not working. Add in working on what is called the front-lines in a great reference to trench warfare – and how far away the generals are holding their tea party. Add in home-schooling. Add in stress, financial or existential. Add in feeling crowded in your workspace and then not leaving that workspace for weeks/months on end.

These aren’t the conditions for writing. These are the conditions for sitting on the couch, huddled under a blanket, mindlessly scrolling through social media in search of that sweet sweet shot of endorphins. Is it any surprise that baking sourdough bread, watching terrible Netflix original movies, and tidying the house became popular in 2020?

I recognised this early on and didn’t put too much pressure on myself to write. Sorry, rather, I didn’t put too much pressure on myself to write quality material. Writing wasn’t the problem. Having it resemble something that wasn’t a desperate cry for help or a tirade that would be combed over by profilers wondering why I’d committed such an unspeakable act was the problem.

I’m sure there are writers out there who haven’t had a problem with lockdowns and creativity. From what I’ve seen, dedicated workspaces for writing and a history of consistent writing habits (and being an empty nester) are helpful. But for the rest of us, creativity has been given all the wrong conditions to thrive, so don’t be too hard on yourself.

Or do be hard on yourself. Maybe we could have learned how to be creative under pressure. Maybe we do suck!

Book vs Movie: Death Note – What’s the Difference?

A bit of a change of pace for this What’s the Difference? with Wisecrack diving into the key difference between the Netflix version of Death Note and the Manga and Anime.

I was first exposed to Death Note via the Japanese live-action film adaptation. It was an intriguing and decent film (with some pretty dodgy CGI for Ryuk). That led me to watch the Anime TV adaptation, which is excellent, if just a bit heavy handed. I have to admit to reading very little of the Manga because I kinda felt like the Anime had covered it really well.

When I saw they were releasing an American version of Death Note on Netflix, I was all over it. I didn’t expect the dense and loquacious Anime, but was thinking they’d remake the film adaptation with better CGI, no subtitles (because Americans don’t read), and star some former Disney child actors looking to do something gritty but lucrative to make sure the mouse didn’t throw their souls into the volcano under Disneyland. What we got was 90 minutes of garbage.

On the Wikipedia page for the American Death Note film, it is described as “loosely adapted from the Japanese Manga”. The word loosely is doing a lot of heavy lifting in that description.

The above video does a pretty good job of covering what the film does wrong. Not different, wrong. The film really does feel like someone saw that Death Note was successful, so they bought the rights, got a director to read the elevator pitch for the series (kid gets the power to kill people by writing in a demon’s book), and thought that was all they needed to do. Everything about it is a failure to understand what Death Note was about. The characters are shallow and lack any value to the story. The story lacks any substance. And they managed to turn one of the most compelling sequences from the Anime into a chase scene involving the wrong characters.* Because every American film needs a chase scene… In short, they made a bad film and an even worse adaptation.

The main thing to remember about Death Note is that they’ve already made a very good adaptation with the Anime.

* See this video that discusses the scene from the Anime I’m referring to.

How does Netflix’s Death Note adaptation hold up to the original?

The anime version of Death Note, which is a faithful adaptation of the original manga, is one of our all-time favorites. So how does it compare to the not-so-beloved 2017 Netflix version? Let’s find out in this “Book” (aka Anime) vs. Film: Death Note.

Dune, The Most Important Sci Fi Series Ever?

This month’s It’s Lit! discusses the series that should have landed in cinemas this year. But 2020 had other ideas, being the giant indestructible spanner thrown into the works of regular functioning society. So let’s just talk about the books instead.

A couple of years ago, I finally got around to reading Dune. I had previously gotten my hands on three of the expanded universe books written by Kevin J Anderson and Herbert’s son. Let’s just say that those novels made me question the sanity of my friends who kept recommending the Dune novels.

Fortunately, I got past the ability of publishers to milk a premise long past the death of the cow. Dune was an excellent story.

In my review I alluded to the point made in the It’s Lit! video about how the first novel has the feel of the rise of a demagogue. Having not gotten to the sequels as yet, the deconstruction of that sounds particularly interesting. Dune only hints at the idea that getting rid of the awful ruling structures and leaders would be great. Destiny is tied into things a bit too much, while it appears the sequels unravel this idea.

Does this make the original novel and larger series the most important sci-fi ever? I’m not entirely convinced. Some books have inspired real life advances in technology or society (although less of the latter). I’m not sure Dune has had that impact, unless there is a spice I should be using in my cooking that I’m unaware of. That isn’t to say Dune isn’t a great book (I’ll hopefully have some insight on the series in coming months) nor that it wasn’t influential in sci-fi. The lone fact that it managed to show that sci-fi could be a bestseller, particularly in hardcover, was a wake-up call for the publishers who rejected the first novel, leaving it to be picked up by an auto-repair manual publisher.

The main issue will be whether the new movie will arrive and not be the disappointment the other adaptations have been.

The planet is Arrakis. Also known as Dune. And y’all, it’s a mess. December of this year, we were supposed to see the arrival of director Denis Villeneuve’s interpretation of the 1965 novel Dune, which had been previously (and rather infamously) brought to life by David Lynch in 1984, and again in a three-part miniseries on the SyFy channel in the early 2000s. Now many sci-fi nerds were both excited and nervous about the new adaptation directed by Villeneuve, but owing to the ongoing plague of eternity, the release has been pushed back to next year. So in lieu of that, y’all have to use this video to tide you over.

What is Dune? Why must the spice flow? And what is with all the sand?

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

The Fiery History of Banned Books

Time to talk about banned books again.

I’ve been talking about banned books here for quite some time. Australia does it, the USA has an annual Banned Books Week (1, 2, 3, 4), and without fail, the reasons for banning books are stupid.

If there is any one term to summarise why books are banned it is because something in the book makes someone feel uncomfortable.

Don’t particularly feel like discussing historical and contemporary racism in the USA, especially if this discussion highlights current social and personal failings to address the issue? Then ban Huckleberry Finn (or just drop it from the curriculum) because it uses the N-word.

Does discussion of sexuality and sex make you blush or feel inadequate for only knowing one position (facing west and thinking of England)? Then ban books that mention sex. Or nudity. Or sound like they might be.

Does the book you’re reading treat LGBTQI+ people as (shock horror) people? Then quick, ban that thing before anyone has a chance to empathise with a marginalised group and think that treating them poorly shouldn’t be happening.

Since at least 213 BCE, book burnings have been a reaction to the power of the written word. When roasting paper in a giant circle went out of style (at least in the intellectual sphere), governments would take it upon themselves to ban books. However, when we talk about book bannings today, we are usually discussing a specific choice made by individual schools, school districts, and libraries in response to the moralistic outrage of some group. This, while still hotly contested and controversial, is nothing in comparison to the ways books have been removed, censored, and outright destroyed in the past. So on that happy note, let’s … explore how the seemingly innocuous book has survived centuries of the ban hammer.

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

Can robots write?

Machine learning produces dazzling results, but some assembly is still required


Alexandra Louise Uitdenbogerd, RMIT University

You might have seen a recent article from The Guardian written by “a robot”. Here’s a sample:

I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

Read the whole thing and you may be astonished at how coherent and stylistically consistent it is. The software used to produce it is called a “generative model”, and such models have come a long way in the past year or two.

But exactly how was the article created? And is it really true that software “wrote this entire article”?

How machines learn to write

The text was generated using the latest neural network model for language, called GPT-3, released by the American artificial intelligence research company OpenAI. (GPT stands for Generative Pre-trained Transformer.)

OpenAI’s previous model, GPT-2, made waves last year. It produced a fairly plausible article about the discovery of a herd of unicorns, and the researchers initially withheld the release of the underlying code for fear it would be abused.

But let’s step back and look at what text generation software actually does.

Machine learning approaches fall into three main categories: heuristic models, statistical models, and models inspired by biology (such as neural networks and evolutionary algorithms).

Heuristic approaches are based on “rules of thumb”. For example, we learn rules about how to conjugate verbs: I run, you run, he runs, and so on. These approaches aren’t used much nowadays because they are inflexible.

Writing by numbers

Statistical approaches were the state of the art for language-related tasks for many years. At the most basic level, they involve counting words and guessing what comes next.

As a simple exercise, you could generate text by randomly selecting words based on how often they normally occur. About 7% of your words would be “the” – it’s the most common word in English. But if you did it without considering context, you might get nonsense like “the the is night aware”.

More sophisticated approaches use “bigrams”, which are pairs of consecutive words, and “trigrams”, which are three-word sequences. This allows a bit of context and lets the current piece of text inform the next. For example, if you have the words “out of”, the next guessed word might be “time”.
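A bigram model of the kind described here can be sketched in a few lines of Python. To be clear, this is my own illustrative sketch, not the code behind any real system, and the tiny “corpus” is invented:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the list of words that followed it."""
    counts = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current].append(following)
    return counts

def generate(counts, seed, length=8):
    """Grow a sentence by repeatedly sampling a word that followed
    the previous word somewhere in the training text."""
    out = [seed]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "out of time and out of mind and out of time again"
model = train_bigrams(corpus)
print(generate(model, "out", length=5))
```

Because words are sampled in proportion to how often they followed the previous word, common continuations like “out of” dominate the output, which is exactly the behaviour the auto-complete example below relies on.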

This happens with the auto-complete and auto-suggest features when we write text messages or emails. Based on what we have just typed, what we tend to type and a pre-trained background model, the system predicts what’s next.

While bigram- and trigram-based statistical models can produce good results in simple situations, the best recent models go to another level of sophistication: deep learning neural networks.

Imitating the brain

Neural networks work a bit like tiny brains made of several layers of virtual neurons.

A neuron receives some input and may or may not “fire” (produce an output) based on that input. The output feeds into neurons in the next layer, cascading through the network.

The first artificial neuron was proposed in 1943 by US researchers Warren McCulloch and Walter Pitts, but artificial neurons have only become useful for complex problems like generating text in the past five years.

To use neural networks for text, you put words into a kind of numbered index. You can use the number to represent a word, so for example 23,342 might represent “time”.

Neural networks do a series of calculations to go from sequences of numbers at the input layer, through the interconnected “hidden layers” inside, to the output layer. The output might be numbers representing the odds for each word in the index to be the next word of the text.

In our “out of” example, number 23,342 representing “time” would probably have much better odds than the number representing “do”.
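The idea of an output layer producing odds for every word in the index can be illustrated with a toy calculation. The vocabulary and the raw scores below are invented for illustration; real networks produce one score per word in a vocabulary of tens of thousands:

```python
import math

# Toy vocabulary: each word gets an index position.
vocab = ["time", "do", "mind", "luck"]

# Pretend these are the raw scores the network's output layer
# produced after seeing the input "out of" (invented numbers).
logits = [4.0, 0.5, 2.0, 1.0]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2f}")

# "time" ends up with the highest probability, matching the intuition
# that "out of time" is more likely than "out of do".
best = vocab[probs.index(max(probs))]
```

Sampling from these probabilities, rather than always taking the single best word, is what lets the generated text vary from run to run.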

What’s so special about GPT-3?

GPT-3 is the latest and best of the text modelling systems, and it’s huge. The authors say it has 175 billion parameters, which makes it at least ten times larger than the previous biggest model. The neural network has 96 layers and, instead of mere trigrams, it keeps track of sequences of 2,048 words.

The most expensive and time-consuming part of making a model like this is training it – updating the weights on the connections between neurons and layers. Training GPT-3 would have used about 262 megawatt-hours of energy, or enough to run my house for 35 years.

GPT-3 can be applied to multiple tasks such as machine translation, auto-completion, answering general questions, and writing articles. When people are asked to judge whether its articles were written by a human, they now get it right only about half the time, which is no better than guessing.

The robot writer

But back to how the article in The Guardian was created. GPT-3 needs a prompt of some kind to start it off. The Guardian’s staff gave the model instructions and some opening sentences.

This was done eight times, generating eight different articles. The Guardian’s editors then combined pieces from the eight generated articles, and “cut lines and paragraphs, and rearranged the order of them in some places”, saying “editing GPT-3’s op-ed was no different to editing a human op-ed”.

This sounds about right to me, based on my own experience with text-generating software. Earlier this year, my colleagues and I used GPT-2 to write the lyrics for a song we entered in the AI Song Contest, a kind of artificial intelligence Eurovision.

AI song Beautiful the World, by Uncanny Valley.

We fine-tuned the GPT-2 model using lyrics from Eurovision songs, provided it with seed words and phrases, then selected the final lyrics from the generated output.

For example, we gave Euro-GPT-2 the seed word “flying”, and then chose the output “flying from this world that has gone apart”, but not “flying like a trumpet”. By automatically matching the lyrics to generated melodies, generating synth sounds based on koala noises, and applying some great, very human, production work, we got a good result: our song, Beautiful the World, was voted the winner of the contest.

Co-creativity: humans and AI together

So can we really say an AI is an author? Is it the AI, the developers, the users or a combination?

A useful idea for thinking about this is “co-creativity”. This means using generative tools to spark new ideas, or to generate some components for our creative work.

Where an AI creates complete works, such as a complete article, the human becomes the curator or editor. We roll our very sophisticated dice until we get a result we’re happy with.



Alexandra Louise Uitdenbogerd, Senior Lecturer in Computer Science, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Let the soul dangle’: how mind-wandering spurs creativity

The Renaissance painter Albrecht Dürer was regarded by his friends as a master in the art of mind-wandering. He could become ‘enwrapped’ in his own pleasant reflections, wrote the German humanist Willibald Pirckheimer, at which times Dürer ‘would seem the happiest person on Earth’.

Many of us are familiar with mind-wandering in a number of guises: procrastination, reflection, meditation, self-flagellation, daydreaming. But while some mental meandering seems fruitful, on other occasions it has the unmistakeable bite of a bad habit, something that holds us back from reaching our full potential. Reverie can be a reprieve from reality and a font of inspiration, yes. But equally familiar is the mind’s tendency to devolve into sour and fruitless rumination when left to its own devices, especially when we’re in the grip of depression, anxiety or obsession.

Can art itself be a useful catalyst for nudging us towards more helpful emotions and mental states? Whether in the form of literature, rap or abstract oil painting, many of us know we can improve the tenor of our thoughts by contemplating art. The Germans have a lovely saying for the benefits of keeping an idle (or idling) mind: ‘die Seele baumeln lassen’, meaning ‘let the soul dangle’. Now, the emerging science of neuroaesthetics is beginning to reveal the biological processes that sit behind such ‘dangling’.

To begin with, contemporary cognitive science has presented a vast amount of evidence that mental states send and receive ripples of cause and effect across the rest of the body. Think how your mouth might water when you look at a photo of a tasty chocolate cake, or how tense you feel when watching a suspenseful TV drama. Thoughts, feelings and emotions, whether aimless or deliberate, are a somatic cascade of multiple biological events. And it’s this cascade that art somehow taps into.

Galen, the second-century Greek physician, was well aware of the connection between mind and body. He believed that mind-wandering was the result of physical and mental lassitude, and so prescribed a regime of logic and hard, structured work to avoid it. ‘Laziness breeds humours of the blood!’ Galen is believed to have said. The assumption here is that concentration is a kind of psychobiological discipline, something we have to work at to stop our wayward minds and bodies from veering out of our control.

However, there’s an even older tradition from Ancient Greece that views daydreaming as a boost to our wellbeing. Galen’s Hippocratic forebears argued that mind-wandering was in fact the best strategy for guiding us back into healthy states. And modern-day research in developmental psychology has shown that children and adults who engage in certain kinds of mind-wandering actually display more cognitive flexibility, and perform better when called upon to exercise ‘executive’ functions such as problem-solving, planning and managing their own thoughts and feelings.

Neuroimaging – a method of ‘seeing’ the brain in action – has started to reveal the brain processes that correlate with these mental states. Far from falling idle, the brains of people asked to stay still and think of nothing in particular continue to fizz and pop in patterns of activity known as the default mode network (DMN). These activations are closely related to those engaged during self-referential thinking, the experience of the self, and intuition. Moreover, they are observed alongside activation patterns in the prefrontal cortex (PFC) – the area typically associated with those important ‘executive’ functions. Strikingly, the greater the strength of the relationship between these two domains of the brain – intuition and executive function – the more creativity a person tends to display when asked to solve a problem. Brain scans demonstrate correlation, not causation; but even so, they hint at the possibility that reverie might help to prime us to think both productively and creatively by somehow cementing our sense of self, drawing body and mind together in a train of thought and biological action.

Art can be a catalyst for this sort of reverie, as well as a tool to regulate and control it. Both the basic properties of art (whether it’s in a minor or major key; the colours of a painting), as well as the complexities of its content (the lyrics of a song, the facial expression of a person in a painting), can induce reflections and emotions – and will invariably affect our body’s physiology. Thinking creatively, and engaging with works of art, have both been correlated with DMN activity – especially when people report that the aesthetic experience was particularly strong and meaningful to them. In these moments, our encounter with art seems to trigger an autobiographical daydreaming, a flow experience with a ‘me factor’.

Of course, art can also provoke unhelpful ruminative urges. Listening over and again to that song might not help you get over a heartbreak. But art-induced sadness doesn’t always make you slide into negative mental loops. In fact, art can help us adapt to the immediate source of pain by acting as a prop for emotional catharsis. We all know the strange, pleasurable, consoling feeling that comes after having a good cry. This experience appears to be precipitated by the release of the hormone prolactin, which has also been associated with a boosted immune system, as well as bonding with other people. The arts are a relatively safe space in which to have such an emotional episode, compared with the real-life emotional situations that make us cry. Even sad or otherwise distressing art can be used to trigger a kind of positive, psychobiological cleansing via mind-wandering.

History is full of examples of the relationship between reverie and creativity. Here is one, idiosyncratic example: the German art historian Aby Warburg (1866-1929) organised his library of 50,000 books with the aim of promoting mind-wandering. His collection was the kernel for the Warburg Institute in London, where we now work as researchers. Each of the library’s four floors is devoted to one of four themes – image, word, orientation, and action – and separated into sub-themes, such as ‘magic and science’, ‘transmission of classical texts’, and ‘art history’. Guided by Warburg’s ideas about what makes a good neighbour for a book, this unique approach to classification allows a withered 17th-century medical tome to cluster next to texts on mathematics, the cosmos and harmony. The shelves promote intellectual serendipity as you skip from the book (or thought) you thought you wanted, to another intriguing idea or topic that hadn’t even occurred to you.

Art appreciation is held in high esteem in most cultures and societies. It is often portrayed as a laborious cognitive exercise, but this is to forget that the arts provide an opportunity for intense emotional experiences, positive mind-wandering and psychobiological self-regulation. Dürer perhaps captures the activity of such inactivity best of all. ‘If a man devotes himself to art,’ he wrote, ‘much evil is avoided that happens otherwise if one is idle.’

Julia Christensen, Guido Giglioni & Manos Tsakiris

This article was originally published at Aeon and has been republished under Creative Commons.
https://aeon.co/ideas/let-the-soul-dangle-how-mind-wandering-spurs-creativity

Books to musicals

This month It’s Lit! looks at the source material that helps people burst into song.

It’s gotta be said: I’m not a fan of musicals.

Maybe it was the “Andrew Lloyd Webber’s Greatest Hits” tape that infected our car stereo during long trips as a child. Maybe it is that for every good song in a musical there are three to twenty average-to-terrible songs. Maybe it’s that my idea of a musical begins and ends with Elvis movies (We’re Gonna Win This Race).

It appears clear that my appreciation of musicals is somewhat shaped by poor childhood experiences. These scars are real!!

Regardless, it is still interesting to see how important book adaptations are in the creation of musicals. The writing process is obviously complicated: you have to take a book and not only capture the story in a visual form, but also write songs that don’t make you take a power drill to your ears.

Perhaps transforming books into musicals is the peak of book adaptations.

Some say that theater is dead, and that’s probably because most playhouses the world over are closed at the moment owing to a worldwide pandemic. And yet the musical lives on… on Disney Plus, as the nation has been rapt with a filmed version of the Broadway smash hit, Hamilton.

This had us come to the realization that a lot of the bread and butter of musical theater is built off of books! And so, like every television program that starts looking for new ideas, it has finally come to this: The It’s Lit! Musical episode.

Hosted by Lindsay Ellis and Princess Weekes, It’s Lit! is a show about our favorite books, genres and why we love to read. It’s Lit has been made possible in part by the National Endowment for the Humanities: Exploring the human endeavor.

Book vs Movie: Total Recall – What’s the Difference?

In this month’s What’s the Difference, CineFix look at Total Recall* and Philip K. Dick’s We Can Remember It for You Wholesale.

Can you believe it has been 30 years since the release of Total Recall? At least nobody invented Johnny Cabs in that time.

Many years ago I wrote a post discussing my thoughts about the differences between the first Total Recall movie, the remake with Kate Beckinsale (and some guy called Co-lin Faarill), and the book/short story. In it, I talked about how quickly the movies diverge from the book, essentially before the end of the first act (around the inciting incident). And then I went on to spend several thousand words complaining about the lack of massive biceps and extra boobs in the remake.

For me, this comparison of book to movie and remake shows just how far you can diverge from the source material whilst still retaining a lot of similarities. It also shows the strength of the original premise from Philip K. Dick, because even the remake of Total Recall didn’t completely suck, despite having Len Wiseman involved.**

I’m sure by the time the fortieth anniversary for the original movie rolls around, Hollywood will have released at least two remakes, a TV show, a Mars Lander tie-in short movie with a digitally recreated young Arnie, and a triple breast augmentation procedure.

* The first one, not the bland remake with the genocide of robots.

** The remake mainly suffers from being just that bit soulless. It doesn’t feel like anyone involved cared that much about the film, just that it was a good solid paycheque. As a result, they churned out a good solid action movie that is largely forgettable. Another one in the long line of perfectly adequate movies that make you feel like you’ve been robbed of the opportunity for something better. Not bad enough to justify your hate, but not good enough that you’ll forgive its flaws.

Some News:

I apologise for the lack of updates lately. I have several book reviews I haven’t gotten to, a couple of posts I’ve contemplated and then given up on, and a few of my regular posts (like this Book vs Movie series) that I haven’t published. This is partly sheer laziness and partly due to having taken on a freelance writing job for a magazine due out later this month. I’ll attempt to get back to weekly posts soon.