Every author is asked by new writers for advice. There is, however, no single, all-encompassing answer that also happens to be correct. Many commonly offered suggestions (“write every day”) don’t work for everyone and should be approached with caution.
A few years ago, I set out to create a list that would benefit all new writers. I put ten commandments through the wringer of my peers, who suggested modifications and noted that the list applies not just to new writers but to writers at every stage of their careers. Indeed, I’ve needed reminding of more than one myself.
Here, then, are the 10½ commandments of writing – with an extra one for free.
1. Read widely
To succeed as a writer, you must occasionally read. Yet there are wannabe novelists who haven’t picked up a book in years. There are also, more tragically, writers too busy to engage with the end product of our craft. If the only thing you’re reading is yourself, you are bound to miss out on valuable lessons.
The same applies to reading only within a favourite genre. A varied diet will strengthen your literary muscles.
2. Write
No need to thrash out 1,000 words a day or pen a perfect poem before breakfast, but you do have to write. The fundamental qualification for being a writer is putting words on the page.
If you aren’t doing that now, it’s possible you never will.
3. Follow your heart
When you really want to write literary fiction, but the market wants paranormal romance, write literary fiction. Chasing paranormal romance will be futile. Writing well is hard enough without cynicism getting in the way.
Passion doesn’t always pay, but it increases the odds of your work finding a home.
4. Be strategic
But the choice is never between just literary fiction and paranormal romance. You might have poetry and narrative non-fiction passion projects as well, and it’s possible narrative non-fiction will appeal to the widest audience. If a wider audience is what you want, narrative non-fiction is the one to choose.
If, however, you don’t give two hoots about your audience, write what you like.
There are lots of different kinds of writers and lots of different paths to becoming the writer you want to be.
5. Be brave
Writing is hard, intellectually and physically. It also takes emotional work, dealing with exposure, rejection, fear and impostor syndrome. It’s better you know this upfront, in order to fortify yourself.
These crises, however, are surmountable. We know this because there are writers out there, leading somewhat normal lives, even healthy and happy ones. You can too, if you don’t give up.
The ones who persist are the ones who prevail.
6. Be visible
Many writers would prefer to remain hidden in a dark cave for all eternity. But stories demand to be communicated, which means leaving that cave. Whether it’s you or your written word, or both, breaking out of the bubble of self-isolation is important.
This doesn’t mean assaulting every social platform and attending every festival and convention. Find the kind of engagement that suits you and embrace it, and don’t overdo it. Remember: you still have to write.
7. Be professional
Don’t lie. Don’t belittle your peers and don’t steal from them. Keep your promises. Communicate. Try to behave like someone people will want to work with – because we all have to do that, at some point.
8. Listen
Heed what the people you’re working with are saying, because you never know what gems of knowledge you might glean – about craft, about the market, about something you’re working on – among the knowledge you (think you) already possess.
9. Don’t settle
Every story requires different skills. You’ll never, therefore, stop learning how to write. The day you think you’ve worked it out is the day the ground beneath you begins to erode, dropping you headlong into a metaphorical sinkhole – and nobody wants that. Least of all your readers.
Readers can tell when you’re getting lazy, just like they can tell when you’re faking. You’re one of them. Deep down, you’ll be the first to know.
10. Work hard
Put in the hours and you’re likely to get some return on your investment. How many hours, though?
There’s a wonderful saying: “Even a thief takes ten years to learn her trade.” Writing is no different to any other career. Hope for overnight success; plan for being like everyone else.
The bonus commandments
When I put this list to my friends, several raised the importance of finding your people. Although I agree this is an important principle, I would argue it is implicit in commandments 6–8, which have no meaning without engagement. I decided to encapsulate it as commandment 10½: Embrace community.
After I’d been teaching and giving talks on this topic for several years, someone suggested another commandment that lies beneath the rest. It is so fundamental that none of the others will work unless you have this in spades. It is commandment 0: Really want it – which sounds so obvious that it barely needs stating, except it does.
One day, I may no longer want to write. If that happens, I will take every mention of writing from this list and substitute the name of a new vocation – because this list applies to everything.
The basic gist of the article is that it would be really nice if some of the additions to the school book list were actually being taught to kids. We’re still seeing the same old “favourites” being taught, mainly because they’ve always been taught so there are plenty of SparkNotes on them.
The audacious dream is to expose kids to more authors, diversity of texts, and some of the other great books that weren’t written by a dude wearing a ruff. I’d hope that this more diverse array of texts will inspire a lifelong love of reading by showing kids that there is more to reading than a couple of 400-year-old plays and some poetry that even poets regard as pretentious.
What was interesting was the response on social media and in the comments.
“There aren’t any great female and people of colour authors!! I’d have learned about them in school if there were!! Stop being racist and sexist against white men!!”
Arguments like this are, of course, said without a hint of irony.*
These arguments are always frustrating. The traditionalism argument about how great these authors are ignores how those texts make it onto the syllabus in the first place and that literally no one wants to take them off anyway. It also feeds into the larger problem of Book Wardens, who suck all the fun out of reading. I want to make a joke about crusty old vampires ruining reading, but they sparkle now, so they’d make it fabulous.
There is also a reactionary culture-warrior aspect to this argument. Quick, someone who knows more about this subject has suggested a change for the better: man the keyboards, all-caps the objections, haul out the canards!
These brave warriors are the last defenders against those evil thinkers and knowers. Only they can protect society from people who would dare to acknowledge there are other decent books worth reading.
In some respects, they remind me of the Literati who bravely defend Fort Literature from the invading hordes of the Lesser Works. As I’ve pointed out previously, the distinction between what we call literature and genre has its origins in the class divide of the Industrial Revolution. Workers got to read one type of magazine, whilst richer managers (but not the capitalists) got a fancier magazine. The stories published in the fancier magazines became literary, whilst the rest became genre. So it is quite literally the snobbery of class divides deciding what is literary.
These reactionary culture warriors aren’t necessarily siding with the Literati so much as reinforcing the status quo. They like the nice ye-olde definition of literary and artistic merit we often operate under in society. But it isn’t a good definition, as it is more about what a certain group of people like. And that certain group holds the power, which the reactionary culture warriors need to defend at all costs!**
Maybe if these warriors (and literati) were to actually read some of the other great books they might learn something.
* Said on the internet, the greatest information resource in history, no less. But worse, the article and people like myself were pointing out the problems with their arguments. It’s like trying to lead a horse to a glue factory and they are refusing to acknowledge they were too slow for racing.
** The reason why is interesting. For some, it is just about “change bad”. For some, it is about pwning the libs, which as far as I can tell appears to be anyone who has read a book since high school. For others, it is about sucking up to those with power or influence in the hopes they will be rewarded in some way. This seems like an odd position to take given the topic at hand, but it has to be about the only time I’ve seen an Arts Professor lauded for their support of (insert classic literary text here). We live in strange times.
In independent publishing, 10% of the book sale goes to the author, perhaps another 10% to the printer, and up to a whopping 70% for distribution. What’s left has to pay the publisher, editor, marketers, admin staff and keep the lights on.
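To make that split concrete, here is a back-of-the-envelope sketch in Python. The percentages are the indicative figures quoted above; the cover price is a made-up example, and real contracts vary widely:

```python
# Illustrative split of an independently published book's cover price,
# using the rough percentages quoted above (actual contracts vary).
retail_price = 29.99  # hypothetical cover price in AUD

author_share = 0.10 * retail_price        # ~10% to the author
printer_share = 0.10 * retail_price       # ~10% to the printer
distribution_share = 0.70 * retail_price  # up to 70% to distribution

# Whatever remains must cover the publisher, editor, marketers,
# admin staff and the electricity bill.
publisher_remainder = (retail_price - author_share
                       - printer_share - distribution_share)

print(f"Author:      ${author_share:.2f}")
print(f"Printer:     ${printer_share:.2f}")
print(f"Distributor: ${distribution_share:.2f}")
print(f"Publisher:   ${publisher_remainder:.2f}")
```

On these numbers, the publisher is left with roughly the same 10% slice as the author, out of which everything else must be paid.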
But we can improve our approach to author rights. Here are five lessons we can learn from elsewhere to help Australian writers earn more money.
#1: Make ‘out of print’ clauses meaningful
Traditionally, contractual “out of print” clauses have let authors reclaim their rights when a print run has sold out and the publisher doesn’t want to invest in another. But in our recent analysis of almost 150 contracts in the Australian Society of Authors archive, we found 85% of contracts with these clauses allowed authors to reclaim their rights only when the book was “not available in any edition”.
These days, books can be kept available (at least digitally or via print-on-demand) forever – but that doesn’t mean their publishers are still actively promoting them.
A better approach is to allow authors to reclaim their rights towards the end of a work’s commercial life, determined with reference to objective criteria like the number of copies sold or royalties earned in the previous year. The Australian Society of Authors recommends authors only sign contracts that have this meaningful kind of out-of-print clause – but many publishers still try to get authors to sign up to unacceptable terms.
A growing number of countries (including France, Romania, Slovenia, Spain, Macedonia and Brazil) mandate rights reversion based on objective criteria. The French law is an interesting model: there, authors can get their rights back if a book has been published for at least four years and they haven’t been credited royalties for at least two. This opens up new possibilities for the author to license the work to another publisher, or even sell it directly to libraries or consumers.
#2: ‘Use it or lose it’: return author rights when they’re not being used
Publishers take very broad rights to most books: in our recent archival analysis we found 83% took worldwide rights, and 43% took rights in all languages. It’s easy to take rights – but if publishers do so, they should be obliged to either use them or give them back.
To that end we can learn from the “use it or lose it” laws that bind publishers in some parts of Europe. In Spain and Lithuania, for example, authors can get their rights back for languages that are still unexploited after five years.
#3: Introduce a ‘bestseller’ clause to contracts
Of course, it’s not always the case that there’s no money in publishing: sometimes a title that was expected to sell 5,000 copies sells 5,000,000. That changes the economics enormously – but in many cases, the contract still provides only the same old 10% royalty for the author. For works that achieve unexpected success, we can learn from Germany and the Netherlands (and the proposed new EU copyright law). They have “bestseller” clauses that give authors the right to share fairly in unexpected windfalls arising from their work.
#4: Legally enshrine the right to fair payment
Even where there’s not much money to be made, the author should still receive a fair share. Again, Germany and the Netherlands lead the way on this. There, authors are entitled to “fair” or “equitable” payment for their work – and can enforce those rights if their pay is too low.
These laws don’t set a dollar amount, since what is “fair” depends on all the circumstances. However, such laws at least provide a minimum floor. If the contracted amount is unfair or inequitable, authors have a legal right to redress.
#5: Put time limits on transfers
In Australia, copyright lasts for the life of the author, and then another 70 years after that. Publishers almost always take rights for that full term – only 3% of the contracts between publishers and authors we looked at took less. But publishers don’t need that long to recoup their investments. In the US, authors can reclaim their rights from intermediaries 35 years after they licensed or transferred them.
In Canada, copyrights transfer automatically to heirs 25 years after an author dies. We used to have the same law in Australia, but it was abolished for spurious reasons about 50 years ago. If we reintroduced a similar time limit on transfers, it would open up new opportunities for authors and their heirs (for example, to license or sell to a different publisher, libraries or direct to the public).
It’s true that there’s often not much money in publishing. But by changing our approach to author rights, we can help writers earn more and make Australian books more freely available.
Industry and educators agree: the world needs creativity. There is interest in the field and plenty of urging, but remarkably little action. Everyone is a bit scared of what to do next. On the question of creativity and imagination, they are mostly uncreative and unimaginative.
Some of the paralysis arises because creativity can’t easily be defined. It resists the measurement and strategies we’re familiar with. Daunted by the simultaneous vagueness and sublimity of creative processes, educators seek artificial ways to channel imaginative activity into templates that end up compromising the very creativity they celebrate.
For example, creativity is often reduced to problem-solving. To be sure, you need imagination to solve many curly problems and creativity is arguably part of what it takes. But problem-solving is far from the whole of creativity; and if you focus creative thinking uniquely on problems and solutions, you encourage a mechanistic view – all about scoping and then pinpointing the best fit among options.
It might be satisfying to create models for such analytical processes but they distort the natural, wayward flux of imaginative thinking. Often, it is not about solving a problem but seeing a problem that no one else has identified. Often, the point of departure is a personal wish for something to be true or worth arguing or capable of making a poetic splash, whereupon the mind goes into imaginative overdrive to develop a robust theory that has never been proposed before.
For teaching purposes, problems are an anxious place to cultivate creativity. If you think of anyone coming up with an idea — a new song, a witty way of denouncing a politician, a dance step, a joke — it isn’t necessarily about a problem but rather a blissful opportunity for the mind to exercise its autonomy, that magical power to concatenate images freely and to see within them a bristling expression of something intelligent.
That’s the motive behind what scholars now call “Big C Creativity”: i.e. your Bach or Darwin or Freud who comes up with a major original contribution to culture or science. But the same is true of everyday “small C creativity” that isn’t specifically problem-based.
Relishing the independence of the mind is the basis for naturally imaginative activity, like humour, repartee, a gestural impulse or theatrical intuition, a satire that extrapolates someone’s behaviour or produces a poignant character insight.
A dull taming
Our way of democratising creativity is not to see it in inherently imaginative spontaneity but to identify it with instrumental strategising. We tame creativity by making it dull. Our way of honing the faculty is by making it goal-oriented and compliant to a purpose that can be managed and assessed.
Alas, when we make creativity artificially responsible to a goal, we collapse it with prudent decision-making, whereupon it no longer transcends familiar frameworks toward an unknown fertility.
We pin creativity to logical intelligence as opposed to fantasy, that somewhat messy generation of figments out of whose chaos the mind can see a brilliant rhyme, a metaphor, a hilarious skip or roll of the shoulders, an outrageous pun, a thought about why peacocks have such a long tail, a reason why bread goes stale or an astonishing pattern in numbers arising from a formula.
Because creativity, in essence, is somewhat irresponsible, it isn’t easy to locate in a syllabus and impossible to teach in a culture of learning outcomes. Learning outcomes are statements of what the student will gain from the subject or unit that you’re teaching. Internationally and across the tertiary system, they take the form of: “On successful completion of this subject, you will be able to …” Everything that is taught should then support the outcomes and all assessment should allow the students to demonstrate that they have met them.
After a lengthy historical study, I have concluded that our contemporary education systematically trashes creativity and unwittingly punishes students for exercising their imagination. The structural basis for this passive hostility to the imagination is the grid of learning outcomes in alignment with delivery and assessment.
It might always be impossible to teach creativity but the least we can do for our students is make education a safe place for imagination. Our academies are a long way from that haven and I see little encouraging in the apologias for creativity that the literature now spawns.
My contention is that learning outcomes are only good for uncreative study. For education to cultivate creativity and imagination, we need to stop asking students anxiously to follow demonstrable proofs of learning for which imagination is a liability.
[Image: green algae in Kuwait – biologists are gathering evidence of green algae becoming carbohydrate-rich but less nutritious, due to increased carbon dioxide levels. As science fiction becomes science fact, new forms of storytelling are emerging. Photo: Raed Qutena]
I count myself lucky. Weird, I know, in this day and age when all around us the natural and political world is going to hell in a handbasket. But that, in fact, may be part of it.
Back when I started writing, realism had such a stranglehold on publishing that there was little room for speculative writers and readers. (I didn’t know that’s what I was until I read it in a reader’s report for my first novel. And even then I didn’t know what it was, until I realised that it was what I read, and had always been reading; what I wrote, and wanted to write.) Outside of the convention rooms, that is, which were packed with less-literary-leaning science-fiction and fantasy producers and consumers.
Realism was the rule, even for those writing non-realist stories, such as popular crime and commercial romance. Perhaps this dominance was because of a culture heavily influenced by an Anglo-Saxon heritage. Richard Lea has written in The Guardian of the fiction/non-fiction divide as a construct of English literature, arguing other cultures do not distinguish so obsessively between stories on the basis of whether or not they are “real”.
Regardless of the reason, this conception of literary fiction has been widely accepted – leading self-described “weird fiction” novelist China Miéville to identify the Booker as a genre prize for specifically realist literary fiction; a category he calls “litfic”. The best writers Australia is famous for producing aren’t only a product of this environment, but also role models who perpetuate it: Tim Winton and Helen Garner write similarly realistically, albeit generally fiction for one and non-fiction for the other.
Today, realism remains the most popular literary mode. Our education system trains us to appreciate literatures of verisimilitude; or, rather, literature we identify as “real”, charting interior landscapes and emotional journeys that generally represent a quite particular version of middle-class life. It’s one that may not have much in common these days with many people’s experiences – middle-class, Anglo or otherwise – or even our exterior world(s).
Like other kinds of biases, realism has been normalised, but there is now a growing recognition – a re-evaluation – of different kinds of “un-real” storytelling: “speculative” fiction, so-called for its obviously invented and inventive aspects.
a much larger collective conviction about who’s entitled to tell stories, what stories are worth telling, and who among the storytellers gets taken seriously … not only in terms of race and gender, but in terms of what has long been labelled “genre” fiction.
Rawson’s latest book, From the Wreck, intertwines the story of her ancestor George Hills, who was shipwrecked off the coast of South Australia and survived eight days at sea, with the tale of a shape-shifting alien seeking refuge on Earth. In an Australian first, it was long-listed for the Miles Franklin, our most prestigious literary award, after having won the niche Aurealis Award for Speculative Fiction.
The Aurealis awards were established in 1995 by the publishers of Australia’s longest-running, small-press science-fiction and fantasy magazine of the same name. As well as recognising the achievements of Australian science-fiction, fantasy and horror writers, they were designed to distinguish between those speculative subgenres.
Last year, five of the six finalists for the Aurealis awards were published, promoted and shelved as literary fiction.
A broad church
Perhaps what counts as speculative fiction is also changing. The term is certainly not new; it was first used in an 1889 review, but came into more common usage after genre author Robert Heinlein’s 1947 essay On the Writing of Speculative Fiction.
Whereas science fiction generally engages with technological developments and their potential consequences, speculative fiction is a far broader, vaguer term. It can be seen as an offshoot of the popular science-fiction genre, or a more neutral umbrella category that simply describes all non-realist forms, including fantasy and fairytales – from the epic of Gilgamesh through to The Handmaid’s Tale.
While critic James Wood argues that “everything flows from the real … it is realism that allows surrealism, magic realism, fantasy, dream and so on”, others, such as author Doris Lessing, believe that everything flows from the fantastic; that all fiction has always been speculative. I am not as interested in which came first (or which has more cultural, or commercial, value) as I am in the fact that speculative fiction – “spec-fic” – seems to be gaining literary respectability.
(Next step, surely, mainstream popularity! After all, millions of moviegoers and television viewers have binge-watched the rise of fantastic forms, and audiences are well versed in unreal onscreen worlds.)
One reason for this new interest in an old but evolving form has been well articulated by author and critic James Bradley: climate change. Writers, and publishers, are embracing speculative fiction as an apt form to interrogate what it means to be human, to be humane, in the current climate – and to engage with ideas of posthumanism too.
These are the sorts of existential questions that have historically driven realist literature.
According to the World Wildlife Fund’s 2018 Living Planet Report, 60% of the world’s wildlife disappeared between 1970 and 2012. The year 2016 was declared the hottest on record, echoing the previous year and the one before that. People under 30 have never experienced a month in which average temperatures are below the long-term mean. Hurricanes register on the Richter scale and the Australian Bureau of Meteorology has added a colour to temperature maps as the heat keeps on climbing.
There is an infographic doing the rounds on Facebook that shows sister countries with comparable climates to (warming) regions of Australia. But it doesn’t reflect the real issue. Associate Professor Michael Kearney, Research Fellow in Biosciences at the University of Melbourne, points out that no-one anywhere in the world has any experience of our current CO2 levels. The changed environment is, he says – using a word that is particularly appropriate for my argument – a “novel” situation.
Elsewhere, biologists are gathering evidence of algae that carbon dioxide has made carbohydrate-rich but less nutritious. So the plankton that rely on them to survive might eat more and more and yet still starve.
Fiction focused on the inner lives of a limited cross-section of people no longer seems the best literary form to reflect, or reflect on, our brave new outer world – if, indeed, it ever was.
Whether it’s a creative response to catastrophic climate change, or an empathic, philosophical attempt to express cultural, economic, neurological – or even species – diversification, the recognition works such as Rawson’s are receiving surely shows we have left Modernism behind and entered the era of Anthropocene literature.
And her book is not alone. Other wild titles achieving similar success include Krissy Kneen’s An Uncertain Grace, shortlisted for the Aurealis, the Stella prize and the Norma K. Hemming award – given to mark excellence in the exploration of themes of race, gender, sexuality, class or disability in a speculative fiction work.
Kneen’s book connects five stories spanning a century, navigating themes of sexuality – including erotic explorations of transgression and transmutation – against the backdrop of a changing ocean.
Earlier, more realist but still speculative titles (from 2015) include Mireille Juchau’s The World Without Us and Bradley’s Clade. These novels fit better with Miéville’s description of “litfic”, employing realistic literary techniques that would not be out of place in Winton’s books, but they have been called “cli-fi” for the way they put climate change squarely at the forefront of their stories (though their authors tend to resist such generic categorisation).
Both novels, told across time and from multiple points of view, are concerned with radically changed and catastrophically changing environments, and how the negative consequences of our one-world experiment might well – or, rather, ill – play out.
Catherine McKinnon’s Storyland is a more recent example that similarly has a fantastic aspect. The author describes her chapters, set in different times and culminating – Cloud Atlas-like – in one futuristic episode, as “timeslips” or “time shifts” rather than time travel. Yet it has been received as speculative – and not in a pejorative way, despite how some “high-art” literary authors may feel about “low-brow” genre associations.
Kazuo Ishiguro, for instance, told The New York Times when The Buried Giant was released in 2015 that he was fearful readers would not “follow him” into Arthurian Britain. Ursula K. Le Guin was quick to call him out on his obvious attempt to distance himself from the fantasy category. Michel Faber, around the same time, told a Wheeler Centre audience that his Book of Strange New Things, in which a missionary is sent to convert an alien race, was “not about aliens” but alienation. Of course it is the latter, but it is also about the other.
All these more-and-less-speculative fictions – these not-traditionally-realist literatures – analyse the world in a way that it is not usually analysed, to echo Tim Parks’s criterion for the best novels. Interestingly, this sounds suspiciously like science-fiction critic Darko Suvin’s famous conception of the genre as a literature of “cognitive estrangement”, which inspires readers to re-view their own world, think in new ways, and – most importantly – take appropriate action.
A new party
Perhaps better case studies of what local spec-fic is or does – when considering questions of diversity – are Charlotte Wood’s The Natural Way of Things and Claire Coleman’s Terra Nullius.
The first is a distinctly Aussie Handmaid’s Tale for our times, where “girls” guilty by association with some unspecified sexual scenario are drugged, abducted and held captive in a remote outback location.
The second is another idea whose time has come: an apocalyptic act of colonisation. Not such an imagined scenario for Noongar woman Coleman. It’s a tricky plot to discuss without giving away spoilers – the book opens on an alternative history, or is it a futuristic Australia? Again, the story is told through different points of view, which prioritises collective storytelling over the authority of a single voice.
“The entire purpose of writing Terra Nullius,” Coleman has said, “was to provoke empathy in people who had none.”
This connection of reading with empathy is a case Neil Gaiman made in a 2013 lecture when he told of how China’s first party-approved science-fiction and fantasy convention had come about five years earlier.
The Chinese had sent delegates to companies such as Apple and Google to try to work out why America was inventing the future, he said. And they had discovered that all the programmers, all the entrepreneurs, had read science fiction when they were children.
“Fiction can show you a different world,” said Gaiman. “It can take you somewhere you’ve never been.”
And when you come back, you see things differently. And you might decide to do something about that: you might change the future.
Perhaps the key to why speculative fiction is on the rise is the ways in which it is not “hard” science fiction. Rather than focusing on technology and world-building to the point of potential fetishism, as our “real” world seems to be doing, what we are reading today is a sophisticated literature engaging with contemporary cultural, social and political matters – through the lens of an “un-real” idea, which may be little more than a metaphor or errant speculation.
In Canada, authors’ rights come back 25 years after they die, when copyright automatically reverts to their estate. In Australia, our law used to do the same, but we removed the provision in 1968. Under our law, authors are never given back what they give away.
Some publishers voluntarily put such clauses in their contracts, but that is something they choose to do, rather than something the law mandates.
Australia’s copyright term is long. For written works it lasts for 70 years after the death of the author. It was extended from 50 years after death as part of the Australia-United States Free Trade Agreement.
What copyright is for
Copyright is a government-granted limited monopoly to control certain uses of an author’s work.
It is meant to achieve three main things: incentivise the creation of works, reward authors, and benefit society through access to knowledge and culture.
Incentive and reward are not the same thing.
The incentive needn’t be big
The copyright term needed to provide an incentive to create something is pretty short.
But in practice, authors, artists and composers are often obliged to transfer all or most of their rights to corporate investors such as record labels or book publishers in order to receive anything at all.
In the film and television industries it is not unusual for creators to have to sign over their whole copyright, forever – and not just here on Earth but throughout the universe at large.
Canada’s law already protects authors by giving rights back to their heirs 25 years after they die. Bryan Adams’s proposal is to change one word in that law. Instead of copyright reverting to the creator 25 years after “death”, he wants it to revert 25 years after “transfer”.
Copyright is meant to be about ensuring access
Handing rights back to creators after 25 years would not only help them secure more of copyright’s rewards, it would also help achieve copyright’s other major aim: to promote widespread access to knowledge and culture.
Right now our law isn’t doing a very good job of that, particularly for older material.
Copyright lasts for so long, and distributors lose financial interest in works so fast, that they are often neither properly distributed nor available for anyone else to distribute.
In the book industry my research into almost 100,000 titles has found that publishers license older e-books to libraries on the same terms and for the same prices as newer ones. That includes “exploding” licences which force books to be deleted from collections even if nobody ever borrows them.
Publishers are interested in maximising their share of library collections budgets, not ensuring that a particular author continues to get paid or a particular title continues to get read.
Restricting access to books is not in the interests of authors or readers.
… and directing rewards where they are needed
If rights reverted after 25 years, as I have proposed and as Adams now proposes, authors would be able to do things like license their books directly to libraries in exchange for fair remuneration – say $1 per loan.
We could do it all without changing the total copyright term imposed on us by the Australia-US Free Trade Agreement and other treaties. We could get creators paid more fairly while keeping Australian culture alive.
Creativity is often defined as the ability to come up with new and useful ideas. Like intelligence, it can be considered a trait that everyone – not just creative “geniuses” like Picasso and Steve Jobs – possesses in some capacity.
It’s not just your ability to draw a picture or design a product. We all need to think creatively in our daily lives, whether it’s figuring out how to make dinner using leftovers or fashioning a Halloween costume out of clothes in your closet. Creative tasks range from what researchers call “little-c” creativity – making a website, crafting a birthday present or coming up with a funny joke – to “Big-C” creativity: writing a speech, composing a poem or designing a scientific experiment.
Psychology and neuroscience researchers have started to identify thinking processes and brain regions involved with creativity. Recent evidence suggests that creativity involves a complex interplay between spontaneous and controlled thinking – the ability to both spontaneously brainstorm ideas and deliberately evaluate them to determine whether they’ll actually work.
Despite this progress, the answer to one question has remained particularly elusive: What makes some people more creative than others?
In a new study, my colleagues and I examined whether a person’s creative thinking ability can be explained, in part, by a connection between three brain networks.
Mapping the brain during creative thinking
In the study, we had 163 participants complete a classic test of “divergent thinking” called the alternate uses task, which asks people to think of new and unusual uses for objects. As they completed the test, they underwent fMRI scanning, which measures blood flow to parts of the brain.
The task assesses people’s ability to diverge from the common uses of an object. For example, in the study, we showed participants different objects on a screen, such as a gum wrapper or a sock, and asked them to come up with creative ways to use them. Some ideas were more creative than others. For the sock, one participant suggested using it to warm your feet – the common use for a sock – while another participant suggested using it as a water filtration system.
Importantly, we found that people who did better on this task also tended to report having more creative hobbies and achievements, which is consistent with previous studies showing that the task measures general creative thinking ability.
After participants completed these creative thinking tasks in the fMRI, we measured functional connectivity between all brain regions – how much activity in one region correlated with activity in another region.
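Functional connectivity, in this sense, is simply how strongly each region’s activity time course correlates with every other region’s. A minimal sketch of the idea, using invented data in place of real fMRI time series (the array sizes and variable names here are purely illustrative, not from the study):

```python
import numpy as np

# Hypothetical example: activity time series for a few brain regions
# (rows = time points, columns = regions). Real fMRI data would have
# hundreds of regions; this stands in for illustration only.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 5))  # 200 scans, 5 regions

# Functional connectivity: correlate each region's activity with
# every other region's. np.corrcoef treats rows as variables, so we
# transpose to get a region-by-region correlation matrix.
connectivity = np.corrcoef(timeseries.T)  # shape (5, 5)

# The diagonal is each region's correlation with itself (always 1);
# each off-diagonal entry is one pairwise functional connection.
```

With _n_ regions there are roughly _n_(_n_ − 1)/2 distinct connections, which is how a few hundred regions yields the tens of thousands of connections analysed in the study.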
We also ranked their ideas for originality: Common uses received lower scores (using a sock to warm your feet), while uncommon uses received higher scores (using a sock as a water filtration system).
Then we correlated each person’s creativity score with all possible brain connections (approximately 35,000), and removed connections that, according to our analysis, didn’t correlate with creativity scores. The remaining connections constituted a “high-creative” network, a set of connections highly relevant to generating original ideas.
Having defined the network, we wanted to see whether someone with stronger connections in this high-creative network would score higher on the task. So we measured the strength of a person’s connections in this network, and then used predictive modelling to test whether we could estimate a person’s creativity score.
The models revealed a significant correlation between the predicted and observed creativity scores. In other words, we could estimate how creative a person’s ideas would be based on the strength of their connections in this network.
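The select-then-predict pipeline can be sketched in miniature. Everything below – the toy data, the number of connections, the correlation threshold and the simple linear model – is invented for illustration; the published analysis worked over roughly 35,000 connections and used more careful predictive modelling than this within-sample sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: connection strengths for 100 people across 1,000
# connections, plus one creativity score per person. We build the
# scores so they genuinely depend on the first two connections.
n_people, n_conn = 100, 1000
strengths = rng.standard_normal((n_people, n_conn))
scores = strengths[:, :2].sum(axis=1) + 0.5 * rng.standard_normal(n_people)

# Step 1: correlate each connection's strength with the creativity
# scores, and keep only connections whose correlation passes a
# threshold -- these form the toy "high-creative" network.
r = np.array([np.corrcoef(strengths[:, j], scores)[0, 1]
              for j in range(n_conn)])
selected = np.abs(r) > 0.3

# Step 2: summarise each person by their total strength within the
# selected network, then fit a simple linear model to the scores.
summary = strengths[:, selected].sum(axis=1)
slope, intercept = np.polyfit(summary, scores, 1)
predicted = slope * summary + intercept

# The correlation between predicted and observed scores indicates
# how well network strength tracks creative ability in this toy data.
fit = np.corrcoef(predicted, scores)[0, 1]
```

The design choice worth noting is the two-stage structure: connections are first filtered by their relationship to behaviour, and only then combined into a single predictor, which keeps the model interpretable as a “network strength” measure.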
We further tested whether we could predict creative thinking ability in three new samples of participants whose brain data were not used in building the network model. Across all samples, we found that we could predict – albeit modestly – a person’s creative ability based on the strength of their connections in this same network.
Overall, people with stronger connections came up with better ideas.
What’s happening in a ‘high-creative’ network
We found that the brain regions within the “high-creative” network belonged to three specific brain systems: the default, salience and executive networks.
The default network is a set of brain regions that activate when people are engaged in spontaneous thinking, such as mind-wandering, daydreaming and imagining. This network may play a key role in idea generation or brainstorming – thinking of several possible solutions to a problem.
The executive control network is a set of regions that activate when people need to focus or control their thought processes. This network may play a key role in idea evaluation or determining whether brainstormed ideas will actually work and modifying them to fit the creative goal.
The salience network is a set of regions that act as a switching mechanism between the default and executive networks. This network may play a key role in alternating between idea generation and idea evaluation.
An interesting feature of these three networks is that they typically don’t get activated at the same time. For example, when the executive network is activated, the default network is usually deactivated. Our results suggest that creative people are better able to co-activate brain networks that usually work separately.
Future research is needed to determine whether these networks are malleable or relatively fixed. For example, does taking drawing classes lead to greater connectivity within these brain networks? Is it possible to boost general creative thinking ability by modifying network connections?
For now, these questions remain unanswered. As researchers, we just need to engage our own creative networks to figure out how to answer them.