Book review: Clear Bright Future by Paul Mason

Clear Bright Future: A Radical Defence of the Human Being by Paul Mason
My rating: 4 of 5 stars

Humans: Okay, no killing people.
AI: Slavery is cool though, right?
Humans: No, no killing, no slavery!
AI: But you do it all the time. No fair!

Clear Bright Future is Paul Mason’s attempt to address the “value alignment problem” with regard to our society and the potential of AI. He sets out how we largely don’t have a set of values, thanks to things like neo-liberalism, post-modernism, and scientism, and how we desperately need to define our values. Those values, he argues, should be clearly defined and humanist, and settled before the capitalists, authoritarians, or other ne’er-do-wells ruin the future.

I first became interested in reading Mason’s books when I saw his Google Talk about Post-Capitalism. He was one of the first people I’d heard make a clear argument for something that is lurking in every digital age IP lawsuit. Clear Bright Future jumped up my reading list thanks to my local library and an interview where Mason discussed the need for society/humans to decide what we value and to start making it a priority.

The overall point made in this book is valid and Mason does a reasonable job of making a convincing argument. Even if he is completely wrong about humanism, he is completely right about needing to define our values. Our values. Not someone looking to make a buck. Not someone looking to become dictator for life. Everyone.

And here comes the but. But I think Clear Bright Future falls down when some of its points are attacks on strawpeople or gross simplifications. He’ll swing between exacting explanations and diverse insights, then make quick leaps via these lazy tactics.

Take, for example, his comments about science moving from claims of hard objectivism to (a more realistic) subjectivism. Mason essentially engages in a confusing blend of scientism and anti-scientism. He talks as if science were simple hard facts (when it is really within X% error, contingent on assumptions, within certain frames of reference, etc.) and then rejects the science showing things are more complicated than that.

Another example is his criticism of postmodernism as anti-humanist and the foundation of a lot of today’s problems. Somewhere there is a philosophy professor shaking their head and chuckling at the idea that postmodernist texts have resulted in anything other than incomprehensible books and an industry of metanarrative-loving critics blaming it for everything. At best, Mason is mistaking a part of the field for the whole. Sure, the rejection of the simplistic and metanarrative claims of earlier humanism is certainly a po-mo thing, but hardly the whole thing (e.g. see this).

These flaws do detract a bit from what is a very interesting book with a compelling message. Definitely worth reading and thinking about what our values are.

Other reviews worth reading:
Clear Bright Future by Paul Mason review – a manifesto against the machine
Review of Paul Mason’s Clear Bright Future: A Radical Defence of the Human Being

Comments while reading:
You can sustain an economy on life support, but not an ideology. People were starting to ask when things would get better for them rather than for yacht owners. (Paraphrased)

Having seen some of Mason’s work before, I’ve been interested in his take on things. He offers insights and ideas you haven’t considered, though I find I don’t entirely agree with his conclusions. In one part he was outlining the idea of material realism (materialism), which was a pretty decent lay explanation. But then he sort of created a strawman to suggest that modern tech economies claim to create value out of nothing (computers create their own data, thus value, without work). I’m not sure that the people who say that actually believe it; rather, they are using a heuristic.

View all my reviews

Can robots write?

Machine learning produces dazzling results, but some assembly is still required


Alexandra Louise Uitdenbogerd, RMIT University

You might have seen a recent article from The Guardian written by “a robot”. Here’s a sample:

I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

Read the whole thing and you may be astonished at how coherent and stylistically consistent it is. The software used to produce it is called a “generative model”, and such models have come a long way in the past year or two.

But exactly how was the article created? And is it really true that software “wrote this entire article”?

How machines learn to write

The text was generated using the latest neural network model for language, called GPT-3, released by the American artificial intelligence research company OpenAI. (GPT stands for Generative Pre-trained Transformer.)

OpenAI’s previous model, GPT-2, made waves last year. It produced a fairly plausible article about the discovery of a herd of unicorns, and the researchers initially withheld the release of the underlying code for fear it would be abused.

But let’s step back and look at what text generation software actually does.

Machine learning approaches fall into three main categories: heuristic models, statistical models, and models inspired by biology (such as neural networks and evolutionary algorithms).

Heuristic approaches are based on “rules of thumb”. For example, we learn rules about how to conjugate verbs: I run, you run, he runs, and so on. These approaches aren’t used much nowadays because they are inflexible.
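
As a toy illustration (my sketch, with invented rules that cover only regular present-tense verbs), a rule-based conjugator might look like this:

```python
# A toy rule-based conjugator. The rules are illustrative and cover only
# regular present-tense verbs, which is exactly why heuristic systems
# proved too inflexible.

def conjugate(verb, person):
    if person == "he/she/it":
        # Rule of thumb: verbs ending in s, sh, ch, x or o take "-es".
        if verb.endswith(("s", "sh", "ch", "x", "o")):
            return verb + "es"
        return verb + "s"
    return verb  # "I run", "you run", "we run", "they run"

print(conjugate("run", "I"))            # run
print(conjugate("run", "he/she/it"))    # runs
print(conjugate("watch", "he/she/it"))  # watches
print(conjugate("be", "he/she/it"))     # "bes" -- irregular verbs break the rules
```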




Read more:
From Twitterbots to VR: 10 of the best examples of digital literature


Writing by numbers

Statistical approaches were the state of the art for language-related tasks for many years. At the most basic level, they involve counting words and guessing what comes next.

As a simple exercise, you could generate text by randomly selecting words based on how often they normally occur. About 7% of your words would be “the” – it’s the most common word in English. But if you did it without considering context, you might get nonsense like “the the is night aware”.
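
A minimal sketch of that exercise, using invented frequencies rather than real corpus counts:

```python
import random

# Toy unigram generation: sample each word independently, weighted by how
# often it occurs. The frequencies here are invented for illustration.
word_freq = {
    "the": 7.0, "of": 3.0, "and": 2.8, "a": 2.3, "is": 1.0,
    "night": 0.1, "aware": 0.05,
}

words = list(word_freq)
weights = list(word_freq.values())

# With no context, the result is word salad.
print(" ".join(random.choices(words, weights=weights, k=5)))
# e.g. "the the is night aware"
```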

More sophisticated approaches use “bigrams”, which are pairs of consecutive words, and “trigrams”, which are three-word sequences. This allows a bit of context and lets the current piece of text inform the next. For example, if you have the words “out of”, the next guessed word might be “time”.
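
A minimal sketch of that idea, using a tiny invented corpus: count which word follows each two-word sequence, then guess the most frequent continuation.

```python
from collections import Counter, defaultdict

# A toy trigram-style predictor: given the previous two words, count what
# followed them in the training text and guess the most frequent word.
corpus = ("we are out of time and you are out of luck "
          "and we are out of time again").split()

following = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    following[(w1, w2)][w3] += 1

def guess_next(w1, w2):
    return following[(w1, w2)].most_common(1)[0][0]

print(guess_next("out", "of"))  # "time" (seen twice, "luck" only once)
```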

This happens with the auto-complete and auto-suggest features when we write text messages or emails. Based on what we have just typed, what we tend to type and a pre-trained background model, the system predicts what’s next.

While bigram- and trigram-based statistical models can produce good results in simple situations, the best recent models go to another level of sophistication: deep learning neural networks.

Imitating the brain

Neural networks work a bit like tiny brains made of several layers of virtual neurons.

A neuron receives some input and may or may not “fire” (produce an output) based on that input. The output feeds into neurons in the next layer, cascading through the network.
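
A single neuron of this threshold kind can be sketched in a few lines; all the numbers below are invented for illustration.

```python
# A sketch of a single threshold neuron: weighted inputs are summed, and
# the output "fires" (1) only if the sum clears the threshold.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([0.5, 0.9], [1.2, 0.8]))  # 0.6 + 0.72 = 1.32 -> fires (1)
print(neuron([0.1, 0.2], [1.2, 0.8]))  # 0.12 + 0.16 = 0.28 -> silent (0)
```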

The first artificial neuron was proposed in 1943 by US neuroscientists Warren McCulloch and Walter Pitts, but artificial neurons have only become useful for complex problems like generating text in the past five years.

To use neural networks for text, you put words into a kind of numbered index. You can use the number to represent a word, so for example 23,342 might represent “time”.

Neural networks do a series of calculations to go from sequences of numbers at the input layer, through the interconnected “hidden layers” inside, to the output layer. The output might be numbers representing the odds for each word in the index to be the next word of the text.

In our “out of” example, number 23,342 representing “time” would probably have much better odds than the number representing “do”.
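
Here is a sketch of how scores over the word index become odds, using invented scores and a softmax (a real model computes the scores from the input sequence):

```python
import math

# A sketch of the output layer: the network assigns a score to each word
# in the index, and a softmax turns the scores into odds. The scores here
# are invented for illustration.
vocab = {23342: "time", 104: "do", 8712: "luck"}
scores = {23342: 4.2, 8712: 2.1, 104: 0.3}

total = sum(math.exp(s) for s in scores.values())
odds = {i: math.exp(s) / total for i, s in scores.items()}

for index, p in sorted(odds.items(), key=lambda kv: -kv[1]):
    print(f"{vocab[index]:>5} (index {index:>5}): {p:.2f}")
# time (index 23342): 0.88 -- far ahead of "do", as in the "out of" example
```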




Read more: Friday essay: a real life experiment illuminates the future of books and reading


What’s so special about GPT-3?

GPT-3 is the latest and best of the text modelling systems, and it’s huge. The authors say it has 175 billion parameters, which makes it at least ten times larger than the previous biggest model. The neural network has 96 layers and, instead of mere trigrams, it keeps track of sequences of 2,048 words.

The most expensive and time-consuming part of making a model like this is training it – updating the weights on the connections between neurons and layers. Training GPT-3 would have used about 262 megawatt-hours of energy, or enough to run my house for 35 years.
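
A quick back-of-the-envelope check of that claim (the household figure is mine; the per-year and per-day numbers simply follow from it):

```python
# Back-of-the-envelope: 262 MWh spread over 35 years of household use.
training_energy_mwh = 262
years = 35

per_year_mwh = training_energy_mwh / years  # ~7.5 MWh per year
per_day_kwh = per_year_mwh * 1000 / 365     # ~20.5 kWh per day

print(f"{per_year_mwh:.1f} MWh/year, {per_day_kwh:.1f} kWh/day")
# 7.5 MWh/year, 20.5 kWh/day -- plausible for a household
```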

GPT-3 can be applied to multiple tasks such as machine translation, auto-completion, answering general questions, and writing articles. When people try to tell whether its articles were written by a human author, they now get it right only about half the time – little better than chance.

The robot writer

But back to how the article in The Guardian was created. GPT-3 needs a prompt of some kind to start it off. The Guardian’s staff gave the model instructions and some opening sentences.

This was done eight times, generating eight different articles. The Guardian’s editors then combined pieces from the eight generated articles, and “cut lines and paragraphs, and rearranged the order of them in some places”, saying “editing GPT-3’s op-ed was no different to editing a human op-ed”.

This sounds about right to me, based on my own experience with text-generating software. Earlier this year, my colleagues and I used GPT-2 to write the lyrics for a song we entered in the AI Song Contest, a kind of artificial intelligence Eurovision.

AI song Beautiful the World, by Uncanny Valley.

We fine-tuned the GPT-2 model using lyrics from Eurovision songs, provided it with seed words and phrases, then selected the final lyrics from the generated output.

For example, we gave Euro-GPT-2 the seed word “flying”, and then chose the output “flying from this world that has gone apart”, but not “flying like a trumpet”. By automatically matching the lyrics to generated melodies, generating synth sounds based on koala noises, and applying some great, very human, production work, we got a good result: our song, Beautiful the World, was voted the winner of the contest.
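
For a sense of what the seed-word step looks like in code, here is a rough sketch using the Hugging Face transformers library and the stock “gpt2” model – an illustration, not our actual pipeline, and it skips the fine-tuning on Eurovision lyrics:

```python
# A rough sketch of generating candidate lyric lines from a seed word.
# Our actual system was fine-tuned on Eurovision lyrics first; that step
# is assumed done and skipped here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

candidates = generator(
    "flying",                # the seed word
    max_length=12,           # keep candidate lines short
    num_return_sequences=5,  # several options to curate from
    do_sample=True,          # sample rather than always pick the top word
)

# The human step: read the candidates and keep only the lines that work.
for c in candidates:
    print(c["generated_text"])
```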

Co-creativity: humans and AI together

So can we really say an AI is an author? Is it the AI, the developers, the users or a combination?

A useful idea for thinking about this is “co-creativity”. This means using generative tools to spark new ideas, or to generate some components for our creative work.

Where an AI creates complete works, such as a complete article, the human becomes the curator or editor. We roll our very sophisticated dice until we get a result we’re happy with.




Read more: Computing gives an artist new tools to be creative




Alexandra Louise Uitdenbogerd, Senior Lecturer in Computer Science, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Book Review: Humans Need Not Apply by Jerry Kaplan

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan

My rating: 4 of 5 stars

ABS brakes were the first step. The last will be us humans in observation cages next to the monkeys.

Jerry Kaplan is an expert in artificial intelligence and computational linguistics who attempts to guide the reader through the impacts AI and robots will have on our future. In doing so, he raises many of the economic, ethical, and societal problems we are going to have to start addressing.

I first became aware of this book via CGP Grey’s short documentary of the same name (see below). To say there is a storm coming is an understatement. Kaplan guides us through the technological aspects of this topic with knowledge and skill. Where this book falls down is in his blind adherence to free-market solutions – ironically, whilst pointing out several examples of where the free market has failed in the past.

For example, some of his ideas about education are problematic. What he proposes with “job mortgages” is essentially the traineeships and cadetships* that in my parents’ and grandparents’ generations were paid for by employers, with his modern twist being that employees should take out a job mortgage to pay for them. In other words, all of the cost and risk is moved from employers to employees.** How can anyone suggest that sort of thing as though they aren’t talking about slavery or indentured servitude?*** Sci-fi has been imagining that sort of scenario for decades, and they weren’t calling it a good idea.

His comments about how rich people being in charge isn’t all bad, like back in ancient Egypt… Because monarchies worked so well for everyone who was a monarch.

Another gem was the idea that the free market could be in charge of wealth redistribution… Because it does such a great job of that right now. In fairness, his plan was actually pretty good, but there were built-in assumptions he didn’t really question, despite laying out the framework with his discussion of automation taking our jobs.

Kaplan spent most of his book outlining what amounts to a post-scarcity world, a world where human “work” would essentially cease to exist, and thus cost, value, and products become meaningless. How can you maintain our current economic system in that world? Don’t we need to be rethinking what utopia we wish to design and the machines that will make it happen?

The final chapter has some interesting questions and ideas about what role humans can play in a world that the robots run and own. Whilst the ideas aren’t new, since science fiction has been prodding that topic for the best part of 70 years, he has grounded them in reality. If there is one takeaway from this book, it is that we all need to start planning the future now.

Overall, this was a fascinating book that is well worth reading.

* A point he acknowledges he is updating to be free-market and more “beneficial”
** It could be argued that this has already happened and Kaplan is just taking it one step further.
*** Again, a point he acknowledges with reference to AIs becoming free of ownership.

https://www.reddit.com/r/Futurology/c…
https://www.theguardian.com/culture/2…

View all my reviews

Book review: Dark Intelligence by Neal Asher

Dark Intelligence (Transformation, #1) by Neal Asher

My rating: 3 of 5 stars

If AIs essentially become gods, does that make humans the prime mover?

Thorvald Spear died 100 years ago in the war with the Prador. Fortunately, this is the future, so death is less final than it used to be. But Spear is less than happy with how he died at the hands of the black AI, Penny Royal, and decides to destroy it. Along the way he manages to piss off Isobel Satomi, who has also got a carapace to pick with Penny Royal. So she adds Spear to her list of things to destroy. Meanwhile, Penny Royal is up to something, and everyone wants to know what.

This was my first outing with Neal Asher and his Polity universe. Asher was recommended to me by a fellow blogger – Bookstooge – so I found this recent series in the library. There is much I enjoyed about this book, and by extension the universe Asher has created. The details that give this universe a lived-in feel, the cyberpunk sensibilities, and the interesting sci-fi technology are all fantastic. The story and characters are also interesting. So why only 3 stars?

There were two things that really stopped me enjoying this novel more: the length and the anachronisms.

Sci-fi has a habit of being long because someone decided that was okay for spec-fic genres. Dark Intelligence made me notice that it was a long book. Usually, if you are really enjoying a book, the length goes unnoticed. So I had the sense that there was too much padding, unnecessary exposition, and too many side plots. It all fits together nicely, but I’m sure this could come in much shorter without losing anything.

The second problem I had was with the constant anachronisms. Sounds described as being like a domino slapped on the board… because dominoes is so popular right now, let alone a few centuries into the future. The Polity universe is filled with hyper-intelligent AIs that can do just about anything… but apparently cars still need a human to drive them. Tesla had that covered two years ago. After noticing one anachronism, the floodgates opened, to the point where I started to question whether this was Asher having a joke. But I doubt it, since he criticised Greg Bear for doing the same thing in a book review.

Perhaps I’ll enjoy The Skinner, the book Bookstooge originally recommended, more.

View all my reviews