Humans: Okay, no killing people.
AI: Slavery is cool though, right?
Humans: No, no killing, no slavery!
AI: But you do it all the time. No fair!
Clear Bright Future is Paul Mason’s attempt to address the “value alignment problem” with regard to our society and the potential of AI. He sets out how we largely don’t have a set of values, thanks to things like neo-liberalism, post-modernism, and scientism, and how we desperately need to define our values. Those values, he argues, should be clearly defined, humanist, and settled before the capitalists, authoritarians, or other ne’er-do-wells ruin the future.
I first became interested in reading Mason’s books when I saw his Google Talk about Post-Capitalism. He was one of the first people I’d heard make a clear argument for something that is lurking in every digital age IP lawsuit. Clear Bright Future jumped up my reading list thanks to my local library and an interview where Mason discussed the need for society/humans to decide what we value and to start making it a priority.
The overall point made in this book is valid and Mason does a reasonable job of making a convincing argument. Even if he is completely wrong about humanism, he is completely right about needing to define our values. Our values. Not someone looking to make a buck. Not someone looking to become dictator for life. Everyone.
And here comes the but. But I think Clear Bright Future falls down because some of its points are attacks on strawpeople or gross simplifications. He’ll swing between exacting explanations and diverse insights, and then make quick leaps via these lazy tactics.
Take for example his comments about science moving from claims of hard objectivism to (a more realistic) subjectivism. Mason essentially engages in a confusing blend of scientism and anti-scientism. He talks as if science is simple hard facts (when it is within X% error, contingent on assumptions, within certain frames of reference, etc.) and then rejects the science that shows things are more complicated than that.
Another example is his criticism of postmodernism as anti-humanist and the foundation of a lot of today’s problems. Somewhere there is a philosophy professor shaking their head and chuckling at the idea that postmodernist texts have resulted in anything other than incomprehensible books and an industry of metanarrative-loving critics blaming it for everything. At best, Mason is mistaking a part of the field for the whole. Sure, the rejection of the simplistic and metanarrative claims of earlier humanism is certainly a po-mo thing, but hardly the whole thing (e.g. see this).
These flaws do detract a bit from what is a very interesting book with a compelling message. Definitely worth reading and thinking about what our values are.
Comments while reading:
You can sustain an economy on life support, but not an ideology. People were starting to ask when things would get better for them rather than for yacht owners. (Paraphrased)
Having seen some of Mason’s work before, I’ve been interested in his take on things. He offers insights and ideas you haven’t considered. I also find I don’t entirely agree with his conclusions. In one part he was outlining the idea of material realism (materialism), which was a pretty decent lay explanation. But then he sort of created a strawman to suggest that modern tech economies claim to create value out of nothing (computers create their own data, thus value, without work). I’m not sure that the people who say that actually believe it; rather, they are using a heuristic.
USA: Hello Mr Scientist, can you make me an even more horrifying way to kill people?
Scientist: Sure. But it might not be a good idea.
USA: We’ll worry about that later. Here’s some money.
Scientist: I’ll get started.
Retired Major General Dr Robert Latiff spent much of his career looking at the cutting edge of military technology. As both a scientist and an officer, he knows what is already being developed to wage war, and is well placed to speculate about the future of war. He doesn’t just want to let us in on what war will look like, he wants us all to help ethically shape the future of war.
This book was both fascinating and deeply annoying to read. I think my biggest problem with Future War was that, for someone wanting to talk about war ethics, Latiff selectively presents the military, political leaders, and history in a way that feels deliberately obfuscatory. Now, this is probably down to Latiff being a retired Air Force Major General, and thus his bias showing. But maybe that is the problem. Maybe the people who get to talk about war ethics and new tools of war are ultimately going to be too biased. At least Latiff is aware of this bias, since he raises the issue of the conservative, “yay war” bubble many of his colleagues work in and calls for the general public to be involved.
I wrote down a lot of comments as I was reading (see below) because of my frustration. One of my first comments was the “America: Fuck Yeah!” sentiment that was present. I don’t think that is entirely fair to Latiff. He does express a reasonable level of awareness, but when someone talks about “keeping America safe” you really feel like forcing them to include a list of war crimes, atrocities, and coups that the USA has been involved in.
The insights into technology are extremely interesting. If you follow tech at all you’ll love what is discussed. It is the ethical considerations where I think the book falls flat. The examples of ethical considerations are interesting but ultimately feel hollow.
If someone is planning how to kill others, particularly lots of others, then that is unethical.
The arguments around Just War Theory and the ethics of war strike me as hand-waving bullshit dreamt up by status quo warriors. Unfortunately, I don’t have the background in moral and ethical philosophy to really dig into how it is wrong. No doubt there is a lot of material justifying war because that’s what very serious status quo academics do as part of their contribution to the war effort so that no one ever asks them to actually fight and die in one.
Ironically, by the definitions of Just War Theory, I think you’d battle to find an example of a Just War. Which makes the entire idea of ethical warfare a comfort blanket to pull over your face as you invade a country to secure their resources, er, freedom.
Some people are scared of the technology and potential of future war portrayed here. I’m more scared of how Latiff’s calls for a discussion of the ethics involved aren’t going to happen in any meaningful way.
Comments as I read: Only two chapters in, but already there is this overwhelming “America: Fuck Yeah!” attitude present. “Threats could get hold of the weapons we’re developing…” is said unironically. The USA isn’t working on this (anymore, after a feasibility analysis) but “China doesn’t have any such ethical compunctions…”
Considering this book’s opening proposed to cover the technology and ethics of future wars, I’m already sensing that Latiff is probably going to pretend that the USA has never committed acts of genocide, war crimes, invasion, etc. whilst insisting they need new cool gadgets to do more of that stuff with.
Halfway in, the new technologies are being discussed as inevitable. But it is then asserted that this new tech will be used for war. That doesn’t have to be so. Kinda feels like no-one ever stops and makes the argument that massive military research budgets could instead be civilian research budgets. Can’t really weaponise something when you’re not starting out building it as a weapon and pouring billions into doing so.
Three quarters in and the ethical discussion is taking shape. Just War Theory and the like are being utilised. Some really good points are made but are then undermined by selective presentation of realities. E.g. Latiff makes a really good point about requiring strong ethical and moral frameworks (Warrior Code, etc.) in the development of weapons, the use of weapons, and the accepted practices of troops (when politicians justify or promote the use of torture, the command structure will follow, and thus the troops will utilise it). But he then skirts around how the military has become ever more efficient at indoctrinating soldiers to be killers, how it has researched making its soldiers more able to kill people, and how it trains them to think of “the enemy” as “inhuman” so they can justify the killing to themselves.
I’m really having trouble with the supposed ethics of all this. Ultimately, all this tech is being developed to kill people. That’s premeditated murder. Ergo, that is unethical. There isn’t really a justification for that. A lot of handwaving is done based upon the idea that “the other side” will behave unethically, so we have to be prepared to “defend ourselves” (i.e. to also act unethically). The worst part is that this self-perpetuating cycle is often leveraged to gain power, resources, and profit (the latter is mentioned briefly in the third section by Latiff).
Philosophically, a lot has been written about Just War Theory, particularly defending it against criticisms. I’m somewhat surprised that there isn’t a solid argument against it. Take for example Jus ad bellum. Let’s find a war that fits that definition. Particularly from the losing or instigating side. Ever. Just War Theorists certainly seem to try and pretend this occurs. People trying to kick wars off certainly try to make the argument of just cause (etc.). But most of those arguments are hollow, revisionist, and often straight-up lies (WMDs in Iraq, anyone?).
Almost feels like a lot of money gets thrown at people to justify war.
Last chapter has some interesting points about echo chambers, ideological divides, society involvement, and American exceptionalism. All very good points. But again I find myself spotting what Latiff doesn’t discuss and what he skips over.
E.g. he says that the average American is removed from war and largely uninformed/ignorant of it. But that is by design; moreover, the military is actively involved in keeping people ignorant. He made a point about no war-critical films having been made whilst skipping over the fact that if a production studio wants to make a military film it needs to have everything ticked off by the military (it’s why the US military is always awesome, bad elements are rogues who meet justice, war crimes are never committed, etc., etc.). Military intelligence was actively involved in the lies that took the US to war in (insert massive list here). The military routinely covers up atrocities, war crimes, abuse, rape, etc.
Recently I was reading an article in Aeon Magazine about the challenges faced by the medicine industry – commonly referred to as Big Pharma or Big Pharma written in one of those fonts with blood dripping from it and a syringe being stabbed into a baby. One of the big changes in medicine development discussed was the patent period that allowed monopolies on new drugs, which in turn saw orphan drugs – not drugs for Oliver “please sir, can I have some more” Twist, but drugs for rarer conditions and illnesses – become more popular/profitable to develop.
It’s an interesting issue and the article is worth reading. But it got me to thinking about something a little tangential. No, not whether Oliver Twist needs a remake set in south-east Asian sweatshops. I wondered how much money is actually spent on things.
Take, for example, the split between marketing and research spending: drug development appears to take a backseat to marketing. But this depends on which section of the market, how big the company is, and other factors. Clearly, medicine development is still a big expense, but how much is spent on research and development overall?
Global pharmaceutical research spending is quite large at $165 billion. Or is it?
Suddenly the amount spent on medicine research and development seems rather small. The US government alone could easily cover the expense of medicine research if it decided to change priorities, since it spends 3.7 times that much on defence.*
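The comparison is simple back-of-the-envelope arithmetic. A quick sketch using only the figures quoted above (the defence number is implied by the 3.7× ratio, not independently sourced):

```python
# Back-of-the-envelope comparison using the figures quoted in the post.
# The defence figure is derived from the stated 3.7x ratio, not a sourced budget line.
global_pharma_rd = 165.0   # global pharma R&D spend, US$ billions
defence_multiple = 3.7     # US defence spend relative to global pharma R&D

us_defence = global_pharma_rd * defence_multiple
share = global_pharma_rd / us_defence

print(f"Implied US defence spend: ~${us_defence:.1f} billion")
print(f"Global pharma R&D as a share of that: {share:.0%}")
```

In other words, the entire world’s pharmaceutical R&D bill comes to roughly a quarter of what the US alone spends on defence.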
Would it be a good idea for governments to have a Department of Pharmaceuticals that researched, developed, and sold medicines? Would that be money better spent than stockpiling tanks in a desert? Certainly, it would address several of the issues raised in the Aeon Magazine article around how the profitability of drugs, rather than consumer needs, drives research and development.
This sort of thinking could be applied to many industries. The reality is that there isn’t actually a shortage of money but a lack of incentive to invest money in some areas in favour of others. The solution doesn’t have to be the government taking over, nor does it have to be about private companies not being profitable. But maybe it does have to be about rethinking what we spend money on.
Richard Denniss made similar arguments in his Quarterly Essay Dead Right about the Australian economy.
So maybe it is time to stop accepting the argument “we can’t afford X” and start having the discussion about how we spend for the most good. Or not, I’m not your boss.
*To be clear, I’m not suggesting we stop all spending on something like defence, or that there aren’t reasons for spending money on things like tanks. But as Richard’s video suggests, we are making value judgments and assumptions without really questioning them.