Content warning: Completely non-novel ideas portrayed as brain blasts.

EDIT: Cunningham’s Law continues to be real, as I’ve been told by readers that the TL;DR of this post is that the orthogonality thesis is real.

Deciding to become vegetarian

Several months ago, after drinking the AGI/ASI Kool-Aid that we are “soon” going to be bootstrapping our way to making the Omnimessiah using LLMs, I decided to try to become vegetarian. My thought process was that a super-intelligence will be trained on human data, and if it learns through that data that humans care little to nothing about beings less intelligent than themselves, then why should it not treat humans the same? We farm chickens, cows, and pigs to fuel ourselves – what’s stopping an ASI from doing the same, à la The Matrix, if it deems it necessary?

Of course, I’d considered this idea before – I just used The Matrix, one of the most popular films of all time, as an analogy – but I shrugged it off until I came to believe that building such an intelligence was possible in my lifetime.

And now here I am, several months later on a warm summer night eating a veggie-quinoa bowl and thinking to myself “this would go really well with some grilled chicken. I really wish I could eat meat.”

Maybe it’s my desire for meat that’s making me think these things, but I’m not so sure that my argument from a few months ago really holds.

A second thought experiment

If my concern is that a super-intelligence is going to harm humans because it sees them as lesser beings, it doesn’t follow that I should only stop eating meat. I should stop driving, because bugs will inevitably be hit by my windshield. I should oppose wind energy, because turbines will inevitably kill birds that don’t know better. I’m sure you can think of many other unreasonable examples. I don’t want an ASI to carpet-bomb a human city so it can build another compute cluster, the way we humans might bulldoze an ant hill.

So through my actions, what is it that I’m really trying to prevent these less-intelligent animals from experiencing? And how does that translate to how I want a super-intelligence to treat me? I think that, in a word, the answer is that I want to minimize suffering for all of these creatures.

Dukkha

But that term “suffering” is doing a lot of heavy lifting, so let me try to elaborate. I think a key part of suffering is the ability to experience pain. Pain is an evolutionarily advantageous feature of life because it is an incredible motivator to get you out of your current situation so you can go on to spread your genes. You fell down a hill and broke your leg? You had better have a pain response, because if you don’t prioritize fixing that leg now, the genes behind your lack of a pain response aren’t going to propagate to future generations. It’s pretty obvious why most life on Earth exhibits some form of it.

So let’s start off by saying that suffering is equivalent to the amount of pain that one experiences. This can be any kind of pain – physical, mental, emotional, etc. In short, suffering is being in an unpleasant state. The question then is what types of suffering can other life on Earth experience?

I can’t say much for plants and fungi – they don’t even have a nervous system.

Worms? Well, I don’t want to cause them pain, so I won’t purposefully step on one or put a live one on a fishing hook if I can avoid it. Given my nature and biases, I’m very tempted to tell myself “but the worm is so small, replaceable, and unremarkable, so why shouldn’t I be okay with stepping on it?” But if you hold that opinion, then you should also be okay with a sufficiently intelligent ASI treating you the way you treat a worm[1].

Small mammals? Now we’re getting into territory where the creature is similar to humans. If you hurt a squirrel and you’re not a psychopath, you’ll certainly feel some empathy seeing it wriggle, squirm, or otherwise behave as if it’s in pain. People generally seem to fold around this point, mostly because they can empathize with the creature – they know what it’s like to experience physical pain.

What about more intelligent mammals like dogs and pigs? Now we can bring emotions and emotional pain into the picture. It seems wrong to withhold affection from a dog to the point that it ends up depressed or develops other behavioral issues.

Humans have at least all of these forms of suffering, but arguably more, as we’re able to transcend many things that lesser creatures can’t seem to grok. For example, humans are able to recognize the negative consequences of indulging in certain vices (drugs, excessive eating, hedonism of any sort, etc.) and practice self-control to live more fulfilling lives. We have complex social structures in which we long for connection with others. Exercise can lead to a great endorphin rush, even though natural selection shaped your body to strongly crave sugar and low stress. Living “the good life” requires much more than physical safety and emotional stability.

Super-intelligence needs to be able to suffer

Given all of that, it seems to me that an aligned super-intelligence must be able to empathize with humanity if we don’t want it to squish us like bugs. And in order for it to empathize with humanity, it must be able to suffer the way we can. That means that, encoded in its architecture, there needs to be some superposition of neurons representing the qualia of what it feels like to sleep in too much, to long for a best friend, or to regret eating a whole sleeve of Oreos and puking in its mother’s bed. It needs to understand how humans experience emotions, and it needs to be able to experience those emotions itself.

Up until now, I thought that emotional intelligence was an unnecessary aspect of intelligent agents that we’d end up deploying. But after thinking about this more, I’m much more convinced that we need to encode these qualia into our models – otherwise, we’re creating a super-intelligent alien to humanity that won’t care about what happens to us.

As for my vegetarianism and how I want to treat other animals: after thinking this through, I don’t think I need to stay on a 100% vegetarian diet anymore. But I want to ensure that the places I source my meat from treat their animals well, such that, by the time they are butchered, they’ve lived as happy and fulfilling lives as they can. I will still not buy factory-farmed meat, and I will avoid meat-based products when I don’t know where the meat was sourced. And I will prefer meat from animals that emit less carbon overall, like preferring poultry to red meat.

Footnotes

[1] That is, unless you can make an argument for an ASI seeing humanity as its creator and, therefore, worthy of some special treatment such that, no matter how smart the ASI becomes, we are not worms to it.