Friday, August 14, 2015

A reply to Magnus Vinding on consciousness, ethics, and future suffering

Magnus Vinding recently published a piece, "My Disagreements with Brian Tomasik", that discusses his views on consciousness, moral realism, and reducing suffering. Magnus offers a nice defense of many of his points, and I really enjoyed reading it, though I ultimately disagree with most of his conclusions. (On some points I'm more agnostic.) To save time, I'm only replying to a few issues, though much more could be said in this debate. Below are italicized quotes from Magnus's piece, each followed by my reply.

Magnus: He might even claim that there is no screen, only “information processing”, and that consciousness is all a user created illusion.

Yes. :)

Magnus: When it comes to our knowledge, consciousness is that space in which all appearances appear, the primary condition for knowing anything

If you believe that philosophical zombies are possible, then you'd agree that they can know things without being conscious. Even if you deny that zombies are possible, you might believe it possible that a sufficiently intelligent computer system could "know things" and act on its knowledge without being conscious.

Magnus: denying that consciousness exists is like denying that physical space exists. 

Yes, both are matters of faith, but I find that the intuitive appeal of a physical ontology is stronger than that of a physical + mental ontology or a purely mental ontology.

Magnus: If you don't believe consciousness exists, how can you hold that consciousness is the product of computation?

Denying consciousness is a strategic way to explain how my view differs from naive consciousness realism. I don't deny that there are processes that deserve the label of "consciousness" more than others. In this section, I draw an analogy with élan vital. To distinguish my position from vitalism, I would say that "life force" doesn't exist. But it's certainly true that organisms do have a kind of vital force of life to them; it's just that there's no extra ontological baggage behind the physical processes that constitute life force, and it's a matter of interpretation whether a given physical process should be considered "alive".

Magnus: Furthermore, if I were to accept Brian's constructivist view, the view that we can simply decide where to draw the line about who or what is conscious or not, I guess I could simply reject his claim that flavors of consciousness are flavors of computation – that is not how I choose to view consciousness, and hence that is not what consciousness is on my interpretation. That “decision” of mine would be perfectly valid on Brian's account

Yes, though I would oppose it insofar as I don't like that decision.

Arguments over how to define "consciousness" are like arguments between liberals vs. conservatives over how to define "equality".

Magnus: I need make no extra postulate than that suffering really exists and matters,

I don't know what it means (ontologically) for something to "matter".

Magnus: Moral realism, on our accounts, merely amounts to conceding that suffering should be avoided, and that happiness should be attained.

Taboo "should".

Magnus: He appears certain that we will soon give rise to a suffering explosion where we will spread suffering into outer space.

That seems very likely conditional on space colonization happening.

Magnus: He criticizes those who believe in a bright future for being under the spell of optimism bias, but one could level a similar criticism against Brian's view: that his view is distorted by a pessimism bias.

I think space colonization would also result in a "happiness explosion", with the expected amount of happiness as judged by a typical person on Earth plausibly exceeding the expected amount of suffering. But I think we should give special moral weight to suffering, which means that the potential explosion of suffering beings isn't "outweighed" by the potential to also create more happy beings.
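The point above is arithmetic at heart: whether a "happiness explosion" outweighs a "suffering explosion" depends on how heavily suffering is weighted. A minimal sketch, using purely hypothetical numbers (the quantities and the weight are illustrative assumptions, not estimates from the post):

```python
def net_value(happiness, suffering, suffering_weight=1.0):
    """Aggregate welfare under a given moral weight on suffering.

    suffering_weight == 1.0 corresponds to the symmetric
    "happiness minus suffering" metric; values > 1.0 give
    suffering the special moral priority discussed above.
    """
    return happiness - suffering_weight * suffering

# Hypothetical expected amounts created by space colonization
# (arbitrary units, chosen only for illustration).
happiness = 100.0
suffering = 60.0

print(net_value(happiness, suffering))        # symmetric view: 40.0 (positive)
print(net_value(happiness, suffering, 10.0))  # suffering-focused: -500.0 (negative)
```

The same scenario flips sign once suffering is weighted strongly enough, which is why the expected happiness exceeding the expected suffering (by a typical person's lights) need not make colonization good on a suffering-focused view.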

Magnus: If we become able to abolish suffering, as Pearce envisions, then why wouldn't we?

If we became able to feed ourselves without killing animals, then why wouldn't we?
If we became able to distribute wealth more evenly to prevent homelessness, then why wouldn't we?
If it were possible to eliminate wars by being more caring for one another, then why wouldn't we?

Magnus: One may object that our future is rather determined by market forces, but couldn't one argue that the pain-pleasure axis indeed is the ultimate driver of these, and hence that markets gradually will work us in that direction: away from pain and misery, toward greater happiness, at least for those actively participating in them, which in the future may be an ever-increasing fraction of sentient beings.

I'm doubtful that non-human animals will begin holding economic wealth and making trades with humans. Advanced digital intelligences probably will, but lower-level "suffering subroutines" probably won't. At present, and plausibly in the future, most sentient beings depend on the altruism of powerful agents in order to be cared about.

Also, Robin Hanson's Malthusian scenario is one example where actors in a market economy may be driven into potentially miserable lives despite being able to buy and sell goods as rational agents.

Magnus: Perhaps Brian's view does not support this, since there ultimately is nothing more reasonable about minimizing suffering than there is about, say, maximizing paperclips on his view, and hence that empathy is all we have to rely on in the end when it comes to convincing people to reduce suffering. I doubt he would say that, but I don't know.

All reasoning relies on foundational premises that come from emotion or other primitive cognitive dispositions. But reasoning based on those premises is often useful in convincing people to care about certain things.

Magnus: As an empirical matter, however, it seems clear to me that reason is the prime force of moral progress, not empathy.

It's not obvious to me either way.

Magnus: small changes today can result in big differences in outcome tomorrow, and hence that the motion of our tiny wings at least do have a small chance of actually making a major impact.

Agreed, but with "small chance" being a key qualification.

Magnus: Even if [digital sentience] is not possible, Brian still thinks we are most likely to spread suffering rather than reduce it, for instance by spreading Earth-like nature and the suffering it contains. I don't consider this likely, because what rules the day with respect to people's views of nature is status quo bias, which works against spreading nature.

That might be true regarding directed panspermia (we can hope), but at least when it comes to terraforming, the economic incentive would be very strong (in futures where digital intelligence doesn't supplant biological humans). People have no qualms about starting farms in an area (disrupting the status quo) to feed and clothe humans. Likewise when it comes to terraforming other planets so that they can eventually support farms.

Magnus: It seems to me that when it comes to where on the hedonic scale our future will play out, it is appropriate to apply a very wide probability distribution [...]. Brian seems to have confined his distribution to an area that lies far below basement level. An unjustified narrowness, it seems to me.

It depends whether you're thinking of "happiness minus suffering" or just "suffering". I claim that the "suffering" dimension and the "happiness" dimension will both explode if space is colonized. I'm more agnostic on the sign of "happiness minus suffering" (relative to a typical person's assessments of those quantities), but I don't think "happiness minus suffering" is the right metric for whether space colonization is good, since suffering has higher moral priority than happiness.

Magnus: I understand where Brian is coming from: he has had discussions with vegans, and the majority, I would guess, have defended leaving the hellhole that is nature alone.

Yes. :)

Magnus: “Non-interference” seems to be the predominant view among both vegans and non-vegans

Well, support for conservationism is higher among vegans than in the general population (since vegans tend to be liberal, and liberals tend to support ecological preservation).

Magnus: The core issue here is suffering in nature, so I think it's worth asking the question: who is most likely to care about suffering in nature, someone who is vegan for ethical reasons or someone who is not?

Given that habitat loss, rather than whether people care about wild animals, is now and will remain for many decades the primary anthropogenic determinant of wild-animal suffering, what matters most in the short run is how much people support environmental conservationism. In the long run, ideas about animal suffering may matter more.

Magnus: For example, among the few vegans whom I have spoken about the issue with, I have only met agreement: yes, we should help beings in nature if we can rather than leave them to suffer. I have met nothing like it among non-vegan friends, and it is a much larger sample.

Thanks for the data points. :)

Magnus: And genuine moral concern for non-human beings is exactly what must be established in order for us to take the suffering of non-human beings seriously. I maintain that there is no way around veganism for the establishment of such moral concern.

Many transhumanists aren't vegan but care about wild-animal suffering.

Magnus: I welcome Brian's response to any of my comments above, and hope he will keep on challenging and enlightening me with his writings.

Thanks! Same to you. :)