Sam Harris, a hero of mine, argues that maximising wellbeing is objectively moral. While I don't fully agree that this is objectively true, I've previously said that humanity should DEFINE maximising wellbeing as a fundamental value.
But I foresee problems with this form of utilitarianism. In fact, I can foresee five problems:
1) The genocide problem
2) The altering of our genome problem
3) The drug paradise problem
4) The scaling problem
5) The measurement problem
(Perhaps there's no need for a strict definition at all? Artificial learning?)
1)
What if maximising wellbeing is most easily and reliably accomplished by painlessly murdering depressed people? This would push the average up. Why not kill every human on the planet except for the happiest, most talented one? Assuming that the last person on Earth would somehow be unaffected, wouldn't this maximise wellbeing? Perhaps you could wiggle your way out of this by maximising total wellbeing instead of average wellbeing, where wellbeing can only be measured as a positive value (just like temperature in kelvin). Or perhaps you could resolve it by applying a few constraints to the optimisation problem. But then we're getting to a point where these additional rules start to feel arbitrary. If we keep patching up a rule so that it stays in line with our moral intuitions, we might as well abandon the rule altogether.
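To make the average-versus-total distinction concrete, here's a toy sketch with entirely made-up wellbeing scores. Culling the least happy raises the average but lowers the total, so an average-maximiser endorses the cull while a total-maximiser (with strictly positive scores) does not:

```python
# Toy illustration with made-up numbers: removing the least happy people
# raises the *average* wellbeing but lowers the *total*.
population = [9.0, 7.5, 6.0, 3.0, 1.5]  # hypothetical, strictly positive scores

def average(scores):
    return sum(scores) / len(scores)

# "Painlessly remove" everyone scoring below 6.
survivors = [s for s in population if s >= 6.0]

print(average(population), sum(population))  # 5.4  27.0
print(average(survivors), sum(survivors))    # 7.5  22.5
```

With strictly positive scores, removing anyone can only lower the total, which is what the kelvin-style restriction buys you, though it still needs its own patches elsewhere.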
2 & 3)
With advances in technologies like CRISPR, it may one day be possible to design our bodies such that we feel unparalleled bliss, no matter what we do. It might also be possible to design a side-effect-free drug that makes us feel unparalleled bliss. If this were the case, then we could all live completely isolated, unaccomplished lives and still feel overwhelmingly happy. By our definition of morality, does this mean that this is a good thing? Ambiguity in the term "wellbeing" now becomes significant. Perhaps we could tighten the definition of wellbeing so that it incorporates human connection, having a job, and so on. But this is starting to feel arbitrary once again. Perhaps a drug-induced bliss really is maximum wellbeing, and I've just developed some misleading intuitions because of my anti-drug upbringing. I'm unsure.
4)
Do we just want to maximise the wellbeing of humans? Surely apes and chimps feel suffering, so they should be incorporated into the equation somehow. What about ants and mosquitos? I previously thought you could solve this by applying a weighted sum of wellbeing (sketched below). Since ants (presumably) don't experience a wide spectrum of suffering and joy, we wouldn't need to weigh the wellbeing of an ant as heavily as that of a dog, for example. But what if we're wrong about the nature of an ant's experience? Or what if the trillions of ants collectively still have a large influence on the total or average wellbeing? Likewise, would it be a good idea to force mosquitos into extinction, even under the assumption that there would be adverse effects on the food chain and so on? If so, why would the 'no genocide constraint' apply to humans but not mosquitos? These are all highly theoretical considerations. On a practical level, it would be naive to think that human biases won't leak into this equation somewhere. Humans value the life of a dog far more than that of a dung beetle because we've evolved to feel empathy and compassion for mammals with morally irrelevant features: cute ears, wide eyes and so on.
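Here is a minimal sketch of what that weighted sum might look like; the weights, populations, and scores are all hypothetical, and choosing the weights is exactly where human bias can leak in:

```python
# A minimal sketch (hypothetical weights and scores) of a species-weighted
# wellbeing sum. The weights encode how richly each species is assumed to
# experience suffering and joy.
weights = {"human": 1.0, "dog": 0.4, "ant": 0.001}      # assumed, not measured
populations = {"human": 8e9, "dog": 9e8, "ant": 2e16}    # rough order-of-magnitude guesses
avg_wellbeing = {"human": 6.0, "dog": 5.0, "ant": 1.0}   # made-up scores

total = sum(weights[s] * populations[s] * avg_wellbeing[s] for s in weights)
print(f"{total:.2e}")  # ~2.0e13 -- the ant term alone dwarfs the human term (~4.8e10)
```

Even with a tiny per-ant weight, the sheer number of ants dominates the total, which is the collective-influence worry in numbers.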
5)
Lastly, and most obviously, how do we precisely measure wellbeing? Even if neuroscience develops to the point where we can understand subjective experience at the level of individual neurons, we will still need to rigorously define which outputs we desire. For example, even if we know deterministically that chemical A given to a patient causes output B, how do we convert that to a wellbeing score? A definition will still need to be made, and it can only be informed by our biology, not determined by it. But maybe I'm being too harsh. After all, the original definition of temperature was amazingly crude: it was defined as the volume of mercury in a glass tube. Then, as theoretical models informed by data were developed, the definition of temperature was refined into something more precise. Perhaps the lesson here is that a crude definition of wellbeing is okay to begin with. An initial definition might be "everyone should have enough food to eat, proportional to each being's body size". Then, as society grows and gets more complex, theoretical models involving wellbeing could be built which help inform new refinements to the definition. Perhaps iteration is the key?
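To show what I mean by a crude starting point, here is a hypothetical "version 0" proxy in the spirit of the mercury-in-a-tube definition of temperature; the function name, numbers, and the 0.02 kg-per-kg food requirement are all made up for illustration:

```python
# A deliberately crude "version 0" wellbeing proxy. Everything here is
# hypothetical; the point is only that a measurable, obviously imperfect
# definition gives later iterations something concrete to refine.
def wellbeing_v0(food_kg_per_day: float, body_mass_kg: float,
                 required_kg_per_kg_body: float = 0.02) -> float:
    """Score in [0, 1]: the fraction of a food requirement (proportional to
    body size) that is actually met."""
    required = required_kg_per_kg_body * body_mass_kg
    return min(food_kg_per_day / required, 1.0)

print(wellbeing_v0(food_kg_per_day=1.0, body_mass_kg=70.0))  # ~0.71
```

Like the mercury tube, this misses almost everything that matters, but it is measurable, and a measurable definition is something iteration can improve on.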
All in all, I'm still very confused. I'm convinced I live on a small spinning rock, hurtling through space, where nothing really matters. Maybe trying to force order out of pure indifference is a fool's errand. Maybe I should only define what's moral for myself? I'm not sure.