Sarkology points out that the intuition against creating new lives being a good thing may be this:
…You are supposed to help people by satisfying their (already fixed and existent) preferences. Not by modifying those preferences to meet reality. Or, God forbid, inventing those preferences ex nihilo.
Could this intuition be correct?
Suppose someone else invents a preference somehow. Let's say they enjoy an evening with a loved one in the presence of the scent of roses, and thus begin a lifelong fondness for that smell. Can you help the person by satisfying this new preference?
If not, you could never help anyone. All preferences are created somehow. So let’s take the usual view that you can help by satisfying preferences others have invented.
What about the person who created the preference? Did he do right or wrong in creating it?
If he did neither right nor wrong, then I could also do neither right nor wrong by creating a preference. Then could I do good by fulfilling it? I can’t see why it should matter whether these two acts are done by different people or the same one. If I can do good this way, then why can’t I do good by doing both of these things at once, creating a preference in a situation which also causes it to be fulfilled? If I can do good that way, then the above intuition is wrong.
It could be incorrect to fulfil preferences ‘by’ creating them if creating them is a bad enough act to make up for the good got by fulfilling them. Which would entail that the world would be a better place had many satisfied and happy people not been born, and that having babies is generally a very bad thing to do. I think these things are far more unintuitive than the above intuition being wrong. What do you think?
“It could be incorrect to fulfil preferences ‘by’ creating them if creating them is a bad enough act to make up for the good got by fulfilling them. Which would entail that the world would be a better place had many satisfied and happy people not been born, and that having babies is generally a very bad thing to do.” [emph. added]
No – it would entail the world being neither a better nor a worse place, and that having babies is a neutral thing to do, assuming that the badness of creating a preference just equals the goodness of fulfilling it.
That having babies could be neutral also seems unintuitive, but that’s because having babies brings a tremendous amount of good to the people already in existence. It’s hard to separate the idea that having babies is good (which it is) from the question of whether having babies is good for the babies (which it isn’t, assuming you start from the point at which the babies do not yet exist).
However… Your premise is flawed, because it treats goodness as a substance to be added, subtracted, and accumulated on net. It doesn’t work like that. Goodness consists of satisfying the preferences of those who exist, today, now. Creating a preference is not a bad act that must be offset by the goodness of satisfying it. Creating a preference is neutral, because doing so does not satisfy (nor hinder the satisfaction of) the preferences of a person who exists, today, now.
Once the preference has been created, satisfying it is good. But before the preference has been created, creating-and-satisfying it is likewise neutral, because creating-and-then-satisfying it does not satisfy the preferences of a person who exists, today, now.
Goodness consists of making people happy, not making happy people.
A thought experiment. Omega, who can create resources out of nothing (and work other miracles besides), offers to do whichever one of the following three things you choose:
a) Provide a million dollars (assume real assets here, i.e. he’s creating actually valuable resources, not just inducing a monetary phenomenon) to a randomly-chosen recently-orphaned infant, to be used for their care in childhood and to fulfill their desires in adulthood.
b) Provide a million dollars to a randomly-chosen newly-orphaned infant a few years from now (i.e. the infant to be gifted has not yet been conceived and hence does not yet exist, even though some as-yet-unidentified person meeting the criteria is nearly certain to eventually exist).
c) Create an infant, deliver it to an orphanage, and provide it with a million dollars. This will be an additional marginal infant; it will not exist if this option is not chosen, and its creation does not affect the future existence of others yet to be born.
I would say that a) and b) are roughly equivalently good acts, but c) would be a tragic waste of the opportunity offered by Omega.
But you said “Goodness consists of satisfying the preferences of those who exist, today, now.” – shouldn’t b) be the same as c), not a) then?
Fair enough; I overstated my claim earlier. In its context, though, I was contrasting the equivalent of a) and c) without considering b).
I’ll stand by my last post. It’s good to satisfy preferences that exist today. It’s good to satisfy preferences that will certainly exist in the future. It’s neutral to create preferences that otherwise would not have existed, and it’s wasteful to commit resources towards satisfying them when the commitment occurs prior to their creation.
I’ll note an asymmetry in my intuition. As I’ve said here, I don’t see that the fact that a preference will be satisfied (once created) implies that the creation of that preference is good. But I do see that the fact that a preference will be unsatisfied (once created) implies that the creation of that preference is bad.
Bluntly: making people who will be happy is neutral, but making people who will be unhappy is bad.
I’m not sure where this asymmetry comes from. I’ll think about it more.
I prefer to protect my current preferences from being changed by you, because if you change them or add new ones, my current preferences will be thwarted to some extent.
(For some notion of preference – I don’t care if you make me like the smell of jasmines instead of roses if you have a huge supply of jasmines and no roses to give me.)
Do you object to being introduced to delicious new foods?
It doesn’t seem like an unreasonable comparison; new and beneficial experiences are usually worth it (because we’re assuming that they’re beneficial), yet you can’t always determine how valuable they are until you’ve had them. Merely “inventing” a preference could be bad, in the sense that creating a “preference” for paying more for Dior than you otherwise might doesn’t increase utility, but I don’t see how that’s necessarily applicable here.
At any moment there are potential people who will exist, and it seems foolish to think that their potential desires to continue living don’t count for anything, even if they haven’t happened yet. (My 40-year-old self doesn’t exist yet, but I’m pretty sure that I want it to, even though my 40-year-old self doesn’t have any say in the matter.) If you believe there’s a world in which they could exist, there is no real distinction between those who will and those who won’t exist while their ‘fate’ is indeterminate; their desires at that point are equivalent. This could result in an unpleasant notion of people who certainly will exist being unimportant, which in addition to being unpleasant is also arbitrary. When do they start mattering? In this case the only difference would be the value that we attach by stating that they’ll care. So you could say that about any single potential person that you felt like, until it became incontrovertibly bad for them to exist.
You could say that there is a range of possibilities where an additional life is still beneficial: Most people would agree that it’s best not to conceive if you think that the potential person would not enjoy life. Likewise most people would feel that extinction would be a poor choice. If we collectively decided to stop bearing children, there would be potential lives — who given the chance to exist would prefer to live — that were certainly valuable to us. Even if it’s only our own attribution of value that matters, there is some point where their potential desires count because we agree that they should exist.
I don’t know that you can reach a conclusion with the given parameters.
It seems that the strongest utilitarian arguments for ‘less life’ being good could be applied (with proper modifications) to ‘more life’ being good depending on population size.
Of course if you can add new happy people, without making anyone else substantially worse off, I don’t see how anyone would complain; there would be more happy people in the world. That must count for something, whether their… ‘possibly potential’ desires matter/mattered or not.
In the real world, it is a fantasy to imagine you can ever reliably “satisfy preferences by creating them”. When you create a human life, you are far more likely to end up creating someone whose preferences will not be satisfied (and neither will yours). Rather few people want to spend a third of their life drudging for the sake of survival, or growing old and ugly, or watching their parents die, and yet these are utterly commonplace experiences.
It especially makes no sense for people who are particularly agitated about the apparent inevitability of death to advocate creating as many new lives as possible, when every one of those new lives will be placed in the same situation, and when you can’t even save yourself yet.
Revealed preferences suggest that almost everyone values their existence positively – even those who will work, grow old, endure misery, and die.
That a person “values their existence” enough to keep living is a very low standard in this discussion. Someone can hate life and still keep living out of duty or out of hope. It’s even possible to keep living because of something less than hope: you’re going to die anyway, so you may as well keep going and see what happens. This would explain how people can live lifestyles of slow, passive self-destruction. They are not so anguished that they seek immediate death, but nor does life offer them sufficient hope of improvement to abandon the vices that are killing them but which make life worthwhile.
Mitchell, while I agree the revealed preferences argument is simplistic in this case (indeed, in most cases), the classic rejoinder of libertarians everywhere is “well what utility metric would you prefer we employ?”
Furthermore, to the extent that psychologists do study people’s “expressed preferences”, it would still seem that the majority of people are content or happy with life, and are not seeking “slow, passive self-destruction.”
I have noticed another problem with advocating the creation of new lives, which pertains specifically to “altruists”. It begins with something I was told at a party long ago: that happiness is impossible, because even if your own life has nothing to make you unhappy, the lives of other people should do so. Schopenhauer has an illustrative example, for those who think that goods and bads balance out: consider one animal eating another. Do the positive feelings experienced by the animal doing the eating, make up for the negative feelings being experienced by the animal that is being eaten?
If “the majority of people are content or happy with life”, that indicates that the majority of people are not following the altruist ethic. Also, if we accept Schopenhauer’s point, then the relief of suffering is far more of an imperative than the creation of new contentment. I see here a logical double-bind for the altruist who also wants to be pronatalist. If you create a new person, you should want them to be an altruist (in order to maximize the good that their existence accomplishes), but then they will not be happy. Alternatively, to maximize the good that *you* do, you should try to relieve the suffering of the most unfortunate among those who are already alive, rather than creating another average contented life. So it seems that no matter how you look at it, creating a new life is not the preferred course of action.
But perhaps the real issue here is the quality of life that is going to be experienced by the rare person who decides to be an altruist – in the extreme sense of utilitarianism. Because pain dominates and drives out pleasure, both in everyday psychology and in the utilitarian calculus, your own happiness essentially counts for nothing, compared to fighting all the horrors happening in the dark corners of the world (of which there are plenty). If you don’t end up actively miserable, you will at least be living a subdued life optimized for altruistic efficiency rather than personal satisfaction, and this always contains the seeds of resentment towards less conscientious people who simply pursued their own happiness.
One way to deal with this situation is simply to abandon altruism, and become a typical self-interested person. But in the rare sort of individual who entertains a moral extremism like radical altruism to begin with, the attachment to that ideal may be strong enough that instead a delusion will be constructed, such that the person can affirm life to the point of being pronatalist, while still believing themselves to be an altruist. It’s easy to see how it works: appeal to the relative contentment of the average person, ignore the asymmetry between pain and pleasure.
But there is another path. You may find it difficult or impossible to abandon self-interest to the radical extreme implied by literal altruistic utilitarianism. You will therefore never be able to truly live your altruism. But you can at least advocate antinatalism. It’s the cheapest altruistic gesture there is. I may not be equal to the task of saving the living, but I can at least say, let’s not add to the problem. It’s a choice between completely abandoning an ideal in practice but continuing to claim it as one’s own, or admitting one’s own inadequacy to live up to the ideal, but at least still doing the bare minimum.
I think you’re using the phrase “create new preferences” in a confusing way. Normally when someone talks about that they mean giving someone a pill that makes them enjoy carving table legs or something.
Not sure how one can do anything other than respond to brain drives, mainly unconscious ones.
The idea of creating preferences seems a naive myth and wish. Plus, none of us want to accept responsibility for our preferences but externalize them/blame outside forces — like demons.
There is a brain reason for “less life”:
– Drives for more life than normal coincide with inherited/familial deficits in the dopamine system
– Sadly, craving and getting “more life” makes the problem worse
– Getting “less life” actually repairs the system somewhat and moderates the cravings.
Human morality is constrained by natural selection. For instance, one could believe that it is immoral to cause the end of any form of life without its consent, unless the lifeform is attacking you. Indeed, it is wise to adopt a mild form of this, e.g. if you don’t kill brown people except in self-defense, they have an incentive to adopt the same tit-for-tat strategy; if you kill animals too indiscriminately, one of them just might fight back effectively and kill you. However, since we have yet to develop practical means of sustenance that don’t rely on some sort of systematic death somewhere, you currently can’t put this belief into practice without dying of hunger shortly afterwards. Let’s call beliefs of this sort inadmissible.
Any absolute philosophical position involving the creation (or not) of life is especially likely to be inadmissible. Too much creation, and resource limits may induce a Malthusian catastrophe (especially since we’ve yet to figure out the whole space colonization thing, and we don’t have a platform worth uploading ourselves onto yet); too little, and your philosophy commits memetic suicide.
So, while I’m inclined to agree that ceteris paribus creating a preference is morally neutral, in practice it’s frequently unsafe to assume all other things are approximately equal. I believe the highly variable consequences of “creating a preference” in different contexts are responsible for much of the apparent philosophical inconsistency you see.
“It could be incorrect to fulfil preferences ‘by’ creating them if creating them is a bad enough act to make up for the good got by fulfilling them. Which would entail that the world would be a better place had many satisfied and happy people not been born, and that having babies is generally a very bad thing to do.”
I can conceive of situations in which it may be good, or at least not bad, for happy and satisfied babies to be born, and yet in which it is morally wrong to have children. If your child has a much higher chance of suffering than contentment, and if you expect this trend to continue across generations, it is much better not to have children, even though you may deprive one of the fortunate few of their chance to exist; this is simply an unfortunate tradeoff for the greater good.