Ethical experimentation

I suggested experimenting with different settings on personal characteristics that aren’t obviously good or bad. For instance, trying out being more or less perfectionistic for a day.

A particular variety of this that interests me is experimentation with different ethical principles, where opinion differs on which is correct. Both at levels of action (being a vegan for a week) and of abstract belief (being a virtue ethicist for a week).

I think this is a particularly non-obvious thing to do, because:

  1. You already have views on what is correct, and trading your good ethical principles for bad ethical principles seems unethical.
  2. Ethics seems like an area where experimentation is particularly unhelpful, being mostly about things outside of you that you don’t have direct access to, and also arguably inhabiting a separate realm that doesn’t interact with empirical facts. 

I think it is a good idea anyway. On 1, this is basically the same as the case for placebo-controlled medical trials: a temporary risk of worse behavior can be worth it, assuming the experimentation can actually help you be more ethical in the long run.

On 2, the main thing you have to go by on ethics is intuitions and arguments that are salient and moving to you. But people are notoriously bad at coming up with an unbiased selection of considerations to make salient on topics where they feel something, and it is easy to hear an argument and not really feel it. Actually trying to inhabit the different positions seems helpful for these.

I haven’t done this, but I have become a vegetarian for no great reason and in spite of my argument that it is not an effective use of effort, and then gone back to eating fish. I found both changes had pretty interesting effects on my intuitions about things and on the arguments I thought about. (Possibly changing ethical positions for no good reason is especially good, because then your brain tries to make up its best reasons.)

I was thinking of trying nihilism week soon, but then I got busy and maybe became a nihilist anyway, so we’ll see.

For the metaphors

I make use of a lot of analogies, for instance ‘like dancing’ and ‘the ice skating thing’ are particular phenomena I often think about, and I get value from thinking about meta-ethics as if it were romance, or saving the world as if it were a party. I wonder if providing a variety of concrete experiences that other things might be analogized to is a big source of value from doing new things.

For instance, recently I took up knitting, and I think there are things about it that my other experiences don’t have. I got some knitting patterns, and they have this very brief and utilitarian jargon, and a bunch of concepts, and I got a sense of this rich world of actionable and actioned knowledge about how to do a concrete thing, with much doing of it, which is pretty unlike other things I engage in, I am sorry to say.

I was also struck by the experience of being able to take a relatively simple substance (wool) and turn it into a useful object of the kind one buys in a store (a hat, or it seems like it will be a hat). 

These things are of course what I expect in the abstract, but it is something else to experience things.

I’m not sure how these new experiences compare to the value I have had so far from the activity of knitting, but it seems like much more than the value of a generic hat, and I only have maybe a quarter of one of those.

My current guess is that filling out my repertoire of concrete intuitions about specific kinds of occurrences or relationships between things is pretty helpful.

Self policing for self doubt

Sometimes it seems consequentially correct to do things that would also be good for you, if you were selfish. For instance, to save your money instead of giving it away this year, or to get yourself a really nice house that you expect will pay off pragmatically while also being delightful to live in. 

Some people are hesitant to do such things, and prefer for instance to keep a habit of donating every year, or to err toward sparser accommodation than seems optimal on the object level. I think this is because if their behavior is indistinguishable from selfishness, it is hard for them to be sure themselves that they aren’t drifting into selfishness. Not that selfishness would be bad if the optimal behavior were in fact the selfish one, but the worry is that if a selfishness-identical conclusion would bring them great personal gains, then they will tend toward concluding it even when they should not.

This all makes sense, but there is something about it that I don’t like. It seems good to be able to be coherent and curious and strategic and to believe in yourself and what you are doing, in ways that I think this arrangement is at odds with. For instance, under this kind of arrangement you don’t get to have a solid position on ‘is this house worth having?’. You have your object-level reasoning, and then not even a meta-level reason to adjust it, but a meta-level reason to distrust your whole thinking process, which leaves you in the vague epistemic state of not being allowed to have certain conclusions about the house at all, or being allowed to have them but not act on them. And having views but not acting on them is a weird state, because you are knowingly doing what is worse for the broader world, out of misalignment with yourself. And all this is to fend off the possibility that your motives are actually bad, or will become bad. I kind of want to say, ‘if your motives are bad, maybe you should just go and do something bad instead of rigging up some complicated process to thwart yourself’, but presumably there is some complicated relationship between the bad and good parts of you that are trying to negotiate some kind of arrangement here. And maybe that is the way it must be, for you to do good. But it sounds suffocating and enfeebling.

On my preferred way of living, you do notice if you seem too excited about living in a nice house. But if you think you might have ‘the wrong values’ you address that problem head on, by object level inquiry into what your values are and what you think they ‘should be’. If you think you might be engaging in self-deception, you try to work out if that is true, and why, and stop it, rather than building a system that lets you move money through under the assumption that you are self-deceiving. 

Relatedly, I think people sometimes donate to causes they don’t work on, though their position is that the one they work on is better, or hesitate to spend the amounts of money implied by their usual evaluations on improving something in their usual line of work, out of a modest sense that they might be biased about their choice of work, and that the money could really save lives, for instance. On my preferred way of living, if you suspect that you are biased about your choice of cause to work on, such that money is better spent on a different one, you sit down and figure that out and don’t waste your career, rather than just sending your Christmas donation somewhere else and then getting back to work.

This all takes effort though, and won’t be perfect, and mileages vary, and everyone must do their best with whatever state of psychological mess they find themselves in. So quite possibly the ‘avoid non-sacrifice’ methods are better for some people.

But having to be this kind of creature, that can’t treat itself as an agent, that isn’t allowed certain beliefs, that second guesses itself and fears parts of itself and ties itself up to thwart them, seems like quite a cost, so I don’t think such strategies should be taken up by default or casually. 

This is all my sense, but I haven’t spent huge amounts of time thinking about it (e.g. note my own position is pretty vague), and may come around pretty easily.

Personal quality experimentation

Different people seem to have different strategies, which they use systematically across different parts of their lives, and that we recognize and talk about. For instance people vary on:

  • Spontaneity
  • Inclination toward explicit calculations
  • Tendency to go meta
  • Skepticism
  • Optimism
  • Tendency to look at the big picture vs. the details
  • Expressed confidence
  • Enacted patience

I know of almost no one experimenting with varying these axes, to see which setting is best for them, or even what the different settings are like. This seems like a natural thing to do in some sense, given the variation in starting positions and the lack of consensus on which positions are best.

Possibly it is just very hard to change them, but my impression is that for at least some of them it is not hard to try, or to change them a bit for a short period, with some effort. (I have briefly tried making decisions faster and expressing more confidence.) And my guess is that that is enough to often be interesting. Also that if you effortfully force yourself to be more skeptical and it seems to go really well, you will find that it becomes appealing and thus easier to keep up and then get used to. 

I also haven’t done this much, and it isn’t very clear to me why. Maybe it just doesn’t occur to people that much for some reason. (It also doesn’t occur to people to choose their value of time via experimentation, a related suggestion I like, which I think came from Tyler Cowen a long time ago.) So here, I suggest it. Fun date activity, maybe: each randomly reselect one personality trait, and both try to guess which one the other person is putting on.

Normative reductionism

Here’s a concept that seems useful, but that I don’t remember ever hearing explicitly named (this is my own tentative name for it; if it turns out not to already have one in some extensive philosophical literature, I might think more about whether it is a good name):

Normative reductionism: The value of a world history is equal to the value of its parts (for some definition of relevant parts).

For instance, if two world histories differ only between time t and time t’, then according to NR you do not need to know what happened at other times to evaluate them in full. Similarly, the value of Alice’s life, or of Alice enjoying a nap, depends on the nature of her life or the nap, and not for instance on other people’s lives, or on events that took place before she was born and had no effect on her (unless perhaps she has preferences about those events, or they involve people having preferences about her; but even then the total value can be decomposed into the value of different preferences being fulfilled or not). Straightforward hedonistic utilitarianism probably implies normative reductionism.
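As a toy illustration (my own sketch, not from any philosophical literature, with made-up numbers standing in for values), normative reductionism can be modeled as the value of a history being the sum of the values of its parts, so that comparing two histories only requires looking at the parts where they differ:

```python
# Toy model: a world history is a list of "parts" (e.g. time segments),
# each carrying a numeric value. Under normative reductionism, the value
# of the whole history is just the sum of the values of its parts.

def value(history):
    """Total value of a world history, assuming normative reductionism."""
    return sum(history)

# Two histories identical except for the middle segment:
history_a = [3, 5, 2]
history_b = [3, 1, 2]

# To compare them, only the differing segment matters:
assert value(history_a) - value(history_b) == 5 - 1
```

On a holistic view, by contrast, `value` could not be written as a plain sum over parts, and the comparison would depend on the unchanged segments too.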

My impression is that people have different intuitions about this and vary in how much they assume it, and that it mostly isn’t entirely aligned with other axes of ethical view, either logically or sociologically, though it is related to them. So it seems maybe worth noting explicitly.