For the metaphors

I make use of a lot of analogies: ‘like dancing’ and ‘the ice skating thing’, for instance, are particular phenomena I often think about, and I get value from thinking about meta-ethics as if it were romance, or saving the world as if it were a party. I wonder if providing a variety of concrete experiences that other things might be analogized to is a big source of value from doing new things.

For instance, I recently took up knitting, and I think there are things about it that my other experiences don’t have. I got some knitting patterns, and they have this very brief and utilitarian jargon, and a bunch of concepts, and I got a sense of this rich world of actionable and actioned knowledge about how to do a concrete thing, with much doing of it, which is pretty unlike other things I engage in, I am sorry to say.

I was also struck by the experience of being able to take a relatively simple substance (wool) and turn it into a useful object of the kind one buys in a store (a hat, or it seems like it will be a hat). 

These things are of course what I expect in the abstract, but it is something else to experience them.

I’m not sure how these new experiences compare to the value I have had so far from the activity of knitting, but it seems like much more than the value of a generic hat, and I only have maybe a quarter of one of those.

My current guess is that filling out my repertoire of concrete intuitions about specific kinds of occurrences or relationships between things is pretty helpful.

Self-policing for self-doubt

Sometimes it seems consequentially correct to do things that would also be good for you, if you were selfish. For instance, to save your money instead of giving it away this year, or to get yourself a really nice house that you expect will pay off pragmatically while also being delightful to live in. 

Some people are hesitant to do such things, and prefer for instance to keep a habit of donating every year, or to err toward sparser accommodation than seems optimal on the object level. I think this is because if their behavior is indistinguishable from selfishness, it is hard for them to be sure themselves that they aren’t drifting into selfishness. Not that selfishness would be bad if the optimal behavior was in fact the selfish one, but the worry is that if a selfishness-identical conclusion will bring them great personal gains, then they will tend toward concluding it even when they should not.

This all makes sense, but there is something about it that I don’t like. It seems good to be able to be coherent and curious and strategic and to believe in yourself and what you are doing, in ways that I think this is at odds with. For instance, under this kind of arrangement you don’t get to have a solid position on ‘is this house worth having?’. You have your object level reasoning, and then not even a meta-level reason to adjust it, but a meta-level reason to distrust your whole thinking process, which leaves you in the vague epistemic state of not being allowed to have certain conclusions about the house at all, or being allowed to have them but not act on them. And having views but not acting on them is a weird state, because you are knowingly doing what is worse for the broader world, out of misalignment with yourself. And all this is to fend off the possibility that your motives are actually bad, or will become bad. I kind of want to say, ‘if your motives are bad, maybe you should just go and do something bad instead of rigging up some complicated process to thwart yourself’, but presumably there is some complicated relationship between bad and good parts of you that are trying to negotiate some kind of arrangement here. And maybe that is the way it must be, for you to do good. But it sounds suffocating and enfeebling.

On my preferred way of living, you do notice if you seem too excited about living in a nice house. But if you think you might have ‘the wrong values’, you address that problem head-on, by object level inquiry into what your values are and what you think they ‘should be’. If you think you might be engaging in self-deception, you try to work out if that is true, and why, and stop it, rather than building a system that lets you move money through under the assumption that you are self-deceiving.

Relatedly, I think people sometimes donate to causes they don’t work on, though their position is that the one they work on is better, or hesitate to spend the amounts of money implied by their usual evaluations on improving something in their usual line of work, out of a modest sense that they might be biased about their choice of work, and that money could really save lives, for instance. On my preferred way of living, if you suspect that you are biased about your choice of cause to work on, such that money is better spent on a different one, you sit down and figure that out and don’t waste your career, rather than just sending your Christmas donation somewhere else and then getting back to work.

This all takes effort though, and won’t be perfect, and mileages vary, and everyone must do their best with whatever state of psychological mess they find themselves in. So quite possibly the ‘avoid non-sacrifice’ methods are better for some people.

But having to be this kind of creature, that can’t treat itself as an agent, that isn’t allowed certain beliefs, that second guesses itself and fears parts of itself and ties itself up to thwart them, seems like quite a cost, so I don’t think such strategies should be taken up by default or casually. 

This is all my sense, but I haven’t spent huge amounts of time thinking about it (e.g. note my own position is pretty vague), and may come around pretty easily.

Personal quality experimentation

Different people seem to have different strategies, which they use systematically across different parts of their lives, and which we recognize and talk about. For instance, people vary on:

  • Spontaneity
  • Inclination toward explicit calculations
  • Tendency to go meta
  • Skepticism
  • Optimism
  • Tendency to look at the big picture vs. the details
  • Expressed confidence
  • Enacted patience

I know of almost no one experimenting with varying these axes, to see which setting is best for them, or even what different settings are like. Which seems like a natural thing to do in some sense, given the variation in starting positions and the lack of consensus on which positions are best.

Possibly it is just very hard to change them, but my impression is that for at least some of them it is not hard to try, or to change them a bit for a short period, with some effort. (I have briefly tried making decisions faster and expressing more confidence.) And my guess is that that is enough to often be interesting. Also that if you effortfully force yourself to be more skeptical and it seems to go really well, you will find that it becomes appealing and thus easier to keep up and then get used to. 

I also haven’t done this much, and it isn’t very clear to me why. Maybe it just doesn’t occur to people that much for some reason. (It also doesn’t occur to people to choose their value of time via experimentation, a related suggestion I like, I think from Tyler Cowen a long time ago.) So here, I suggest it. Fun date activity, maybe: randomly reselect one personality trait each, and both try to guess which one the other person is putting on.

Normative reductionism

Here’s a concept that seems useful, but that I don’t remember ever hearing explicitly referred to (with my own tentative name for it—if it turns out to not already have one in some extensive philosophical literature, I might think more about whether it is a good name):

Normative reductionism: The value of a world history is equal to the value of its parts (for some definition of relevant parts).

For instance, if two world histories only differ between time t and time t’, according to NR you do not need to know what happened at other times to evaluate them in full. Similarly, the value of Alice’s life, or the value of Alice enjoying a nap, depends on the nature of her life or the nap, and not for instance on other people’s lives or events that took place before she was born with no effect on her (unless perhaps she has preferences about those events or they involve people having preferences about her, but even then the total value can be decomposed into the value of different preferences being fulfilled or not). Straightforward hedonistic utilitarianism probably implies normative reductionism.
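To make the idea concrete (this is my own gloss, reading ‘the value of its parts’ as a sum over some privileged partition, which the definition above deliberately leaves open), NR would say something like:

    V(H) = \sum_{p \in \mathrm{parts}(H)} V(p)

so that comparing two histories that agree outside some set of parts reduces to comparing the values of just the parts where they differ.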

My impression is that people have different intuitions about this and vary in how much they assume it, and that it mostly isn’t entirely aligned with other axes of ethical view, either logically or sociologically, though it is related to them. So it seems maybe worth noting explicitly.

Total horse takeover

I hear a lot of talk of ‘taking over the world’. What is it to take over the world? Have I done it if I am king of the world? Have I done it if I burn the world? Have humans or the printing press or Google or the idea of ‘currency’ done it? 

Let’s start with something more tractable, and be clear on what it is to take over a horse. 

A natural theory is that to take over a horse is to be the arbiter of everything about the horse—to be the one deciding the horse’s every motion.

But you probably don’t actually want to control the horse’s every motion, because the horse’s own ability to move itself is a large part of its value-add. Flaccid horse mass isn’t that helpful, not even if we throw in the horse’s physical strength to move itself according to your commands, and some sort of magical ability for you to communicate muscle-level commands to it. If you were in command of the horse’s every muscle, it would fall over. (If you directed its cellular processes too, it would die; if you controlled its atoms, you wouldn’t even have a dead horse.) 

Information and computing capacity

The reason this isn’t so good is that balancing and maneuvering a thousand pounds of fast-moving horse flesh on flexible supports is probably hard for you, at least via an interface of individual muscles, at least without more practice being a horse. I think for two reasons:

  • Lack of information, e.g. about exactly where every part of the horse’s body is, where its hoofs are touching the ground, and how hard
  • Lack of computing power to dedicate to calculating desired horse muscle motions from the above information and your desired high level horse behavior

(Even if you have these things, you don’t obviously know how to use them to direct the horse well, but you can probably figure this out in finite time, so it doesn’t seem like a really fundamental problem.)

Tentative claim: holding more levers is good for you only insofar as you have the information and computing capacity to calculate which directions you should want those levers pushed. 

So, you seem to be getting a lot out of the horse and various horse subcomponents making their own decisions about steering and balance and breathing and snorting and mitosis and where electrons should go. That is, you seem to be getting a lot out of not being in control of the horse. In fact so far it seems like the more you are in control of the horse in this sense, the worse things go for you. 

Is there a better concept of ‘taking over’—a horse, or the world—such that someone relatively non-omniscient might actually benefit from it? (Maybe not—maybe extreme control is just bad if you aren’t near-omniscient, which would be good to know.) 

What riding a horse is like

Perhaps a good first question: is there any sort of power that won’t make things worse for you? Surely yes: training a horse to be ridden in the usual sense seems like ‘having control over’ the horse more than you would otherwise, and seems good for you. So what is this kind of control like? 

Well, maybe you want the horse to go to London with you on it, so you get on it and pull the reins to direct it to London. You don’t run into the problems above, because aside from directing its walking toward London, it sticks to its normal patterns of activity pretty closely (for instance, it continues breathing and keeping its body in an upright position and doing walking motions in roughly the direction its head is pointed).

So maybe in general: you want to command the horse by giving it a high level goal (‘take me to London’), and then you want it to do the backchaining and fill in all the details (move right leg forward, hop over this log, breathe…). That’s not quite right though, because the horse has no ability to chart a path from here to London, due to its ignorance of maps and maybe of London as a concept. So you are hoping to do the first step of the backchaining—figure out the route—and then to give the horse slightly lower level goals such as, ‘turn left here’, ‘go straight’, and for it to do the rest. Which still sounds like giving it a high level goal, then having it fill in the instrumental subgoals and do them.

But that isn’t quite right. You probably also want to steer the details there somewhat. You are moment-to-moment adjusting the horse’s motion to keep you on it, for instance. Or to avoid scaring some chickens. Or to keep to the side as another horse goes by. While not steering it entirely, at that level. You are relying on its own ability to avoid rocks and holes and to dodge if something flies toward it, and to put some effort into keeping you on it. How does this fit into our simple model? 

Perhaps you want the horse to behave as it would—rather than suddenly leaving every decision to you—but for you to be able to adjust any aspect of it, and have it again work out how to support that change with lower level choices. You push it to the left and it finds new places to put its feet to make that work, and adjusts its breathing and heart rate to make the foot motions work. You pull it to a halt, and it changes its leg muscle tautnesses and heart rate and breathing to make that work. 

Levers

On this model, in practice your power is limited by what kinds of changes the horse can and will fill in new details for. If you point its head in a new direction, or ask it to sit down, it can probably recalculate its finer motions and support that. Whereas if you decide that it should have holes in its legs, it just doesn’t have an affordance for doing that. And if you do it, it will bleed a lot and run into trouble rather than changing its own bloodflow. If you decide it should move via a giant horse-sized bicycle, it probably can’t support that, even if in principle its physiology might allow it. If you hold up one of its legs so its foot is high in the air, it will ‘support’ that change by moving its leg back down again, which is perhaps not what you were going for.

This suggests that taking over a thing is not zero sum. There is not a fixed amount of control to be had by intentional agents. Because perhaps you have all the control that anyone has over a horse, in the sense that if the horse ever has a choice, it will try to support your commands to it. But still it just doesn’t know how to control its own heart rate consciously or ride a giant horse-sized bicycle. Then one day it learns these skills, and can let you adjust more of its actions. You had all the control the whole time, but ‘all’ became more.

Consequences

One issue with this concept of taking over is that it isn’t clear what it means to ‘support’ a change. Each change has a number of consequences, and some of them are the point while others are undesirable side effects, such that averting them is an integral part of supporting the change. For instance, moving legs faster means using up blood oxygen and also traveling faster. If you gee up the horse, you want it to support this by replacing the missing blood oxygen, but not to jump on a treadmill to offset the faster travel.

For the horse to get this right in general, it seems that it needs to know about your higher level goals. In practice with horses, they are just built so that if they decide to run faster their respiratory system supplies more oxygen and they aren’t struck by a compulsion to get on a treadmill, and if that weren’t true we would look for a different animal to ride. The fact that they always assume one kind of thing is the goal of our intervention is fine, because in practice we do basically always want legs for motion and never for using up oxygen.

Maybe there is a systematic difference between desirable consequences and ones that should be offset—in the examples that I briefly think of, the desirable consequences seem more often to do with relationships with larger scale things, and the ones that need offsetting are to do with internal things, but that isn’t always true (I might travel because I want to be healthier, but I want to be in the same relationship with those who send me mail). If the situation seems to turn inputs into outputs, then the outputs are often the point, though that is also not always true (e.g. a garbage burner seeks to get rid of garbage, not create smoke). Both of these also seem maybe contingent on our world, whereas I’m interested in a general concept. 

Total takeover

I’ll set that aside, and for now define a desirable model of controlling a system as something like: the system behaves as it would, but you can adjust aspects of the system and have it support your adjustment, such that the adjustment forwards your goals. 
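For concreteness, here is a toy sketch in code of that model of control, purely my own illustration (every name and number here is invented, not anything from horses or from a real system): the controller adjusts a high level aspect the system has an affordance for, and the system fills in the lower level details needed to support the adjustment.

    class Horse:
        """A toy system: a few high level aspects a rider can adjust,
        and low level details the system fills in for itself."""

        # Aspects the horse has an affordance for letting a rider adjust.
        ADJUSTABLE = {"heading", "speed"}

        def __init__(self):
            self.heading = "north"
            self.speed = 0          # strides per minute
            self.heart_rate = 40    # filled in by the horse, not the rider
            self.gait = "standing"  # likewise

        def adjust(self, aspect, value):
            """The rider's lever: change one aspect, if the horse can support it."""
            if aspect not in self.ADJUSTABLE:
                raise ValueError(f"no affordance for adjusting {aspect!r}")
            setattr(self, aspect, value)
            self._support()

        def _support(self):
            """The horse's own competence: recompute lower level details
            so that the adjusted high level aspects remain workable."""
            self.heart_rate = 40 + 4 * self.speed
            if self.speed == 0:
                self.gait = "standing"
            elif self.speed < 20:
                self.gait = "walking"
            else:
                self.gait = "galloping"


    horse = Horse()
    horse.adjust("heading", "toward London")
    horse.adjust("speed", 25)
    print(horse.gait, horse.heart_rate)   # galloping 140
    # horse.adjust("heart_rate", 200)     # raises ValueError: no such lever

The only point of the sketch is the division of labor: on this notion, control means access to the levers the system can support, not direct command of every variable.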

There isn’t a clear notion of ‘all the control’, since at any point there will be things that you can’t adjust (e.g. currently the shape of the horse’s mitochondria, and for a long time the relationship between space and time in the horse system), either because neither you nor the system has a means of making the adjustment intentionally, or because the system can’t support the adjustment usefully. However ‘all of the control that anyone has’ seems more straightforward, at least if we define who is counted in ‘anyone’. (If you can’t control a virus’s spread, is the virus a someone who has some of the universe’s control?)

I think whether having all of the control at a particular time gets at what I usually mean by having ‘taken over’ depends on what we expect to happen with new avenues of control that appear. If they automatically go to whoever had control, then having all of the control at one time seems like having taken over. If they get distributed more randomly (e.g. the horse learns to ride a bicycle, but keeps that power for itself, or a new agent is created with a power), so that your fraction of control deteriorates over time, that seems less like having taken over. If that is how our world is, I think I want to say that one cannot take it over.

***

This was a lot of abstract reasoning. I especially welcome correction from someone who feels they have successfully controlled a horse to a non-negligible degree.