
Does SI make everyone look like swimsuit models?

William Easterly believes Sports Illustrated’s swimsuit issue externalises to women through its ‘relentless marketing of a “swimsuit” young female body type as sex object’. He doesn’t explain how this would happen.

As far as I can tell, the presumed effect is that pictures of women acting as ‘sex objects’ cause men to increase their credence that all other women are ‘sex objects’. I’m a bit puzzled about the causal path toward badness after that, since men do not seem on the whole less friendly when hoping for sex.

I think the important bit here must be about ‘objects’. I have no idea how one films someone as if they are an object. The women in SI don’t look inanimate, if that’s what it’s about. It’s also hard to make robots that good. I will guess that ‘sex object’ means something like ‘low status person to have sex with’, as opposed to just being sexually alluring. It seems unlikely that the concern is that women are taken to be sexier than they really are, so I think the problem is that they are taken to be low status in this particular sexy way.

If I guessed right so far, I think it is true that men increase their expectation that all other women are sex objects when they view videos of women being sex objects. I doubt this is a big effect, since they have masses of much better information about the sexiness and status of women around them. Nonetheless, I agree it is probably an effect.

However as usual, we are focussing on the tiny gender related speck of a much larger issue. Whenever a person has more than one characteristic, they give others the impression that those characteristics tend to go together, externalising to everyone else with those characteristics. When we show male criminals on the news, it is an externality to all other men. When we show clowns with big red noses it is an externality to all other people with big red noses. When I go outside it gives all onlookers a minuscule increase in their expectation that a tallish person will tend to be brown haired, female, dressed foreignly and not in possession of a car.

Most characteristics don’t end up delineating much of an externality, because we mostly don’t bother keeping track of all the expectations we could have connected to tallish people. What makes something like this a stronger effect is the viewers deciding that tallishness is more or less of a worthwhile category to accrue stereotypes about. I expect gender is well and truly forever high on the list of characteristics popularly considered worth stereotyping about, but people who look at everything with the intent of finding and advertising any hint of gender differential implied by it can only make this worse.

Or better. As I pointed out before, while expecting groups to be the same causes externalities, they are smaller ones than if everyone expected everyone to have average human characteristics until they had perfect information about them. If people make more good inferences from other people’s characteristics, they end up sooner treating the sex objects as sex objects and the formidable intellectuals as formidable intellectuals and so forth. So accurately informing people about every way in which the experiences of men and women differ can help others stereotype more accurately. However, there are so many other ways to improve accurate categorisation; why obsess over the gender-tinged corner of the issue?
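To make the accuracy point concrete, here is a toy simulation (mine, not part of the original argument; the groups, trait values and noise levels are arbitrary made-up assumptions). It just shows that predicting an individual’s trait from their group’s average gives a smaller average error than assuming everyone is average, and that perfect information does better still.

```python
# Toy illustration (assumed numbers): prediction error shrinks as more
# accurate information about a person is used.
import random

random.seed(0)

# Two hypothetical groups whose trait distributions differ slightly.
group_means = {"A": 0.4, "B": 0.6}
people = [(g, random.gauss(group_means[g], 0.2))
          for g in ("A", "B") for _ in range(10_000)]

population_mean = sum(trait for _, trait in people) / len(people)

def mean_abs_error(predict):
    """Average error of a prediction rule over the simulated population."""
    return sum(abs(predict(g, trait) - trait) for g, trait in people) / len(people)

print("assume everyone is average:", mean_abs_error(lambda g, t: population_mean))
print("use the group average:     ", mean_abs_error(lambda g, t: group_means[g]))
print("perfect information:       ", mean_abs_error(lambda g, t: t))
```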

In sum, I agree that women who look like ‘sex objects’ increase viewers’ expectation that other women are ‘sex objects’. I think this is a rational and socially useful response on the part of viewers, relative to continuing to believe in a lower rate of sex objects amongst women. I also think it is virtually certain that in any given case the women in question should go on advertising themselves as sex objects, since they clearly produce a lot of benefit for themselves and viewers that way, and the externality is likely minuscule. There is just as much reason to think that any other person categorisable in any way should not do anything low status, since the sex object issue is a small part of a ubiquitous externality. Obsessing over the gender aspect of such externalities (and everything else) probably helps draw attention to gender as a useful categorisation, perhaps ultimately for the best. As is often the case though, if you care about the issue, only being able to see the gender-related part of it is probably not useful.

What do you think? Is concern over some women being pictured as sex objects just an example of people looking at a ubiquitous issue and seeing nothing but the absurdly tiny way in which it might affect women more than men sometimes? Or is there some reason it stands apart from every other way that people with multiple characteristics help and harm those who are like them?

Update: Robin Hanson also just responded to Easterly, investigating in more detail the possible causal mechanisms people could be picturing for women in swimsuits causing harm. Easterly responded to him, saying that empirical facts are irrelevant to his claim.

Population ethics and personal identity

Image: chocolate cake with chocolate frosting (photo: Misocrazy)

It seems most people think creating a life is a morally neutral thing to do while destroying one is terrible. This is apparently because prior to being alive and contingent on not being born, you can’t want to be alive, and nobody exists to accrue benefits or costs. For those who agree with these explanations, here’s a thought experiment.

The surprise cake thought experiment

You are sleeping dreamlessly. Your friends are eating a most delicious cake. They consider waking you and giving you a slice, before you all go back to sleep. They know you really like waking up in the night to eat delicious cakes with them and will have no trouble getting back to sleep. They are about to wake you when they realize that if they don’t, you will remain unconscious and thus unable to want to join them, or to be helped or harmed. So they finish the cake themselves. When you wake the next day and are told how they almost wasted their cake on you, are you pleased they did not?

If not, one explanation is that you are a temporally extended creature who was awake and had preferences in the past, and that these things mean you currently have preferences. You still can’t accrue benefits or costs unless you get a bit more conscious, but it usually seems the concern is just whether there is an identity to whom the benefits and costs will apply. As an added benefit, this position would allow you to approve of resuscitating people who have collapsed.

To agree with this requires a notion of personal identity other than ‘collection of person-moments which I choose to define as me’, unless you would find the discretionary boundaries of such collections morally relevant enough to make murder into nothing at all. This kind of personal identity seems needed to make unconscious people who previously existed significantly different from those who have never existed.

It seems very unlikely to me that people have such identities. Nor do I see how it should matter if they did, but that’s another story. Perhaps those of you who think I should better defend my views on population ethics could tell me why I should change my mind on personal identity. These may or may not help.

Estimation is the best we have

This argument seems common to many debates:

‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.

This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.

Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell if you are increasing net utility, or by how much. The critic then concludes that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One should always be better off putting whatever effort one is willing to contribute into the utilitarian accuracy it can buy, rather than throwing it away on a strategy that is more random with regard to the goal.

A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being unreasonable. It’s just not possible to know exactly how much each action will increase the value of the company. Why don’t we try to make sure that all of our meetings end on time instead?’

In general, when optimizing X somehow is integral to the goal, the argument must fail. If the point is to make X as close to three as possible, for instance, then no matter how bad your best estimate is of what X will be under different conditions, you can’t do better by ignoring X altogether. If you had a non-estimating-X strategy which you anticipated would do better than your best estimate in getting a good value of X, then you in fact believe yourself to have a better estimating-X strategy.
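A toy simulation can make this concrete (my own sketch, not from the post; the target of three, the noise level and the distribution of options are all arbitrary assumptions). Even a deliberately terrible estimate of X does better at getting X close to three than ignoring X and choosing at random.

```python
# Sketch with assumed numbers: choosing among options by a noisy estimate
# of X versus ignoring X entirely. Lower output = X ends up closer to 3.
import random

random.seed(0)
TARGET = 3.0
NOISE = 2.0        # large estimation error on purpose
TRIALS = 10_000

def average_miss(choose):
    """Average distance from the target X achieved by a choice rule."""
    total = 0.0
    for _ in range(TRIALS):
        options = [random.uniform(0, 6) for _ in range(5)]   # true X values
        total += abs(choose(options) - TARGET)
    return total / TRIALS

def noisy_estimate_strategy(options):
    """Pick the option whose (noisy) estimated X looks closest to the target."""
    estimates = [(x + random.gauss(0, NOISE), x) for x in options]
    return min(estimates, key=lambda e: abs(e[0] - TARGET))[1]

def ignore_x_strategy(options):
    """Pretend X is unknowable and pick at random."""
    return random.choice(options)

print("noisy estimate of X:", average_miss(noisy_estimate_strategy))
print("ignore X entirely:  ", average_miss(ignore_x_strategy))
```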

I have criticized this kind of argument before in the specific realm of valuing human life, but it seems to apply more widely. Another recent example: people’s attention spans vary between different activities, therefore there is no such thing as an attention span and we shouldn’t try to make it longer. Arguably similar to some lines of ‘people are good at different things, therefore there is no such thing as intelligence and we shouldn’t try to measure it or thereby improve it’.

Probabilistic risk assessment is claimed by some to be impossibly difficult. People are often wrong, and may fail to think of certain contingencies in advance. So if we want to know how prepared to be for a nuclear war, for instance, we should do something qualitative with scenarios and the like. This could be a defensible position. Perhaps intuitions can better implicitly assess probabilities via some other activity than explicitly thinking about them. However I have not heard this claim accompanied by any such motivating evidence. Also if this were true, it would likely make sense to convert the qualitative assessments into quantitative ones and aggregate them with information from other sources rather than disregarding quantitative assessments altogether.
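As a sketch of what that conversion and aggregation could look like (my own illustration, not a method from the post; the qualitative-to-probability mapping and the weights are assumptions), one could translate scenario judgments into rough probabilities and pool them with an explicit estimate in log-odds space.

```python
# Assumed example: pool a qualitative panel judgment with a model's
# probability estimate by weighted averaging of log-odds.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical inputs.
qualitative_scale = {"very unlikely": 0.05, "unlikely": 0.2,
                     "even odds": 0.5, "likely": 0.8}
panel_verdict = qualitative_scale["unlikely"]   # scenario exercise says "unlikely"
model_estimate = 0.35                            # explicit quantitative model

# Weights reflect how much each source is trusted (assumed here).
weights = {"panel": 0.4, "model": 0.6}
pooled = inv_logit(weights["panel"] * logit(panel_verdict)
                   + weights["model"] * logit(model_estimate))
print(f"pooled probability: {pooled:.2f}")
```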

Futarchy often prompts similar complaints that estimating what we want, so that our laws can provide it, would be impossibly difficult. Again, somehow some representation of what people want has to get into whatever system of government is used, for the result to not be unbelievably hellish. Having a large organization of virtually unknown people make the estimates implicitly in an unknown but messy fashion while they do other things is probably not more accurate than asking people what they want. It seems however that people think of the former as a successful way around the measurement problem, not a way to estimate welfare very poorly. Something similar appears to go on in the other examples. Do people really think this, or do they just feel uneasy making public judgments under uncertainty about anything important?

Why focus on making robots nice?

From Michael Anderson and Susan Leigh Anderson in Scientific American:

Today’s robots…face a host of ethical quandaries that push the boundaries of artificial intelligence, or AI, even in quite ordinary situations.

Imagine being a resident in an assisted-living facility…you ask the robot assistant in the dayroom for the remote …But another resident also wants the remote …The robot decides to hand the remote to her. …This anecdote is an example of an ordinary act of ethical decision making, but for a machine, it is a surprisingly tough feat to pull off.

We believe that the solution is to design robots able to apply ethical principles to new and unanticipated situations… for them to be welcome among us their actions should be perceived as fair, correct or simply kind. Their inventors, then, had better take the ethical ramifications of their programming into account…

It seems there are a lot of articles focussing on the problem that some of the small decisions robots will make will be ‘ethical’. There are also many fearing that robots may want to do particularly unethical things, such as shoot people.

Working out how to make a robot behave ‘ethically’ in this narrow sense (arguably all behaviour has an ethical dimension) is an odd problem to set apart from the myriad other problems of making a robot behave usefully. Ethics doesn’t appear to pose unique technical problems. The aforementioned scenario is similar to ‘non-ethical’ problems of making a robot prioritise its behaviour. On the other hand, teaching a robot when to give a remote control to a certain woman is not especially generalisable to other ethical issues such as teaching it which sexual connotations it may use in front of children, except in sharing methods so broad as to also include many more non-ethical behaviours.

The authors suggest that robots will follow a few simple absolute ethical rules like Asimov’s. Perhaps this could unite ethical problems as worth considering together. However if robots are given such rules, they will presumably also be following big absolute rules for other things. For instance if ‘ethics’ is so narrowly defined as to include only choices such as when to kill people and how to be fair, there will presumably be other rules about the overall goals when not contemplating murder. These would matter much more than the ‘ethics’. So how to pick big rules and guess their far-reaching effects would again not be an ethics-specific issue. On top of that, until anyone is close to a situation where they could be giving a robot such an abstract rule to work from, the design of said robots is so open as to make the question pretty pointless except as a novel way of saying ‘what ethics do I approve of?’.

I agree that it is useful to work out what you value (to some extent) before you program a robot to do it, particularly including overall aims. Similarly I think it’s a good idea to work out where you want to go before you program your driverless car to drive you there. This doesn’t mean there is any eerie issue of getting a car to appreciate highways when it can’t truly experience them. It also doesn’t present you with any problem you didn’t have when you had to drive your own car – it has just become a bit more pressing.

Image: rainbow robot, by Jenn and Tony Bot via Flickr. Making rainbows has much in common with other manipulations of water vapor.

Perhaps, on the contrary, ethical problems are similar in that humans have very nuanced ideas about them and can’t really specify satisfactory general principles to account for them. If the aim is for robots to learn how to behave just from seeing a lot of cases, without being told a rule, perhaps this is a useful category of problems to set apart? No – there are very few things humans deal with that they can specify directly. If a robot wanted to know the complete meaning of almost any word it would have to deal with a similarly complicated mess.

Neither are problems of teaching (narrow) ethics to robots united in being especially important, or important in similar ways, as far as I can tell. If the aim is something like treating people well, people will gain much more from the robot giving the remote control to anyone at all, rather than ignoring everyone until it has finished sweeping the floors, than from it getting the question of who to give it to right. Yet how to get a robot to prioritise floor cleaning below remote allocating at the right times seems an uninteresting technicality, both to me and seemingly to authors of popular articles. It doesn’t excite any ‘ethics’ alarms. It’s like wondering how the control panel will be designed in our teleportation chamber: while the rest of the design is unclear, it’s a pretty uninteresting question. When the design is more clear, to most it will be an uninteresting technical matter. How robots will be ethical or kind is similar, yet it gets a lot of attention.

Why is it so exciting to talk about teaching robots narrow ethics? I have two guesses. One, ethics seems such a deep and human thing, it is engaging to frighten ourselves by associating it with robots. Two, we vastly overestimate the extent to which the value of outcomes reflects the virtue of motives, so we hope robots will be virtuous, whatever their day jobs are.

Poverty does not respond to incentives

I wrote a post a while back saying that preventing ‘exploitative’ trade is equivalent to intervening in an armed threat by eliminating the victim’s ‘not getting shot in the head’ option. Some people countered this argument by saying that it doesn’t account for how others respond. If poor people take the option of being ‘exploited’, they won’t get offered such good alternatives in future as they will if they hold out.

This seems unlikely, but it reminds me of a real difference between these situations. If you forcibly prevent the person with the gun to their head from responding to the threat, the person holding the gun will generally want to abandon the threat, as she now has nothing to gain and everything to lose. The world, on the other hand, will not relent from making people poor if you prevent the poor people from responding to it.

I wonder if the misintuition about the world treating people better if they can’t give in to its ‘coercion’ is a result of familiarity with how single agent threateners behave in this situation. As a side note, this makes preventing ‘exploitative’ trade worse relative to preventing threatened parties acting on threats.