Don’t change your mind, just change your brain

The best way to dull hearts and win minds is with a scalpel.

Give up your outdated faith in the pen over the sword! With medical training and a sufficiently sharp but manoeuvrable object of your choice, you can change anyone’s mind on the most contentious of moral questions. All you need to make someone utilitarian is a nick to the Ventromedial Prefrontal Cortex (VMPC), a part of the brain related to emotion.

When pondering whether you should kill an innocent child to save twenty strangers, eat your pets when they die, or approve of infertile siblings making love in private if they like, utilitarians are the people who say “do whatever, so long as the outcome maximises overall happiness.” Others think outcomes aren’t everything; some actions are just wrong. According to research, people with VMPC damage are far more likely to make utilitarian choices.

It turns out most people have conflicting urges: to act for the greater good or to obey rules they feel strongly about. This is the result of our brains being composed of interacting parts with different functions. The VMPC processes emotion, so in normal people it’s thought to compete with the parts of the brain that engage in moral reasoning and see the greatest good for the greatest number as ideal. If the VMPC is damaged, the rational, calculating sections are left unimpeded to dispassionately assess the most compassionate course of action.

This presents practical opportunities. We can never bring the world in line with our moral ideals while we all have conflicting ones. The best way to get us all on the same moral page is to make everyone utilitarian. It is surely easier to sever the touchy-feely moral centres of people’s brains than to teach them the value of utilitarianism. It will also be for the common good; once we are all utilitarian we will act with everyone’s net benefit more in mind. Partial lobotomies for the moralistic are probably much cheaper than policing all the behaviours such people tend to disapprove of.

You may think this still doesn’t make it a good thing. The real beauty is that after the procedure you would be fine with it. If we went the other way, everyone would end up saying ‘you shouldn’t alter other people’s brains, even if it does solve the world’s problems. It’s naughty and unnatural. Hmph.’

Unfortunately, VMPC damage also seems to dampen social emotions such as guilt and compassion. The surgery makes utilitarian reasoning easier, but it makes complete immorality easier too, meaning it might not be the answer for everyone just yet.

Some think the most important implications of the research are actually those for moral philosophy. The researchers suggest it shows humans are unfit to make utilitarian judgements. You don’t need to be a brain surgeon to figure that out though. Count the number of dollars you spend on unnecessary amusements each year in full knowledge that people are starving due to poverty.

In the past we could tell moral questions were prompting activity in emotional parts of the brain, but it wasn’t clear whether the activity was influencing the decision or just the result of it. If the latter, VMPC damage shouldn’t have changed actions. It does, so while non-utilitarianism is a fine theoretical position, it is seemingly practised for egoistic reasons.

Can this insight into cognition settle the centuries of philosophical debate and show utilitarianism is a bad position? No. Why base your actions on what you feel like doing, discounting all other outcomes? All it says about utilitarianism is that it doesn’t come easily to the human mind.

This research is just another bit of evidence that moral reasoning is guided by evolution and brain design, not some transcendental truth in the sky. It may still be useful of course, like other skills our mind provides us with, such as a capacity to value things, a preference for being alive, and the ability to tell pleasure from pain.

Next time you are in a morally fraught argument, consider what Gandhi said: “Victory attained by violence is tantamount to a defeat, for it is momentary.” He’s right; genetic modification would be more long-lasting. Until this is available though, why not try something persuasive like a scalpel to the forehead?

….
Originally published in Woroni

Milk, bread, insert catheter…

Making lists to guide medical procedures saves lives but is unethical, say Americans.

What if a way was found to rescue hun­dreds of thousands of the sickest people in the world’s hospitals, at the cost of a sheet of paper each? Michigan would take up the idea, Spain and a couple of US states would be interested, and then it would be banned in the US for being unethical.

Being in intensive care is dangerous. Not only because having all your organs fail or your brain bleed everywhere is unhealthy, but also because the care is, well… intense. To look after a person in intensive care for a day, a hundred and seventy-eight procedures have to be done on average. Each procedure involves multiple steps and is performed by a collection of professionals struggling to keep their patients alive as different parts of their body fail. Small chances of inevitable human error add up, no matter how good the doctors and nurses are, amounting to about two errors per patient each day.
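A back-of-the-envelope sketch of how those small chances add up, using only the figures above (178 procedures and roughly two errors per patient per day) and assuming, for simplicity, that errors are independent:

```python
# Figures from the text: ~178 procedures per patient per day,
# ~2 errors per patient per day.
procedures_per_day = 178
errors_per_day = 2.0

# Implied per-procedure error rate (assuming errors are independent).
error_rate = errors_per_day / procedures_per_day
print(f"per-procedure error rate: {error_rate:.1%}")  # ~1.1%

# Chance of at least one error per patient in a day at that rate.
p_at_least_one = 1 - (1 - error_rate) ** procedures_per_day
print(f"chance of at least one daily error: {p_at_least_one:.1%}")
```

Even a per-procedure error rate of about one per cent makes at least one error a day nearly certain, however skilled the staff.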

Finger-pointing and suing don’t work to reduce these figures, so what will? You could say human error is inevitable and congratulate doctors and nurses for keeping it as low as they do in a hectic and complex situation. Or, as Peter Pronovost, a critical care specialist at Johns Hopkins Hospital, realised, you could take the same precautions with critically ill patients as you do with shopping or making a cake.

He made a list. It was a list for one procedure: putting in a catheter, the tube for getting fluids in and out of people. Four per cent of catheters develop infections, which means some eighty thousand people per year in the US. Between five and twenty-eight per cent, depending on circumstances, subsequently die.
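The death toll implied by those figures is easy to work out from the numbers in the text alone (eighty thousand infections per year, five to twenty-eight per cent of those patients dying):

```python
# Figures from the text: ~80,000 catheter infections per year in the US,
# with 5-28% of infected patients subsequently dying.
infections_per_year = 80_000
death_rate_low, death_rate_high = 0.05, 0.28

deaths_low = infections_per_year * death_rate_low
deaths_high = infections_per_year * death_rate_high
print(f"implied deaths per year: {deaths_low:,.0f} to {deaths_high:,.0f}")
# 4,000 to 22,400
```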

The list had five steps. It seemed so simple as to be useless. Surely people performing cutting-edge surgery can remember to wash their hands before they do a routine job? For the first month he just gave his list to nurses and asked them to note how often the doctors missed a step. It turned out they missed at least one in about a third of cases. He then asked the nurses to remind the doctors when they missed a step. The catheter infection rate over the next year at Johns Hopkins Hospital dropped from eleven per cent to nothing.

Pronovost made more lists and asked doctors and nurses to make their own. These lists proved so effective that the average length of patient stay in intensive care dropped by half in a few weeks. Pronovost travelled to other cities to spread his astounding results. People were unenthused. However, Michigan agreed to try the idea in 2003 and in eighteen months saved fifteen hundred lives and two hundred million dollars. Since then Rhode Island, New Jersey and Spain have become interested, and there is a new project at the World Health Organization to institute checklists internationally.

At the end of last year, however, the project ceased in America. The Office for Human Research Protections (OHRP), a bureaucratic appendage charged with overseeing ethics in research, decided it was unethical. Their reasoning was that since careful records were being kept of results, it was research, and should have required informed consent from every patient. They even judged it ‘potentially dangerous’, as records meant doctors’ poor practice might be exposed. Protecting doctors from having their performance evaluated is apparently more ethically weighty than ensuring patients aren’t needlessly killed.

After some argument, OHRP rescinded its ban this February, a decision made more significant because it allows similar projects in future. The checklist is still getting nothing like the attention and funds that ineffective bits of equipment for similar purposes have elicited.

Atul Gawande, a surgeon who originally alerted the public to this story through the New Yorker, suggests the lack of interest might be because we like the idea of gallant doctors deftly coping with the complexity and risk the esteemed job entails. Standardised list checking doesn’t fit into anyone’s ideal of heroism. For whatever reason, thousands of people can now die of negligence rather than unyielding complexity, for which we have a remedy.

….
Originally published in Woroni

Criminal retribution

The US houses the highest proportion of its people in prison of any country, as Adam Liptak discusses thought-provokingly. As expected, this appears to reduce crime rates.

How much suffering should the guilty endure for a given reduction in suffering of the innocent? I think a 1:1 ratio at most; that is, it doesn’t matter who suffers. Suffering should be minimised, even if that means the innocent suffer instead of the guilty. Punishment should only be to prevent greater suffering.

***

Liptak also draws attention to the relationship between more democratic appointment of judges in the US and harsher punishment, as people demand fierce retribution. I suspect demand for escalating punishment is a result of fear and angry desire for revenge, rather than widespread consideration of mechanism design for minimising harm, or anything mildly reasoned. I don’t think society should be allowed to inflict harm on its members arbitrarily like this. Should judge appointment be less democratic then?

Perhaps, but this decision can (and should?) only be reached through other democratic decision making. This is the same problem as arises everywhere. The public, through democracy, interferes with people where it has no right to, but the extent to which citizens should be able to interfere with one another through democracy hasn’t been agreed, and so must rely on democratic negotiation.

Redistributing fairness

From Kwame Anthony Appiah’s fascinating longer article on fairness in politics, via Greg Mankiw:

In the 1970s, the Nobel Prize-winning economist Thomas Schelling used to put some questions to his students at Harvard when he wanted to show how people’s ethical preferences on public policy can be turned around. Suppose, he said, that you were designing a tax code and wanted to provide a credit — a rebate, in effect — for couples with children. (I’m simplifying a bit.) In a progressive tax system such as ours, we try to ease the burden on the less well off, so it might make sense to adjust the child credit accordingly. Would it be fair, do you think, to give poor parents a bigger credit than rich parents? Schelling’s students were inclined to think so. If the credit was going to vary with income, it seemed fair to award struggling families the bigger tax break. It would certainly be unfair, they agreed, for richer families to get a bigger one.

Then Schelling asked his students to think about things in a different way. Instead of giving families with children a credit, you’d impose a surcharge on couples with no children. Now then: Would it be fair to make the childless rich pay a bigger surcharge than the childless poor? Schelling’s students thought so.

But — hang on a sec — a bonus for those who have a child amounts to a penalty for those who don’t have one. (Saying that those with children should be taxed less than the childless is another way of saying that the childless should be taxed more than those with children.) So when poor parents receive a smaller credit than rich ones, that is, in effect, the same as the childless poor paying a smaller surcharge than the childless rich. To many, the first deal sounds unfair and the second sounds fair — but they’re the very same tax scheme.

That’s a little disturbing, isn’t it?
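Schelling’s equivalence can be checked with a toy calculation. The dollar figures below are invented purely for illustration (the base tax differs by income to mimic a progressive system); the point is that a scheme giving rich parents a bigger credit and a scheme charging the childless rich a bigger surcharge produce identical bills for every household:

```python
# Invented illustrative figures: a progressive base tax, and a child
# credit that is (deliberately) bigger for the rich.
base_tax = {"rich": 10_000, "poor": 1_000}
credit = {"rich": 500, "poor": 200}

def framing_a(income, has_child):
    """Framing A: everyone pays base_tax; parents get a credit."""
    return base_tax[income] - (credit[income] if has_child else 0)

def framing_b(income, has_child):
    """Framing B: parents' bill is the baseline; the childless pay a
    surcharge (bigger for the childless rich)."""
    baseline = base_tax[income] - credit[income]
    surcharge = credit[income]
    return baseline + (0 if has_child else surcharge)

# The two descriptions are the very same tax scheme.
for income in ("rich", "poor"):
    for has_child in (True, False):
        assert framing_a(income, has_child) == framing_b(income, has_child)
print("identical net tax for every household type")
```

So endorsing the bigger credit for poor parents while also endorsing the bigger surcharge for the childless rich, as Schelling’s students did, is endorsing two different schemes depending on how the same numbers are described.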

Why do people respond this way? There’s no real paradox. The above questions seem to have elicited from the subjects a confusion of aims, in combination with a strong conceptually unpolished [IF rich THEN confiscate money] reflex.

Assume (very) hypothetically that a bonus or penalty should be applied. If it is an incentive, it should apply to the rich and the poor equally, unless there is some reason to incentivise one economic class over the other (e.g. it is better for the rich to procreate to help redistribute wealth, so a greater bonus to them), or unless you think the poor will respond to smaller incentives because they make up a larger proportion of their income (in which case give a bigger bonus or penalty to the rich). That redistribution of wealth is a great idea is no reason for it to be tangled up with this sort of incentive scheme. If a bonus is to be given for the purpose of redistributing wealth to where it is needed (rather than as an incentive, though realising it might be one too), it should presumably go to the poorer.

Confusion about the purpose of intervening leads to an overlooked problem with the conclusion that people are being inconsistent. If a greater penalty is applied to the rich, this is not the same as giving the rich with babies a larger bonus. They have a larger bonus relative to what they would otherwise have, but what they would otherwise have has been reduced more than it has for the poor baby owners. Thus it is not better than what the poor procreators receive. It is a greater incentive, but irrelevant to wealth distribution between the filthy rich and the poor. Similarly, giving a big bonus to poor babyholders is not the same as penalising the other poor, except in terms of incentives.

The above problem arises because where a bonus is paid, people either assume it is for wealth redistribution or habitually assume wealth redistribution should be built into the incentive. Where there is a penalty, it is assumed to be a disincentive. If it were instead for wealth redistribution, penalising the rich should not be considered as benefiting other rich people (relative penalisations within a class are only relevant to incentives).

Why are religious societies more cohesive?

As the Economist reports (and Overcoming Bias discusses), religion brings social cooperation. Attempts to synthesise secular solidarity out of god-free rituals tend to fail. So why is this?

A hypothesis:

Social cohesion is a result of citizens sharing a desire to believe something they all have a tiny private inkling might seem less true if they thought about it too much. They subconsciously know belief is easier when ubiquitously reinforced in social surroundings, and also that their beliefs are more enjoyable than the alternative. Thus they have a strong interest in religious behaviour in others and in their own feeling of unshakable commitment to those who practice it. So they encourage it with enthusiastic participation and try to ensconce themselves as much as necessary to feel safe from reality. If we found conclusive evidence of a god, everyone would be safe, and could get back to non-cohesion; it’s the possibility that the sky is chockers with nothingness that gives everyone the incentive for solidarity.

To test this hypothesis, compare cohesion across other groups with beliefs (religious or otherwise) of varying tenuousness and of varying importance to their believers.