
Economic growth and parallelization of work

Eliezer suggests that increased economic growth is likely bad for the world, as it should speed up AI progress relative to work on AI safety. He reasons that this should happen because safety work is probably more dependent on insights building upon one another than AI work is in general. Thus work on safety should parallelize less well than work on AI, so should be at a disadvantage in a faster paced economy. Also, unfriendly AI should benefit more from brute computational power than friendly AI. He explains,

“Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.” …

“Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.”

I’m sympathetic to others’ criticisms of this argument, but would like to point out a more basic problem, granting all other assumptions. As far as I can tell, the effect of economic growth on parallelization should go the other way. Economic progress should make work in a given area less parallel, relatively helping those projects that do not parallelize well.

Economic growth, without substantial population growth, means that each person is doing more work in their life. This means the work that would have otherwise been done by a number of people can be done by a single person, in sequence. The number of AI researchers at a given time shouldn’t obviously change much if the economy overall is more productive. But each AI researcher will have effectively lived and worked for longer, before they are replaced by a different person starting off again ignorant. If you think research is better done by a small number of people working for a long time than a lot of people doing a little bit each, economic growth seems like a good thing.
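The handoff point can be made concrete with a toy model (the `serial_time` function and all numbers here are hypothetical illustrations, not anything from the post): suppose a project needs a fixed chain of sequential insights, each researcher can contribute only so many before being replaced, and every handoff to a fresh researcher costs some catching-up time.

```python
import math

def serial_time(insights, career_len, handoff_cost):
    """Calendar time for a project needing `insights` strictly sequential
    insights, when each researcher contributes at most `career_len` of
    them and every handoff to a fresh researcher costs `handoff_cost`
    units of catching up on what was already done."""
    handoffs = math.ceil(insights / career_len) - 1
    return insights + handoffs * handoff_cost

# Same 40 insights of total work; doubling each researcher's effective
# career length (the growth effect described above) cuts handoff overhead.
assert serial_time(40, career_len=10, handoff_cost=5) == 55  # 3 handoffs
assert serial_time(40, career_len=20, handoff_cost=5) == 45  # 1 handoff
```

On this toy model, anything that raises how much one person gets done per career reduces the number of ignorant restarts, which is exactly the advantage claimed for serial over parallel research.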

On this view, economic growth is not like speeding up time – it is like speeding up how fast you can do things, which is like slowing down time. Robotic cars and more efficient coffee lids alike mean researchers (and everyone else) have more hours per day to do things other than navigate traffic and lid their coffee. I expect economic growth seems like speeding up time if you imagine it speeding up others’ abilities to do things and forget it also speeds up yours. Or alternatively if you think it speeds up some things everyone does, without speeding up some important things, such as people’s abilities to think and prepare. But that seems not obviously true, and would anyway be another argument.

Value realism

People have different ideas about how valuable things are. Before I was about fifteen the meaning of this was ambiguous. I think I assumed that a tree for instance has some inherent value, and that when one person wants to cut it down and another wants to protect it, they both have messy estimates of what its true value is. At least one of them had to be wrong. This was understandable because value was vague or hard to get at or something.

In my year 11 Environmental Science class it finally clicked that there wasn’t anything more to value than those ‘estimates’.  That a tree has some value to an environmentalist, and a different value to a clearfelling proponent. That it doesn’t have a real objective value somewhere inside it. Not even a vague or hard to know value that is estimated by different people’s ‘opinions’. That there is just nothing there. That even if there is something there, there is no way for me to know about it, so the values I deal with every day can’t be that sort. Value had to be a function of things: the item being valued and the person doing the valuing.

I was somewhat embarrassed to have ever assumed otherwise, and didn’t really think about it again until recently, when it occurred to me that a long list of strange things I notice people believing can be explained by the assumption that they disagree with me on whether things have objective values. So I hypothesize that many people believe that value is inherent in a thing, and doesn’t intrinsically depend on the agent doing the valuing.

Here’s my list of strange things people seem to believe. For each I give two explanations: why it is false, and why it is true if you believe in objective values. Note that these are generally beliefs that cause substantial harm:

When two people trade, one of them is almost certainly losing

People don’t necessarily say this explicitly, but often seem to implicitly believe it.

Why it’s false: In most cases where two people are willing to trade, this is because the values they assign to the items in question are such that both will gain by having the other person’s item instead of their own.

Why it’s believed: There’s a total amount of value shared somehow between the people’s possessions. Changing the distribution is very likely to harm one party or the other. It follows that people who engage in trade are suspicious, since trades must be mostly characterized by one party exploiting or fooling another.
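The mutual-gains point can be checked with a toy trade (the goods and valuations below are made up for illustration):

```python
# Alice holds an apple she values at 1, but values an orange at 5;
# Bob holds an orange he values at 1, but values an apple at 5.
alice = {"apple": 1, "orange": 5}
bob = {"apple": 5, "orange": 1}

# Before trading, each holds the good they value less.
before = alice["apple"] + bob["orange"]   # 1 + 1 = 2
# After swapping:
after = alice["orange"] + bob["apple"]    # 5 + 5 = 10

assert alice["orange"] > alice["apple"]   # Alice individually gains
assert bob["apple"] > bob["orange"]       # Bob individually gains
assert after > before                     # total subjective value rises
```

Nothing objective about the fruit changed; only who holds it did. The "someone must be losing" intuition requires value to sit inside the goods rather than in the valuers.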

Trade is often exploitative

Why it’s false: Assume exploiting someone implies making their life worse on net. Then in the cases where trade is exploitative, the exploited party will decline to participate, unless they don’t realize they are being exploited. Probably people sometimes don’t realize they are being exploited, but one is unlikely to persist in doing a job which makes one’s life actively worse for long without noticing. Free choice is a filter: it causes people who would benefit from an activity to do it while people who would not benefit do not.

Why it’s believed: If a person is desperate he might sell his labor for instance at a price below its true value. Since he is forced by circumstance to trade something more valuable for something less valuable, he is effectively robbed.

Prostitution etc. should be prevented, because most people wouldn’t want to do it freely, so it must be pushed on those who do it

Why it’s false: Again, free choice is a filter. The people who choose to do these things presumably find them better than their alternatives.

Why it’s believed: If most people wouldn’t be prostitutes, it follows that it is probably quite bad. If a small number of people do want to be prostitutes, they are probably wrong. The alternative is that they are correct, and the rest of society is wrong. It is less likely that a small number of people is correct than a large number. Since these people are wrong, and their being wrong will harm them (most people would really hate to be prostitutes), it is good to prevent them from acting on their false value estimates.

If being forced to do X is dreadful, X shouldn’t be allowed

Why it’s false: Again, choice is a filter. For an arbitrary person, doing X might be terrible, but it is still often good for the people who want it. Plus, being forced to do a thing often decreases its value.

Why it’s believed: Very similar to the above. The value of X remains the same regardless of who is thinking about it, or whether they are forced to do it. That a person would choose to do a thing others are horrified to have pressed on them just indicates that the person is mentally dysfunctional in some way.

Being rich indicates that you are evil

Why it’s false: On a simple model, most trades benefit both parties, so being rich indicates that you have contributed to others receiving a large amount of value.

Why it’s believed: On a value realism model, in every trade someone wins and someone loses, so anyone who has won at trading so many times is evidently an untrustworthy and manipulative character.

Poor countries are poor because rich countries are rich

Why it’s false: In some sense it’s true—the rich countries don’t altruistically send a lot of aid into the poor countries. Beyond that there’s no obvious connection.

Why it’s believed: There’s a total amount of value to be had in the world. The poor can’t become richer without the rich giving up some value.

The primary result of promotion of products is that people buy things they don’t really want

Why it’s not obviously true: The value of products depends on how people feel about them, so it is possible to create value by changing how people feel about products.

Why it’s believed: Products have a fixed value. Changing your perception of this in the direction of you buying more of them is deceitful sophistry.

***

Questions:

Is my hypothesis right? Do you think of value as a one-place or two-place function? (Or more?) Which of the above beliefs do you hold? Are there legitimate or respectable cases for value realism out there? (Moral realism is arguably a subset.)

Stop blaming efficiency

Andrew Sullivan, quoting and commenting on Adam Frank:

We’re more efficient than we’ve ever been, but extreme efficiency has drawbacks:

More efficient forestation means running through forests faster. More efficient fishing methods means running through natural fishing stocks faster. … The truth is that we have limits. True connections between family, friends and colleagues can not be compressed down to tightly scheduled “quality time.” The relentless logic of efficiency can unintentionally strip the most valued qualities of human life just as easily as it strips forests.

Under a common meaning, ‘efficiency’ is just getting more of what you want for a given cost. Since people want different things, what is efficient for you may be very inefficient for someone else. If you don’t want deforestation, then my efficient tree harvesting method is not an efficient way to pursue your goals. Often people seem to forget this and think of the fact that other people are efficiently pursuing goals they don’t like as a problem with the concept of efficiency. This can then prompt them to go back and reject the original goal of efficiency in their own endeavours. Which is a very bad idea, if they are hoping to get what they want, without wasting other things they want in the process. Which is very likely what they are hoping for.

For instance if ‘the most valued qualities of human life’ are stripped by spending most of your time, say, efficiently pursuing career productivity, the problem is not that efficiency is bad; the problem is that you are efficiently pursuing the wrong goals, i.e. goals that are not your own, or at least not all of what you value. Being inefficient about, say, work is a terrible strategy for improving your home life, since only a minuscule proportion of the ways to be inefficient at work involve any home life improvement, and most of those are not efficient improvements. Fortunately people using this strategy probably know intuitively that they will have to aim at the set of ways of being inefficient at work that do help their family lives. But once you have got as far as pursuing the values you actually care about, being efficient about them has really got to help, no matter how much your enemies also like efficiency. Similarly, don’t abandon ‘succeeding’ just because bad people also like it.

***

Added: Another example.

When to explain

It is commonly claimed that humans’ explicit conscious faculties arose for explaining to others about themselves and their intentions. Similarly when people talk about designing robots that interact with people, they often mention the usefulness of designing such robots to be able to explain to you why it is they changed your investments or rearranged your kitchen.

Perhaps this is a generally useful principle for internally complex units dealing with each other: have some part that keeps an overview of what’s going on inside and can discuss it with others.

If so, the same seems like it should be true of companies. However my experience with companies is that they are often designed specifically to prevent you from being able to get any explanations out of them. Anyone who actually makes decisions regarding you seems to be guarded by layers of people who can’t be held accountable for anything. They can sweetly lament your frustrations, agree that the policies seem unreasonable, sincerely wish you a nice day, and most importantly, have nothing to do with the policies in question and so can’t be expected to justify them or change them based on any arguments or threats you might make.

I wondered why this strategy should be different for companies, and a friend pointed out that companies do often make an effort at more high level explanations of what they are doing, though not necessarily accurate: vision statements, advertisements etc. PR is often the metaphor for how the conscious mind works after all.

So it seems the company strategy is more complex: general explanations coupled with avoidance of being required to make more detailed ones of specific cases and policies. So, is this strategy generally useful? Is it how humans behave? Is it how successful robots will behave?*

Inspired by an interaction with ETS, evidenced lately by PNC and Verizon

*assuming there is more than one

Cheap signaling

(Image: chocolates, by J. Paxon Reyes via Flickr)

If all this stuff people do is for signaling, wouldn’t it be great if we could find ways of doing it more cheaply? At first glance, this sentiment seems a naive error; the whole point of paying a lot for a box of chocolates is to say you were willing to pay a lot. ‘Costly signaling’ is inherently costly.

But wait. In a signaling model, Type A people can be distinguished from Type B people because they do something that is too expensive for Type B people. One reason this action can be worthwhile for Type As and not for Type Bs is that Type As have more to gain by it. A man who really loves his girlfriend cares more about showing her than a man who is less smitten. A box of chocolates costs the same to both men, but hopefully only the first will buy it.

But there is another reason an action may be worthwhile for As and not for Bs: the cost is higher for Type Bs. Relating some intimate gossip about a famous person is a good signal that you are close with them, because it is expensive for an ignorant person to fake but very cheap for you to send.

Directly revealing your type can be thought of as an instance of this. Taking off your shirt to reveal your handsome muscles is extremely cheap if you have handsome muscles under your shirt and extremely expensive if you do not.

This kind of signaling can be very cheap. It only needs to be expensive for the kinds of people who don’t do it. And since they don’t do it, that cost is not realised. Whereas in the first kind of case I described (exemplified by chocolates), signaling must be relatively expensive. People of different types each have to pay more than the type below them cares enough to pay, i.e. what the person below them would gain by being mistaken for the type above.
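The contrast between the two mechanisms can be sketched numerically (all payoffs below are made-up illustrations, not anything from the post): with differential benefits, the signal’s price must exceed what the lower type would gain by faking, so that price is actually paid; with differential costs, the separating condition can hold while the realized cost stays near zero.

```python
# Toy check of when a signal separates the two types.

def separates(benefit_a, benefit_b, cost_a, cost_b):
    """A separating equilibrium: signaling is worth it for Type A
    (benefit exceeds A's cost) and not worth it for Type B
    (B's cost exceeds B's benefit from being mistaken for A)."""
    return benefit_a > cost_a and benefit_b < cost_b

# 1. Differential benefits (chocolates): the same price for everyone,
#    so the price must exceed what Type B would gain by faking (50).
choc_price = 60
assert separates(benefit_a=100, benefit_b=50,
                 cost_a=choc_price, cost_b=choc_price)
# Realized cost: Type A actually pays the full 60.

# 2. Differential costs (flexing real muscles): nearly free for A,
#    prohibitively expensive for B.
assert separates(benefit_a=100, benefit_b=50, cost_a=1, cost_b=1000)
# Realized cost: only Type A signals, and pays just 1.
```

In both cases the signal separates the types, but only in the first must a large cost actually be incurred by anyone.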

Cases of the second type, like gossip, are not always cheap. Sometimes it is cheaper for the type who sends the signal to send it, but they still have to pay quite a lot before they shake off the other type. If education is for signaling, it seems it is at least partly like this. University is much easier for smart, conscientious people, but if it were only a week long a lot of others would still put in the extra effort.

There can also be outside costs. For instance talking often works the second way. It is extremely cheap to honestly signal that you are an accountant by saying ‘I’m an accountant’, because the social repercussions of being found out to be lying are costly enough to put most people off lying about things where they would be discovered. While this is cheap both for the signalers and the non-signalers, setting up and maintaining the social surveillance that ensures a cost to liars may be expensive.

So if we wanted to waste less on signaling, one way to make signals cheaper would be to find actions with differences in costs to replace actions with differences in benefits. I’m not sure how to do that – just a thought.