Humans of today control everything. They can decide who gets born and what gets built. So you might think that they would basically get to decide the future. Nevertheless, there are some reasons to doubt this. In one way or another, resources threaten to escape our hands and land in the laps of others, fueling projects we don’t condone, in aid of values we don’t care for.
A big source of such concern is robots. The problem of getting unsupervised strangers to carry out one’s will, rather than carrying out something almost but not quite like one’s will, has eternally plagued everyone with a cent to tempt such a stranger with. There are reasons to suppose that the advent of increasingly autonomous robots with potentially arbitrary goals and psychological tendencies will not improve this problem.
If we avoid being immediately trodden on by a suddenly super-superhuman AI with accidentally alien values, you might still expect a vast new labor class of diligent geniuses with exotic priorities would snatch a bit of influence here and there, and eventually do something you didn’t want with the future you employed them to help out with.
The best scenario for human values surviving far into an era of artificial intelligence may be the brain emulation scenario. Here the robot minds start out as close replicas of human minds, naturally with the same values. But this seems bound to be short-lived. It would likely be a competitive world, with strong selection pressures. There would be the motivation and technology to muck around with the minds of existing emulations to produce more useful minds. Many changes that would make a person more useful to another person might involve altering that person’s values.
Regardless of robots, it seems humans will have more scope to change humans’ values in the future. Genetic technologies, drugs, and even simple behavioral hacks could alter values. In general, we understand ourselves better over time, and better understanding yields better control. At first it may seem that more control over the values of humans should cause values to stay more fixed. Designer babies could fall much closer to the tree than children traditionally have, so we might hope to pass our wealth and influence along to a more agreeable next generation.
However even if parents could choose their children to perfectly match their own values, selection effects would determine who had how many children – somewhat more strongly than they do now – and humanity’s values would drift over the years. If parents also choose based on other criteria – if they decide that their children could do without their own soft spot for fudge, and would benefit from a stronger work ethic – then values could change very fast. Or genetic engineering may just produce shifts in values as a byproduct. In the past we have had a safety net because every generation is basically the same genetically, and so we can’t erode what is fundamentally human about ourselves. But this could be unravelled.
Even if individual humans maintain the same values, you might expect innovations in institution design to shift the balance of power between them. For instance, what was once an even fight between selfishness and altruism within you could easily be tipped by the rest of the world making things easier for the side of altruism (as they might like to do, if they were either selfish or altruistic).
Even if you have very conservative expectations about the future, you probably face qualitatively similar changes. If things continue exactly as they have for the last few thousand years, your distant descendants’ values will be as strange to you as yours are to your own distant ancestors.
In sum, there is a general problem with the future: we seem likely to lose control of a lot of it. And while in principle some technology seems like it should help with this problem, it could also create an even tougher challenge.
These concerns have often been voiced, and seem plausible to me. But I summarize them mainly because I wanted to ask another question: what kinds of values are likely to lose influence in the future, and what kinds are likely to gain it? (Selfish values? Far mode values? Long term values? Biologically determined values?)
I expect there are many general predictions you could make about this. And as a critical input into what the future looks like, future values seem like an excellent thing to make predictions about. I have predictions of my own; but before I tell you mine, what are yours?
Do you really see the issue of how your descendants’ values might differ from your values as “The problem of getting unsupervised strangers to carry out one’s will”? With you as their rightful ruler, and them as potentially disobedient slaves?
Are you implying that you don’t see things that way? Any consequentialist agent sees themselves and their values as the rightful ruler of the universe, including other people.
Of course there are reasons to not say this openly, and not act on it naively (eg consequentialist manipulation is not necessarily authoritarian control), but it’s still true.
Yep
Which establishes that consequentialism is irrational.
The descendants may enslave each other, declare themselves each other’s rightful rulers, and so on. Maybe if Katja’s values were preserved, this would be less likely. Preferring to get your values into the future is not endorsing slavery, especially if your values don’t endorse slavery.
In light of the (apparently impregnable) Doomsday Argument, it seems to me that speculation about the far future is unimportant.
(And the simulation argument makes an AI takeoff extremely implausible [even apart from the Doomsday Argument]—although I understand that you don’t find it implausible that you are a simulation. [Quite a bullet to bite.])
You futurological types must have already considered these arguments thoroughly, but from a distance, at a glance, that’s where matters seem to stand.
Eh, the Doomsday argument seems easily refuted to me. You can take it as a starting point, but it doesn’t take into account a lot of information that we have that would be relevant to the number of humans that we expect to ever exist. (For example, the observed lifetime of other species.)
The Doomsday argument assumes an exponentially expanding population. This allows the prior quickly to become arbitrarily small, outweighing all the other evidence.
It seems to me that the same argument can be restated nonanthropically: explosive processes have rapidly diminishing probabilities of continuation.
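A minimal sketch of that arithmetic (my own illustration, with an assumed toy model, not something from the post or comment): if the population doubles each generation and the Doomsday argument’s self-sampling step treats you as a uniformly random observer among everyone who ever lives, then most observers are always found near the end, no matter how long the process runs.

```python
# Toy model (assumption for illustration): generation i has 2**i people,
# and you are a uniformly random draw from everyone who ever lives.

def fraction_in_last_k_generations(total_generations: int, k: int) -> float:
    """Fraction of all observers who live in the final k generations."""
    total = sum(2 ** i for i in range(total_generations))
    late = sum(2 ** i for i in range(total_generations - k, total_generations))
    return late / total

# However long the process runs, roughly half of all observers sit in the
# final generation, and about 97% sit in the final five.
for n in (10, 50, 100):
    print(n,
          round(fraction_in_last_k_generations(n, 1), 3),
          round(fraction_in_last_k_generations(n, 5), 3))
```

On this toy model, the probability of being near the end of an explosive process stays high regardless of its total length, which is one way to read “explosive processes have rapidly diminishing probabilities of continuation.”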
Bad news for the em-society forecast; possibly good news for humanity. The way out is a future without an expanding population. (My guess, a declining but extremely affluent population.)
If humanity can solve the population issue—the ease with which the Chinese accepted the limitation to a single child is encouraging—(and it doesn’t self-destruct for other reasons), then technology should provide for a culture based on material abundance. The “laws of economics” would be superseded with the abolition of scarcity, and human values would take a communist form.
Hmm, interesting. Looking at history, it looks to me like things have been sort-of-steadily improving w.r.t. human rights, women’s rights, nonviolent conflict resolution, etc. I guess a useful thing to ask is, what subset of these “improvements” are real improvements (in the sense that most humans throughout history, and by extrapolation lots of almost-humans too, would consider them improvements), and what subset would be considered improvements just by us early-21st-century humans? I think this is very similar to the question you’re asking: which values are stable, and which are ephemeral?
In general, over sufficiently long timescales, I’d expect useful (in the natural selection sense) values to be stable, and others not. For example, it seems very useful to resolve conflicts nonviolently (but it only works if society can prevent people from gaining by being violent). Equal rights for all persons seems like a worse bet, since that’s likely to be maladaptive in the presence of significant differences between persons (and such differences seems hard to avoid once technology allows us to create or alter minds).
“what kinds of values are likely to lose influence in the future”
– fear of spiders, fear of snakes
– joy in nurturing cute little people
– joy in penis in vagina
– detecting faces in patterns
– reacting emotionally to idiosyncratic human facial expressions
– lust for food beyond what is functional
– sense of ego, sense of unique social self (not to be confused with self-model)
– empathy
Then again, maybe we invent checksums and prevent all mutations.
Are women any more free now than they were 30 years ago, or do they simply believe they have more choices? My argument goes something like this: if in fact market forces mean that families can no longer survive on one income, then that more or less pushes women into the marketplace, removing their own choice of whether they want to work. We simply move them from one area of enslavement to another and call that freedom?
The problem is that when we mess around with market or technological forces, we so often overcome the issue we are dealing with but create a whole new set of problems in the process. (Perhaps men were actually the ones trapped by market forces, and women were free at home?)
Even if we could develop AI that could take care of 95% of labor tasks, would society be better off with a higher unemployment rate, or would more people become lost and not know what to do with their time, cut off from any sense of self-worth? Technology is changing society and raising all sorts of issues for society to face at an ever-quickening rate, even without adding AI into the mix. The beauty in life often comes from the unexplainable; perhaps that is something we cannot produce AI to understand. I don’t think we will fully understand the strength of technology and its positive changes into the future, nor can we fully understand the risks, in the same way that Einstein would not have predicted his work would be used to develop the bomb.
We’re not really as in control of the present as we like to think.