
Owen Cotton-Barratt on GPP

I interviewed Owen Cotton-Barratt about the Global Priorities Project, as part of a ‘shallow investigation’ (inspired by GiveWell) into cause prioritization which will be released soon. Notes from the conversation are cross-posted here from the 80,000 Hours blog, and also available in other formats on my website.

***

Participants

  • Owen Cotton-Barratt: Lead Researcher, Global Priorities Project
  • Katja Grace: Research Assistant, Machine Intelligence Research Institute

Notes

This is a summary made by Katja of points made by Owen during a conversation on March 24 2014.

What the Global Priorities Project (GPP) does

The Global Priorities Project is new, and intends to experiment for a while with different types of projects and then work on those that appear highest value in the longer term. Their work will likely address questions about how to prioritize, improve arguments around different options, and produce recommendations. It will probably be mostly research, but also include for instance some policy lobbying. They will likely do some work with concrete policy-relevant consequences and also some work on general high level arguments that apply to many things. Most features of the project are open to modification after early experimentation. There will be principally two audiences: policy makers and philanthropists, the latter including effective altruists and foundations. GPP has some access to moderately senior government and civil service policy people, and is experimenting with the difficulty of pushing for high impact policies.

Research areas

Research topics will be driven by a combination of importance and comparative advantage. GPP is likely to focus on prioritizing broad areas rather than narrower interventions, though these things are closely linked. It is good to keep an eye on object level questions to ensure that you are thinking about things the right way. Owen is interested in developing frameworks for comparing things. This can produce value both in their own evaluations and through introducing metrics that others want to use, and so making proposals more comparable in general.

Work so far

Unprecedented technological risks

GPP has a draft report on unprecedented technological risks. They have shown it to several people involved in policy and received positive feedback. Somebody requested a stack of printed copies for their office, to hand out to people.

How to evaluate projects in ignorance of their difficulty

Owen is working on a paper about estimating returns from projects where we have little idea how difficult they are. Many research tasks seem to fall into this category. For instance, how much money should we be putting into nuclear fusion? We have some idea of how good it would be, and not much idea of how hard it is. But we are forced to make decisions about this, so we make an implicit statement about likelihoods. And while the statement remains implicit, we may sometimes get it wrong.

Short term changes

In the short time GPP has existed, it has moved to focus less on policy, because experimenting with other things seemed valuable.

Views on methodology

Long run effects

It has been suggested by others that research on long run consequences of policies is prohibitively difficult. Owen believes that improving our predictions in expectation about these long run consequences is hard but feasible. This is partly because our predictions are currently fairly bad.

There are already some informal arguments in the community surrounding GPP about long run implications which it would be good to write up. For instance, there is an argument that human welfare improvements will tend to be better than animal welfare improvements in the long run, because the former have benefits which compound over time, while animal welfare improvements do not appear to. This is a good case where short term benefits predictably decouple from long term benefits, while in other cases short term benefits may be a reasonable proxy.

GPP will likely focus on long run effects to some extent, but not solely. Owen believes they are very important. However he also thinks routes to impact involve bringing people on board with the general methodology of prioritization, and more speculative research is less persuasive. He thinks people interested in prioritization tend to think long run impacts dominate the value of interventions, but focusing there too strongly will cause others to write us off. He also thinks that we will ultimately need to use some short-term proxies for long term benefits.

Quantitativeness

Owen is in favor of a relatively high degree of quantification in this kind of research. However this has caveats, and he advocates awareness of possible dangers of quantification. We can be too trusting of the numbers produced in this way. We should be careful about models. Sometimes it is better to throw up our hands and say ‘we don’t know how to model this’ and give some qualitative considerations than to proceed with a bad model. For long term effects, we are at that stage: the best quantified models may be worse than qualitative arguments. However we should be working toward quantification. One benefit of quantification is that it improves conversations and truth seeking. Even before you know how to model a process well, making your models explicit lets you have explicit conversations with people about the models, and about what is or isn’t wrong with them.

Risks with quantification

Quantification is a natural tool if we want to make comparisons. If we want a shallow picture of what is going on, it is not clear that quantification will be useful.

Trying to break down intuitions into further details can make things worse. You can miss out factors and be unaware of the omission. You can pay too much attention to factors because you put them in your model, or disregard factors because you didn’t. You can confuse people into thinking you are more confident than you are. You can be duped by thinking you have a formula. Many people are quite bad at quantification, which makes it worse to advocate it in general. Then there are simple time costs: explicit quantification is time consuming.

Nonetheless, for questions we are interested in, Owen thinks it is important to try.

Methodological change and progress

GPP hopes to make methodological progress that will be applicable across many kinds of decisions. For instance their current work on evaluation under uncertainty about costs arose from their own work on unprecedented technological risks. After they have developed a general methodology there, they can try to apply it to further problems. Back and forth between concrete prioritization and abstract general questions is likely to characterize the work.

It seems generally useful when looking at more high level questions to pay attention to concrete cases, to check that your thinking is applicable and reasonable there.

Resources

The project currently uses much of Owen’s time and a small amount of several others’ time, perhaps summing to around one and a half full time people. It’s hard to estimate, because some of the meetings that involve other people would probably occur if GPP didn’t exist, under another project. Having a label probably makes somewhat more of these things happen. Niel Bowerman has been putting a nontrivial fraction of his time into it, but he will be cutting back to work on outreach.

The expenses of the organization are roughly one person’s worth of employment, plus some overheads in the form of office rent and shared administrative staff. Some of the work comes from people employed by FHI.

Where is the value of cause prioritization in general?

Owen is optimistic about cause prioritization, because it is neglected, and obviously important.

Current best guesses vs. best

Owen thinks there is quite a large range of cost effectiveness between different things, but not absolutely enormous.

Finding new best interventions vs. marginally improving a lot of good spending

There are different routes to impact with cause prioritization. Owen thinks a major route to impact is through laying bricks of prioritisation methodology. This will help people in the future to do better prioritisation (and could be better than anything we manage in the near-term future).

Among direct effects on funding allocation, there are also substantially different kinds of impact you might hope for. You can uncover new very high impact interventions, and do them. Or you can just get a group of people who are currently doing quite good things with their money to do better things with their money. Owen is slightly more optimistic about the latter, but fairly uncertain.

Object vs. meta level research

Prioritization work should be focused in the short term on a mixture of object level research output and methodological progress. GPP’s time will be split fairly evenly between them, perhaps leaning toward the methodological. It can be hard to work on methodology without engaging with more concrete questions.

Why do others neglect cause prioritization?

Owen’s best explanations for the neglect of cause prioritization research in general are that it’s hard, that it’s hard to evaluate, and that academic incentives for research topic choice are not socially optimal. Also, like most research, its costs are concentrated while its benefits are distributed.

Terminology

The term ‘cause prioritization’ seems suboptimal to Owen, and also to others. Sometimes it is good, but it is used more broadly than how people have traditionally thought of ‘causes’ and confuses people. It is also confusing because people think it is about causation. Owen would sometimes talk about ‘intervention areas’. He doesn’t have a good solution in general, but thinks we should be more actively looking for better terms.

Routes to contributing to prioritization

Thoughts on other organizations

Overall Owen thinks the entire area is under-resourced, so it’s great when other people are working on it. Even unsuccessful work will be valuable as it helps us to learn what works.

GIVEWELL LABS

Owen thinks GiveWell Labs is laying a lot of useful groundwork for prioritization work. The ‘shallow investigations’ they have been focusing on so far derive their value from aggregating knowledge about causes: researching the funding landscape, who is working on problems, and what is broadly being done. This knowledge base can then be used by anyone who is thinking about cause prioritization, whether in GiveWell Labs or outside. So there are big positive externalities from making the in-progress research public.

GiveWell Labs haven’t yet turned this broad knowledge of existing work into comparisons between areas or prioritization between them. Owen is keen to see what their approach will be.

LEVERAGE

Leverage are probably doing some prioritization research, which may be very valuable. So far, however, they haven’t published much. Owen would love to see more of their analysis. Communicating is a cost, but not communicating bears the risk that research will be duplicated elsewhere or that things they discover won’t be built upon.

COPENHAGEN CONSENSUS CENTRE

Owen is a big fan of the work the CCC does. They essentially represent expert economic opinion on global prioritisation.

There are a few reasons not to simply use their conclusions. The cost-benefit analysis which underlies most of their recommendations can in some cases miss important indirect effects. They don’t have a methodology which is strong at evaluating speculative work. And their recommendations are from the stance of global policy, which may not be directly applicable to altruists or even national policy-makers. However, their work remains one of the best resources we have today.

Funding GPP

GPP would welcome more funding. It would spend additional money securing the future of the project and hiring more researchers. It’s not clear how hard it is to attract good researchers, as they do not have the funds to hire another person yet, so have not advertised. At the moment money is the limiting factor in scaling up this research.

They would hire people who would similarly work on a variety of small-scale projects which seem important. Depending on their skills, these people might work more directly on research, or on using the research to leverage additional attention and work from the wider community. They would also be interested in hiring someone with more policy expertise. Owen has looked at this a bit, but it is probably not his comparative advantage.

Other conceivable projects

Influencing foundation giving

There are projects to get foundations to share more of their internal research, such as Glasspockets and IssueLab. Since small amounts of prioritization are done inside foundations, one could try to get these sharing efforts to focus more on sharing prioritization research. This sort of project has occurred to Owen in the past, but since projects like Glasspockets are already doing something like this, it doesn’t seem that neglected, so he thinks it is probably not a high impact opportunity. Also, what the foundations are doing is likely not what we are thinking of when we say ‘cause prioritization’. They pick an area to focus on, then sometimes try to prioritize based on cost effectiveness within that area.

In response to the suggestion that it is best to focus on getting funders to care about prioritization, Owen thinks that may be true one day, but we first need higher quality research to be persuasive.

Another approach to influencing foundation giving is to get people who think about prioritization the right way into the foundations.

Influencing academia

It might be valuable to try to get cause prioritization taken up within academia, and seen as an academically respectable thing to do. This would help both with making the conclusions look more respectable, and in getting more brainpower from the class of people who would like to work at universities.

Who else does things like this?

The economics profession

We should think of academic economics as a part of the reference class of cause prioritization. A lot of economics focuses on long term effects of things. Economists would think of themselves as the experts on how to prioritize things, fairly justifiably. They have a lot of knowledge, which Owen tries to be broadly familiar with.

Owen thinks despite the fact that economists do a lot of relevant work, they tend not to actually produce prioritization of causes. So there may be a large backlog of relevant work to use in prioritizing causes. Owen has some idea of the landscape, though an imperfect one. He thinks it would be great to get more economists working in the area.

Doing good in a very noisy world

Suppose you live in a world where every time you try to do something good, it gives rise to such a giant waterfall of side effects that half the time the net effect of your actions is bad, and half the time it is good but largely from sources you didn’t anticipate. Also suppose that the analogous thing would happen if you tried, hypothetically, to do bad things.

It sometimes seems plausible that we do live in such a world, and this sometimes makes it seem that doing good is a hopeless affair.

However I propose that in the most plausible worlds like this, when you try to do good things, in expectation you do a bit of good, and the good is merely overwhelmed by a vast random term with expected value zero. In that case, even though your actions cause net bad half the time, they have positive expected value, and are about as good in expectation as you thought before considering the side effects. Is that so hopelessness-inducing?

If so, consider a related scenario. Every time you do anything, it has exactly the desired consequences, and no others of importance. Except that it also causes a random number generator to run, and add or subtract a random amount of utility from the world, with expected value zero. Does this seem hopeless, or do you just ignore the random number generator, and do good things?

If our world is very noisy like this, is the aforementioned model a good description?
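The expected-value claim can be sanity-checked with a small simulation (my own illustration, with arbitrarily chosen numbers: each action does 1 unit of intended good plus a zero-mean Gaussian side effect with standard deviation 100):

```python
import random

random.seed(0)

def act(intended_good=1.0, noise_sd=100.0):
    # A small amount of intended good, plus an enormous
    # zero-mean random side effect.
    return intended_good + random.gauss(0, noise_sd)

outcomes = [act() for _ in range(100_000)]
mean = sum(outcomes) / len(outcomes)
frac_net_bad = sum(o < 0 for o in outcomes) / len(outcomes)

print(mean)          # close to 1.0: the intended good survives in expectation
print(frac_net_bad)  # close to 0.5: yet about half of all actions are net bad
```

The mean outcome stays near the intended good even though roughly half of individual actions come out net negative, which is the scenario described above.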

Inspired by a conversation with Paul Christiano, in which he said something like this.

Intuitions and utilitarianism

Bryan Caplan:

When backed into a corner, most hard-line utilitarians concede that the standard counter-examples seem extremely persuasive. They know they’re supposed to think that pushing one fat man in front of a trolley to save five skinny kids is morally obligatory. But the opposite moral intuition in their heads refuses to shut up.

Why can’t even utilitarians fully embrace their own theory?

He raises this question to argue that ‘there was evolutionary pressure to avoid activities such as pushing people in front of trolleys’ is not an adequate debunking explanation of the moral intuition, since there was also plenty of evolutionary pressure to like not dying, and other things that we generally think of as legitimately good. 

I agree that one can’t easily explain away the intuition that it is bad to push fat men in front of trolleys with evolution, since evolution is presumably largely responsible for all intuitions, and I endorse intuitions that exist solely because of evolutionary pressures. 

Bryan’s original question doesn’t seem so hard to answer though. I don’t know about other utilitarian-leaning people, but while my intuitions do say something like:

‘It is very bad to push the fat man in front of the train, and I don’t want to do it’

They also say something like:

‘It is extremely important to save those five skinny kids! We must find a way!’

So while ‘the opposite intuition refuses to shut up’, if the so-called counterexample is persuasive, it is not in the sense that my intuitions agree that one should not push the fat man, and my moral stance insists on the opposite. My moral intuitions are on both sides.

Given that I have conflicting intuitions, it seems that any account would conflict with some intuitions. So seeing that utilitarianism conflicts with some intuitions here does not seem like much of a mark against utilitarianism. 

The closest an account might get to not conflicting with any intuitions would be if it said ‘pushing the fat man is terrible, and not saving the kids is terrible too. I will weigh up how terrible each is and choose the least bad option’. Which is what utilitarianism does. An account could probably concord more with these intuitions than utilitarianism does, if it weighed up the strength of the two intuitions instead of weighing up the number of people involved. 

I’m not presently opposed to an account like that I think, but first it would need to take into account some other intuitions I have, some of which are much stronger than the above intuitions: 

  • Five is five times larger than one
  • People’s lives are in expectation worth roughly the same amount as one another, all else equal
  • Youth and girth are not very relevant to the value of life (maybe worth a factor of two, for difference in life expectancy)
  • I will be held responsible if I kill anyone, and this will be extremely bad for me
  • People often underestimate how good for the world it would be if they did a thing that would be very bad for them.
  • I am probably like other people in a given way, in expectation
  • I should try to make the future better
  • Doing a thing and failing to stop the thing have very similar effects on the future.
  • etc.

So in the end, this would end up much like utilitarianism.

Do others just have different moral intuitions? Is there anything wrong with this account of utilitarians not ‘fully embracing’ their own theory, and nonetheless having a good, and highly intuitive, theory?

Reminders without times

Many times in life, a person wants to do a thing at a different time. For this to happen, the person has to remember about it at that different time. We have very good systems for this, as long as the time can be specified in terms of clock or calendar time. That is, if you can say ‘I want to do this in three days’ or ‘remind me at 2pm tomorrow’, then you can look at a calendar every day, or set alarms and electronic alerts and so on. We also have reasonable systems if the other time doesn’t have to be very specific, beyond ‘later’. One can make a to-do list, or just leave the bill in the middle of the floor.

As far as I know, we have no such excellent ways to remember things at a specific point if the point is known by some other feature, such as ‘the next time at which I’m talking to my mother’, ‘next time I visit Chicago’, or ‘when I’m in a conversation and it seems awkward’.

In general, it is hard to do things when some fact obtains. This is partly because you are unlikely to be constantly checking whether that fact obtains, especially if you have many facts to check for. You can’t just go around asking yourself ‘am I having an awkward conversation? Am I driving? Am I standing up? Am I with Michael?…’. You are of course aware of all of these things anyway, in some sense. If someone asked you whether you were just driving, you would be able to respond without checking. However this does not seem to be sufficient awareness for you to reliably do a thing that you intended to do when driving. Somehow you have to both be aware of the driving, and aware of the ‘if driving, then practice singing’ implication, at the same time, and make the connection.

I’ve thought a bit about how to improve various aspects of my life, and realized after a while that most improvements are hindered by this problem, which is why it got my attention. It seems like I could shower faster, remember new names better, and improve my posture more, if only I noticed when I was in the correct situations to behave in the ways that I would like.

One basic problem is that you can describe a situation in many ways, so even if you ask yourself often ‘what am I doing?’, your description may not involve ‘I’m standing up’, so you won’t remember that you should adopt a good posture.

Here are some suggested solutions to this problem, in case you are interested. I don’t know if any are good, but thought I should share them, since I bothered to find them:

Incentives

Reward yourself. e.g. put some candy in your pocket, and every time you pay attention to whether your conversation partner is getting a word in, you get some. Alternatively, give yourself a ‘behavioral reward’ – smile or say ‘yay’ or something. Ideally, the reward should come quickly after the behavior. As well as reinforcement, a reward that you are aware of will probably also remind you occasionally of the desired behavior. e.g. when I see the pack of strawberry buttons in my bag, I remember what I have to do to get them.

Introduce a reward that you will frequently want, which can be combined with the activity. e.g. take up nicotine gum, and only chew it when you have thought about whether you are going about your current activity in a sensible manner. Always get a coffee at lunch time, then don’t sip it unless you are wearing ear plugs.

Reward yourself for even noticing the context. e.g. if you are in a conversation, and someone says their name for the first time, if you manage to say to yourself ‘hey, a name!’ then you get a prize later. Once you can do this, move up to actually taking the intended action (e.g. remembering the name).

Offer a prize to others if they notice you in the context, without doing the correct thing. e.g. give a dollar to your partner every time they see you slouching.

If you know there will be a desirable thing present at the time you should remember, then make the desirable thing contingent on remembering – e.g. if you know that at the time when you will want yourself to close your email, you will also want to look at a webcomic, allow yourself to look at the webcomic if you close your email. Hopefully at the time when you are considering whether you should look at the webcomic, you will remember that you have a great excuse to, as long as you close your email.

Count times you do the thing, or don’t do it. For instance, if you don’t want to touch your face throughout the day, a tally of the number of times you do it can help.

Make sure you don’t feel bad when you do the thing correctly, for some exogenous reason. e.g. if every time you pay extra attention to what the other party in a conversation wants to talk about, you feel guilty for not doing this naturally, you may be dissuaded from paying attention.

Social effects

Tell others that you are in favor of this thing (though I’ve also heard that committing to things publicly is actively harmful). e.g. If you tell others that you endorse thinking carefully before taking on commitments, you might feel more like the kind of person who does that, and remember to pause and evaluate the next request before agreeing to it.

Associate with people who endorse the thing. e.g. if you want to remember to speak more loudly and clearly, perhaps spend a bit of time at Toastmasters or an acting group.

Other strengthening of mental connections

Choose a more salient contextual trigger, and remember (using any of these techniques) to look for the less salient one when you see the more salient one. e.g. when you are in a lift, remember to check whether you are thinking about something pointless or good.

Visualize the connection: vividly imagine the situation that you want to do the thing in, and imagine yourself doing the thing. Put in lots of details. e.g. if you want to remember to ask an economist a particular question, next time you are talking to an economist, then think about the economists you are likely to talk to, and what economists are like in general, and the kinds of things you might be talking about with one, and the places you might be, and the kind of little cocktail sausages you might be eating, and imagine your awkward segue into this question, and asking them it, and waiting for them to answer.

Offline practice. Actually do the thing you want yourself to do, a number of times. e.g. if you want to do pushups while you wait for the microwave, then go and put something in the microwave right now and do pushups until it finishes. Then do that again, several times. (Try not to get the thing too hot).

Say out loud what you are going to do. e.g. ‘whenever I’m eating, I’m going to watch machine learning lectures’.

External reminders

Modern phone capabilities. You might be able to set it to tell you the next time you are entering the supermarket, or driving a car, etc. If not now, perhaps next time you get a phone.

Large numbers of reminders at not particularly special times. e.g. an alert which comes up on your phone or computer twenty times a day, asking if you are currently hyperventilating. I know someone who just looks through a list of possible contexts that things to remember depend on, roughly every day. e.g. Am I going to New York today? Nope. Am I going to the dentist?…

Noticeable accoutrements. e.g. if you wear a shiny bracelet, or an annoying rubber band, or an itchy sweater, you might just notice it very often. Then every time you notice it, you can say to yourself ‘am I projecting my voice right now?’. This requires you to learn the connection between seeing the shiny bling and asking the question, but that might be easier.

Sticky notes in relevant places. e.g. in your car, ‘look at the road!’.

Make the thing be at a specifiable time. e.g. set an alarm for 6pm which tells you to both eat your meal and call your mother, instead of trying to remember to call your mother whenever you happen to be eating.

Situation design

Change the situation to be one where you will more likely do the thing. If you want to remember to take a tablet with your meal, put the tablets next to the plates. If you want to remember to work out while you watch TV, put the weights in front of the TV. This kind of thing is closely related to making things easier to do, such that you can do them most of the time when you remember them, instead of mostly putting them off.

Make your routine avoid things you don’t want to happen. e.g. if you want to remember to suppress your compulsion to wash your hands, put the soap in the cupboard.

***

I repeat: I don’t know which of these work. I haven’t put a huge amount of time into it.

Should altruists pay for profitable things?

People often claim that activities which are already done for profit are bad altruistic investments, because they will be done anyway, or at least because the low hanging fruit will already be taken. It seems to me that this argument doesn’t generally work, though something like it does sometimes. Paul has written at more length about altruistic investment in profitable ventures; here I want to address just this one specific intuition which seems false.

Suppose there are a large number of things you can invest in, and for each one you can measure private returns (which you get), public returns (which are good for the world, but which you don’t control), and total returns (the sum of those). Also, suppose all returns are diminishing: once an activity is invested in, it pays off less the next time someone invests in it, both privately and publicly.

Suppose private industry invests in whatever has the highest private returns, until there is nothing left it wants to invest in. Then there is a market rate of return: on the margin, more investment in anything gives the same private return, except for some things which always have lower private returns and are never invested in. This is shown in the diagram below as a line with a certain slope on the private curve.

[Figure: Total returns and private returns to different levels of investment.]

There won’t generally be a market rate of total returns, unless people use the total returns to make decisions, instead of the private returns. But note that if total returns to an endeavor are generally some fraction larger than private returns (i.e. positive externalities are larger than negative ones), then the rates of total returns available across interventions that are invested in for private good should generally be higher than the market rate of private returns.

So, after the market has invested in the privately profitable things, the slope of every private returns curve for a thing that was invested in at all will be the same, except those that were never invested in. What do you know about those things? That their private returns slopes must be flatter, and that they have been invested in less.

[Figure: Private returns for four different endeavors. Dotted lines show how much people have invested in each endeavor before stopping. At the point where people stop, all of the endeavors have the same rate of returns (slope).]

What does this imply about the total value from investing in these different options? This depends on the relationship between private value and total value.

Suppose you knew that private value was always a similar fraction of total value, say 10%. Then everything that had ever been invested in would produce 10x market returns on the margin, while everything that had not been would produce some unknown value which was less than that (since the private fraction would be less than market returns). Then the best social investments are those that have already been invested in by industry.
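A toy numerical sketch of this first extreme (my own construction, not from the post: I assume each endeavor has marginal private return of the form a/(1+x), and a market rate r = 1):

```python
# Each endeavor has marginal private return a / (1 + x) at investment level x,
# which is diminishing in x. The market invests in each until the marginal
# return falls to the market rate r; endeavors starting below r get nothing.
r = 1.0
endeavors = {"A": 5.0, "B": 2.0, "C": 0.5}  # a = marginal return of the first dollar

def market_investment(a, r):
    # Solve a / (1 + x) = r for x; invest nothing if even the
    # first dollar pays below the market rate.
    return max(a / r - 1.0, 0.0)

for name, a in endeavors.items():
    x = market_investment(a, r)
    marginal_private = a / (1.0 + x)
    marginal_total = 10.0 * marginal_private  # private value is 10% of total
    print(name, x, marginal_private, marginal_total)
```

Endeavors A and B, which the market funded, both end up offering 10x the market rate in total value at the margin; the never-funded C offers less. So in this extreme, the best social investments are exactly the ones industry already invests in.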

If, on the other hand, public value was completely unrelated to private value, then all you know about the social value of an endeavor that has already been funded is that it is less than it was initially (because of the diminishing returns). So now you should only fund things that have never been funded (unless you had some other information pointing to a good somewhat funded opportunity).

The real relationship between private value and total value would seem to lie between these extremes, and vary depending on how you choose endeavors to consider.

Note on replaceability

Replaceability complicates things, but it’s not obvious how much it changes the conclusions.

If you invest in something, you will lower the rate of return for the next investor in that endeavor, and so will often push other people out of that area, to invest in something else in the future.

If your altruistic investments tend to displace non-altruists, then the things they will invest in will less suit your goals than if you could have displaced an altruist. This is a downside to investing in profitable things: the area is full of people seeking profits. Whereas if altruists coordinate to only do non-profitable things, then when they displace someone, that person will move to something closer to what the displacing altruist likes.

In a world where social returns on unprofitable things are generally lower than social returns on profitable things though, it would be better to just displace a profit-seeking person who will go and do something else profitable and socially useful, unless you have more insights into the social value of different options than represented in the current model. If you do, then altruists might still do better by coordinating to focus on a small range of profitable and socially valuable activities.

For the first case above, where private value is a constant fraction of total value, replaceability is immaterial. If people move out of your area to invest in another area with equal private returns, they still create the same social value. Though note that with the slightly lower rate of returns on the margin, they will consume a bit more instead of investing. Nonetheless, as without the replaceability considerations, it is best here to invest in profitable ventures.

In the second case, where private and public returns are unrelated, investing in something private will push people to other profitable interventions with random social returns. This is less good than pushing altruists to other unprofitable interventions, but it was already better in this case to invest in non-profitable ventures, so again replaceability doesn’t change the conclusion.

Consider an intermediate case where total returns tend to be higher than private returns, but they are fairly varied. Here replaceability means that the value created from your investment is basically the average social return on random profitable investments, not the return on the one you invest in in particular. On this model, that doesn’t change anything (since you were estimating social returns only from whether something was invested in or not), but if you knew more it would. The basic point here though is that just knowing that something has been invested in is not obviously grounds to think it is more or less good as a social investment.

Conclusions

If you think the social value of an endeavor is at least likely to be greater than its private value, and it is being funded by private industry, you can at least lower bound its total value at market returns. Which is arguably a lot better than many giving opportunities that nobody has ever tried to profit from.

Note that in a specific circumstance you may know other things that can pin down the relationship between private and total value better. For instance, you might expect self-driving cars to produce total value that is many times greater than what companies can internalize, whereas you might expect providers of nootropics to internalize a larger fraction of their value (I’m not sure if this is true). So if hypothetically the former were privately invested in, and the latter not, you would probably like to invest more in the former.