Posts

david_reinstein's Shortform 2021-05-31T14:43:29.796Z
What are your top workflow 'blockers'? 2021-05-20T21:01:01.774Z
A corporate skills bake sale? 2019-04-13T15:49:40.178Z
Employee Giving incentives: A shared database... relevant for EA job-seekers and activists 2018-05-19T09:37:01.877Z
Wiki/Survey: Experiences in fundraising/convincing people/organisations to support EA causes 2017-11-25T19:34:06.732Z
Give if you win (innovation in fundraising) 2017-05-26T19:36:09.542Z

Comments

Comment by david_reinstein on Intervention Report: Charter Cities · 2021-06-17T19:56:17.789Z · EA · GW

I finished the second part (podcast 'Found in the struce')

I even read the comments (but not this one)!

Comment by david_reinstein on Resources to learn how to do research · 2021-06-17T13:30:35.891Z · EA · GW

This web book guide I wrote (and hope to continue to build) might be relevant and helpful: Researching and writing for Economics students (Reinstein’s guide; web book, not EA focused, relevant to areas adjacent to Economics) bit.ly/econwriting

(Fixed link)

Comment by david_reinstein on A ranked list of all EA-relevant (audio)books I've read · 2021-06-17T13:16:33.085Z · EA · GW

Thanks for this list Michael. I just wanted to mention that most of these seem to be available as audiobooks on the “all you can eat” service Scribd. May end up being less expensive than Audible for many.

Comment by david_reinstein on Intervention Report: Charter Cities · 2021-06-14T16:47:34.701Z · EA · GW

For those who like audio, see my podcast reading of this post HERE

  • Minimal commentary and explainers (trying to cut back)

  • I got through about half the report, tbc soon

Comment by david_reinstein on Intervention Report: Charter Cities · 2021-06-14T16:46:56.956Z · EA · GW

I'm on it. Podcast reading HERE

  • Minimal commentary and explainers (trying to cut back)
  • I got through about half the report, tbc soon
Comment by david_reinstein on Why I've come to think global priorities research is even more important than I thought · 2021-06-07T17:14:26.389Z · EA · GW

I'm doing a series of recordings of EA Forum posts on my "found in the struce" podcast, also delving into the links and with my own comments.

I've just done an episode on the present post HERE

I also did one on @weeatquice's post HERE

Let me know your thoughts, and whether it's useful. I think you can also engage directly through the Anchor app by leaving a voice response or something.

Comment by david_reinstein on The case of the missing cause prioritisation research · 2021-06-07T17:11:11.181Z · EA · GW

I'm doing a series of recordings of EA Forum posts on my "found in the struce" podcast, also delving into the links and with my own comments.

  • I've just done an episode on the present post HERE

  • I also did one on Ben Todd's post HERE

  • Next I'll do one on the comments section on this post, I think

Let me know your thoughts, and whether it's useful. I think you can also engage directly through the Anchor app by leaving a voice response or something.

Comment by david_reinstein on Charity Navigator acquired ImpactMatters and is starting to mention "cost-effectiveness" as important · 2021-06-03T00:10:30.460Z · EA · GW

I have some comments following up on this in this shortform here. (By the way, I wrote that before seeing your post)

So far the outcomes don't seem great to me, but I think there is still room for things to improve. I hope to keep at this.

Comment by david_reinstein on david_reinstein's Shortform · 2021-06-02T18:04:57.351Z · EA · GW

Thank you, I had not seen Luke Freeman @givingwhatwecan's earlier post

That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.

I doubt CN would have acquired IM just to bury it; there might be some room for positive suasion here.

Comment by david_reinstein on A central directory for open research questions · 2021-06-02T00:03:00.484Z · EA · GW

Noticeable lack of Global Health and Development lists/topics, particularly as this is where most individual EA giving is going. Hope I can help with this at some point.

Comment by david_reinstein on Announcing ImpactMatters: Auditing Charity Impact across Causes · 2021-05-31T14:50:51.453Z · EA · GW

Note they have been taken over by Charity Navigator; this has strong potential, but I'm concerned that it may not be done right. (See linked shortform post).

Comment by david_reinstein on david_reinstein's Shortform · 2021-05-31T14:43:30.072Z · EA · GW

ImpactMatters acquired by CharityNavigator; but is it being incorporated/presented/used in a good way?

ImpactMatters was founded in 2015 by Dean Karlan and Elijah Goldberg. They brought evidence-based impact ratings to a wider set of charities than GiveWell. Rather than focusing only on the very most effective charities they investigated impact and effectiveness across a wider range of charities willing to participate. (In some ways, this resembled SoGive). E.g., "in November 2019, ImpactMatters released over 1,000 ratings."

I saw strong potential for Impact Matters to move an EA-adjacent impactfulness metric beyond a small list of GiveWell and ACE charities, to move the conversation, get charities to compete on this basis, and ‘raise awareness’ (ugh, hate that expression). (I was not so happy about their rating much-less-impactful USA-based charities alongside international charities without making the distinction clear, but perhaps that was a necessary evil.)

In late 2020 CharityNavigator acquired Impact Matters. They have added "Impact and Results Scores" for 100 or more charities, and these are incorporated into their 'Encompass Rating' but not, if I understand correctly, into their basic and most prominent and famous "stars system" (it is complicated).

I think this has great positive potential, for the same reasons I thought Impact Matters had potential... and even more so for 'bringing this into the mainstream'.

However, I'm not fully satisfied with the way things are presented:

  1. The Impact Ratings don't seem to convey a GiveWell-like 'impact per dollar' measure
  2. In the presentation, they are a bit folded into and mixed up with the Encompass ratings. E.g., I couldn't figure out how to sort or filter charities by their 'Impact and Results Score' itself.
  3. Impact Ratings are not prominent or mentioned when one is looking through most categories of charities (e.g., my mother was looking for charities her organization could support dealing with "Human trafficking, COVID-19, hunger, or the environment" and nothing about impact came up)
  4. In some presentations on their page, cause categories with order-of-magnitude differences in impact are presented side-by-side, but the ratings are only comparable within a category. Thus, a charity building wells in Africa may receive a much lower score, and thus appear to be much less effective, than a charity giving university scholarships to students in the USA.
  5. They only have impact ratings for eight charities working internationally (vs. 186 ratings for charities that only work within regions of the USA, I believe), and none that are animal-focused or otherwise EA-relevant, as far as I know.

What do you think? Is this being used well? How could it be done better? How could we push them in the right direction?

Comment by david_reinstein on What are your top workflow 'blockers'? · 2021-05-22T14:57:38.366Z · EA · GW

Or maybe these are the mediators and not the prime movers. When I am in certain mental states most of the problems that I mention above can become much more severe or they can disappear entirely.

Comment by david_reinstein on How much do you (actually) work? · 2021-05-20T20:57:50.907Z · EA · GW

Can you make this an anonymous poll? (My employers read this). The answers will be more useful/useable.

  1. When I've Toggl-tracked etc. I usually get 40-60 hours per week including weekends, but it's very hard to break it down as there are so many distractions.

  2. Probably only 8-10 hours per week of 'real deep work excluding coding'

  3. Maybe 1/3 break time, really hard to know.

Comment by david_reinstein on EA Survey 2020: How People Get Involved in EA · 2021-05-20T15:37:54.648Z · EA · GW

IMO it is hard to know what inference to draw from these comparisons.

Firstly, making multiple comparisons obviously raises the risk of a "false-positive" ... a result that is merely due to chance/sampling.

Secondly, with 'multiple hurdles' it's hard to know how to compare like for like....

The share of not highly engaged non-males which had 'personal connection' as an important factor for involvement was slightly higher than the male counterpart

--> But note that the involvement factors may be driving engagement itself, and doing so differently for males and females
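The multiple-comparisons concern can be quantified with a quick simulation (purely illustrative numbers, not the survey's actual data): even when every group difference is pure noise, the chance of at least one nominally "significant" result grows quickly with the number of comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

def familywise_error_rate(n_tests, z_crit=1.96, n_sims=2000, n=200):
    """Chance of at least one |z| > z_crit 'significant' group difference
    when every one of n_tests independent comparisons is pure noise."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(size=(n, n_tests))  # group 1 (e.g., males)
        b = rng.normal(size=(n, n_tests))  # group 2 (e.g., non-males)
        diff = a.mean(axis=0) - b.mean(axis=0)
        se = np.sqrt(a.var(axis=0, ddof=1) / n + b.var(axis=0, ddof=1) / n)
        hits += (np.abs(diff / se) > z_crit).any()
    return hits / n_sims

print(familywise_error_rate(1))   # ≈ 0.05 for a single test
print(familywise_error_rate(10))  # ≈ 0.40, i.e., roughly 1 - 0.95**10
```

With ten noise-only comparisons, a "false positive" somewhere is the rule rather than the exception, which is why some multiple-testing adjustment (Bonferroni or similar) matters here.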

Comment by david_reinstein on EA Survey 2020: How People Get Involved in EA · 2021-05-20T01:17:50.532Z · EA · GW

Also note that females tend to be less engaged, although the differences are not extremely large. See GRAPHS HERE -- that link is a preview of results we will put out in our forthcoming 'Engagement' post.

Comment by david_reinstein on EA Survey 2020: How People Get Involved in EA · 2021-05-20T01:14:48.314Z · EA · GW

I think this is what you want?

(R note: I haven't figured out how to purrr::map list splits of the dataframe to ggplot, so I can't automate these easily yet)

Comment by david_reinstein on Fact checking comparison between trachoma surgeries and guide dogs · 2021-05-19T23:13:28.074Z · EA · GW

I like these examples but they do have some limitations.

I'm still searching for some better examples that are empirically robust as well as intuitively powerful.

(I'm looking for the strongest references to support my claim here that "there is a strong case that most donations go to charities that improve well-being far less per-dollar than others." Of course, I'm willing to admit there's some possibility that we don't have strong evidence for this.)

1) Differences in income: This will not be terribly convincing to anyone who doesn't already accept the idea of vastly diminishing marginal utility, and there is the standard (inadequate but hard to easily rebut) objection that "things are much cheaper in developing countries".

2) The cost to save a life: Yes, rich country governments factor this into their calculations, but is this indeed the calculation that is relevant when considering "typical charities operating in rich countries?" It also does not identify a particular intervention that is "much less efficient".

3) Cost per QALY/ UK NHS: Similar limitations as in case 2.

What is the strongest statistic or comparison for making this point? Perhaps Sanjay Joshi of SoGive has some suggestions?

Perhaps making a comparison based on the tables near the end of Jamison, D. T. et al (2006). Disease control priorities in developing countries? 2006 was a long time ago, however.

Comment by david_reinstein on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2021-05-19T22:57:32.358Z · EA · GW

@cole_haus: I really like this approach.

One thing that is not clear to me:

  • Do you work at or with GiveWell?
  • Have you shared or discussed this work with anyone at GiveWell?

I think it's something they should be attuned to, and I'd like to see them go more in the direction of open, transparent, and cleanly-coded models.

Comment by david_reinstein on A Research Framework to Improve Real-World Giving Behavior · 2021-05-19T22:46:42.953Z · EA · GW

@jon_behar: at least a handful of the links are not working, such as the one on "1) Improving the quality of giving (getting people to do more good with each dollar given)" ... can you fix these?

Comment by david_reinstein on EA Survey 2020: Demographics · 2021-05-19T22:42:42.966Z · EA · GW

Let's presume that the 'share non-straight is' a robust empirical finding and not an artifact of sample selection or of how the question was asked, or of the nonresponse etc. (We could dig into this further if it merited the effort)...

It is indeed somewhat surprising, but I am not wholly surprised, as I expect a group that is very different in some ways from the general population may likely be very different in other ways, and we may not always have a clear story for why. If we did want to look into it further, we might look into what share of the vegan population, or of the 'computer science population', in this mainly very-young age group, is not straight-identified. (Of course, those numbers may also be very difficult to gather, particularly because of the difficulty of getting a representative sample of small populations, as I discuss here.)

This may be very interesting from a sociological point of view but I am not sure if it is a first order important for us right now. That said, if we have time we may be able to get back to it.

Comment by david_reinstein on EA Survey 2020: Demographics · 2021-05-15T17:44:45.452Z · EA · GW

I was also surprised, but obviously we are far from a random sample of the population, there is a very unusual 'selection' process to

  • know about EA
  • identify with EA
  • take the survey

E.g. (and it's not a completely fair analogy, but): about 30% of 2019 respondents said they were vegan, vs. about 1-3% of comparable populations

Perhaps a better analogy: looking quickly at the 2018-2019 data, roughly half of respondents studied computer science. This compares to about 5% of US degrees granted, or 10% if we include all engineering degrees.

But is this worth pursuing further? Should we dig into the surprising ways the EA/EA-survey population differs from the general population?

Comment by david_reinstein on Possible misconceptions about (strong) longtermism · 2021-04-13T19:21:47.265Z · EA · GW

In response, you stated:

However, it is worth noting that it is possible that longtermists may end up reducing suffering today as a by-product of trying to improve the far future.

It might be worth re-stating this. Thinking about objective functions and constraints, either

R1. SLT implies that resources should be devoted in a way that does less to reduce current suffering (i.e., implies more current suffering than absent SLT) or

R2. SLT does not change our objective function, or it coincidentally implies an allocation that has no differential effect on current suffering (a 'measure zero', i.e., coincidental result)

R3. SLT implies that resources should be devoted in a way that leads to less current suffering

R3 seems unlikely to be the case, particularly if we imagine bounds on altruistic capacity. And, if there were an approach that could use the same resources to reduce current suffering even more, it already should have been chosen in the absence of SLT.

If R2 is the case then SLT is not important for our resource decision so we can ignore it.

If R1 holds (which seems most likely to me), then following SLT does imply an increase in current suffering, and we are back to the main objection.

Comment by david_reinstein on Possible misconceptions about (strong) longtermism · 2021-04-13T19:11:58.544Z · EA · GW

Possible misconception: “Greaves and MacAskill say we can ignore short-term effects. That means longtermists will never reduce current suffering. This seems repugnant.”

'This seems repugnant' doesn't seem like a justifiable objection to me, so not something an advocate of SLT should be obliged to take on directly.

If I said "this doctor's theory of liver deterioration suggests that I should reduce my alcohol intake, which seems repugnant to me", you would not feel compelled to respond that "actually, some of the things the doctor is advocating could allow you to drink more alcohol".

(I suspect that beyond the "this seems repugnant" there is a more coherent critique -- and that is the critique we should focus on.)

Comment by david_reinstein on Should EA Buy Distribution Rights for Foundational Books? · 2021-04-02T02:46:07.670Z · EA · GW

Good arguments. I'd personally love if we found a way to move to a different economic model for all information goods. But particularly here, free distribution seems important.

Possibly worth considering: motivating future authors with prize-based incentives; prize based on number of downloads/reads/upvotes of their books. Of course the authors may be credit-constrained, but perhaps others could finance them by buying shares in the future potential prizes?

Comment by david_reinstein on EA Survey 2018 Series: Donation Data · 2021-03-25T23:15:13.049Z · EA · GW

Thanks Greg, I appreciate the feedback.

Some of this depends on what our goal is here. Is it to maximize 'prediction' and if so, why? Or is it something else? ... Maybe to identify particularly relevant associations in the population of interest.

For prediction, I agree it’s good to start with the largest amount of features (variables) you can find (as long as they are truly ex-ante) and then do a fancy dance of cross-validation and regularisation, before you do your final ‘validation’ of the model on set-aside data.

But that doesn’t easily give you the ability to make strong inferential statements (causal or not), about things like ‘age is likely to be strongly associated with satisfaction measures in the true population’. Why not? If I understand correctly:

The model you end up with, which does a great job at predicting your outcome

  1. … may have dropped age entirely or “regularized it” in a way that does not yield an unbiased or consistent estimator of the actual impact of age on your outcome. Remember, the goal here was prediction, not making inferences about the effect of any particular variable or set of variables …

  2. … may include too many variables that are highly correlated with the age variable, thus making the age coefficient very imprecise

  3. … may include variables that are actually 'part of the age effect' you cared about, because they are things that go naturally with age, such as mental agility

  4. Finally, the standard ‘statistical inference’ (how you can quantify your uncertainty) does not work for these learning models (although there are new techniques being developed)
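Points 1 and 2 can be seen in a toy simulation (hypothetical data, not the survey's): a ridge-style penalty shrinks the "age" coefficient well below its true value, and adding a strongly age-correlated proxy leaves the individual coefficients imprecise even though the combined fit is fine.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

age = rng.normal(size=n)
proxy = age + 0.1 * rng.normal(size=n)  # an age-correlated trait, e.g., 'mental agility'
y = 1.0 * age + rng.normal(size=n)      # the true age effect is exactly 1.0

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y; lam=0 gives OLS."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

X1 = age[:, None]
print(ridge(X1, y, 0.0))    # OLS: close to the true 1.0
print(ridge(X1, y, 500.0))  # heavy penalty: coefficient shrunk far below 1 (biased)

X2 = np.column_stack([age, proxy])
print(ridge(X2, y, 0.0))    # collinear columns: the 'age effect' splits across them
```

The regularized fit may predict just as well, but its "age" coefficient is no longer an unbiased estimate of anything; and with the collinear proxy included, the two coefficients can land far from (1, 0) individually even though their sum stays near 1.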

Comment by david_reinstein on A list of EA-related podcasts · 2021-02-06T04:59:33.277Z · EA · GW

Thanks for this. I've integrated this list as well as @pablo's and a couple I added ('Not Overthinking' and 'Great.com Talks With') into an Airtable

View only

or you can collaborate on this base HERE

Comment by david_reinstein on LSE EA’s Fellowship Application Scores Moderately Predicted Engagement and Discussion Quality · 2021-02-05T18:27:38.877Z · EA · GW

I think we tend to confuse 'lack of strong statistical significance' with 'no predictive power'.

A small amount of evidence can substantially improve our decision-making...

... even if we cannot conclude that 'data with a correlation this large or larger would be very unlikely to be generated (p<0.05) if there were no correlation in the true population'.

  1. We, very reasonably, substantially update our beliefs and guide our decisions based on small amounts of data. See, e.g., the 'Bayes rule' chapter of Algorithms to Live By

  2. I believe that for optimization problems and decision-making problems we should use a different approach both to design and to assessing results... relative to when we are trying to measure and test for scientific purposes.

This relates to 'reinforcement learning' and to 'exploration sampling'.

We need to make a decision in one direction or another, and we need to consider the costs and benefits of collecting and using these measures. I believe we should take a Bayesian approach, updating our belief distribution,

... and considering the value of the information generated (in industry, the 'lift', 'profit curve' etc) in terms of how it improves our decision-making.
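The kind of small-sample updating in point 1 can be sketched with a conjugate Beta-Binomial example (the numbers are purely illustrative): a handful of observations moves the posterior mean substantially, well before anything would pass a significance test.

```python
from fractions import Fraction

def posterior_mean(successes, trials, a=1, b=1):
    """Posterior mean under a Beta(a, b) prior after observing `successes`
    out of `trials`: the posterior is Beta(a + successes, b + trials - successes)."""
    return Fraction(a + successes, a + b + trials)

print(posterior_mean(0, 0))  # flat prior: mean 1/2
print(posterior_mean(4, 5))  # after seeing 4 of 5: mean 5/7, a large shift from 1/2
```

Five data points shift the best estimate from 50% to about 71%; that is plenty to change a real decision even though no frequentist test on n=5 would be "significant".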

Note: I am exploring these ideas and hoping to learn, share and communicate more. Maybe others in this forum have more expertise in 'reinforcement learning' etc.

Comment by david_reinstein on EA Survey 2019 Series: How many people are there in the EA community? · 2021-02-02T23:06:58.516Z · EA · GW

Whether or not it's good or bad, it's a cool idea!

Comment by david_reinstein on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-30T03:01:30.472Z · EA · GW

Thanks for sharing the confidence intervals. I guess it might be reasonable to conclude from your experience that the interview scores have not been informative enough to justify their cost.

What I am saying is that it doesn't seem (to me) that the data and evidence presented allows you to say that. (But maybe other analysis or inference from your experience might in fact drive that conclusion, the 'other people in San Francisco' in your example.)

But if I glance at just the evidence/confidence intervals it suggests to me that there may be a substantial probability that in fact there is a strongly positive relationship and the results are a fluke.

On the other hand I might be wrong. I hope to get a chance to follow up on this:

  • We could simulate a case where the measure has 'the minimum correlation to the outcome to make it worth using for selection', and see how likely it would be, in such a case, to observe correlations as low as you observed

  • Or we could start with a minimally informative 'prior' over our beliefs about the measure, and do a Bayesian updating exercise in light of your observations; we could then consider the posterior probability distribution and consider whether it might justify discontinuing the use of these scores
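The first simulation idea can be sketched as follows (the threshold r = 0.3 and cohort size n = 25 are made-up numbers for illustration, not Yale EA's actual figures):

```python
import numpy as np

rng = np.random.default_rng(2)

def p_sample_corr_at_or_below(true_r, n, cutoff=0.0, n_sims=20_000):
    """If the true correlation is true_r and we observe n pairs, how often
    does the sample correlation land at or below `cutoff`?"""
    count = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        # y is constructed so corr(x, y) = true_r in the population
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
        count += np.corrcoef(x, y)[0, 1] <= cutoff
    return count / n_sims

# Hypothetical: suppose scores are worth keeping only if the true r >= 0.3,
# and one cohort yields n = 25 observations.
print(p_sample_corr_at_or_below(0.3, 25))  # a non-trivial chance of seeing r <= 0
```

If a genuinely-useful measure would still produce a zero-or-negative sample correlation a meaningful fraction of the time at this sample size, then observing one is only weak evidence against the measure.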

Comment by david_reinstein on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-29T15:49:00.075Z · EA · GW

Interesting, but based on the small sample and limited range of scores (and I also agree with the points made by Moss and Rhys-Bernard) ...

I'm not sure whether you have enough data/statistical power to say anything substantially informative/conclusive. Even saying 'we have evidence that there is not a strong relation' may be too strong.

To help us understand this, can you report (frequentist) confidence intervals around your estimates? (Or even better, a Bayesian approach involving a flat, relatively uninformative prior and a posterior distribution in light of the data?)

I'll try to say more on this later. A good reference is: Harms and Lakens (2018), “Making ‘null effects’ informative: statistical techniques and inferential frameworks”

Also, even 'insignificant' results may actually be rather informative for practical decision-making... if they cause us to reasonably substantially update our beliefs. We rationally make inferences and adjust our choices based on small amounts of data all the time, even if we can't say something like 'it is less than 1% likely that what I just saw would have been observed by chance'. Maybe 12% (p>0.05!) of the time the dark cloud I see in the sky will fade away, but seeing this cloud still makes me decide to carry an umbrella... as now the expected benefits outweigh the costs.

Comment by david_reinstein on EAG survey data analysis · 2021-01-25T13:22:23.931Z · EA · GW

Can you share the data and or code?

Comment by david_reinstein on EAG survey data analysis · 2021-01-25T13:21:53.630Z · EA · GW

Our demographic analysis of community-related variables yielded no significant differences based on participants’ age, ethnicity, or gender.

Lack of power or a tightly bounded null effect? Note that even without “strong statistical significance” in standard tests we can meaningfully update our beliefs.

Of course we may need to adjust for multiple testing.

Also, statistical inference after machine learning presents some challenges. Relevant here?

Comment by david_reinstein on EAG survey data analysis · 2021-01-25T13:19:43.554Z · EA · GW

As our models “only” examine the correlation between different variables and LTR, we cannot make any conclusive statements about a causal relationship linking "higher scores for community-related activities" to "higher LTR". Technically, it is possible that the connections we found are spurious and do not represent the true causal mechanism which leads people to recommend an event more highly.

Perhaps this could be presented earlier on; to me it seems likely that there is reverse causality, other channels, etc.

Comment by david_reinstein on EA Survey Series 2019: EA cities and the cost of living · 2021-01-21T14:21:55.517Z · EA · GW

this seems to confirm that most cities with large EA populations are quite expensive, which may serve as a barrier to EAs wishing to live near a large number of other EAs.

Does this suggest a coordination problem, or are there important countervailing reasons why EAs live in the expensive places?

Comment by david_reinstein on EA Survey Series 2019: EA cities and the cost of living · 2021-01-21T14:21:30.817Z · EA · GW

Can someone share the data tables? Thanks.

Comment by david_reinstein on EA Survey 2019 Series: Donation Data · 2021-01-13T22:23:18.371Z · EA · GW

I'm involved with doing this analysis this year, and I hope we can go in this direction. Perhaps not in the first iteration, but as we refine it.

Comment by david_reinstein on Are there EA-aligned organizations working on improving the effectiveness of corporate social responsibility/corporate giving strategies? · 2021-01-13T17:50:08.499Z · EA · GW

Would love to touch base on this, let's chat.

I had a project doing some scoping related to this called innovationsinfundraising.org -- see especially the links HERE

Comment by david_reinstein on The case of the missing cause prioritisation research · 2020-08-21T15:35:52.139Z · EA · GW

Great post! I left a variety of comments and suggestions within your post using hypothes.is. If you want to check them out, you'll need to install the browser add-in and get a free account.

I prefer to comment within the text rather than here at the bottom, cutting and pasting quotes. Anyone else here tried hypothes.is?

(By the way, I'm an academic economist. I don't have any stake in hypothes.is. I just like it.)

Comment by david_reinstein on Growth and the case against randomista development · 2020-08-19T13:56:17.896Z · EA · GW

I am an academic economist. I agree that economic development is important and is likely responsible for the majority of welfare gains in poor countries (although the spread of medical treatments, eliminating polio etc., are also huge). Yes, we have some good evidence that certain policies substantially inhibit development. And we should advocate against these policies.

However, some parts of the argument seem a bit overstated or unfair to me. Some points

  1. "Randomista": that is not the term the advocates would prefer, is it?

  2. Even if the best policies are pursued, the benefits will be slow and uneven. In the meanwhile, donations to prevent malaria, fund micronutrients, and even provide fistula and eye surgery can have a huge impact/$.

    • I don't think many donors will decide between giving $ to bednets and giving it to fund advocacy for pro-trade policies. However, presenting the benefits of the former as 'a drop in a vast ocean' will discourage giving overall
  3. The benefits of these health interventions are not primarily their impact on boosting economic growth/income. They yield direct welfare benefits. The comparisons you highlight above make it seem as if the main intention of these is to boost growth/income.

  4. The main issue: You state

friendly economic policies can often be orders of magnitude more cost-effective than direct funding of evidence-based interventions.

Perhaps, but that is not the issue from a donor point of view. The issue is the cost-effectiveness of money donated to support these policies. I see very little reason to believe that "funding a bunch more economists" (again, I say this as an economist myself) would have a substantial beneficial impact, much less on a per-dollar basis.

Maybe it would, but I think there are orders of magnitude of uncertainty over this impact. The assumptions for this in the spreadsheet seem simply like guesses to me.

My reason to be a bit skeptical: we have many, many economists out there. I don't see how more economists, or even more think tanks, will do much to clearly advance the argument against the known-to-be-bad growth policies.

Comment by david_reinstein on Growth and the case against randomista development · 2020-08-19T13:33:59.338Z · EA · GW

Agreed. Also, you should call people by what they refer to themselves as. I think 'Randomista' comes across as pejorative.

Comment by david_reinstein on Important EA-related questions EA would like to know from general public · 2019-12-23T09:52:06.272Z · EA · GW

I just added some links to the shared google doc also

Comment by david_reinstein on Important EA-related questions EA would like to know from general public · 2019-12-23T09:41:12.111Z · EA · GW

I may be too late to the game (been away, and @DavidJanku only recently alerted me to this), but some quick thoughts:

The current version seems to have many questions that will tell you about how people either 'consciously answer this question to themselves' or how they want to present themselves. It may not reveal their true motivations. There's a lot of work pointing in this direction.

I would try to focus more on very specific questions that permit less constructed justification and 'lying to oneself.'

It may be very helpful to present simple scenarios that ask for a hypothetical response, such as "which of the following charities would you be more likely to donate to?" and "how does the following information make you feel?" (although the latter may also invite motivated reasoning). My recent paper with Robin Bergh covers some of this, but with real donation choices; it would still be interesting to consider hypothetical choices and responses.

I have a wiki/hub that attempts to summarize much of the evidence on Charitable giving, with a particular focus on the consideration of effectiveness.

See INNOVATIONSINFUNDRAISING.ORG. There is also an underlying database I can share (with more detail and recent updates) if you message me at daaronr AT gmail.com

I have also done a lot of work recently summarizing the evidence on "How do people respond to effectiveness information".

E.g., pasting some text from a recent grant application:

So far, we have limited evidence on these questions, and the existing evidence is far from systematic or consistent. Results are mixed: e.g., Small et al. vs. Karlan, Parsons, and the Reinstein et al. work with Donor Voice. Previous studies have largely relied on hypothetical and small-scale lab-based experiments (Metzger & Günther, 2015; Berman et al., 2018; Verkaik, 2016). Only a few large-scale natural field experiments have been run. Karlan and Wood (2017) simultaneously varied emotional and cost-effectiveness information, with the latter presented largely qualitatively and in a particular 'scientific credentials' frame. Parson (2007) presented accounting information (uninformative about per-dollar effectiveness). In contrast, our field experiment project aims at large sample sizes in real donation environments, testing a set of particularly relevant and practical framings of real per-dollar impact information in the presence/absence of an emotional appeal (further measuring interaction effects, as in Bergh and Reinstein, 2019).

Please message me for more detail.

Comment by david_reinstein on A Framework for Thinking about the EA Labor Market · 2019-05-11T20:20:09.222Z · EA · GW

Well written. I agree with most of the points. A minor quibble: I'm not sure I'd consider the wages in the nonprofit sector/EA to be 'structurally suppressed'. There are other considerations (both good and bad) limiting wages, particularly:

  1. Donors may be repelled by high salaries (but this is less likely to be true in EA imho)

  2. There is a case (and several good academic papers arguing this, e.g., Steinberg, 2008; Delfgaauw and Dur, 2002) that a lower salary can screen for more cause-motivated employees; where performance and outcomes cannot be as easily monitored and incentivized, intrinsic motivation is important. This is not to say that I think a higher salary will attract less-motivated employees, but such employees can be hard to distinguish from others. On net, the gain from a higher salary may not be as strong in the nonprofit/EA sector.

You note:

> The Centre for Effective Altruism is hiring a new CEO. Should it restrict its search to candidates willing to show their commitment by pledging everything they earn above a modest amount to effective charities? (Purely hypothetical question, I have no reason to think CEA is doing this.)

As we discussed in the doc, I suggest there is a middle ground: Give preference to hiring people who have committed to donate at least a certain share of their income (and have demonstrably fulfilled this pledge).

Comment by david_reinstein on Salary Negotiation for Earning to Give · 2019-04-13T14:58:40.367Z · EA · GW
How would you ensure people stick to their promise to donate, and don't just use the advice/time for non-earning-to-give purposes?

1. We could offer this only to those who already have a public verified record of substantial EA giving.

This would seem to be a reasonable filter/screen on honesty. It is possible that such people would take advantage and not keep the promise to donate the additional amount, but it seems unlikely. Perhaps there are people with a consequentialist ethic who want to help effectively and donate a lot, but are nonetheless willing to be dishonest and swindle fellow EAs; that doesn't seem terribly likely, though.

Note that even if people did not consistently donate the additional negotiated salary, this would still serve as a 'reward' for public EA donors, perhaps encouraging others to follow suit.

2. I would suggest that the negotiator/EA sponsor ask them to state their expected salary and salary range beforehand, and then afterwards to state the amount they were able to negotiate and the amount they donated. People will probably be less willing to default on their promise if doing so requires explicitly stating this or lying about it.

Comment by david_reinstein on Salary Negotiation for Earning to Give · 2019-04-13T14:52:24.365Z · EA · GW

Trigger warning: contains some academic economics palaver and self-promotion.

Classical economics arguments

The case (as with 'No Lean Season') seems to depend on inefficient behavior: job applicants leaving money on the table. If there were such great gains to negotiating, why wouldn't applicants always hire a negotiator? This lends some credence to those saying that there is a cost in terms of rescinded offers. In some sense, this would mean that if the EA community offered free negotiating services in exchange for such a pledge, it would be gambling with the applicant's funds.

*So what might be the case to still justify this?*

Behavioral and modern economics/psychology

1. Psychology/biases in giving

This is not necessarily a bad thing. If the applicant is willing to take such a risk, this might be a good way to indirectly elicit donations. It also relates to the 'Give if you win' model I have been researching.

2. Biases in negotiating

This also might be a 'nudge towards negotiating'; perhaps people are reluctant to stick their necks out and negotiate for themselves because of some intrinsic psychological bias, but they might be willing to do so with the support of the EA community, knowing that it would lead to helping effective causes while bringing them some positive reputation in the process.

3. Psychology and 'biases' in volunteering

This may unlock the volunteer services of expert negotiators in a particularly effective way. Because of the signaling benefits (it's more public!), corporate rewards, and the internalised feeling of impact, people may be more willing to volunteer than to donate the equivalent amount in terms of the value of their time. This relates to my proposal for the corporate skills bake sale.

4. Synergies enabled by cooperation between altruists

In principal-agent problems there is a well-known inefficiency that results from the combination of hidden information and either limited liability or asymmetric risk preferences. This is essentially why economists believe (and have some evidence) that real estate agents usually get a lower price when they sell a house for someone else than when they sell their own house.

However, if the negotiator here is EA-aligned, their interests will better converge, and there is an efficiency gain to be had. (A number of papers make this case about the efficiency gains resulting from altruism on one side or the other, including my own paper on the theoretical argument for 'fair trade'.)

Comment by david_reinstein on Wiki/Survey: Experiences in fundraising/convincing people/organisations to support EA causes · 2017-11-29T11:00:40.844Z · EA · GW

Thank you; I had not seen this before. It is very helpful and I will strive to incorporate this.

Comment by david_reinstein on Give if you win (innovation in fundraising) · 2017-06-05T12:00:21.382Z · EA · GW

> For banks and big corporations to want to join, there probably needs to be a greater sense of assurance that their signing up will actually lead to the publicity you suggest there would be.

I agree. It would be good to think of ways to line up endorsement and positive publicity in advance. Still, I think it depends on the cost-benefit calculation. If they can try this without much effort or risk, they might be willing to do so internally and roll out the PR gradually.

> That in mind, it's plausible that 1. cancer charities would do better than an investment in something westerners aren't personally affected by, such as schistosomiasis, and 2. that one big check to one big organization will garner more attention than many checks to a myriad of organizations.

Domestic charities, and charities like CRUK, will typically do better, I suspect. However, (i) increasing the overall volume of giving should increase effective giving at least proportionally, and (ii) more so if we focus on this in the promotions and work with EA supporters within organisations.

Developing approaches to get people outside of the EA movement to support EA charities is a separate and very important issue (e.g., Deloitte could have at least one international/effective charity partnership). I'm working on this as well (I hope to update soon about the wiki and other ways people can engage).

I would be very keen to work with a big, known charity. It may not be the highest-rated EA charity, but it would be good to partner with one that is at least somewhere on the EA spectrum even if not perfect (an Oxfam, MSF, Comic Relief, etc).

Comment by david_reinstein on The value of money going to different groups · 2017-05-24T17:19:59.095Z · EA · GW

Thank you Toby. Using 'preferences over gambles' as a way of measuring diminishing marginal utility will depend strongly on the expected-utility-maximization assumption; in practice, I believe it could be vulnerable to reference-point effects. (Also, the logarithmic utility function is obviously an imposed parametric assumption, but a good start.)

Still, these approaches seem reasonable, especially insofar as broadly similar results come from varying contexts.
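To see why that parametric choice matters, here is a minimal textbook sketch (my illustration, assuming the logarithmic utility mentioned above) of what log utility implies about the relative value of marginal income:

```latex
% Log utility and the marginal value of income
u(c) = \ln c \quad\Rightarrow\quad u'(c) = \frac{1}{c}.
% A marginal dollar to someone at consumption level c_1 is therefore worth
% u'(c_1)/u'(c_2) = c_2/c_1 times a marginal dollar to someone at c_2:
% e.g., 100x as much when c_2 = 100\, c_1.
% More generally, under CRRA utility with elasticity \eta,
u(c) = \frac{c^{1-\eta}}{1-\eta} \quad\Rightarrow\quad
\frac{u'(c_1)}{u'(c_2)} = \left(\frac{c_2}{c_1}\right)^{\eta},
% so log utility is the special case \eta = 1, and the estimated \eta
% (e.g., from preferences over gambles) directly scales these comparisons.
```

So the choice between log utility and some other CRRA value of η is not innocuous: it exponentiates the income ratio when comparing the value of money going to different groups.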

Comment by david_reinstein on Are Giving Games a better way to teach philanthropy? · 2017-05-24T17:14:23.435Z · EA · GW

Thank you. It sounds somewhat similar to some economics experiments involving charity that I have seen, but of course with a different goal in mind. I will look into this -- I am curious also about the evidence one might collect from such games, especially about which arguments people have found convincing, and which approaches have convinced people to choose the more effective charities.