Posts

Unjournal Evals: "Advance Market Commitments: Insights from Theory and Experience" 2023-03-21T16:59:28.466Z
Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space 2023-03-17T20:20:52.684Z
Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours 2023-02-06T19:18:12.801Z
Idea: Curated database of quick-win tangible, attributable projects (update: link to Airtable WIP) 2023-01-13T22:21:54.183Z
EA Houses: Airtable, Map, tech projects 2023-01-11T20:30:54.327Z
EA Market Testing: Summary of your feedback 2023-01-05T21:09:22.978Z
Planning and documentation: should we do more (or less)? 2023-01-02T17:35:36.545Z
MacKenzie Scott's grantmaking data 2022-12-15T21:19:03.708Z
Proposed: donation mechanism for people doing direct work (USA tax relevant) 2022-11-28T20:06:40.499Z
Jeff Bezos announces donation plans (in response to question) 2022-11-14T14:31:53.922Z
Marketing Messages Trial for GWWC Giving Guide Campaign 2022-09-08T16:22:43.860Z
"Two-factor" voting ("two dimensional": karma, agreement) for EA forum? 2022-06-25T11:10:59.814Z
Where to donate goods/textbooks/items ~effectively in the UK/US/beyond? 2022-06-18T17:32:04.115Z
Unjournal: Call for participants and research 2022-06-13T11:11:06.955Z
Giving What We Can - Pledge page trial (EA Market Testing) 2022-05-16T22:39:25.473Z
Has anyone actually talked to conservatives* about EA? 2022-05-05T19:05:43.734Z
What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) 2022-04-28T15:49:09.379Z
Tools/methods for finding most pivotal unjournal-worthy research 2022-04-23T01:10:11.997Z
Improve/promote a post in situ. 2022-04-22T22:03:51.671Z
Should you do an economics PhD (or master's)? 2022-04-19T20:04:24.202Z
Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? 2022-04-19T16:09:41.214Z
EA "CB Radio" anytime chat? 2022-04-18T01:28:21.710Z
Do we have any *lists* of 'academics/research groups relevant/adjacent to EA' ... and open science? 2022-03-30T15:36:16.973Z
ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) 2022-01-13T19:00:31.229Z
Seeing the effects of your donation and making incremental choices 2021-11-05T21:46:39.173Z
Proposal: alternative to traditional academic journals for EA-relevant research (multi-link post) 2021-11-03T20:16:02.421Z
EA Survey 2020 Series: Donation Data 2021-10-26T15:31:05.563Z
EA Market Testing 2021-09-30T15:17:51.011Z
[Link] Reading the EA Forum; audio content 2021-06-29T21:29:15.133Z
david_reinstein's Shortform 2021-05-31T14:43:29.796Z
What are your top workflow 'blockers'? 2021-05-20T21:01:01.774Z
A corporate skills bake sale? 2019-04-13T15:49:40.178Z
Employee Giving incentives: A shared database... relevant for EA job-seekers and activists 2018-05-19T09:37:01.877Z
Wiki/Survey: Experiences in fundraising/convincing people/organisations to support EA causes 2017-11-25T19:34:06.732Z
Give if you win (innovation in fundraising) 2017-05-26T19:36:09.542Z

Comments

Comment by david_reinstein on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2023-03-24T22:30:17.030Z · EA · GW

The baseline simple model is

estimateQALYs = initialPrisonPopulation * reductionInPrisonPopulation * badnessOfPrisonInQALYs * counterfactualAccelerationInYears * probabilityOfSuccess * counterfactualImpactOfGrant

cost = 2B to 20B

This seems to depend on:

Say that $2B to $20B, or 10x to 100x the amount that Open Philanthropy has already spent, would have a 1 to 10% chance of succeeding at that goal [5].

But shouldn't this be simplified to include fewer variables? In particular:

1. Why do we need cost as a variable on its own?

cost is basically a choice variable. Presumably the more that is spent, the greater the probabilityOfSuccess. The uncertainty surrounds 'benefit per dollar spent'. But that 'slope of benefit in cost' is really only a single uncertainty, not two uncertainties. Wouldn't it be better to just pick a middle reasonable 'amount spent', perhaps the amount that seems ex-ante optimal?

2. Acceleration * reductionInPrisonPopulation * probabilityOfSuccess:

These seem likely to be highly (negatively?) correlated with each other, and positively correlated with cost. For a given expenditure, if we target a lower 'reduction in prison population', or a slower rate of change, I expect a greater probabilityOfSuccess.

Would it make sense to instead think of something like 'reduction in total prison-years as a percent of current prison population'? Perhaps, feeding into this, some combinations of expenditure, acceleration, reduction percent, and prob(success) that jointly seem plausible?
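
For concreteness, the baseline multiplicative model quoted at the top of this comment could be sketched as a simple Monte Carlo. All of the ranges below are hypothetical placeholders I've invented for illustration, not values from the post:

```python
import random

random.seed(0)

def draw_estimate():
    """One Monte Carlo draw of the baseline multiplicative model.
    Every range below is a hypothetical placeholder, not the post's values."""
    initial_prison_population = 2.1e6                       # people (placeholder)
    reduction_in_prison_population = random.uniform(0.25, 0.75)
    badness_of_prison_in_qalys = random.uniform(0.2, 0.5)   # QALYs lost per person-year
    counterfactual_acceleration_in_years = random.uniform(2, 10)
    probability_of_success = random.uniform(0.01, 0.10)
    counterfactual_impact_of_grant = random.uniform(0.2, 0.8)
    return (initial_prison_population
            * reduction_in_prison_population
            * badness_of_prison_in_qalys
            * counterfactual_acceleration_in_years
            * probability_of_success
            * counterfactual_impact_of_grant)

draws = [draw_estimate() for _ in range(10_000)]
mean_qalys = sum(draws) / len(draws)
```

Note that the draws above are independent; the correlation concern raised in point 2 would mean drawing reduction, acceleration, and probabilityOfSuccess jointly instead.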

Comment by david_reinstein on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2023-03-24T22:19:05.319Z · EA · GW

@weeatquince and all:

Do you know what the best research (or aggregated subjective beliefs) synthesis we have on the 'costs of achieving policy change'...

  • perhaps differentiated by area
  • and by the economic magnitude of the policy?

My impression was that Nuno's

"$2B to $20B, or 10x to 100x the amount that Open Philanthropy has already spent, would have a 1 to 10% chance of succeeding at that goal"

seemed plausible, but I suspect that if they had said a 1-3% chance or a 10-50% chance, I might have found these equally plausible. (At least without other benchmarks.)

Comment by david_reinstein on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2023-03-24T21:43:26.600Z · EA · GW

Small suggestion: Could someone edit the post so that the hover footnotes work? It would make it more readable. (Update: More importantly, I'm not sure the footnote numbers are correct.)

Larger suggestion: This is a great post and provides a lot of methodological tools. I'd love to see it continue to be improved. E.g., detailed annotation, here or elsewhere, with the reasoning behind each element of the model.

It might help to turn the modeling part into a separate post linked to this one?

I have some 'point by point' questions and comments, particularly on the justification for the elements of the simple models. I don't want to add them all here because of the clutter, so I will add them as public https://hypothes.is/ notes. Unless anyone has a better suggestion for how to discuss this?

Comment by david_reinstein on Time-Sensitive Opportunity to Make $8000 EV in 4 hours in Massachusetts via Online Sports Betting · 2023-03-23T14:39:55.041Z · EA · GW

If you are coming from out of state and you need a place to crash, or to work on this in lower Western Massachusetts, reach out to me (I’m in Monson, near the Connecticut border)

Comment by david_reinstein on Time-Sensitive Opportunity to Make $8000 EV in 4 hours in Massachusetts via Online Sports Betting · 2023-03-23T14:36:29.235Z · EA · GW

It seems like if you’ve made accounts for these same sites in other states, you can’t get these promotions in Massachusetts. Does everyone agree?

Comment by david_reinstein on Donation offsets for ChatGPT Plus subscriptions · 2023-03-17T01:48:18.094Z · EA · GW

Let's say the value of your time is $500 / hour.

I'm not sure it was worth taking the time to think this through so carefully.

But:

  1. J is thinking this through and posting it to give insight to others, not just for his own case.

  2. If J’s time is so valuable, it may be because his insight is highly valuable, including on this very question

Comment by david_reinstein on Offer an option to Muslim donors; grow effective giving · 2023-03-16T17:17:12.126Z · EA · GW

Very cool. But do we have any data or reasoning-transparent estimates of the relative effectiveness of the largest existing Islamic-affiliated charities? Prima facie, I suspect giving cash is at least as good as giving food, tents, etc., but how much better?

Comment by david_reinstein on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2023-03-10T20:36:41.935Z · EA · GW

I looked into the Caesars offer a bit.

With the promo code, your first bet up to $1500 is 'insured' and refunded in case of loss. IIRC you need to bet the refunded $1500 in another single eligible gamble. I think the net expected value of this offer (after taxes and spreads) is somewhere between about $500 and $700 if your first bet is a 50/50 bet, but closer to $1200-$1400 if you take a highly risky first bet.

Some followup...

If you are 'playing it reasonably safe' and make 50-50 bets each time, the expected value gain is probably around $700 or so, maybe $600 after taxes. (I think this is a conservative calculation.)

Two people could also hedge their bets, as noted in other posts, reducing the max loss to just a few hundred dollars (taxes, house spread).

But "Accountholders who have an existing Caesars Sportsbook and Casino or William Hill account in any other location as of the start of the Promotion Period are not eligible" ... so you may not qualify if you did this in a different state.
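
As a rough illustration of the arithmetic behind these figures, here is a toy EV calculation for an 'insured first bet' promo. The odds, free-bet conversion rate, and tax rate are hypothetical placeholders, not Caesars' actual terms:

```python
def insured_first_bet_ev(stake, win_prob, decimal_odds,
                         free_bet_conversion=0.70, tax_rate=0.15):
    """Toy expected value of a promo where a losing first bet is refunded
    as a free bet, which converts to cash at some rate below 100%.
    All parameter defaults are hypothetical placeholders."""
    ev_if_win = win_prob * stake * (decimal_odds - 1)
    # On a loss, you get back a free bet worth roughly its conversion rate in cash
    ev_if_lose = (1 - win_prob) * (-stake + stake * free_bet_conversion)
    return (ev_if_win + ev_if_lose) * (1 - tax_rate)

# A ~50/50 first bet at even (decimal 2.0) odds with a $1,500 stake:
ev = insured_first_bet_ev(1500, 0.5, 2.0)
```

With these placeholder inputs this comes to about $446, in the same few-hundred-dollar ballpark as the estimate above; a riskier first bet raises the value of the insurance.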

Comment by david_reinstein on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2023-03-10T20:04:58.219Z · EA · GW

It looks like this is coming to Massachusetts now. My impression is that the Caesars offer is the most lucrative.

Caesars: See thread below.

If anyone wants to do this for effective charity, or for funds to cover their own time doing direct work... and happens to need a place to stay or to stop by for the day in Western Massachusetts (I'm between Springfield and Worcester) let me know.

Or reach out anyways if you are interested.

Comment by david_reinstein on Redirecting private foundation grants to effective charities · 2023-03-07T18:21:04.240Z · EA · GW

Posting some content from our email exchange, in case others benefit or want to weigh in. I think this idea is very promising. 


A relevant list/Airtable 

 
See THIS airtable view, sort or filter on the last tickbox ... 'foundation-relevant' ... for a start. 
 

Key potential partners/orgs

From that list, my memory, and conversations on relevant Slacks,  there are at least a few organizations that seem to be relevant here:

1. Charity Entrepreneurship: "are working with a small number of foundations"
 

2. Generation Pledge: does interact with private foundations. Inside contact available, but it's hard to 'break into' this; it's delicate, personal-touch work
 

3. Effective Giving: I believe they have been in this business for a while

Possibly/likely:

4. Open Philanthropy; they may help other foundations. 

5. The Total Portfolio Project; focuses on 'impact investing',  seems to have an EA/ITN framework
 

Comment by david_reinstein on Book Giveaway Impact Analysis (Doing Good Better in NZ) · 2023-03-07T18:11:19.405Z · EA · GW

This post provides an overview and analysis of the Doing Good Better  book giveaway through Effective Altruism New Zealand (EANZ). The analysis covers data collected from survey responses between 05-Jan-17 and 17-Dec-19, for which there were a total of 298 responses, with appreciable variance in the amount of the survey which was completed. This analysis was initially completed around Jan 2020 so any reference to "to date" refers to then. 

DR: A 'sidebar' comment I will delete


Also 'outside the quote text bar' to see if that does anything different.

Comment by david_reinstein on Overview of effective giving organisations · 2023-03-07T16:32:26.506Z · EA · GW

I integrated this into my Airtable. HERE is a relevant view, focusing on 'giving-related' orgs.

Comment by david_reinstein on Idea: Curated database of quick-win tangible, attributable projects (update: link to Airtable WIP) · 2023-03-07T00:12:33.243Z · EA · GW

A colleague suggested making connections with https://www.super-linear.org/

Comment by david_reinstein on Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact · 2023-03-06T17:13:45.923Z · EA · GW

Just seeing this, now that it's taken effect.

Pros of Smile (cons of its shutting down)

  1. Some of the funds could/did go to effective charities, or charities I think are likely to be somewhat effective even if not assessed by GiveWell

Cons of Smile

I suspect there may have been some moral licensing or crowding out here, on both sides:

  • People might have given less to charity 'because I'm already giving through Amazon Smile', and may have overestimated how much this was

  • Amazon/Bezos might also have done this ('we are giving through Amazon Smile, that is taking care of our obligation and PR')

Other puzzles and thoughts

  • Amazon's page notifying people about this seems to decidedly suggest a lack of a future effectiveness-focus

    • In particular, all the charities and initiatives they list seem to be USA-focused only
  • I've seen (can't find the link atm) a claim/some research that 'Amazon Smile was net-profitable for Amazon, because it generated more than 0.5% in additional profits'. If that was true, then shutting down Smile would be Amazon shooting itself in the foot. But that might have been a motivated explanation; I'm not sure who could have actually estimated this and shared it publicly, or how.

Comment by david_reinstein on Book Giveaway Impact Analysis (Doing Good Better in NZ) · 2023-03-06T14:52:59.311Z · EA · GW

The book giveaway is accessed through an online form on the EANZ website. The online form did not ask specific questions on donations - so part of the analysis had to be done by comparing survey results with donation data from the EANZ Charitable Trust (which forwards tax-deductible donations to a select number of GiveWell top charities).

 

The first link is dead

Comment by david_reinstein on Book Giveaway Impact Analysis (Doing Good Better in NZ) · 2023-03-06T14:50:48.051Z · EA · GW

The book giveaway is accessed through an online form on the EANZ website. This is a dead link

 

Comment by david_reinstein on Idea: Curated database of quick-win tangible, attributable projects (update: link to Airtable WIP) · 2023-03-05T21:09:00.434Z · EA · GW

Thanks. If this gets going we might integrate it with a bounty board.

Update 5 Mar 2023: starting an Airtable https://airtable.com/shrNps2rJwQxR0PVS as a prototype (WIP) to make things more concrete and motivate discussion.

Comment by david_reinstein on david_reinstein's Shortform · 2023-03-02T16:12:35.399Z · EA · GW

That could also be an interesting promo tag ... 'are you smarter than a professor of international development' :)

Comment by david_reinstein on david_reinstein's Shortform · 2023-03-02T15:46:29.405Z · EA · GW

Project Idea: 'Cost to save a life' interactive calculator promotion


What about making and promoting a ‘how much does it cost to save a life’ quiz and calculator.

This could be adjustable/customizable (in my country, around the world, of an infant/child/adult, counting ‘value added life years’, etc.) … and trying to make it go viral (or at least bacterial), as with the ‘how rich am I’ calculator.


The case 

  1. People might really be interested in this… it’s super-compelling (a bit click-baity, maybe, but the payoff is not click bait)!
  2. May make some news headlines too (it’s an “easy story” for media people, asks a question people can engage with, etc. … ‘how much does it cost to save a life? find out after the break!’)
  3. if people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this conception… to help us build a reality-based impact-based evidence-based community and society of donors
  4. similarly, it could get people thinking about ‘how to really measure impact’ --> consider EA-aligned evaluations more seriously

GiveWell has a page with a lot of technical detail, but it’s not compelling or interactive in the way I suggest above, and I doubt they market it heavily.

GWWC probably doesn't have the design/engineering time for this (not to mention refining this for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork, I think they might be very happy to host it.

It could also mesh well with academic-linked research, so I may have some ‘Meta academic support ads’ funds that could work with this.
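
The core arithmetic behind such a calculator is simple; here is a toy sketch (all figures are invented placeholders, not GiveWell estimates):

```python
def cost_per_life_year(cost_per_life_saved, years_gained_per_life):
    """Toy core of a 'cost to save a life' calculator.
    Both inputs are placeholders the user would customize
    (country, age of beneficiary, discounting, etc.)."""
    return cost_per_life_saved / years_gained_per_life

# e.g. a (hypothetical) $5,000 cost per life saved and ~35 years gained
# implies roughly $143 per life-year
per_year = cost_per_life_year(5000, 35)
```

The real work would be in sourcing and defending the input estimates and making the customization (country, age, 'value-added life years') credible, not in the computation itself.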
 

Tags/backlinks (~testing out this new feature):

  • @GiveWell
  • @Giving What We Can
  • Projects I'd like to see
  • EA Projects I'd Like to See
  • Idea: Curated database of quick-win tangible, attributable projects

Comment by david_reinstein on david_reinstein's Shortform · 2023-03-02T15:40:25.547Z · EA · GW

Right, I suspect you don’t have the design/engineering time for this (not to mention refining this for accuracy and communication). But if this was something you would be willing to consider making part of your site, I can see people being interested in it! Also the sort of thing that could mesh well with ~academic research, so I could use some ‘FB academic support ads’ funds.

I’m considering whether the best approach to this would be:

  • Put out a request for a volunteer to help take this on, to get to a PoC/MVP first
    • Where to recruit? Maybe you already have people in your database; I’d be happy to encourage/guide
  • Apply for direct funding for this; possibly working with the quantitative uncertainty and ‘build your own cost-effectiveness’ people, or possibly with SoGive

Comment by david_reinstein on Conference on EA hubs and offices, expression of interest · 2023-03-01T15:40:43.683Z · EA · GW

TLDR please: are you proposing a conference to discuss conferences (and hubs and coworking, etc.)?

Comment by david_reinstein on Conference on EA hubs and offices, expression of interest · 2023-03-01T15:39:53.128Z · EA · GW

What does SOL stand for?

Comment by david_reinstein on Make RCTs cheaper: smaller treatment, bigger control groups · 2023-02-28T13:18:45.104Z · EA · GW

The “problems caused by unbalanced samples” point doesn’t seem coherent to me; I’m not sure what they are talking about.

If the underlying variance is different between the treatment and the control group:

  • That might justify a larger sample for the group with larger variance
  • But I would expect the expected variance to tend to be larger for the treatment group in many/most relevant cases
  • Overall, there will still tend to be some efficiency advantage to having more of the less costly group, generally the control group.

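
The standard result behind these bullets: for a fixed budget, the variance-minimizing treatment:control sample-size ratio scales with the ratio of outcome standard deviations and inversely with the square root of the per-participant cost ratio. A minimal sketch (illustrative; the symbols are mine, not the post's):

```python
import math

def optimal_treatment_control_ratio(sigma_t, sigma_c, cost_t, cost_c):
    """n_treatment / n_control that minimizes the variance of the
    difference-in-means estimator for a fixed total budget."""
    return (sigma_t / sigma_c) * math.sqrt(cost_c / cost_t)

# Equal outcome variances, treatment 4x as costly per participant:
ratio = optimal_treatment_control_ratio(1.0, 1.0, cost_t=4.0, cost_c=1.0)
# ratio == 0.5 -> enroll half as many treated participants as controls
```

This also shows the second bullet's point: a larger treatment-group variance (sigma_t > sigma_c) pushes the ratio back up toward, or past, 1.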
Comment by david_reinstein on Help GiveDirectly beat "teach a man to fish" · 2023-02-27T17:36:02.232Z · EA · GW

I think that disengaging from developing countries would be a negative, at least if we include trade, services, tourism and immigration/remittances (not sure if that should count under 'not doing harm')

But OK:

  • ODA is about $180 billion per year; about $50 billion to Africa (link)

  • Plus about 8 billion in private philanthropy per year

  • Africa's debt service payments are about $70 billion per year

  • If that debt was all forgiven it might do as much good as the ODA. So if 'not doing harm' means 'forgiving debt' you might be right. (But that is not what GiveDirectly is involved in.)

Comment by david_reinstein on Help GiveDirectly beat "teach a man to fish" · 2023-02-27T16:13:39.416Z · EA · GW

Strong disagree downvoted because:

  1. This blame-the-west narrative may alienate people (and I don’t think it’s a great explanation for poverty, but that’s debatable, and also not the main point)

  2. This suggests the solution is simply ‘not doing harm/not getting involved’

Comment by david_reinstein on EA London Hackathon Retrospective · 2023-02-26T19:57:23.735Z · EA · GW

GPT-Automator, a voice-controlled Mac assistant capable of completing various complex tasks based off speech input. See the blog posts by Luke and Chidi.

This seems super useful and something I’ve been waiting for. But what is the EA or AI safety connection?

Comment by david_reinstein on Posts we recommend from last week (Digest #125) · 2023-02-26T19:52:25.533Z · EA · GW

I like this and like that you also signal boost a classic post. Maybe consider boosting non-new posts with new comments, or those that seem particularly relevant to current priorities?

Comment by david_reinstein on Make RCTs cheaper: smaller treatment, bigger control groups · 2023-02-25T19:14:50.057Z · EA · GW

I assumed more people were aware of this; I'm using it in a trial we're about to start. But as others have said, in many trials the treatment is not particularly more costly. It's probably a factor, though, for intensive poverty and health interventions in poor countries. Have you looked into how many development economics and GH&D studies with costly interventions do this?

Comment by david_reinstein on How can we improve discussions on the Forum? · 2023-02-23T13:11:54.318Z · EA · GW

Keep posts “going” longer, and interact with and build the wikis more… rather than starting new posts that cover the same ground. Focus on new insights on the themes/issues/questions. (OK, I’m emphasising the content/wiki here a bit more than the discussions per se.)

Some specific suggestions:

  • emphasize and reward the wikis more and contributions to them… have these show up in feeds more.

  • suggest post authors add to specific wiki entries? Specific karma bounties for this?

  • karma for tagging others’ posts well (not sure how to make that work though)

  • autosuggestions for wiki connections and previous posts while you are composing new posts (As in Stackoverflow)

  • if I interact with a post (comment, like, like a comment, vote) … new comments and edits to the post show up in my feed

  • tags for shortforms, shortforms in feeds for tags you care about … enabling wiki content to build up from targeted short forms … perhaps specifically tagged as “wiki” and with that tag?

  • hackathons, sessions, rewarding “we created or improved this wiki entry” at live events and meetings

  • tools to make it easier to add to wiki from second brain and wiki systems (obsidian, roam, notion, etc., maybe even slack )

Comment by david_reinstein on How effective are the "Best Charities by Cause" organizations recommended by Charity Navigator? · 2023-02-22T16:56:52.062Z · EA · GW

Quick-ish answer:

Not all of these are effective. More or less, at the moment, the charities GiveWell rates as effective do make it onto those lists, but other charities that are clearly less effective also make it. The lists are neither cause-neutral nor cosmopolitan; they tend to include US-operating charities that are likely to be an order of magnitude less effective.

In general I would ~not trust their standards much (from an EA perspective) outside of the Charities Working in High-Impact Causes list. But IIRC that list depends on the union of GiveWell, Founders Pledge, and some other EA and EA-adjacent lists, and thus it's arguably pretty good, depending on who is using it and how. Most/all of the charities on that list are plausibly effective (at least someone in EA would probably argue that they are).

In general, CN's recommendations weight many things we would not see as important for effectiveness or the ultimate outcome, like 'overhead ratio'. See some other posts I made on this (I just added the 'Charity Navigator' tag and a wiki stub, fwiw).

Note that there are some EA people in or working with CN, and I see some real promise! Hopefully one of them will chime in here too.

Comment by david_reinstein on Should we tell people they are morally obligated to give to charity? [Recent Paper] · 2023-02-22T14:40:09.987Z · EA · GW

'Conditional on positive' results are less reliable because of the potential for differential selection, but that is still a bit interesting. (It could be, e.g., 'a bigger push to get people to donate means you attract less interested people on average, so they respond with smaller amounts.')

The equivalence testing is close to what I meant (do you want to expand on/link to those?), but no, not quite the same.

Quickly, what I had in mind is a 'Bayesian regression': you input a model and priors over all parameters (perhaps 'weakly informative' priors centered at a 0 effect), and you can then compute the posterior belief for these parameters. R's brms package is good for this. Then you can report what share of the posterior falls into each of the categories I mentioned below.

I'll try to follow up on this more specifically, and perhaps share some code.

Comment by david_reinstein on Should we tell people they are morally obligated to give to charity? [Recent Paper] · 2023-02-21T17:16:43.835Z · EA · GW

But you don’t want to imply that the morally demanding argument backfired either. Donations were higher in the morally demanding case, no?

So we should update our beliefs in that direction I think, even if you don’t have statistical power to “rule out” that this difference was due to chance.

Can you tell us: in a simple Bayesian updating model, what is the approximate posterior probability that the strong moral demandingness condition performed

  • equal to or worse than the regular moral argument
  • no more than 10% better
  • more than 10% better (1 minus the previous)
  • more than 20% better?

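
To illustrate the kind of summary being requested: given a normal approximation to the posterior over the percent uplift of the demanding condition, the category probabilities fall out directly. The mean and sd below are invented placeholders, not estimates from the paper:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a Normal(mu, sigma) via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical posterior: percent uplift ~ Normal(mean=8, sd=12)
mu, sigma = 8.0, 12.0

p_equal_or_worse = normal_cdf(0, mu, sigma)
p_up_to_10_better = normal_cdf(10, mu, sigma) - normal_cdf(0, mu, sigma)
p_more_than_10_better = 1 - normal_cdf(10, mu, sigma)
p_more_than_20_better = 1 - normal_cdf(20, mu, sigma)
```

A full version would take the posterior draws from a fitted model (e.g., via brms) rather than assuming normality, but the reporting step is the same: count the share of draws in each bin.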
Comment by david_reinstein on Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours · 2023-02-19T16:38:38.006Z · EA · GW

By the way I added an ‘updates’ page to the gitbook HERE

I plan to update this every ~fortnight and to integrate it with an email subscription list, RSS, and social media. Not sure if the MailChimp signup is working yet.

Comment by david_reinstein on What is the impact of animal agriculture on wild animal suffering? · 2023-02-12T23:57:42.093Z · EA · GW

I agree this merits further serious and careful analysis. But even if we were to take Tomasik's claims at face value (that the wild animals that would use the space and resources freed up by less animal farming have substantial moral weight and would be suffering on net, etc.)

... he still says:

pork, chicken, eggs, farmed fish: I would avoid because these foods cause significant farm-animal suffering and have unclear net impact on wild-animal suffering.

And these are the majority of the farmed animals people eat worldwide, both by weight and by numbers.[1]


  1. OK, maybe this excludes farmed insects, which he also advises against. ↩︎

Comment by david_reinstein on Proposed: donation mechanism for people doing direct work (USA tax relevant) · 2023-02-10T21:52:43.142Z · EA · GW

Donating equity seems a bit different but also interesting! Thanks

Comment by david_reinstein on Help me recommend effective charities to people who want to donate to specific causes and populations · 2023-02-09T23:12:32.832Z · EA · GW

I have no actual suggestions but some meta-ones.

Need for quant evaluations outside the top/most transparent

I don't think there is a strong base of rigorous evaluations of 'which non-top (or multi-intervention) charities are closer to being impactful' and 'what is a good range of estimated impacts for these'.[1] I think this would be a good thing to have, for the most part. I was hoping SoGive or ImpactMatters could fill this niche, but it hasn't happened.

E.g., I think there are people who might never give to AMF but might be convinced to give to Oxfam or MSF instead of Save the Children or St. Jude's Hospital. What would be the value of this... is it worth our effort? We don't know.

Engage but push back a bit, and get them thinking about being quantitative

As you say, elephants/trans rights/US diabetics are likely to be orders of magnitude less impactful per $. Is it still worth your time to engage? Maybe, if you can do so in a way that...

  • Gets these people thinking about measuring impact,
  • and expanded moral circles,
  • makes them explicitly note they are playing favorites, and likely to be neglecting something much more effective.

... You can say 'here is the argument for donating to a GiveWell charity' (or an ACE charity, etc.), but I understand you have particular reasons to want to support elephants. (And maybe say a little more about this, engage them in a discussion.)

This may not get them to donate to AMF, or even UNICEF now, but it may shift their thinking going forward.


  1. Or 'which do better at harder-to-measure outcomes' ... with credible quantified measures and uncertainty. ↩︎

Comment by david_reinstein on [deleted post] 2023-02-09T14:56:15.222Z

I agree. In case you want to make this a (obviously non-representative) poll, you could make a "Yes" and a "No" comment and people could 'agreement vote' on it.

Comment by david_reinstein on If there was a marketplace where you could see products/services offerred by EAs, would you use it? · 2023-02-08T20:20:34.917Z · EA · GW

Yes, especially for earn-to-give people/donors/pledgers, because I would hope a share of the 'profit' would go to a good cause.

But maybe this can be done on existing platforms with correct labelling/linking.

Where do I 'agree vote'? I can't figure this out.

Comment by david_reinstein on Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours · 2023-02-08T00:08:51.877Z · EA · GW

I expect that once there's been quite a few unjournal reviews, that people will attempt to compare scores across projects. I.e., I can imagine a world in which a report / paper receives a 40/100 and people point out "This is below the 60/100 of the average article in the same area". How useful do you expect such comparisons to be?

I'm hoping these will be very useful, if we scale up enough. I also want to work to make these scores more concretely grounded and tied to specific benchmarks and comparison groups. And I hope to do better at operationalizing specific predictions,[1] and to use well-justified tools for aggregating individual evaluation ratings into reliable metrics (e.g., potentially partnering with initiatives like RepliCATS).

Do you think there could / should be a way to control for the depth of the analysis?

Not sure precisely what you mean by 'control for the depth'. The ratings we currently elicit are multidimensional, and the depth should be captured in some of these. But these were basically a considered first pass; I suspect we can do better at coming up with a well-grounded and meaningful set of categories to rate. (And I hope to discuss this further and incorporate the ideas of particular meta-science researchers and initiatives.)


  1. For things like citations, measures of impact, replicability, votes X years on 'how impactful was this paper' etc., perhaps leveraging prediction markets. ↩︎
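
As a toy example of aggregating individual evaluator ratings: if each evaluator gives a midpoint and a 90% credible interval, a precision-weighted pool is one natural first pass. This is an illustration of the general idea, not The Unjournal's actual aggregation method:

```python
def pool_ratings(ratings):
    """Precision-weighted pooled score.
    `ratings` is a list of (midpoint, ci_width) pairs, where ci_width is
    the width of a 90% credible interval; sd ~= ci_width / 3.29 if the
    evaluator's uncertainty is roughly normal."""
    weights = [(3.29 / width) ** 2 for _, width in ratings]
    total = sum(weights)
    return sum(w * mid for w, (mid, _) in zip(weights, ratings)) / total

# Two hypothetical evaluators: one confident (80, CI width 10, i.e. +/-5),
# one less so (60, CI width 20):
pooled = pool_ratings([(80, 10), (60, 20)])  # 76.0: tilted toward the confident rater
```

More serious approaches would also model between-evaluator disagreement and calibration, which is part of what initiatives like RepliCATS work on.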

Comment by david_reinstein on Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours · 2023-02-08T00:00:10.524Z · EA · GW

What is the "depth" of research you think is best suited for the Unjournal? It seems like the vibe is "at least econ working paper level of rigor".

In the first stage, that is the idea.  In the second stage, I propose to expand into other tracks.

Background: The Unjournal is trying to do a few things, and I think there are synergies (see the Theory of Change sketch here):

1. Make academic research evaluation better, more efficient, and more informative (but focusing on the 'impactful' part of academic research)

2. Bring more academic attention and rigor to impactful research

3. Have academics focus on more impactful topics, and report their work in a way that makes it more impactful

For this to have the biggest impact, changing the systems and leveraging this, we need academics and academic reward systems to buy into it. It needs to be seen as rigorous, serious, ambitious, and practical. We need powerful academics and institutions to back it. But even for the more modest goal of getting academic experts to (publicly) evaluate niche EA-relevant work, it's still important to be seen as serious, rigorous, credible, etc. That's why we're aiming for the 'rigor stuff' for now, and will probably want to continue this, at least as a flagship tier/stream, into the future.

But it seems like a great amount of EA work is shallower, or more weirdly formatted than a working paper. I.e., Happier Lives Institute reports are probably a bit below that level of depth (and we spend a lot more time than many others) and GiveWell's CEAs have no dedicated write-ups. Would either of these research projects be suitable for the Unjournal?

I would need to look into specific cases. My guess is that your work largely would be suitable, at least 1. if and when we launch the second stream, and 2. for the more in-depth work where you might think "I could submit this to a conventional journal, but it's too much hassle".

GiveWell's CEAs have no dedicated write-ups.

I think they should have more dedicated write-ups (or other presentation formats), and perhaps more transparent formats, with clear, reasoning-transparent justifications for their choices, robustness calculations, etc. Their recent contest goes in the right direction, though.

In terms of 'weird formats', it depends what you mean by weird. We are eager to evaluate work that is not the typical 'frozen pdf prison' but is presented (e.g.) as a web site offering foldable explanations for reasoning transparency, and in particular as open-science-friendly dynamic documents where the code (or calculations) producing each result can be clearly unfolded, with a clear data pipeline, and where all results can be replicated. This would be an improvement over the current journal formats: less prone to error, easier to check, easier to follow, easier to re-use, etc.

I take the theory of change here, as you say, to be "make rigorous work more impactful". But that seems to rely on getting institutional buy-in from academia, which sounds quite uphill.

I agree that it's a challenge, but I think this is a change whose time has come. Most academics I've talked to individually think public evaluation/rating would be better than the dominant (and inefficient) traditional journal system, but everyone thinks "I can't go outside of this system on my own". I outline how I think we might be able to crack this collective action problem (and inertia) HERE. (I should probably expand on this.) I think that we (non-university-linked EA researchers and funders) might be in a unique position to help solve this problem, and I think there would be considerable rewards and influence in 'being the ones who changed this'. But still ...

An alternative path is "to make impactful (i.e., EA) work more rigorous". I'd guess there's already a large appetite for this in EA. How do you see the tradeoffs here?

I agree this would be valuable, and might be an easier path to pursue. There may indeed be some tradeoffs (time/attention). I want to continue to consider, discuss, and respond to this in more detail.

For now, some off-the-cuff justifications for the current path:

  • We can try the high-rigor path first and then pivot or branch to the 'bring rigorous evaluation to niche and less formal EA-policy stuff' later. But I think the reverse ordering would be more difficult (first impressions matter).
  • IMO a lot of academic work is highly impactful, or could be made highly impactful with a bit of TLC.[1]
  • Related to that, I'm fairly sympathetic to the points made HERE about valuing expertise and rigor, and not always trying to reinvent the wheel. I think we could do a lot to connect academic/non-EA expertise with EA-aligned goals.
  • My background/ideas and the setup of The Unjournal might be better suited to the 'make rigorous research more impactful' (and improving research evaluation) part?

 

(I'll respond to your final point in another comment, as it's fairly distinct) 

  1. ^

    Reporting the right outcomes and CEAs, more reasoning-transparency, offering tools/sharing data and models, etc.

Comment by david_reinstein on Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours · 2023-02-07T18:12:00.598Z · EA · GW

At the moment, no, but we're working on something like an email newsletter. One thing you can do is follow Unjournal's Sciety Group: click the 'follow' button. I think that gives you updates, but you need to have a Twitter account.

Comment by david_reinstein on Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours · 2023-02-06T20:51:47.795Z · EA · GW

Thanks and good question.

Are you intending on doing evaluation of 'canonical non-peer-reviewed EA work'?

Short answer, probably down the road a little bit, after our pilot phase ends.

We're currently mainly focused on getting academic and academic-linked researchers involved. Because of this, we are leaning towards targeting conventionally-prestigious and rigorous academic and policy work that also has the potential to be highly impactful.

In a sense, the Denkenberger paper is an exception to this, in that it is somewhat niche work that is particularly of interest to EAs and longtermists.

Most of the rest of our 'current batch' of priority papers to evaluate are NBER working papers or something of this nature. That aligns with the "to make rigorous work more impactful" part of our mission.

But going forward we would indeed like to do more of exactly what you are suggesting: bring academic (and non-EA policy) expertise to EA-driven work. This is the ~"to make impactful work more rigorous" part of our mission. It might be done as part of a separate stream of work; we are still working out the formula.

Comment by david_reinstein on Project Idea: Lots of Cause-area-specific Online Unconferences · 2023-02-06T15:21:34.207Z · EA · GW

Could you add some tags to this? This seems like something that should be integrated into our wiki/information infrastructure/knowledge base.

Comment by david_reinstein on Overview of effective giving organisations · 2023-02-04T14:46:49.328Z · EA · GW

That makes sense to me

Comment by david_reinstein on I doubt "Neartermist" is a monicker people want. What would you like to be called? · 2023-02-04T14:45:57.085Z · EA · GW

I see what you mean. I guess my point is “neartermist” sounds like it’s a coherent ideology in opposition to longtermism. “Not longtermist” is not a banner to march behind or a team, it’s just a factual description (in lower case).

Comment by david_reinstein on Let's advertise EA infrastructure projects, Feb 2023 · 2023-02-04T14:36:29.255Z · EA · GW

Let’s promote the wiki and make it more visible!

Comment by david_reinstein on I doubt "Neartermist" is a monicker people want. What would you like to be called? · 2023-02-03T21:20:11.097Z · EA · GW

"Not longtermist"

My previous discussion

Comment by david_reinstein on I doubt "Neartermist" is a monicker people want. What would you like to be called? · 2023-02-03T00:42:28.594Z · EA · GW

"Not longtermist" ... this was my take

Comment by david_reinstein on Native English speaker EAs: could you please speak slower? · 2023-01-29T16:34:19.817Z · EA · GW

Yes, I’ve tried that. It’s very helpful. It should be easy to pipe into a translation bot too. Dual mode (both languages appear) would be ideal.

Comment by david_reinstein on Native English speaker EAs: could you please speak slower? · 2023-01-29T04:52:38.711Z · EA · GW

I think these tips are generally good. But one more we might experiment with, especially when online/remote:

Use voice-to-text speech recognition, and possibly also even translation.

My experience/impression is that this takes a bit of setup (good microphones, a clear speaking voice, a quiet setting, etc.), but actually works surprisingly well. I suspect we may soon reach the point where this actually leads to better and faster conversation between people with different native languages.

It’s hard to process new and complex ideas while also dealing with a linguistic burden (having to translate a foreign language, speak your own language extra precisely, or adapt to unfamiliar pronunciations).

Using this machine tech may be somewhat awkward and seem less natural. But the benefits of lower cognitive load, and of being able to focus on the issues, may outweigh this.