Posts

ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) 2022-01-13T19:00:31.229Z
Seeing the effects of your donation and making incremental choices 2021-11-05T21:46:39.173Z
Proposal: alternative to traditional academic journals for EA-relevant research (multi-link post) 2021-11-03T20:16:02.421Z
EA Survey 2020 Series: Donation Data 2021-10-26T15:31:05.563Z
EA Market Testing 2021-09-30T15:17:51.011Z
[Link] Reading the EA Forum; audio content 2021-06-29T21:29:15.133Z
david_reinstein's Shortform 2021-05-31T14:43:29.796Z
What are your top workflow 'blockers'? 2021-05-20T21:01:01.774Z
A corporate skills bake sale? 2019-04-13T15:49:40.178Z
Employee Giving incentives: A shared database... relevant for EA job-seekers and activists 2018-05-19T09:37:01.877Z
Wiki/Survey: Experiences in fundraising/convincing people/organisations to support EA causes 2017-11-25T19:34:06.732Z
Give if you win (innovation in fundraising) 2017-05-26T19:36:09.542Z

Comments

Comment by david_reinstein on What are some artworks relevant to EA? · 2022-01-17T03:08:30.279Z · EA · GW

I thought you might also highlight classic, historical, and even ancient works that convey important ideas.

Comment by david_reinstein on ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) · 2022-01-13T19:53:05.859Z · EA · GW

Thanks. It's still fairly early days; I just don't like the direction it seems to be going in.

Comment by david_reinstein on ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) · 2022-01-13T19:32:45.281Z · EA · GW

David Moss:

There was some discussion of the original acquisition here.

Historically, Charity Navigator has been extremely hostile to effective altruism, as you probably know, so perhaps this isn't surprising

My response

Thank you, I had not seen Luke Freeman @givingwhatwecan's earlier post.

That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.

I doubt CN would have acquired IM just to bury it; there might be some room for positive suasion here.

Comment by david_reinstein on ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) · 2022-01-13T19:31:18.737Z · EA · GW

I don't see much in the way of improvement ... does anyone else?

Comment by david_reinstein on ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) · 2022-01-13T19:30:31.520Z · EA · GW

Moving some comments from the Shortform...

Aaron Gertler wrote:

I spent a few minutes looking at the impact feature, and I... will also go with "not satisfied".

From their review of Village Enterprise:

Impact & Results scores of livelihood support programs are based on income generated relative to cost. Programs receive an Impact & Results score of 100 if they increase income for a beneficiary by more than $1.50 for every $1 spent and a score of 75 if income increases by more than $0.85 for every $1 spent. If a nonprofit reports impact but doesn't meet the threshold for cost-effectiveness, it earns a score of 50.

My charitable interpretation is that the "$0.85" number is meant to represent one year's income, and to imply a higher number over time (e.g. you have new skills or a new business that boosts your income for years to come).

But I also think it's plausible that "$0.85" is meant to refer to the total increase, such that you could score "75" by running a program that, in your own estimation, helps people less than just giving them money.

(The "lowest score is 50" element puzzled me at first, but this page clarifies that you score "0" if CN can't find enough information to estimate your impact in the first place.)


Still, this is much better than the original CN setup, and I hope this is an early beta version with many improvements on the way.
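
To pin down the ambiguity Aaron raises, here is a minimal sketch of the scoring rule as I read the quoted review (the thresholds come from the excerpt and the "0" case from the page Aaron links; the function name and signature are my own, not CN's actual code):

```python
# Sketch of Charity Navigator's Impact & Results scoring rule for
# livelihood programs, as described in the quoted review.
# Hypothetical function, not CN's actual implementation.
def impact_results_score(income_gain_per_dollar: float,
                         impact_reported: bool = True) -> int:
    if not impact_reported:
        return 0    # CN lacks the information to estimate impact
    if income_gain_per_dollar > 1.50:
        return 100
    if income_gain_per_dollar > 0.85:
        return 75
    return 50       # impact reported, but below the threshold

# The ambiguity: if income_gain_per_dollar is the *total* gain
# (not one year's gain), a program generating $0.90 per $1 spent
# scores 75 despite helping less than a direct cash transfer would.
print(impact_results_score(0.90))  # -> 75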

Comment by david_reinstein on [deleted post] 2022-01-13T18:16:24.100Z

Effective altruism is the project of using evidence and reason to find out how to do the most good and to act on these findings.

I think this is slightly better than CEA's statement:

Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.

But I'd still perhaps want to moderate this a bit, if there's a way of doing so while still being clear and concise.

"to find out how to do the most good"

... We can only aim to figure out how to do the most good. We will never know with certainty.

"how to do the most good and to act on these findings"

... This seems better than CEA's “as much as possible” which suggests that we must be completely self-sacrificing.

...But still 'the most good' seems a bar too high. Perhaps something like “as much good as possible, to the best of our knowledge, given the amount of our resources we are willing and able to contribute.” ?

Comment by david_reinstein on [Creative Writing Contest][Fiction] Do Good Better · 2022-01-06T20:23:41.146Z · EA · GW

Sorry, it got set as a draft when I edited it. It should be up again now, here.

Comment by david_reinstein on The EA Forum Podcast is up and running · 2022-01-06T05:24:31.682Z · EA · GW

I think I'm going to give this a rest on my end. It's not clear to me that enough people prefer this over the Nonlinear Library's text-to-speech to make it worth doing.

I personally prefer hearing human narrators (especially when read by the author), but that doesn't mean other people do, or that they like hearing me read stuff.

Comment by david_reinstein on Name for the larger EA+adjacent ecosystem? · 2022-01-05T15:51:13.624Z · EA · GW

As I mentioned above, cf. "Brights".

Comment by david_reinstein on Name for the larger EA+adjacent ecosystem? · 2022-01-05T15:50:20.720Z · EA · GW

Not bad, but maybe not catchy enough? I'm also worried about the connotation of "pearl" as a prized thing.

I'm worried about an analogue of when some atheists and rationalists started calling themselves "Brights" and everyone threw up in their mouths a little. :)

Comment by david_reinstein on Is there a market for products mixing plant-based and animal protein? Is advocating for "selective omnivores" / reducitarianism / mixed diets neglected - with regards to animal welfare? · 2022-01-05T15:46:25.686Z · EA · GW

Minor data point: Linda McCartney hamburgers often mix soy protein and cow cheese.

Comment by david_reinstein on [Feature Announcement] Rich Text Editor Footnotes · 2022-01-05T15:41:16.945Z · EA · GW

Ah, that rendered…

So it's text^[contents of footnote]

Comment by david_reinstein on [Feature Announcement] Rich Text Editor Footnotes · 2022-01-05T15:39:48.916Z · EA · GW

Typically it’s

Text.[1]


  1. content of footnote ↩︎

Comment by david_reinstein on The Unweaving of a Beautiful Thing · 2022-01-05T15:38:51.747Z · EA · GW

It might be helpful for your writing to “listen to how people hear it”? (Well, not people, just one person, but still…)

Comment by david_reinstein on [Feature Announcement] Rich Text Editor Footnotes · 2022-01-05T01:22:58.160Z · EA · GW

Hover-over is the big win here! Without hover-over, it's a chore to digest footnotes.

Comment by david_reinstein on Pilot study results: Cost-effectiveness information did not increase interest in EA · 2022-01-03T19:22:28.999Z · EA · GW

Another consideration here: participants knew they were in an experiment, and probably had a good sense of what you were aiming at.

The difference between treatment and control was whether people

  1. were asked to estimate the difference in cost-effectiveness between average and highly effective charities, or instead did a control task,

and whether (it's a 2x2 design)

  2. participants were told that the difference in cost-effectiveness is 100x, or were told control, irrelevant information.

If either of these increased their stated interest in EA or their giving behavior, it would be informative, but we still might want to be careful in making inferences about the impact of these activities and this 'de-biasing' in real-world contexts.

Either of these tasks might have heightened the 'desirability bias' or the extent to which people considered their choices in a particular analytical way that they might not have done had they not known they were in an experiment.

Comment by david_reinstein on Pilot study results: Cost-effectiveness information did not increase interest in EA · 2022-01-03T19:19:51.708Z · EA · GW

Thanks for sharing and for putting this on OSF. Some thoughts and suggestions, echoing those below.

Maybe consider rewriting/re-titling this? To say "did not increase" seems too strong and definitive.

You "failed to find a statistically significant effect" in standard tests that were basically underpowered. This is not strong evidence of a near-zero true effect. If anything, you found evidence suggesting a positive effect, at least on the donation 'action' (if I read Aaron's comment carefully).

You might consider a Bayesian approach, and then put credible bounds on the true effects, given a reasonably flat/uninformative prior. (You can do something similar with 'CIs' in a standard frequentist approach.)

Then you will be able to say something like 'with this prior, our posterior 80% credible interval over the true effect is between -X% and +X%' (perhaps stated in terms of Cohen's d or something relatable) ... if that interval rules out a 'substantial effect', then you could make a more meaningful statement. (With appropriate caveats about the nature of the sample, the context, etc., as you do.)
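
To illustrate, here's a minimal sketch of the kind of calculation I mean, assuming a (near-)flat prior so that the posterior for the true effect is approximately normal. All the summary numbers are made-up placeholders, not the study's actual data:

```python
# Minimal sketch: 80% credible interval for a treatment effect
# under a (near-)flat prior. All numbers are hypothetical --
# substitute the study's actual summary statistics.
import numpy as np
from scipy import stats

# Hypothetical summary data (NOT from the study)
n_t, mean_t, sd_t = 150, 0.42, 1.0   # treatment group
n_c, mean_c, sd_c = 150, 0.35, 1.0   # control group

diff = mean_t - mean_c                       # observed effect
se = np.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)  # SE of the difference

# With a flat prior, the posterior for the true effect is
# approximately Normal(diff, se^2), so the 80% credible interval is:
lo, hi = stats.norm.interval(0.80, loc=diff, scale=se)

# Express in standardized (Cohen's d) units for interpretability
pooled_sd = np.sqrt((sd_t**2 + sd_c**2) / 2)
print(f"80% credible interval: [{lo:.3f}, {hi:.3f}] "
      f"(Cohen's d: [{lo / pooled_sd:.2f}, {hi / pooled_sd:.2f}])")
```

With a genuinely informative prior you would do a proper conjugate update (or use an MCMC tool), but the logic is the same: report the interval and ask whether it rules out a 'substantial effect'.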

(Also, if you rewrite, can you break this into shorter paragraphs -- the long paragraph chunks become overwhelming to read.)

Comment by david_reinstein on Pedant, a type checker for Cost Effectiveness Analysis · 2022-01-02T17:56:15.619Z · EA · GW

Could you give a quick clarification on the difference between Pedant and Squiggle?

Comment by david_reinstein on Pedant, a type checker for Cost Effectiveness Analysis · 2022-01-02T17:48:00.090Z · EA · GW

What are your thoughts on Causal in this mix?

Although it doesn’t do the type checking afaik, it seems a good interface for incorporating and enabling

  1. Explicitly modelled uncertainty (Monte Carlo; see the sketch below)

  2. Easy user input of moral and epistemic parameters, and easy-to-see sensitivity to these

  3. Clear presentation of the (probability distribution) results?
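
For item 1, here is a minimal sketch of what 'explicitly modelled uncertainty' looks like in code, with made-up placeholder distributions (this is my own illustration, not Causal's or Pedant's actual model):

```python
# Minimal Monte Carlo cost-effectiveness sketch, the sort of thing
# Causal/Guesstimate/Squiggle do under the hood. All distributions
# below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Epistemic parameters (hypothetical), each with its own uncertainty
cost_per_person = rng.lognormal(mean=np.log(12), sigma=0.3, size=n)
income_gain = rng.lognormal(mean=np.log(15), sigma=0.6, size=n)

# A 'moral'/user-set parameter: how many years the gains persist
# (item 2: the user could supply this themselves)
persistence_years = rng.uniform(1, 5, size=n)

benefit_per_dollar = income_gain * persistence_years / cost_per_person

# Item 3: present the full distribution, not a point estimate
p10, p50, p90 = np.percentile(benefit_per_dollar, [10, 50, 90])
print(f"benefit per $1: median {p50:.2f}, 80% interval [{p10:.2f}, {p90:.2f}]")
```

This also naturally delivers item 3: you report the whole distribution (or its percentiles) rather than a single point estimate.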

I suspect something like Causal would be better at bringing GiveWell and the community on board, but I agree that the formal coding and type checking involved in your approach is also very important.

Maybe some way to integrate these? (DM me, I’m having conversations with a few ppl in this space)

Comment by david_reinstein on [Creative Writing Contest][Fiction] Do Good Better · 2022-01-02T17:21:30.156Z · EA · GW

Thanks for the great story ... you are now in audio too, on the EA Forum podcast.

Comment by david_reinstein on What's wrong with the EA-aligned research pipeline? · 2022-01-02T00:47:26.270Z · EA · GW

I just listened to this again on the EA Forum podcast https://open.spotify.com/episode/6i2pVhJIF0wrF2OPFjaL2n?si=e3jTibMfRY6G9r99zveFzA, having only skimmed the written version. Somehow I got more out of the audio.

Anyways, I want to add a few impressions.

I think there is some underemphasis on the extent to which “regular” researchers are doing, or could be induced to do, things that are in fact very closely aligned with EA research priorities. Why do people get into research and academia (other than amenities, prestige, and to pay the bills)?

Some key reasons are (1) love of knowledge and of intellectual puzzles, and (2) the desire to make the world a better place through research. That, at least, was my impression of why people go into areas like economics, most social sciences, philosophy, and biology. Some of these researchers may have more parochial concerns, or non-utilitarian values (e.g., social justice), and may not be fully convinced by the desire to maximise the long-term good for people and sentient beings everywhere. However, I think academics and researchers tend to be much further down this road than the general public. Perhaps an interesting project (for my team and fellow travellers) could involve “convincing and communicating with academics/researchers”.

I think we can do more to leverage and “convert” research and researchers who are not affiliated with EA (yet).

Comment by david_reinstein on Two Podcast Opportunities · 2022-01-01T18:16:31.621Z · EA · GW

Also, on the Toby Ord episode, I've found a way to strip most/all of my comments from previous episodes. I've done this and posted a new version here ... (and on all podcatchers).

This is an edited repost of a previous reading (reader David Reinstein); I removed my commentary so you can listen to the full essay uninterrupted.

Let me know what you think. I may do this for other prior episodes (remove the comments), if there is interest. For future readings, I'm thinking of saving my comments for the end, if including them at all (maybe still including a few explainers if they don't distract from the flow) ... but if I were to do the MIRI thing, I probably wouldn't add any comments or even explainers, as I have little or no expertise there.

Comment by david_reinstein on david_reinstein's Shortform · 2022-01-01T18:13:58.358Z · EA · GW

Also, the source of consciousness ("which configurations of matter are conscious?") seems a bit different from moral status ("which configurations of matter do we care about?").

A paperclip maximiser could have consciousness; that doesn't have to mean we care much about it or are willing to sacrifice our lives to ensure its survival.

But why not? How do we justify that?

Basically I think humans just care about anything that looks similar to human beings. (Which makes sense evolutionarily.)

That may be what we do care about, but how can we justify that in terms of what we should care about?

Comment by david_reinstein on david_reinstein's Shortform · 2022-01-01T18:12:42.666Z · EA · GW

When you say "feeling", are you referring to conscious experience of the AI, or mechanistic positive and negative signals?

The former. The latter has no moral patienthood, I guess.

  • If consciousness, super-high uncertainty on what consciousness even is, what the correct ontology for it is. But can be discussed.

I've been reading more about this, and I realize there is great disagreement.

If positive and negative reward signals, then AI today already runs based on positive and negative reward signals as you mention.

Of course, but their 'conscious experience' of these signals need not agree with how they are coded in the algorithm. They could 'feel pain from maximizing' ... we just don't know.

Comment by david_reinstein on The value of money going to different groups · 2022-01-01T18:10:06.016Z · EA · GW

I've put up another version of this on the EA Forum podcast without any commentary here

Comment by david_reinstein on Creative Writing Contest: The Winning Entries · 2022-01-01T05:04:12.384Z · EA · GW

Oops, some audio issues -- a silence gap. It should be fixed now.

Comment by david_reinstein on Two Podcast Opportunities · 2022-01-01T03:42:41.698Z · EA · GW

To clarify, I would not take any funding for this myself (conflict of interest), but I think there are a good set of readers/audio editors in our group who should be compensated for their time.

Comment by david_reinstein on Two Podcast Opportunities · 2021-12-31T22:13:40.960Z · EA · GW

The commenting (including clarifications) and the audio switching were all my doing, I think; I don't think any of the other readers did this. I tried to denote these episodes separately. For the Toby Ord episode I had a lot to say and some knowledge in this area, and I wanted to get my thoughts out there. So it was a “different sort of thing”.

For some other readings I commented less, or almost not at all.

The left/right thing was meant to distinguish between the original text and the comments. Perhaps there is some better way of doing this?

I can definitely see the argument for “reading without commenting” in many cases, or for putting these separate.

For me it was more like “I want to read and comment on these and I thought that while doing so it would be worth posting them”.

Anyways, the main point is that the EA Forum podcast could probably handle doing these, or at least some of them, and we wouldn't add commentary ... at least if the funding were there, I think we could get these done.

Comment by david_reinstein on Two Podcast Opportunities · 2021-12-30T20:23:28.681Z · EA · GW

Michael -- we have a bunch of the infrastructure in place for this, at least an Airtable system and an Anchor-hosted podcast... might be worth linking arms here ... with the EA Forum podcast

Shared airtable

signup to read or edit form

Comment by david_reinstein on Two Podcast Opportunities · 2021-12-30T20:20:05.472Z · EA · GW

I've already started on this for the EA Forum podcast.

Finm -- let's join forces here? Maybe we cross-post on both podcasts?

Here's the first story I recorded

If anyone wants to be a reader (authors could be great readers too!), they can ... fill out this airtable-linked form

... or just add their info to the shared airtable itself and 'claim' a story (or other EA Forum post) to read, in the forum_posts_episodes table (minimal_entry view if you want to be quick)

Comment by david_reinstein on Creative Writing Contest: The Winning Entries · 2021-12-30T20:14:04.626Z · EA · GW

I hope we can have narrations of these stories; that would be great! (Authors: if you want to read your own work, I could help post/edit it?)

I read the winning entry into the EA Forum podcast here ... Spotify link here.

I hope more are forthcoming.

Comment by david_reinstein on The Unweaving of a Beautiful Thing · 2021-12-30T20:11:30.299Z · EA · GW

Updated: the audio was a bit quiet, so I boosted the volume.

Comment by david_reinstein on Have you considered switching countries to save money? · 2021-12-30T20:10:22.025Z · EA · GW

My impression is that Spain, and perhaps the Canary Islands in particular, has a very good combination of quality of life, amenities, and safety, with a relatively low cost of living. But perhaps taxes are higher than elsewhere? I'm not sure.

I'm curious what other places people would consider. Having a good hub of interesting people around could make a big difference.

Comment by david_reinstein on EA Market Testing · 2021-12-30T19:50:33.393Z · EA · GW

Update: We received 10 responses to the survey. Thanks very much to all who responded. I intend to report on this soon.

Comment by david_reinstein on The Unweaving of a Beautiful Thing · 2021-12-30T02:11:41.650Z · EA · GW

I read this story into the EA Forum podcast HERE

Comment by david_reinstein on Seeing the effects of your donation and making incremental choices · 2021-12-24T21:40:48.454Z · EA · GW

Audio fans/visual foes: I have now read this on my podcast here

Fwiw, I read my essay on the Economics of the Gift in my podcast HERE

Comment by david_reinstein on Effective Altruism is a Question (not an ideology) · 2021-12-24T01:15:06.040Z · EA · GW

Can you elaborate on where or how it conflates the 'is' and the 'ought'?

Comment by david_reinstein on Lizka's Shortform · 2021-12-24T01:11:47.381Z · EA · GW

Note that tools like Causal and Guesstimate make including uncertainty pretty easy and transparent.

I really think there's a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that's being entertained is worth the time

I agree, but making uncertainty explicit makes it even better. (And I think it's an important epistemic/numeracy thing to cultivate and encourage.) So I think if you are giving a workshop, you should make this part of it, at least to some extent.

Comment by david_reinstein on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2021-12-24T01:08:23.408Z · EA · GW

Further opinions/endorsement ...

I think "this approach" (making uncertainty explicit) is important, necessary, and correct...

I'd pair it with "letting the user specify parameters/distributions over moral uncertainty things" (and perhaps even subjective beliefs about different types of evidence).

I think (epistemic basis -- mostly gut feeling) it will likely make a difference in how charities and interventions rank against each other. At first pass, it may lead to 'basically the same ranking' (or at least, not a strong change). But I suspect that if it is made part of a longer-term careful practice, some things will switch order, and this is meaningful.

It will also enable evaluation of a wider set of charities/interventions. If we make uncertainty explicit, we can feel more comfortable evaluating cases where there is much less empirical evidence.

So I think 'some organization' should be doing this, and I expect this will happen soon, whether that is GiveWell doing it or someone else.

Comment by david_reinstein on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2021-12-20T01:05:09.371Z · EA · GW

By the way, I added a few comments and suggestions here and on your blog using hypothes.is, a little browser plugin. I like to do that, as you can add comments (even small ones) as highlights/annotations directly in the text.

Comment by david_reinstein on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2021-12-20T01:04:16.704Z · EA · GW

How did this go / how is this going? I'm chatting with Taimur of Causal tomorrow and wanted to bring this up.

Comment by david_reinstein on High School Seniors React to 80k Advice · 2021-12-18T20:01:27.897Z · EA · GW

Related to this, I am wondering about the extent to which (I'm being slightly hyperbolic here)

  • they accepted you had won the argument logically, but were looking for ways to recover and marshal another rhetorical attack, and/or
  • you 'browbeat them' into thinking 'it's easier to agree with this'.

I'd be very curious to know whether these students really follow up on this ... whether they would take time-consuming/costly steps to pursue and learn about impactful careers, on their own time, after you leave the room, in the future.

Comment by david_reinstein on Listen to more EA content with The Nonlinear Library · 2021-12-17T23:39:00.807Z · EA · GW

Why, because it’s a nonhuman voice?

Comment by david_reinstein on Flimsy Pet Theories, Enormous Initiatives · 2021-12-17T22:36:01.627Z · EA · GW

It's just a judgement call. Something I thought seemed obvious to most people, but perhaps not so obvious.

As I noted below, my epistemic basis is only that when he first started spouting that “everything should be social” stuff, it looked like he was just mouthing the words.

Still, according to Encyclopedia Britannica:

it began at Harvard University in 2003 as Facemash, an online service for students to judge the attractiveness of their fellow students.

So it's pretty clear that the founding was not driven by a grand prosocial vision. Whether the later talk was 'greenwashing' or a 'realization that actually this can do lots of good' is perhaps less clear-cut.

Comment by david_reinstein on Listen to more EA content with The Nonlinear Library · 2021-12-17T18:17:21.975Z · EA · GW

Why have a machine-read version if there is already an AC10 podcast with a human reader?

Comment by david_reinstein on Listen to more EA content with The Nonlinear Library · 2021-12-17T18:16:51.230Z · EA · GW

Will you tag each EA Forum post that is added to the Nonlinear Library with the 'audio' tag? I want to make sure we don't duplicate these on the EA Forum Podcast, on Found in the Struce, or elsewhere.

Of course sometimes a human reading might be worth doing on top of a machine-read one... but probably better to first focus on spreading the audio widely.

Comment by david_reinstein on Do sour grapes apply to morality? · 2021-12-17T03:09:35.253Z · EA · GW

Are the bars standard errors or confidence intervals? In either case, they seem rather wide. While the theory seems plausible, I don't think you can make strong inferences from these data.

Comment by david_reinstein on Flimsy Pet Theories, Enormous Initiatives · 2021-12-15T23:31:27.174Z · EA · GW

Maybe our friend Mark believes it now, but if so, I think it's because he convinced himself via motivated reasoning. My epistemic basis: when he first started spouting that “everything should be social” stuff, it looked like he was just mouthing the words.

Comment by david_reinstein on Lizka's Shortform · 2021-12-15T13:34:04.319Z · EA · GW

I could do something more formal with confidence intervals and the like

I think this would be worth digging into. It can make a big difference, and it's a mode we should be moving towards IMO; this should be at the core of our teaching and learning materials. And there are ways of doing this that are not so challenging.

(Of course, maybe in this particular podcast example it is not so important, but in general I think it's VERY important.)

“Worst case all parameters” is very unlikely. So is “best case everything”.

See the book “How to Measure Anything” for a discussion. Also the Causal and Guesstimate apps.
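
A quick illustration of why “worst case all parameters” is so unlikely, under the (strong, simplifying) assumption that the parameters are independent:

```python
# With k independent parameters, the chance that ALL of them land in
# their worst-case decile at once shrinks geometrically. (Assumes
# independence -- correlated parameters would make joint bad
# outcomes more likely than this.)
for k in range(1, 6):
    print(f"{k} parameter(s) all below their 10th percentile: "
          f"{0.10**k:.0e}")
# 1 -> 1e-01, 2 -> 1e-02, ..., 5 -> 1e-05
```

This is exactly why simulating the full joint distribution (Monte Carlo) gives a more realistic picture than stacking worst cases.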

Comment by david_reinstein on Flimsy Pet Theories, Enormous Initiatives · 2021-12-13T22:24:34.604Z · EA · GW

Small suggestion: drop the Facebook example and find a better one. Facebook was obviously not founded out of a grand prosocial vision; that was a pretty clear case of “greenwashing” after the fact.