Posts

Open Phil EA/LT Survey 2020: Other Findings 2021-09-09T01:01:43.449Z
Open Phil EA/LT Survey 2020: How Our Respondents First Learned About EA/EA-Adjacent Ideas 2021-09-06T01:01:39.504Z
Open Phil EA/LT Survey 2020: EA Groups 2021-09-01T01:01:36.083Z
Open Phil EA/LT Survey 2020: What Helped and Hindered Our Respondents 2021-08-29T07:00:00.000Z
Open Phil EA/LT Survey 2020: Respondent Info 2021-08-24T17:32:46.082Z
Open Phil EA/LT Survey 2020: Methodology 2021-08-23T01:01:23.775Z
Open Phil EA/LT Survey 2020: Introduction & Summary of Takeaways 2021-08-19T01:01:19.503Z
How effective were vote trading schemes in the 2016 U.S. presidential election? 2020-03-02T23:15:39.321Z
Do impact certificates help if you're not sure your work is effective? 2020-02-12T14:13:25.689Z
What analysis has been done of space colonization as a cause area? 2019-10-09T20:33:27.473Z
What actions would obviously decrease x-risk? 2019-10-06T21:00:24.025Z
How effective is household recycling? 2019-08-29T06:13:46.296Z
What is the current best estimate of the cumulative elasticity of chicken? 2019-05-03T03:27:57.603Z
Confused about AI research as a means of addressing AI risk 2019-02-21T00:07:36.390Z
[Offer, Paid] Help me estimate the social impact of the startup I work for. 2019-01-03T05:16:48.710Z

Comments

Comment by reallyeli on Open Phil EA/LT Survey 2020: Introduction & Summary of Takeaways · 2021-08-20T03:41:09.462Z · EA · GW

Ah, glad this seems valuable! : )

Comment by reallyeli on EA Survey 2020: How People Get Involved in EA · 2021-07-16T17:36:20.127Z · EA · GW

Sorry, I neglected to say thank you for this previously!

Comment by reallyeli on Linch's Shortform · 2021-07-13T20:50:41.545Z · EA · GW

This idea sounds really cool. Brainstorming: a variant could be several people red teaming the same paper and not conferring until the end.

Comment by reallyeli on The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) · 2021-07-06T00:51:41.609Z · EA · GW

Viewership as in YouTube viewers? Where are you getting that stat from?

Comment by reallyeli on The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) · 2021-07-05T07:37:46.082Z · EA · GW

Looks like this already happened, in March 2020: https://lexfridman.com/william-macaskill/

Comment by reallyeli on EA Survey 2020: How People Get Involved in EA · 2021-06-08T02:57:21.701Z · EA · GW

It looks like Sam Harris interviewed Will MacAskill this year. He also interviewed Will in 2016. How might we tell if the previous interview created a similar number of new EA-survey-takers, or if this year's was particularly successful? The data from that year https://forum.effectivealtruism.org/posts/Cyuq6Yyp5bcpPfRuN/ea-survey-2017-series-how-do-people-get-into-ea doesn't seem to include a "podcast" option.

Comment by reallyeli on Buck's Shortform · 2021-06-06T19:26:00.915Z · EA · GW

My quick take is that this sounds like a pretty good bet, mostly for the indirect effects. You could do it with a 'contest' framing instead of an 'I pay you to produce book reviews' framing; I don't know whether that's meaningfully better.

Comment by reallyeli on Is there evidence that recommender systems are changing users' preferences? · 2021-05-27T21:34:06.535Z · EA · GW

Yeah, I agree this is unclear. But, staying away from the word 'intention' entirely, I think we can & should still ask: what is the best explanation for why this model is the one that minimizes the loss function during training? Does that explanation involve this argument about changing user preferences, or not?

One concrete experiment that could feed into this: if it were the case that feeding users extreme political content did not cause their views to become more predictable, would training select a model that didn't feed people as much extreme political content? I'd guess training would select the same model anyway, because extreme political content gets clicks in the short term too. (But I might be wrong.)

Comment by reallyeli on EA Survey 2020: Demographics · 2021-05-13T08:32:28.210Z · EA · GW

I was surprised to see that this Gallup poll found no difference between college graduates and college nongraduates (in the US).

Comment by reallyeli on EA Survey 2020: Demographics · 2021-05-13T08:28:09.493Z · EA · GW

Younger people and more liberal people are much more likely to identify as not-straight, and EAs are generally young and liberal. I wonder how far this goes toward explaining the difference, which does need a lot of explaining, since it's so big. Some stats on this (in the US).

Comment by reallyeli on Draft report on existential risk from power-seeking AI · 2021-05-13T06:20:11.157Z · EA · GW

Thanks for this work!

I'm wondering about "crazy teenager builds misaligned APS system in a basement" scenarios and to what extent you see the considerations in this report as bearing on those.

To be a bit more precise: I'm thinking about worlds where "alignment is easy" for society at large (i.e. your claim 3 is not true), but building powerful AI is feasible even for people who are not interested in taking the slightest precautions, even those that ordinary self-interest would recommend. I'm thinking mostly of individuals or small groups rather than organizations.

I think these scenarios are distinct from misuse scenarios (which you mention below your report is not intended to cover), though the line is blurry. If someone who wanted to see enormous damage to the world built an AI with the intent of causing such damage, and was successful, I'd call that "misuse." But I'm interested more in "crazy" than "omnicidal" here, where I don't think it's clear whether to call this "misuse" or not.

Maybe you see this as a pretty separate type of worry than what the report is intended to cover.

Comment by reallyeli on I am Seth Baum, AMA! · 2021-04-22T01:36:18.919Z · EA · GW

Well, I guess he did say you could ask him anything.

Comment by reallyeli on Is there evidence that recommender systems are changing users' preferences? · 2021-04-14T07:26:23.611Z · EA · GW

From reading the summary in this post, it doesn't look like the YouTube video discussed bears on the question of whether the algorithm is radicalizing people 'intentionally,' which I take to be the interesting part of Russell's claim.

Comment by reallyeli on Is there evidence that recommender systems are changing users' preferences? · 2021-04-13T06:17:32.182Z · EA · GW

I just don't think we've seen anything that favors the hypothesis "algorithm 'intentionally' radicalizes people in order to get more clicks from them in the long run" over the hypothesis "algorithm shows people what they will click on the most (which is often extreme political content, and this causes them to become more radical, in a self-reinforcing cycle)."

Comment by reallyeli on Is there evidence that recommender systems are changing users' preferences? · 2021-04-12T19:51:10.615Z · EA · GW

I think that experiment wouldn't prove anything about the algorithm's "intentions," which seem to be the interesting part of the claim. One experiment that maybe would (I have no idea if this is practical) is giving the algorithm the chance to recommend one of two pieces of content: a) one with a high likelihood of being clicked on, or b) one with a lower likelihood of being clicked on, but which makes the people who do click on it more polarized. Not sure whether a natural example of such a piece of content exists.

Comment by reallyeli on Is there evidence that recommender systems are changing users' preferences? · 2021-04-12T19:47:35.434Z · EA · GW

Good question. I'm not sure why you'd privilege Russell's explanation over the explanation "people click on extreme political content, so the click-maximizing algorithm feeds them extreme political content."

Comment by reallyeli on How much does performance differ between people? · 2021-03-26T07:03:23.012Z · EA · GW

Agreed. The slight initial edge that drives the eventual enormous success in the winner-takes-most market can also be provided by something other than talent — that is, by something other than people trying to do things and succeeding at what they tried to do. For example, the success of Fifty Shades of Grey seems best explained by luck.

Comment by reallyeli on What drew me to EA: Reflections on EA as relief, growth, and community · 2021-03-26T06:49:40.876Z · EA · GW

The "EA as relief" framing resonated with me (though my background is different) and I appreciate your naming it!

Comment by reallyeli on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-19T07:18:27.472Z · EA · GW

"There is a genius for impoverishment always at work in the world. And it has its way, as if its proceedings were not only necessary but even sensible. Its rationale, its battle cry, is Competition."

— Marilynne Robinson

Comment by reallyeli on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-19T07:10:03.864Z · EA · GW

Strong upvote for Month of May.

Comment by reallyeli on The Vegan Value Asymmetry and its Consequences · 2020-10-26T01:58:37.968Z · EA · GW

To the extent that reducing demand for chicken prevents or delays the slaughtering of existing chickens, I don't see why there is an asymmetry. I place positive value on chickens living their chicken lives (when those lives are net-positive, whatever that means). Go beyond that and you get into population ethics.

But more importantly, I think this post uses the term "good action" strictly to mean "action which has positive expected value," while the common usage of "good" is broader and can include actions which are merely less negative than an alternative.

Comment by reallyeli on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-14T08:34:14.469Z · EA · GW

I don't think the focus here should be only on suffering. Sometimes, I seek out art/media that depicts human flourishing, out of a desire to increase my altruistic motivation by reminding myself just what it is that we're working to protect + create.

Obviously a ton of art/media contains "people being happy," but when I'm looking for this, I look specifically for depictions of people who are very different from each other and from me, ones that show these people as unique and weird and not at all how you thought they would be. Good examples are the TV show High Maintenance and the documentary In Jackson Heights. It's a certain aesthetic that increases my altruistic motivation because it reminds me, by showing me more of it than I normally see, of what a vast expanse human experience really is.

(For animals, it's more socially acceptable to just watch them intently for long periods of time.)

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-05-27T02:49:29.697Z · EA · GW

I suppose an example would be that increasing economic growth in a country doesn't matter if the country later gets blown up or something.

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-17T17:42:05.081Z · EA · GW
Like how would I know if the world was more absorber-y or more sensitive to small changes?

I'm not sure; that's a pretty interesting question.

Here's a tentative idea: using the evolution of brains, we can conclude that whatever sensitivity the world has to small changes, it can't show up *too* quickly. You could imagine a totally chaotic world, where the whole state at time t+(1 second) is radically different depending on minute variations in the state at time t. Building models of such a world that were useful on 1 second timescales would be impossible. But brains are devices for modelling the world that are useful on 1 second timescales. Brains evolved; hence they conferred some evolutionary advantage. Hence we don't live in this totally chaotic world; the world must be less chaotic than that.

It seems like this argument gets less strong the longer your timescales are, as our brains perhaps faced less evolutionary pressure to be good at prediction on timescales of like 1 year, and still less to be good at prediction on timescales of 100 years. But I'm not sure; I'd like to think about this more.

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-17T17:29:28.667Z · EA · GW

Hey, glad this was helpful! : )

To apply this to conception events - imagine we changed conception events so that girls were much more likely to be conceived than boys (say because in the near-term that had some good effects eg. say women tended to be happier at the time). My intuition here is that there could be long-term effects of indeterminate sign (eg. from increased/decreased population growth) which might dominate the near-term effects. Does that match your intuition?

Yes, that matches my intuition. This action creates a sweeping change in a really complex system; I would be surprised if there were no unexpected effects.

But I don't see why we should believe all actions are like this. I'm raising the "long-term effects don't persist" objection, arguing that it seems true of *some* actions.

Comment by reallyeli on What would a pre-mortem for the long-termist project look like? · 2020-04-13T03:33:35.713Z · EA · GW

Makes sense!

Comment by reallyeli on What would a pre-mortem for the long-termist project look like? · 2020-04-12T14:39:36.439Z · EA · GW
I'd maybe give a 10% probability to long-termism just being wrong.

What could you observe that would cause you to think that longtermism is wrong? (I ask out of interest; I think it's a subtle question.)

Comment by reallyeli on What are some historical examples of people and organizations who've influenced people to do more good? · 2020-04-12T14:28:32.070Z · EA · GW

Florence Nightingale? Martin Luther King Jr.? Leaders of social movements? It seems to me that a lot of "standard examples of good people" are like this; did you have something else in mind?

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-12T14:24:10.456Z · EA · GW

Sweet links, thanks!

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-10T02:47:34.391Z · EA · GW

(Focusing on a subtopic of yours, rather than engaging with the entire argument.)

All actions we take have huge effects on the future. One way of seeing this is by considering identity-altering actions. Imagine that I pass my friend on the street and I stop to chat. She and I will now be on a different trajectory than we would have been otherwise. We will interact with different people, at a different time, in a different place, or in a different way than if we hadn’t paused. This will eventually change the circumstances of a conception event such that a different person will now be born because we paused to speak on the street.

I'm not so sure "all actions we take have huge effects on the future." It seems like a pretty interesting empirical question. I don't find this analogy supremely convincing; it seems that life contains both "absorbers" and "amplifiers" of randomness, and I'm not sure which are more common.

In your example, I stop to chat with my friend vs. not doing so. But then I just go to my job, where I'm not meeting any new people. Maybe I always just slack off until my 9:30am meeting, so it doesn't matter whether I arrive at 9am or at 9:10am after stopping to chat. I just read the Internet for ten more minutes. It looks like there's an "absorber" here.

Re: conception events — I've noticed that discussion of this topic tends to use conception as a stock example of an amplifier. (I'm thinking of Tyler Cowen's Stubborn Attachments.) Notably, it's an empirical fact that conception works that way (e.g. with many sperm, all with different genomes, competing to fertilize the same egg). If conception did not work that way, would we lower our belief in "all actions we take have huge effects on the future"? What sort of evidence would cause us to lower our beliefs in that?

Now, when the person who is conceived takes actions, I will be causally responsible for those actions and their effects. I am also causally responsible for all the effects flowing from those effects.

Sure, but what about the counterfactual? How much does it matter to the wider world what this person's traits are like? You want JFK to be patient and levelheaded, so he can handle the Cuban Missile Crisis. JFK's traits seem to matter. But most people aren't JFK.

You might also have "absorbers," in the form of selection effects, operating even in the JFK case. If we've set up a great political system such that the only people who can become President are patient and levelheaded, it matters not at all whether JFK in particular has those traits.

Looking at history with my layman's eyes, it seems like JFK was groomed to be president by virtue of his birth, so it did actually matter what he was like. At the extreme of this, kings seem pretty high-variance. So affecting the conception of a king matters. But now what we're doing looks more like ordinary cause prioritization.

Comment by reallyeli on What is the average EA salary? · 2020-04-09T18:03:13.513Z · EA · GW

I don't know — sounds like you might have stronger views on this than me! : )

Comment by reallyeli on What is the average EA salary? · 2020-04-05T06:18:11.528Z · EA · GW

This is going to vary a lot because there's not a "typical EA organization" — salary is determined in large part by what the market rate for a position is, so I'd expect e.g. a software engineer at an EA organization to be paid about the same as a software engineer at any other organization.

Is there a more specific version of your question to ask? Why do you want to know / what's the context?

Comment by reallyeli on Effective Altruism and Free Riding · 2020-04-02T16:10:02.497Z · EA · GW

Gotcha. So your main concern is not that EA defecting will make us miss out on good stuff that we could have gotten via the climate change movement deciding to help us on our goals, but rather that it might be bad if EA-type thinking became very popular?

Comment by reallyeli on Effective Altruism and Free Riding · 2020-04-01T23:37:36.305Z · EA · GW

I don't buy your example on 80k's advice re: climate change. You want to cooperate in prisoner's dilemmas if you think that it will cause the agent you are cooperating with to cooperate more with you in the future. So there needs to be a) another coherent agent, which b) notices your actions, c) takes actions in response to yours, and d) might plausibly cooperate with you in the future. In the climate change case, what is the agent you'd be cooperating with, and does it meet these criteria?

Is it the climate change movement? It doesn't seem to me that "the climate change movement" is enough of a coherent agent to do things like decide "let's help EA with their goals."

Or is it individual people who care about climate change? Are they able to help you with your goals? What is it you want from them?

Comment by reallyeli on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-20T16:35:58.274Z · EA · GW

I'm interested in the $10 million per minute number. What is the model? Is that for the whole world?

Quick check: U.S. GNP for one year is roughly $2 × 10^13 (source: https://www.google.com/search?q=us+gnp), $10 million = $10^7, and there are about 5 × 10^5 minutes in a year, so $10^7 per minute comes to roughly $5 × 10^12 per year, i.e. a sizable fraction of the entire US economy's annual output.
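For concreteness, here is a minimal sketch of that Fermi estimate in Python; the GNP figure is an assumed round number (approximately the recent US total), not something taken from the original post:

```python
# Rough Fermi check on the "$10 million per minute" figure.
cost_per_minute = 1e7                  # $10 million per minute
minutes_per_year = 365 * 24 * 60       # about 5.3e5 minutes

annual_cost = cost_per_minute * minutes_per_year   # about $5.3e12 per year

us_gnp_per_year = 2e13                 # assumed: roughly $20 trillion per year
fraction_of_economy = annual_cost / us_gnp_per_year

print(f"Implied annual cost: ${annual_cost:.1e}")
print(f"Fraction of US GNP:  {fraction_of_economy:.0%}")
```

Run as written, this gives an implied annual cost of about $5 trillion, roughly a quarter of the assumed GNP.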

Comment by reallyeli on How effective were vote trading schemes in the 2016 U.S. presidential election? · 2020-03-04T04:17:57.509Z · EA · GW

Sweet, better than I could have hoped for!

Any sense of what organizations/people are working on it this year? I wasn't able to find an email address for Steve Hull so I posted an issue — https://github.com/sdhull/strategic_voting/issues/20 — no response yet.

I'll also contact Ben.

Comment by reallyeli on How effective were vote trading schemes in the 2016 U.S. presidential election? · 2020-03-03T04:55:24.051Z · EA · GW

Thanks. I realized too late that it should have been a Question — was there a way for me to upgrade it myself after posting?

Comment by reallyeli on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T14:19:03.977Z · EA · GW

Thanks for the pointer to "independence of irrelevant alternatives."

I'm curious to know how you think about "some normative weight." I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?

Comment by reallyeli on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-22T04:15:31.664Z · EA · GW

Link to discussion on Facebook: https://www.facebook.com/groups/eahangout/permalink/2845485492205023/

Comment by reallyeli on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T17:00:22.215Z · EA · GW

I think this math is interesting, and I appreciate the good pedagogy here. But I don't think this type of reasoning is relevant to my effective altruism (defined as "figuring out how to do the most good"). In particular, I disagree that this is an "argument for utilitarianism" in the sense that it has the potential to convince me to donate to cause A instead of donating to cause B.

(I really do mean "me" and "my" in that sentence; other people may find that this argument can indeed convince them of this, and that's a fact about them I have no quarrel with. I'm posting this because I just want to put a signpost saying "some people in EA believe this," in case others feel the same way.)

Following Richard Ngo's post https://forum.effectivealtruism.org/posts/TqCDCkp2ZosCiS3FB/arguments-for-moral-indefinability, I don't think that human moral preferences can be made free of contradiction. Although I don't like contradictions and I don't want to have them, I also don't like things like the repugnant conclusion, and I'm not sure why the distaste towards contradictions should be the one that always triumphs.

Since VNM-rationality is based on transitive preferences, and I disagree that human preferences can or "should" be transitive, I interpret things like this as having no normative weight.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-21T16:29:49.447Z · EA · GW

What is meant by "not my problem"? My understanding is that it means "what I care about is no better off if I worry about this thing than if I don't." Hence the analogy to salary: if all I care about is $$, then getting paid in Facebook stock means that my utility is the same whether or not I worry about the value of Google stock.

It sounds like you're saying that, if I'm working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is "not my problem" in this sense. Here obviously I care about things other than $$.

This doesn't seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem in that it might indeed affect things I care about, for example because I might go work somewhere else.

Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems like it just obliterates all of my objections, which are arguments based on counterfactual impact. They become arguments at the wrong level. If the market is not robust, that means more certificates for me, *which is definitionally good*.

So this is an argument that if everyone collectively agrees to change their incentives, we'd get more counterfactual impact in the long run. I think my main objection is not about this as an end state — not that I'm sure I agree with that, I just haven't thought about it much in isolation — but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-15T23:51:46.456Z · EA · GW

I'm saying we need to specify more than "The chance that the full stack of individual propositions evaluates as true in the relevant direction." I'm not sure if we're disagreeing, or ... ?

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-15T02:48:09.832Z · EA · GW

Suppose you're in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?

There are a few different operationalizations of that. For example, you could ask whether your work obviously directly saved the world, or you could ask whether, if you could go back and do it over again knowing what you know now, you would still work in AI safety.

The percentage would be different depending on what you mean. I suspect Gordon and Buck might have different operationalizations in mind, and I suspect that's why Buck's number seems crazy high to Gordon.

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-13T22:40:48.527Z · EA · GW

I agree with this intuition. I suspect the question that needs to be asked is "14% chance of what?"

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T22:34:08.242Z · EA · GW

I'm deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, and I update in favor of A being effective. But unbeknownst to me, those people don't actually think work at A is effective; they trade their impact certificates to other folks who do. I don't know these other folks.

Based on the theory that it's important to know who you're trusting, this is bad.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T22:29:45.121Z · EA · GW

"The sense in which employees are deferring to their employer's views on what to do" sounds fine to me, that's all I meant to say.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T17:00:03.593Z · EA · GW

Sure, I agree that if they're anonymous forever you can't do much. But that was just the generating context; I'm not arguing only against anonymity.

I'm arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it's important to know you're doing that and to have a decent understanding of why you differ.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T00:28:18.286Z · EA · GW

I agree with this. I wasn't trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than 'means' and 'ends' here.)

I think you've clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than to attempt to update each other may leave updates on the table. That's the situation I'm concerned about.

For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).

Comment by reallyeli on Short-Term AI Alignment as a Priority Cause · 2020-02-12T16:55:39.871Z · EA · GW

I think this is an interesting topic. However, I downvoted because if you're going to claim something is the "greatest priority cause," which is quite a claim, I would at least want to see an analysis of how it fares against other causes on scale, tractability, and neglectedness.

(Basically I agree with MichaelStJules's comment, except I think the analysis need not be quantitative.)

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-12T15:59:08.749Z · EA · GW

Hmm, your first paragraph is indeed a different perspective from the one I had. Thanks! I remain unconvinced, though.

Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I both have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than I trust my own? (Especially since presumably I work there and have access to more information about it than they do.)

Some reasons I can imagine:

- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I'd want to be clear that I'm deferring.

- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I'd want a robust market, probably better than PredictIt.