Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages 2022-08-23T23:52:31.088Z
[AMA] Open Philanthropy is still seeking proposals for outreach and community-building projects 2022-08-17T04:01:45.117Z
reallyeli's Shortform 2022-04-23T23:26:26.331Z
Open Phil’s longtermist EA movement-building team is hiring 2022-02-25T21:43:02.788Z
How does the simulation hypothesis deal with the 'problem of the dust'? 2021-11-16T08:37:38.452Z
Open Phil EA/LT Survey 2020: Other Findings 2021-09-09T01:01:43.449Z
Open Phil EA/LT Survey 2020: How Our Respondents First Learned About EA/EA-Adjacent Ideas 2021-09-06T01:01:39.504Z
Open Phil EA/LT Survey 2020: EA Groups 2021-09-01T01:01:36.083Z
Open Phil EA/LT Survey 2020: What Helped and Hindered Our Respondents 2021-08-29T07:00:00.000Z
Open Phil EA/LT Survey 2020: Respondent Info 2021-08-24T17:32:46.082Z
Open Phil EA/LT Survey 2020: Methodology 2021-08-23T01:01:23.775Z
Open Phil EA/LT Survey 2020: Introduction & Summary of Takeaways 2021-08-19T01:01:19.503Z
How effective were vote trading schemes in the 2016 U.S. presidential election? 2020-03-02T23:15:39.321Z
Do impact certificates help if you're not sure your work is effective? 2020-02-12T14:13:25.689Z
What analysis has been done of space colonization as a cause area? 2019-10-09T20:33:27.473Z
What actions would obviously decrease x-risk? 2019-10-06T21:00:24.025Z
How effective is household recycling? 2019-08-29T06:13:46.296Z
What is the current best estimate of the cumulative elasticity of chicken? 2019-05-03T03:27:57.603Z
Confused about AI research as a means of addressing AI risk 2019-02-21T00:07:36.390Z
[Offer, Paid] Help me estimate the social impact of the startup I work for. 2019-01-03T05:16:48.710Z


Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-09-14T22:12:24.967Z · EA · GW

We've been paying people based on time spent, rather than by word. The amounts are based on our assessment of online market rates for high-quality freelance translators for the language in question, though my guess is this will be more attractive than typical freelance translation because it's a source of steady work for a long period of time (e.g. 6 months).

Comment by Eli Rose (reallyeli) on Contra Appiah Defending Defending "Climate Villains" · 2022-09-12T15:49:37.414Z · EA · GW

Have you considered writing a letter to the editor? I think actual worked examples of naive consequentialism failing are kind of rare and cool for people to see.

Comment by Eli Rose (reallyeli) on Say “nay!” to the Bay (as the default)! · 2022-09-07T22:00:39.889Z · EA · GW

Hmm yeah, I went East Coast --> Bay and I somewhat miss the irony.

Comment by Eli Rose (reallyeli) on [AMA] Open Philanthropy is still seeking proposals for outreach and community-building projects · 2022-09-01T00:44:18.366Z · EA · GW
  1. We're interested in increasing the diversity of the longtermist community along many different axes. It's hard to give a unified 'strategy' at this abstract level, but one thing we've been particularly excited about recently is outreach in non-Western and non-English-speaking countries.

  2. Yes, you can apply for a grant under these circumstances. It's possible that we'll ask you to come back once more aspects of the plan are figured out, but we have no hard rules about that. And yes, it's possible to apply for funding conditional on some event and later return the money/adjust the amount you want downwards if the event doesn't happen.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T22:06:07.890Z · EA · GW

I'll stand by the title here. I think a bilingual person without specific training in translation can have good taste in determining whether or not a given translation is high-quality. These seem like distinct skills, e.g. in English I'm able to recognize a work badly translated from French even if I don't speak French and couldn't produce a better one. And having good taste seems like the most important skill for someone who is vetting and contracting with professional translators.

Separately, I also think that many (but not all) bilingual people without specific training in translation can themselves do good translation work. The results of our pilot project moved me towards this view (from a prior position that put a decent amount of weight on it).

As a high-level note, I see the goal here as enabling people to engage with EA ideas where they couldn't before. It's important that quality be high enough that the ideas are transmitted with good fidelity. But I don't think we need to adhere to an extremely high and rigorous standard of the type one might have when translating a literary work, e.g. I don't think we need translations to read so fluently that one forgets the material was originally written in English. I think this work is urgent and important, and I think the opportunity costs of imposing that kind of standard would be significant.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T22:00:47.563Z · EA · GW

Hi Zakariyau. This seems like it definitely meets the criterion of being a language with >5m speakers — I don't have the context, but I don't think English being the official language would be a barrier of any kind.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T21:31:41.761Z · EA · GW

Unfortunately I think this kind of experimental approach is a bad fit here; opportunity costs seem really high, there's a small number of data points, and there's a ton of noise from other factors that language communities vary along.

Fortunately I think we'll have additional context that will help us assess the impacts of these grants beyond a black-box "did this input lead to this output" analysis.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T21:24:30.609Z · EA · GW

Hi Nathan — I think that probably wouldn't make sense in this case, as I think it's important for the person leading a given translation project to understand EA and related ideas well, even if translators they hire do not.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T21:19:54.045Z · EA · GW

Yep, this list isn't intended to rule anything out. We'd certainly be interested in getting applications from people who want to get content translated into Hindi or other Indian languages.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T18:21:14.496Z · EA · GW

Ah, that's my bad — thanks, fixed.

Comment by Eli Rose (reallyeli) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T18:17:10.574Z · EA · GW

Thanks, really appreciate the concrete suggestion! This seems like a good lead for anyone who wants to supervise Polish translation.

Comment by Eli Rose (reallyeli) on [AMA] Open Philanthropy is still seeking proposals for outreach and community-building projects · 2022-08-19T17:30:48.967Z · EA · GW

Cool, looking forward to talking about these.

Comment by Eli Rose (reallyeli) on Could realistic depictions of catastrophic AI risks effectively reduce said risks? · 2022-08-18T05:27:24.480Z · EA · GW

I think this Wikipedia claim is from Reagan's autobiography. But according to The Dead Hand, written by a third-party historian, Reagan was already very concerned about nuclear war by this time, and had been at least since his campaign in 1980. It's pretty interesting — apparently this concern led both to his interest in nuclear weapon abolition (which he mostly didn't talk about) and to his unrealistic and harmful missile defense plans.

So according to this book, The Day After wasn't actually any kind of turning point.

Comment by Eli Rose (reallyeli) on [AMA] Open Philanthropy is still seeking proposals for outreach and community-building projects · 2022-08-17T19:01:38.719Z · EA · GW

The answer is yes, I can think of some projects in this general area that sound good to me. I'd encourage you to email me or sign up to talk to me about your ideas and we can go from there. As is always the case, a lot rides on further specifics about the project — i.e. just the bare fact that something is focused on mid-career professionals in tech doesn't give me a lot of info about whether it's something we'd want to fund or not.

Comment by Eli Rose (reallyeli) on 80,000 Hours is hiring for a marketer · 2022-08-09T17:28:39.231Z · EA · GW

(I work at Open Phil on community-building grantmaking.)

This role seems quite high-impact to me and I'd encourage anyone on the fence to apply. Our 2020 survey leads me to believe that 80k has been very impactful in terms of contributing to the trajectories of people who are now doing important longtermist work. I think good marketing work could significantly increase the number of people that 80k reaches, and the impact of doing this quickly and well seems competitive with a lot of other community-building work to me — one reason for this is that I think one digital marketer can effectively deploy a lot of funding.

Comment by Eli Rose (reallyeli) on Why EAs are skeptical about AI Safety · 2022-07-20T07:04:00.327Z · EA · GW

Is there an equally high level of expert consensus on the existential risks posed by AI?

There isn't. I think a strange but true and important fact about the problem is that it just isn't a field of study in the same way e.g. climate science is — as argued in this Cold Takes post. So it's unclear who the relevant "experts" should be. Technical AI researchers are maybe the best choice, but they're still not a good one; they're in the business of making progress locally, not forecasting what progress will be globally and what effects that will have.

Comment by Eli Rose (reallyeli) on A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] · 2022-07-20T05:42:48.746Z · EA · GW


Comment by Eli Rose (reallyeli) on How good is The Humane League compared to the Against Malaria Foundation? · 2022-06-17T21:53:16.443Z · EA · GW


Comment by Eli Rose (reallyeli) on Mid-career people: strongly consider switching to EA work · 2022-05-01T07:12:53.166Z · EA · GW

I think this is a good question and there are a few answers to it.

One is that many of these jobs only look like they check the "improving the world" box if you have fairly unusual views. There aren't many people in the world for whom e.g. "doing research to prevent future AI systems from killing us all" tracks as an altruistic activity. It's interesting to look at this (somewhat old) estimate of how many EAs even exist.

Another is that many of the roles discussed here aren't research-y roles (e.g. the biosecurity projects require entrepreneurship, not research).

Another is that the type of research involved (when the roles are in fact research roles) is often difficult, messy, and unrewarding. AI alignment, for instance, is a pre-paradigmatic field. The problem statement has no formal definition. The objects of study (broadly superhuman AI systems) don't yet exist and therefore can't be experimented upon. Out of all possible research that could be done in academia, "expected tractability" is a large factor in determining what questions people try to tackle. But when you're filtering strongly for impact as EA is, you can no longer select strongly for tractability. So it's much more likely that things will be a confusing muddle that it's difficult to make clear progress on.

Comment by Eli Rose (reallyeli) on reallyeli's Shortform · 2022-04-27T19:55:11.346Z · EA · GW

What I'm talking about tends to be more of an informal thing which I'm using "EMH" as a handle for. I'm talking about a mindset where, when you think of something that could be an impactful project, your next thought is "but why hasn't EA done this already?" I think this is pretty common and it's reasonably well-adapted to the larger world, but not very well-adapted to EA.

Comment by Eli Rose (reallyeli) on reallyeli's Shortform · 2022-04-27T17:43:56.817Z · EA · GW

EMH says that we shouldn't expect great opportunities to make money to just be "lying around" ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer "why didn't anyone do this before?" (Of course, this is a simplification; EMH isn't really one coherent view.)

One might also think that there aren't great EA projects just "lying around" ready for anyone to do. This would be an "EMH for EA." But I think it's not true.

Comment by Eli Rose (reallyeli) on Consider Changing Your Forum Username to Your Real Name · 2022-04-27T02:55:46.601Z · EA · GW

I changed my display name as a result of this post, thanks!

Comment by Eli Rose (reallyeli) on reallyeli's Shortform · 2022-04-24T00:31:47.798Z · EA · GW

There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it's a poor way to model the situation that will lead you to make systematically wrong judgments). I think the main reasons for this are:

  1. EA and the availability of lots of funding for it are relatively new — there just hasn't been that much time for "market inefficiencies" to be filled.
  2. The number of people in EA who are able to get funding for, and excited to start, new projects is really small relative to the number of people doing this in the wider world.

Comment by Eli Rose (reallyeli) on reallyeli's Shortform · 2022-04-23T23:26:26.456Z · EA · GW

If you're an EA who's just about to graduate, you're very involved in the community, and most of the people you think are really cool are EAs, I think there's a decent chance you're overrating jobs at EA orgs in your job search. Per the common advice, I think most people in this position should be looking primarily at the "career capital" their first role can give them (skills, connections, resume-building, etc.) rather than the direct impact it will let them have.

At first blush it seems like this recommends you should almost never take an EA job early in your career — since jobs at EA orgs are such a small proportion of all jobs, what are the odds that such a job was optimal from a career capital perspective? I think this is wrong for a number of reasons, but it's instructive to actually run through the list. One is that a job being at an EA org is correlated with it being good in other ways — e.g. with it having smart, driven colleagues that you get on well with, or with it being in a field connected to one of the world's biggest problems. Another is that some types of career capital are best gotten at EA orgs or in doing EA projects — e.g. if you want to upskill for community-building work, there's plausibly no Google/McKinsey of community-building where you can go get useful career capital. (Though I do think some types of experience, like startup experience, are often transferable to community-building.)

I think a good orientation to have towards this is to try your hardest, when looking at jobs as a new grad, to "wipe the slate clean" of tribal-affiliation-related considerations, and (to a large extent) of impact-related considerations, and assess mostly based on career-capital considerations.

(Context: I worked at an early-stage non-EA startup for 3 years before getting my current job at Open Phil. This was an environment where I was pushed to work really hard, take on a lot of responsibility, and produce high-quality work. I think I'd be way worse at my current job [and less likely to have gotten it] without this experience. My co-workers cared about lots of instrumental stuff EA cares about, like efficiency, good management, feedback culture, etc. I liked them a lot and was really motivated. However, this doesn't happen to everyone at every startup, and I was plausibly unusually well-suited to it or unusually lucky.)

Comment by Eli Rose (reallyeli) on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T04:58:38.177Z · EA · GW

Thanks for posting this. I found a lot of it resonant — particularly the stuff about inventing reasons to discount positive feedback, and having to pile on more and more unlikely beliefs to avoid updating to "I'm good at this."

I remember, fairly recently, taking seriously some version of "I'm not actually good at this stuff, I'm just absurdly skilled at fooling others into thinking that I am." I don't know man, it seemed like a pretty good hypothesis at the time.

Comment by Eli Rose (reallyeli) on Effectiveness is a Conjunction of Multipliers · 2022-03-26T23:47:14.970Z · EA · GW

One can't stack the farmed animal welfare multiplier on top of the ones about giving malaria nets or the one about focusing on developing countries, right? E.g. can't give chickens malaria nets.

It seems like that one requires 'starting from scratch' in some sense. There might be analogies to the human case (e.g. don't focus on your pampered pets), but they still need to be argued.

So I think the final number should be lower. (It's still quite high, of course!)

Comment by Eli Rose (reallyeli) on Open Phil’s longtermist EA movement-building team is hiring · 2022-03-22T06:59:46.757Z · EA · GW

Just a reminder that the deadline for applications is this Friday, March 25th.

Comment by Eli Rose (reallyeli) on How does the simulation hypothesis deal with the 'problem of the dust'? · 2022-03-13T06:56:36.349Z · EA · GW

Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.

I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each time t, I identify each possible pairing of dust specks with a different neuron in George Soros's brain, then say "at time t+1, if a pair of dust specks is farther apart than it was at time t, the associated neuron fires; if a pair is closer together, the associated neuron does not fire."

This could conceivably fail if there aren't enough pairs of dust specks in the universe to make the numbers work out. The "pure time" mapping could never fail to work; it would work (I think) even in an empty universe containing no dust specks. So it feels less grounded, and like an extra leap.


I agree that it seems like there's something around "how complex is the mapping." I think what we care about is the complexity of the description of the mapping, though, rather than the computational complexity. I think the George Soros mapping is pretty quick to compute once defined? All the work seems hidden in the definition — how do I know which pairs of dust specks should correspond to which neurons?

Comment by Eli Rose (reallyeli) on Sex work as part of mental health and wellbeing services · 2022-03-04T10:10:59.417Z · EA · GW

I downvoted the OP because it doesn't seem to be suited to this forum. The author's experiences are interesting, but I don't think the post contains an attempt to explore the potential cause area impartially.

Comment by Eli Rose (reallyeli) on AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky · 2022-03-04T09:42:56.507Z · EA · GW

Toby Ord's definition of an existential catastrophe is "anything that destroys humanity's longterm potential." The worry is that misaligned AGI which vastly exceeds humanity's power would be basically in control of what happens with humans, just as humans are, currently, basically in control of what happens with chimpanzees. It doesn't need to kill all of us in order for this to be a very, very bad outcome.

E.g. the enslavement by the steel-loving AGI you describe sounds like an existential catastrophe, if that AGI is sufficiently superhuman. You describe a "large portion of humanity" enslaved in this scenario, implying a small portion remain free — but I don't think this would happen. Humans with meaningful freedom are a threat to the steel-lover's goals (e.g. they could build a rival AGI) so it would be instrumentally important to remove that freedom.

Comment by Eli Rose (reallyeli) on Mosul Dam Could Kill 1 Million Iraqis. · 2022-03-02T07:09:05.141Z · EA · GW

This is an interesting issue; it makes sense that ISIS would be bad at dam maintenance.

Without reading all the sources (so perhaps these are clearly answered somewhere in there), some next questions I'd be curious about:

  • Where does the "500,000 to 1.5 million" estimate of deaths come from? Is this taking the simulations from the European Commission paper and assuming that anyone affected by water levels over X meters high dies?
  • Likewise, where do the cost estimates for the solutions come from?
  • Is it right that if this happened, it would be the most deaths caused by a dam failure, ever? Wikipedia seems to suggest this is so, with the 1975 Banqiao Dam failure causing ~20k - 200k deaths.

One solution would be to spend $2 billion to finish construction of the Badush dam downstream in order to block the floodwaters. If this saved 1 million lives, it would come out at $2,000/life saved, better than AMF. Even if that's likely too optimistic, it's suggestive: more targeted marginal uses of money likely exist, even if they aren't yet known. In my opinion this is a very promising new cause area worth further investigation, and this post is an opener towards further inquiry. I encourage others to do much more detailed expected value calculations, with openminded curiosity.

I appreciate that this is just a toy estimate. But I think even at a toy level we could make the estimate more accurate by having a term for "P(dam failure within X years, absent our intervention)". The dam may not fail within a given timeframe, or it may be fixed by other actors before it fails, etc, and it doesn't seem like the case is so overwhelming that these outcomes should be ignored. E.g. if you think the dam is 50% likely to fail within 40 years, absent our intervention, then the estimate looks like $4000/life saved in expectation.
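To make the adjustment concrete, here's a minimal sketch of the toy estimate with the probability term included. All numbers are the illustrative assumptions from the discussion above (the $2 billion cost, 1 million lives, and an assumed 50% failure probability), not real estimates:

```python
# Toy cost-effectiveness adjustment for the Mosul Dam intervention.
# All inputs are illustrative assumptions, not real estimates.
cost = 2_000_000_000              # USD to finish the Badush dam
lives_saved_if_failure = 1_000_000
p_failure = 0.5                   # assumed P(dam failure within 40 years, absent intervention)

expected_lives_saved = p_failure * lives_saved_if_failure
cost_per_life = cost / expected_lives_saved
print(cost_per_life)  # 4000.0, vs. 2000.0 if failure were certain
```

The point is just that the headline $/life figure scales inversely with P(failure), so even a rough guess at that probability materially changes the comparison with AMF.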

Comment by Eli Rose (reallyeli) on Some thoughts on vegetarianism and veganism · 2022-02-15T05:14:03.104Z · EA · GW

If you think the signalling benefits from being veg*n are large, then it seems plausible to me that the signalling benefits from being a "scope-sensitive" or "evidence-sensitive" veg*n are larger, at least depending on your background culture and how high-bandwidth of a message you can send.

My family didn't ask any questions when I became vegetarian (lots of their friends are vegetarian), but the fact that I still eat oysters causes no end of questions. This leads to conversations about different types of animal sentience that feel more genuinely about our treatment of animals than the conversations I'd have had as a "normal" vegetarian.

I've had less opportunity to see the effects firsthand, but I think being averse to foods in rough proportion to their suffering per calorie, e.g. eating beef but avoiding eggs (and talking about why you do this, when asked) might have a similar result.

[This isn't to argue that the signalling benefits outweigh the direct harm.]

Comment by Eli Rose (reallyeli) on Software engineering - Career review · 2022-02-09T08:57:52.763Z · EA · GW

ensuring that these experiments are as efficient and safe as possible

Is "safe" here meant in the sense of "not accelerating risks from AI," or in the sense of "difficult to steal" (i.e. secure)?

Comment by Eli Rose (reallyeli) on Software engineering - Career review · 2022-02-09T08:52:09.716Z · EA · GW

However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We’re not sure of the more precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it’s important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we’ve spoken to expect that most non-safety projects aren’t likely to be causing harm.

I found this a bit hard to follow, especially given the focus in the previous paragraphs on safety work specifically. It reads to me like it's making the counterintuitive claim that "safety" work is actually where much of the danger lies. Is that intended?

Comment by Eli Rose (reallyeli) on What (standalone) LessWrong posts would you recommend to most EA community members? · 2022-02-09T06:35:43.675Z · EA · GW

I really like Ends Don't Justify Means (Among Humans) and think it's a bit underrated. (In that I don't hear people reference it much.)

I think I find the lesson generally useful: that in some cases it can be bad for me to "follow consequentialism" (because in some cases I'm an idiot), without consequentialism itself being bad.

Comment by Eli Rose (reallyeli) on Momentum 2022 updates (we're hiring) · 2022-01-12T08:52:07.607Z · EA · GW

Are you able to share the breakdown of where donations to recommended charities go, by recommended charity? (E.g. you mentioned that "some campaigns directed 10-35% of funds to longtermism," but it's not immediately clear to me which of the recommended charities that refers to.)

Comment by Eli Rose (reallyeli) on Pedant, a type checker for Cost Effectiveness Analysis · 2022-01-02T16:13:04.163Z · EA · GW

I've often wished for something like this when doing cost-effectiveness analyses or back-of-the-envelope calculations for grantmaking. (I have perhaps more programming background than the average grantmaker.)

Something like "Guesstimate but with typechecking" would, at first blush, seem to be the most useful version. But perhaps you shouldn't trust my feedback until I've actually used it!

Comment by Eli Rose (reallyeli) on Mortality, existential risk, and universal basic income · 2021-11-30T17:32:39.811Z · EA · GW

It sounds like you're implying that 10% of the US lives in "extreme poverty," following the 10% of the world — but this isn't the case. The article you cite gives 37 million for the number below the US poverty line, which is not the same thing. (Possibly you know this already, but I thought the sentence was a bit confusing for onlookers.)

Comment by Eli Rose (reallyeli) on How does the simulation hypothesis deal with the 'problem of the dust'? · 2021-11-21T21:19:40.118Z · EA · GW

It seems related but different. E.g. Boltzmann brains expect to die in the next second, but dust-brains do not.

Comment by Eli Rose (reallyeli) on How does the simulation hypothesis deal with the 'problem of the dust'? · 2021-11-21T21:17:39.645Z · EA · GW

Thanks, but I think this is a different topic.

Comment by Eli Rose (reallyeli) on Concerns about AMF from GiveWell reading - Part 1 · 2021-10-18T06:14:44.231Z · EA · GW

I didn't mean to imply you did, though I see how "human organizations wanting to dictate the terms on which they can be criticized" might sound that way. My sense that it's bad if posts on the Forum that are critical of AMF get met with this kind of argument doesn't hinge on whether the person making the argument is involved with AMF or not.

Comment by Eli Rose (reallyeli) on Concerns about AMF from GiveWell reading - Part 1 · 2021-10-17T21:37:15.829Z · EA · GW

I really think you ought to consider renaming this post... Probably about 1000 people will see the title. There's some chance you could convince someone to stop donating to AMF just from the title - that tends to be how brains work, even though it isn't very rational.

I think it's not a good idea to respond to criticism in this way. I imagine myself as an outsider, skeptical of some project, and having supporters of the project tell me, "It's morally wrong to say we're not doing good without following our things-to-do-before-critiquing-us checklist, because critiques of us (if improperly done) might cause us to lose support, which is tantamount to causing harm."

I think this would (and should) make skeptic-me take a dimmer view of the project in question. It's unconvincing on the object level; to the extent that I already don't think what you're doing is valuable, I shouldn't be moved by arguments about how critiquing it might destroy value. And it pattern-matches to the many other instances of human organizations wanting to dictate the terms on which they can be criticized, and leveraging the force of moral arguments to do so. Organizations that do this kind of thing are often not truth-seeking and genuinely open to criticism (even when it's done "properly" by their lights).

Comment by Eli Rose (reallyeli) on Early career EA's should consider joining fast-growing startups in emerging technologies · 2021-10-10T20:40:44.482Z · EA · GW

Re: not being selective about what startups to work at -- oh that's interesting; it makes me think it's more likely that I just got lucky (in startup selection or in some other way).

Comment by Eli Rose (reallyeli) on Early career EA's should consider joining fast-growing startups in emerging technologies · 2021-10-10T18:11:11.970Z · EA · GW

By fast-growing startup, I mean a company that seems decently likely to be one of the top ~20 highest valued startups founded in a given 5 year period.

This sounds more like "top startup" than "fast-growing"? Not trying to nitpick, the terms just seem pretty different to me.

I think the bar need not be that high for some of the benefits you mention. I had an experience that jibes with this:

About 6 months after joining, I started leading a team of ~5 engineers on a high priority engineering project. That was mostly due to the company needing leaders to keep up with our growth, and my hustle and generalist skills making me well-suited for the role. That experience taught me a lot about leadership, management, and long-term engineering projects, and it seems like this type of experience is much more common in fast-growing startups.

from joining a startup that was certainly not one of the top ~20 in a five-year period -- it was "just a TechStars company." I found this really valuable. Probably I got fewer of the other benefits you mention around working with the top people in a given industry (this was a "random webapp" startup, not an ML startup, so that didn't really apply.)

Comment by Eli Rose (reallyeli) on Open Phil EA/LT Survey 2020: Introduction & Summary of Takeaways · 2021-08-20T03:41:09.462Z · EA · GW

Ah, glad this seems valuable! : )

Comment by Eli Rose (reallyeli) on EA Survey 2020: How People Get Involved in EA · 2021-07-16T17:36:20.127Z · EA · GW

Sorry, I neglected to say thank you for this previously!

Comment by Eli Rose (reallyeli) on Linch's Shortform · 2021-07-13T20:50:41.545Z · EA · GW

This idea sounds really cool. Brainstorming: a variant could be several people red teaming the same paper and not conferring until the end.

Comment by Eli Rose (reallyeli) on The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) · 2021-07-06T00:51:41.609Z · EA · GW

Viewership as in YouTube viewers? Where are you getting that stat from?

Comment by Eli Rose (reallyeli) on The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) · 2021-07-05T07:37:46.082Z · EA · GW

Looks like this already happened, in March 2020:

Comment by Eli Rose (reallyeli) on EA Survey 2020: How People Get Involved in EA · 2021-06-08T02:57:21.701Z · EA · GW

It looks like Sam Harris interviewed Will MacAskill this year. He also interviewed Will in 2016. How might we tell if the previous interview created a similar number of new EA-survey-takers, or if this year's was particularly successful? The data from that year doesn't seem to include a "podcast" option.