Comment by raemon on Salary Negotiation for Earning to Give · 2019-04-13T20:12:20.799Z · score: 7 (3 votes) · EA · GW

BTW, if you're a tech worker and you feel a vague obligation to learn how to negotiate but it's kinda aversive and/or you're not sure how to go about it...

...even just bothering to do it at all can net you $5k - $10k a year. Like, just saying "hey, that seems a bit low, can you go higher?"

There are various more complicated or effortful things you can do, but "negotiate at all even slightly" is surprisingly effective.

Comment by raemon on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T20:23:05.852Z · score: 4 (2 votes) · EA · GW

I think that makes sense, but in practice it's something that makes more sense to handle through their day jobs. (If they went the route of hiring someone for whom managing the fund was their actual day job, I'd agree that generally higher salaries would be good, for mostly the same reason they'd be good across the board in EA.)

Comment by raemon on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T03:19:37.166Z · score: 10 (3 votes) · EA · GW

Part of my thinking here is that this would be a mistake: focus and attention are some of the most valuable things, and splitting your focus is generally not good.

Comment by raemon on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T02:35:22.401Z · score: 8 (2 votes) · EA · GW

I'm familiar with good things coming out of those places, but not sure why they're the appropriate lens in this case.

Popping back to this:

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

This makes more sense to me when you actually have a company large enough to theoretically have multiple arms. AFAICT there are no arms here; there are just 1-3 people working on a thing. And I'd expect getting to the point where you could have arms requires at least 5-10 years of work.

What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?

Comment by raemon on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T22:58:01.056Z · score: 6 (3 votes) · EA · GW
What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

Is there a particular reason to assume that'd be a good idea?

Comment by raemon on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T19:33:12.685Z · score: 28 (16 votes) · EA · GW

I have a weird mix of feelings and guesses here.

I think it's good on the margin for people to be able to express opinions without needing to formalize them into recommendations for the reason stated here. I think the overall conversation happening here is very important.

I do still feel pretty sad looking at the comments here — some of the commenters seem to not have a model of what they're incentivizing.

They remind me of the stereotype of a parent whose kid has grown up and moved away, and doesn't call very often. Periodically the kid does call, but the first thing they hear is the parent complaining "why don't you ever call me?", which makes the kid less likely to call home.

EA is vetting constrained.

EA is network constrained.

These are actual hard problems that we're slowly addressing by building network infrastructure. The current system is not optimal or fair, but progress won't go faster by complaining about it.

It can potentially go faster via improvements in strategy and re-allocating resources. But each of those improvements comes with a tradeoff. You could hire more grantmakers full-time, but those grantmakers are generally already working full-time on something else comparably important.

This writeup is unusually thorough, and Habryka has been unusually willing to engage with comments and complaints. But that willingness is higher than average, and we shouldn't count on future grantmakers sharing it.

When I imagine future people considering

a) whether to be a grantmaker,

b) whether to write up their reasons publicly

c) whether to engage with comments on those reasons

I predict that some of the comments on this thread will make all of those less likely (in escalating order). They also potentially make grantees less likely to consent to public discussion of their evaluation, since that evaluation might get ridiculed in the comments.

Because EA is vetting constrained, I think public discussion of grant-reasoning is particularly important. It's one of the mechanisms that'll give people a sense of what projects will get funded and what goes into a grantmaking process, and it makes a lot of what's currently 'insider knowledge' more publicly accessible.

Comment by raemon on How x-risk projects are different from startups · 2019-04-08T00:39:04.611Z · score: 13 (5 votes) · EA · GW

Just wanted to say I appreciate the nuance you're aiming at here. (Getting that nuance right is real hard)

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-30T17:01:02.895Z · score: 3 (2 votes) · EA · GW

Reasonably. That does sound like it’s at a comparable scale.

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-30T07:29:56.703Z · score: 5 (3 votes) · EA · GW

(I ask because there's a big difference between a community of 10-50 people and 200-300 people. I think at the latter scale, you actually need more infrastructure)

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-30T06:20:40.473Z · score: 2 (1 votes) · EA · GW

How big is the London community?

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-29T23:56:26.308Z · score: 11 (5 votes) · EA · GW

True, but from what I recall that was largely for reasons that I expect not to apply to EA Hotel.

Comment by raemon on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-03-28T20:17:05.851Z · score: 5 (3 votes) · EA · GW

Thanks. That was indeed much easier.

Comment by raemon on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-03-27T22:17:17.940Z · score: 3 (2 votes) · EA · GW

BTW, I just tried to donate $100 (not much, but about what I feel comfortable impulse-donating), and the trivial inconvenience of finding and typing in a credit card threw me off. A PayPal moneypool link would probably have been lower friction for me (not arguing it's lower friction overall, just that having a variety of easy payment types is probably useful for getting marginal donors).

Comment by raemon on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-27T22:13:50.910Z · score: 14 (8 votes) · EA · GW

Relatedly: there is already reasonable infrastructure (with more being built) oriented towards getting EAs to live in a few hub cities.

This is good, but it leaves open an alternate path (living in a cheap place, not optimized for being near Silicon Valley money or Oxford respectability) that is currently very underexplored.

Comment by raemon on Request for comments: EA Projects evaluation platform · 2019-03-23T02:11:34.740Z · score: 9 (3 votes) · EA · GW

Mild formatting note: I found the introduction a bit long, and it mostly contained information I already knew.

I'm not sure how to navigate "accessible" vs "avoid wasting people's time". But I think you could have replaced the introduction with a couple bullet links, like:

....

Building off of:

 What to Do With People?

 Can EA copy Teach For America?

 EA is Vetting Constrained

The EA community has plenty of money and people, but is bottlenecked on a way to scalably evaluate new projects. I'd like to start a platform that:

  • provides feedback on early stage projects
  • estimates what resources would be necessary to start a given project
  • further along in the project's life-cycle, evaluates the team and idea fit

...

Or something like that (I happen to like bullet points, but in any case it seems like it should be possible to cut the opening few paragraphs down to a few lines).

Comment by raemon on How can prediction markets become more trendy, legal, and accessible? · 2019-03-12T20:33:35.563Z · score: 7 (7 votes) · EA · GW

Relevant posts by Zvi Mowshowitz.

Prediction Markets: When Do They Work?

Subsidizing Prediction Markets

Comment by raemon on EA is vetting-constrained · 2019-03-09T21:10:18.689Z · score: 1 (1 votes) · EA · GW
Very rough reply ... the bottleneck is a combination of both of the factors you mention, but the most constrained part of the system is actually something like the time of senior people with domain expertise and good judgement

This makes sense and leads me to somewhat downgrade my enthusiasm for my "Earn to Learn to Vet" comment (although I suspect it's still good on the margin).

Comment by raemon on EA is vetting-constrained · 2019-03-09T02:43:52.393Z · score: 16 (11 votes) · EA · GW

I think this is basically accurate. As I mentioned in another thread, the issue is that the scaling-up-of-vetting is still generally network constrained.

But, this framing (I like this framing) suggests to me that the thing to do is a somewhat different take on Earning to Give.

I had previously believed that Earning-to-Give people should focus on networking their way into hubs where they can detect early stage organizations, vet them, and fund them. And that this was the main mechanism by which their marginal dollars could be complementary to larger funders.

But, the Vetting-Constrained lens suggests that Earners-to-Give should be doing that even harder, not because of the marginal value of their dollars, but because this could allow them to self-fund their own career capital as a future potential grantmaker.

And moreover, this means that whereas before I'd have said it's only especially worth it to Earn-to-Give if you make a lot of money, now I'd push harder for marginal EAs to join donor lotteries. If a hundred people each put $10k into one of 10 donor lotteries, you now have 10 people with $100k each, enough to seed-fund an org for a year. And this is valuable because it gives them experience thinking about whether organizations are good.

There could be some systemization to this, to optimize how much experience a person gets and to track how the org(s) they funded turned out to fare. (Maybe with some prediction markets thrown in.)
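
To make the arithmetic and mechanics concrete, here's a minimal sketch of a donor lottery, assuming the standard design where one participant is drawn with probability proportional to their contribution and then directs the entire pot. This is my own illustration; the function and donor names are hypothetical, and real lotteries add details (escrow, a guarantor) that are omitted here.

```python
import random

def run_donor_lottery(contributions):
    """Draw one donor to allocate the whole pot, with probability
    proportional to each donor's contribution."""
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    winner = random.choices(donors, weights=weights, k=1)[0]
    return winner, sum(weights)

# 100 people each put $10k into one of 10 lotteries: every lottery has
# 10 donors and a $100k pot, so each donor gets a 10% chance of being
# the one who researches and allocates $100k worth of grants.
lottery = {f"donor_{i}": 10_000 for i in range(10)}
winner, pot = run_donor_lottery(lottery)
print(f"{winner} allocates ${pot:,}")
```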

Comment by raemon on You Have Four Words · 2019-03-08T21:40:05.531Z · score: 1 (1 votes) · EA · GW

(Btw, alternate titles for this post were "you have about 5 words", "you only have 5 words", and "you have less than seven words.") :P

Comment by raemon on You Have Four Words · 2019-03-08T21:17:49.087Z · score: 1 (1 votes) · EA · GW

Nod. My motivation to write the post came in a brief spurt and I wanted to just get it out without subjecting it to a review process, so I erred on the side of wording it the way I endorsed and letting you take credit if you wanted.

Comment by raemon on You Have Four Words · 2019-03-07T21:09:42.741Z · score: 3 (2 votes) · EA · GW

This post was inspired by a conversation (my conversation partner can reveal themselves if they so choose) in which they originally claimed (I think off-the-cuff) that you had four words if you needed to coordinate 100,000 people (resulting in highly simplified strategies).

I updated my own estimate (of the number of people you need to be coordinating to face the four-word limit) downwards, after observing that EA only has somewhere on the order of a thousand people involved and important concepts often lose their nuance. (Although, to be fair, this is at least in part because there are multiple nuanced concepts that all need to be kept track of, each of which needs to get boiled down to a simple jargon term.)

Comment by raemon on You Have Four Words · 2019-03-07T00:59:10.688Z · score: 14 (8 votes) · EA · GW

"Donate to Effective Charities."

"AI will kill us."

"Consider Earning to Give."

"EA is Talent Constrained."

You Have Four Words

2019-03-07T00:57:29.273Z · score: 36 (19 votes)
Comment by raemon on What to do with people? · 2019-03-06T22:28:47.689Z · score: 3 (2 votes) · EA · GW

I was very interested in the "city is not a tree" post, but found it juuust confusing/dense enough to bounce off of it. I'd be interested in a link-post or comment that summarizes the key insights there in layman's terms.

Comment by raemon on What to do with people? · 2019-03-06T19:57:00.979Z · score: 7 (4 votes) · EA · GW

Nod. My comment wasn't intended to be an argument against, so much as "make sure you understand that this is the world you're building" (and that, accordingly, you make sure your arguments and language don't depend on the old world)

The traditional EA mindset is something like "find the charities with the heavy tails on the power law distribution."

The Agora mindset (Agora was an org I worked at for a bit, that evolved sort of in parallel to EA) was instead "find a way to cut out the bottom 50% of charities and focus on the top 50%", which I chafed at at the time but appreciate better now as the sort of thing you automatically deal with when you're trying to build something that scales.

I do think we're *already quite close* to the point where that phase transition needs to happen. (I think people who are very thoughtful about their donations can still do much better than "top 50%", but "be very thoughtful" isn't a part of the thing that scales easily)

Comment by raemon on What to do with people? · 2019-03-06T19:51:45.326Z · score: 4 (4 votes) · EA · GW

A particular risk here, is that coordination is one of the most costly things to fail at.

I'm happy to encourage new EAs to tackle a random research project, or to attempt the sort of charity entrepreneurship that, well, Charity Entrepreneurship seems to encourage.

I'm much more cautious about encouraging people to try to build infrastructure for the EA community if it only works when it's both high quality and everyone gets on board with it at the same time. In particular, it seems like people are too prone to focus on the second part.

Every time you try to coordinate on a piece of changing infrastructure and the project flops, it makes people less enthusiastic about trying the next piece of coordination infrastructure. (And I think there's a variation on this for hierarchical leadership.)

But I'm fairly excited about things like AI Safety camp, i.e. building new hubs of infrastructure that other existing infrastructure doesn't rely on until it's been vetted.

(It's still important to make sure something like AI Safety camp is done well, because if it's done poorly at scale it can result in a confusing morass of training tools of questionable quality. This is not a warning not to try it, just to be careful when you do)

Comment by raemon on What to do with people? · 2019-03-06T19:45:31.322Z · score: 15 (6 votes) · EA · GW

FYI, the LessWrong team's take on this underlying problem is "find ways to make intellectual progress in a decentralized fashion, even if it's less efficient than it'd be in a tight knit organization."

The new Questions feature, and the upcoming improvements to it, are meant to provide a way for the community to keep track of its collective research agenda, and to allow people to identify important unsolved problems and solve them.

Comment by raemon on What to do with people? · 2019-03-06T19:43:19.069Z · score: 12 (6 votes) · EA · GW

I'm generally sold on the "you need more hierarchical networks" to get real things done (and even more on the more general claim that you need to expand the network in some way, hierarchical or not).

But, interestingly, the bottleneck on fixing the lack of scalable hierarchical network structures is... still the lack of hierarchical network structure. Identifying the problem doesn't make it go away.

I think most orgs seem to be doing at least a reasonable job of focusing on building out their infrastructure, it's just that they're at the early stages of doing so and it's a necessarily slow process. Scaling too quickly kills organizations. Hierarchy works best when you know exactly what to do, and runs the risk of being too inflexible.

(If you run an org, and aren't already thinking about how to build better infrastructure that expands the surface area of the network, I do think you should spend a fair bit of time thinking about that)

Comment by raemon on What to do with people? · 2019-03-06T19:34:01.891Z · score: 11 (4 votes) · EA · GW

The main thing with scaling Earning to Give is that eventually you have to give up on any clear definition of "effective." Part of the appeal of early-days Earn to Give was that it was so simple. Make money. Give 10%. Choose from one of this relatively short list of charities.

My sense is that the "well vetted" charities can only handle a few hundred million a year, and the "weird, plausibly good, unvetted charities that easily fit into any EA framework" can also only handle a few hundred million a year, and then after that... I dunno, you're back to basically just donating anywhere that seems remotely plausible.

Which... maybe is actually the correct place for EA to go. But it's important to note that it might go in that direction.

(Relatedly, I used to have some implicit belief that EA was better than the Gates Foundation, but nowadays, apart from EA taking X-risk and a few other weird beliefs seriously, EA seems to do basically the same things the Gates Foundation does, and the Gates Foundation is just what it looks like when you scale up by a factor of 10)

Comment by raemon on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T01:44:47.901Z · score: 5 (10 votes) · EA · GW

It sounds like this issue is at least fairly straightforward to address: in subsequent rounds OpenPhil could just include a blurb that more explicitly clarifies how many people they’re sending emails to, or something similar.

(I'll note that this is a bit above/beyond what I think they're obligated to do. I received an email from Facebook once suggesting I apply to their lengthy application process, and I'm not under any illusions this gave me more than a 5-10% chance of getting the job. But the EA world sort of feels like it's supposed to be more personal, and I think it'd make for better overall information-and-resource-flow to include that sort of metadata.)

Comment by raemon on Dealing with Network Constraints (My Model of EA Careers) · 2019-03-02T01:05:49.899Z · score: 5 (4 votes) · EA · GW

Nod. BTW, the next post in this pseudo-sequence is going to be called "The Mysterious Old Wizard Bottleneck."

Comment by raemon on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T21:23:17.795Z · score: 3 (3 votes) · EA · GW

Just wanted to flag – I've been surprised and sad about how frequently people delete accounts on the EA forum. This is a totally reasonable comment and I'm confused about why the author would have deleted their account within 40 minutes of posting it (as seems to be the case as-of-the-time I write this)

Comment by raemon on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T21:05:57.740Z · score: 17 (8 votes) · EA · GW

I think it's fine to be a "norm, if you can afford it."

Dealing with Network Constraints (My Model of EA Careers)

2019-02-28T01:34:03.571Z · score: 38 (20 votes)
Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T22:09:27.573Z · score: 17 (4 votes) · EA · GW
It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

On a different note though:

I actually agree with this claim, but it's a weirder claim.

People used to have real communities. And engaging with them was actually a part of being emotionally healthy.

Now, we live in an atomized society where community mostly doesn't exist, or is a pale shadow of its former self. So there exist a lot of people who donate to the local arts club or whatever out of a vague sense of obligation rather than because it's actually helping them be healthy.

And yes, that should be challenged. But not because those people should instead be donating to the global good (although maybe they should consider that). Rather, those people should figure out how to actually be healthy, actually have a community, and make sure to support those things so they can continue to exist.

Sometimes this does mean a local arts program, or dance community, or whatever. If that's something you're actually getting value from.

The rationalist community (and to a lesser extent the EA community) has succeeded in being, well, more of a "real community" than most things manage to be. So there are times when I want to support projects within them, not from the greater-good standpoint, but from the "I want to live in a world with nice things, this is a nice thing" standpoint. (More thoughts in my Thoughts on the REACH Patreon article.)

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T22:03:32.689Z · score: 7 (2 votes) · EA · GW

The tldr I guess is:

Maybe it's the case that being emotionally healthy is only valuable insofar as it translates into the global good (if you assume moral realism, which I don't).

But even in that case, it often seems that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T21:29:04.438Z · score: 1 (1 votes) · EA · GW

Hmm, I think they need about the same amount of hype. I do think Impact Prizes aren't any harder to scale – Certificates of Impact already depend on something like Impact Prizes eventually existing.

Actually, I think of Impact Prizes as "a precise formulation of how one might scale the hype and money necessary for Certificates to work."

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T21:05:31.157Z · score: 8 (3 votes) · EA · GW

Meanwhile, my previously written thoughts on this topic, not quite addressing your claims but covering a lot of related issues, are here. Crossposting for ease of reference; warning that it includes some weird references that may not be relevant.

Context: Responding to Zvi Mowshowitz who is arguing to be wary of organizations/movements/philosophies that encourage you to give them all your resources (even your favorite political cause, yes, yours, yes, even effective altruism)

Point A: The Sane Response to The World Being On Fire (While Human)
Myself and most EA folk I talk to extensively (including all the leaders I know of) seem to share the following mindset:
The set of ideas in EA (whether focused on poverty, X-Risk, or whatever), do naturally lead one down a path of "sacrifice everything because do you really need that $4 Mocha when people are dying the future is burning everything is screwed but maybe you can help?"
But, as soon as you've thought about this for any length of time, clearly, stressing yourself out about that all the time is bad. It is basically not possible to hold all the relevant ideas and values in your head at once without going crazy or otherwise getting twisted/consumed-in-a-bad-way.
There are a few people who are able to hold all of this in their head and have a principled approach to resolving everything in a healthy way. (Nate Soares is the only one who comes to mind; see his "Replacing Guilt" series.) But for most people, there doesn't seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
You can resolve this by saying "well then, the obvious-implications-of-EA-thinking must be wrong", or "I guess maybe I don't need to live healthily".
But, like, the world is on fire and you can do something about it and you do obviously need to be healthy. And part of being healthy is not just saying things like "okay, I guess I can indulge things like not spending 100% of my resources on saving the world in order to remain healthy but it's a necessary evil that I feel guilty about."
AFAICT, the only viable, sane approach is to acknowledge all the truths at once, and then apply a crude patch that says "I'm just going to not think about this too hard, try generally to be healthy, and put whatever bit of resources towards having the world not-be-on-fire that I safely can."
Then, maybe check out Nate Soares's writing and see if you're able to integrate it in a more sane way, if you are the sort of person who is interested in doing that, and if so, carefully go from there.
Point B: What Should A Movement Trying To Have the World Not Be On Fire Do?
The approach in Point A seems sane and fine to me. I think it is in fact good to try to help the world not be on fire, and that the correct sane response is to proactively look for ways to do so that are sustainable and do not harm yourself.
I think this is generally the mindset held by EA leadership.
It is not out-of-the-question that EA leadership in fact really wants everyone to Give Their All and that it's better to err on the side of pushing harder for that even if that means some people end up doing unhealthy things. And the only reason they say things like Point A is as a ploy to get people to give their all.
But, since I believe Point A is quite sane, and most of the leadership I see is basically saying Point A, and I'm in a community that prioritizes saying true things even if they're inconvenient, I'm willing to assume the leadership is saying Point A because it is true, as opposed to for Secret Manipulative Reasons.
This still leaves us with some issues:
1) Getting to the point where you're on board with Point-A-the-way-I-meant-Point-A-to-be-interpreted requires going through some awkward and maybe unhealthy stages where you haven't fully integrated everything, which means you are believing some false things and perhaps doing harm to yourself.
Even if you read a series of lengthy posts before taking any actions, even if the Giving What We Can Pledge began with "we really think you should read some detailed blogposts about the psychology of this before you commit" (this may be a good idea), reading the blogposts wouldn't actually be enough to really understand everything.
So, people who are still in the process of grappling with everything end up on EA forum and EA Facebook and EA Tumblr saying things like "if you live off more than $20k a year that's basically murder". (And also, you have people on Dank EA Memes saying all of this ironically except maybe not except maybe it's fine who knows?)
And stopping all this from happening would be pretty time consuming.
2) The world is in fact on fire, and people disagree on what the priorities should be on what are acceptable things to do in order for that to be less the case. And while the Official Party Line is something like Point A, there's still a fair number of prominent people hanging around who do earnestly lean towards "it's okay to make costs hidden, it's okay to not be as dedicated to truth as Zvi or Ben Hoffman or Sarah Constantin would like, because it is Worth It."
And present_day_Raemon thinks those people are wrong, but not obviously so wrong that it's not worth talking about and taking seriously as a consideration.

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T21:03:29.706Z · score: 1 (1 votes) · EA · GW
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic.
Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results.
Assuming anti-realism, I don't have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude.
I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that's not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don't think in literal cost-effectiveness terms, but global benefits are still my general goal. It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

(I broke the quoted text into more paragraphs so that I could parse it more easily. I'm thinking about a reply – the questions you're posing here do definitely deserve a serious response. I have some sense that people have already written the response somewhere – Minding Our Way by Nate Soares comes close, although I don't think he addresses the "what if there actually exist moral obligations?" question, instead assuming mostly non-moral-realism)

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T20:56:49.673Z · score: 6 (8 votes) · EA · GW

Thanks for writing this.

I feel an ongoing sense of frustration that even though this has seemed like the common wisdom of most "longterm EA folk" for several years... new people arriving in the community often have to go through a learning process before they can really accept this.

This means that in any given EA space, where most people are new, there will be a substantial fraction of people who haven't internalized this, are still stressing themselves out about it, and are in turn stressing out even newer people, who are exposed more often to the "see everything through the utilitarian lens" mindset than to posts like this.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T20:49:34.370Z · score: 9 (4 votes) · EA · GW

My impression is that nobody has made it their job (and spent at least a month, and preferably a year or two, on it) to make Certificates of Impact work. I.e., money is real because humans have agreed to believe it's real, and because there's a lot of good infrastructure that helps it work. If Certificates of Impact (or Prizes) are to be real, someone needs to actually build a thing and hype it continuously. So far it doesn't feel like that's been tried.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T06:27:18.400Z · score: 1 (1 votes) · EA · GW

Part of the point is that, although the prize isn't awarded until 2022, you can still sell your rights to the prize in 2019, to someone who predicts that you will win the prize in 2022.
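
To illustrate the pricing logic with made-up numbers (my own sketch, not something from the post): the 2019 value of the rights is roughly the buyer's credence that the project wins, times the prize, discounted back to the present. The function name and all the figures below are hypothetical.

```python
def claim_value(prize: float, p_win: float, discount_rate: float, years: float) -> float:
    """Expected present value of the rights to a possible future prize."""
    return p_win * prize / (1 + discount_rate) ** years

# A buyer with 30% credence in a $100k prize awarded 3 years out, using a
# 10% annual discount rate, should pay at most about $22.5k today.
print(f"${claim_value(100_000, 0.30, 0.10, 3):,.0f}")
```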

Comment by raemon on How GiveWell's Research is Evolving · 2019-02-11T00:48:27.840Z · score: 17 (7 votes) · EA · GW

I'm curious how this relates to OpenPhil. (I'd been bucketing OpenPhil as "the research team that does harder-to-quantify/justify stuff", and GiveWell as "the team that does... not that".)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-23T20:28:28.751Z · score: 7 (4 votes) · EA · GW

(Updated the title of the post, after realizing that people who thought they agreed with me only read the headline and missed the very first point that it's still valuable to donate at least a token amount)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-07T04:45:52.116Z · score: 13 (5 votes) · EA · GW

*nods*

I think the way I'd phrase advice to someone who's already excited to get started donating, is some combination of:

a) try to save at least as much as you donate. (As deluks mentioned elsethread, it is totally possible to both donate and save significantly, so someone who's already chomping at the bit to donate significantly can probably find the budget for both 10% donations and savings.)

b) re: total runway time, I think a reasonable plan of action is "get at least 6 months of [comfortable] runway, and meanwhile be thinking about your potential longterm plans. A lot of people start out focused on donating but eventually find themselves wishing they had the freedom to start a project or join a lower-paying job, so at least consider preparing for that sort of possibility."

Comment by raemon on Should there be an EA crowdfunding platform? · 2018-12-06T23:19:04.154Z · score: 7 (4 votes) · EA · GW

I like a lot of the directions here. My main concern is that the implementation described here seems like a lot of work, when grant evaluation already seems fairly bandwidth constrained.

Here's an alternative that I think might make for a middle ground between "everyone pitches ideas randomly to the EA forum / Kickstarter / etc" and the "highly structured vetting process" described here:

  • Right now, there are several EA grantmaking bodies (CEA, BERI, OpenPhil, EA Funds, etc). My impression is there is some duplication of labor in setting up each grant funnel, and duplication of effort when a given project submits multiple applications.
  • Some of those orgs actually have different requirements for who they donate to, so it makes sense for them to have different processes
  • But, I'd expect most of the core process to be pretty similar.

So, proposal: create a common application process which includes whatever submission criteria are shared between grantmakers, with whatever additional details are required for specific orgs. This doesn't create any additional obligations on people's time, just streamlines the work that's already being done.

You could potentially also share the application publicly.

There might be additional details to work out to prevent information cascades, and to optimize the epistemics of the system.

Comment by raemon on Should there be an EA crowdfunding platform? · 2018-12-06T23:00:07.091Z · score: 1 (1 votes) · EA · GW

I do think this is a promising idea, but coordination-technology is actually an area where I think it's pretty important to get a bunch of nuances right, and where just building a thing is a) unlikely to work, and b) likely to harm future attempts to build the thing.

You don't just need to build tech, you need to get lots of people on board with it at once. And every instance of getting everyone on board with a thing has a large cost, and every failed instance of that makes people less willing to try out the next thing.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-05T05:19:20.978Z · score: 1 (1 votes) · EA · GW

(initial version of the above comment wasn't quite replying to what deluks was saying – I accidentally started writing and then got tunnel vision and forgot the points about agentiness. Reworded a bit to address that)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-04T23:02:30.868Z · score: 9 (5 votes) · EA · GW

For completeness sake, responding more in depth to your 80k comment. (It's plausible this should go in the other 80k post-thread but it seemed just as much part of this conversation. shrug)

Disclaimer Re: 80k

I haven't read 80k very thoroughly and am not sure whether I endorse their advice, or whether my picture of their overall advice is accurate. But what advice I've seen does seem to be aiming to fill a fairly narrow set of top vacancies. And it does seem pretty alienating if you're not part of their target demographic.

This doesn't necessarily mean 80k should change focus – the top career paths are still highly important to fill, and they have limited time. But I do think it probably means 80k-style advice shouldn't be the only/primary place we direct newcomers' attention.

My own take on what kind of direct work is advisable is still probably a bit depressing – I don't think there are easy answers on how to help, and it'd be hard to scale across 10,000s of people.

[It's possible 80k actually shares these views, or even that they're listed on the website, I haven't checked]

My take:

[edit: updated because I didn't quite address deluks917's points as worded]

I think the issues with getting into EA Direct Work have less to do with how skilled you need to be, and more to do with limitations in network bandwidth.

There is some agentiness needed to get involved, but a) I think agency is a learnable skill, b) the amount required is less than you might think.

If you can successfully get yourself into the EA network, then you can be aware of early stage projects forming. Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy.

I think this is quite achievable for the median EA.

Early stage orgs often have neither money nor time for an extensive hiring process – people just start working together with people they know. The bottleneck is more on people knowing each other than on particular skills.

But, new projects and orgs also increase the surface area of EA, adding more places for newcomers to plug into. So if you can help a budding project grow into an institution, you're not just doing direct work, you're helping the overall community scale.

These jobs are lower pay, sure. But that's precisely why I think Earn-to-Save is important.

This is still a bit rate limited, and couldn't handle an influx of tens of thousands. But I think it can handle more than it currently does. And it's definitely not because people aren't top-half-of-Oxford talented.

Meanwhile, although "being agenty enough to found a project yourself" is fairly hard, it's learnable. The path to learning it is a bit circuitous and doesn't necessarily fit directly into EA. But I think most EAs would benefit from taking on a complex project that forces them to grow, learning "hustle" and "networking", etc. This works best when it's a project you already are excited about (doesn't matter much if it's EA related), so it doesn't feel like you're making a sacrifice so much as just exploring something new and cool.

I don't think people know if they can be agenty until they try, and I currently think it's a better default-path for aspiring EAs to go something like:

  • Start donating a bit as a credible signal
  • Build up runway
  • Do some projects in your spare time, practice thinking seriously about EA, and try a few things to see if some of the direct work stuff is a good fit for you.
  • Depending on how the previous bit goes, do one of:
    • try a low-medium risk plan that could move you into a higher impact path, but fails gracefully (i.e. move to an EA hub for a regular job you'll enjoy, but then explore the network there and see if you can transition)
    • try a high risk plan if you're feeling ambitious
    • or, just try to move into the most lucrative version of whatever your default career was going to be anyway, if the above 2 options don't make sense for you.

All three of which benefit from having enough runway to quit your current job.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-30T01:16:21.718Z · score: 5 (4 votes) · EA · GW

So I have a mixture of agreements and disagreements with your quoted comment (minor meta point: I recommend formatting it such that it's a blockquote to make it easier to see which section is which)

I'll summarize my own version of that comment in a bit (the tldr of which is "it's not as bad as you describe it, but yeah, it's still pretty bad").

But I don't think the applicability hinges on the specifics of your comment. Instead, I'd argue:

Earn-to-save is relevant to a much broader swath of people. Even if you're ultimately just trying to Earn-to-Give, it's still much more important to seek out higher-paying jobs than to donate while you're at a low-to-mid-paying job. This is relevant even if you're "just" moving from $50k to $80k.

My biggest crux here is that having 2 years of runway is important even for switching jobs at that level, and I think this should dominate even within your framework (at least by my understanding of your position).

Meanwhile, I'd make a more speculative claim: while yes, most people probably won't end up getting a Direct Impact career, the people that do still have enough expected value that early EAs should at least be seriously considering that possibility. (I very much don't think you need to be top-half-of-Oxford for direct work to be better than earning to give.)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-29T21:52:06.611Z · score: 1 (1 votes) · EA · GW

One last bit, that I realized I didn't emphasize very hard in the OP: I'm also imagining this being pushed harder than Earning to Give is currently pushed.

The status quo is that if you ask "what do you need to do to be an Effective Altruist?" you get a murky answer, where "donate 10%" is one thing a bunch of people agree qualifies, but so does working at an EA org, and maybe taking time off to learn does, and if you're a student or poor it's a bit ambiguous.

I would definitely oppose putting uniform pressure on everyone at an EA meetup group to donate 10% – there's too many situations where the blunt instrument of social pressure would do the wrong thing. But I would be pretty comfortable putting uniform pressure on any EA with income to Earn to Save.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-29T21:36:21.956Z · score: 1 (1 votes) · EA · GW

That all said, to be clear, I do also find the survey data you linked in the other comment pretty disappointing. I do think it's often quite possible to be donating 10% and saving 10% (or more). I think this should be encouraged for people who have gotten financially situated and have a rough idea of their longterm plans.

Earning to Save (Give 1%, Save 10%)

2018-11-26T23:47:58.384Z · score: 66 (40 votes)

"Taking AI Risk Seriously" – Thoughts by Andrew Critch

2018-11-19T02:21:00.568Z · score: 26 (12 votes)

Earning to Give as Costly Signalling

2017-06-24T16:43:25.995Z · score: 11 (11 votes)

What Should the Average EA Do About AI Alignment?

2017-02-25T20:07:10.956Z · score: 29 (26 votes)

Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

2017-01-11T17:45:48.394Z · score: 18 (20 votes)

Meetup : Brooklyn EA Gathering

2015-04-13T00:07:47.159Z · score: 0 (0 votes)