Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T22:09:27.573Z · score: 12 (4 votes) · EA · GW
It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

On a different note though:

I actually agree with this claim, but it's a weirder claim.

People used to have real communities. And engaging with them was actually a part of being emotionally healthy.

Now, we live in an atomized society where community mostly doesn't exist, or is a pale shadow of its former self. So there exist a lot of people who donate to the local arts club or whatever out of a vague sense of obligation rather than because it's actually helping them be healthy.

And yes, that should be challenged. But not because those people should instead be donating to the global good (although maybe they should consider that). Rather, those people should figure out how to actually be healthy, actually have a community, and make sure to support those things so they can continue to exist.

Sometimes this does mean a local arts program, or dance community, or whatever – if that's something you're actually getting value from.

The rationalist community (and to a lesser extent the EA community) has succeeded in being, well, more of a "real community" than most things manage to be. So there are times when I want to support projects within them, not from the greater-good standpoint, but from the "I want to live in a world with nice things, and this is a nice thing" standpoint. (More thoughts here in my Thoughts on the REACH Patreon article)

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T22:03:32.689Z · score: 7 (2 votes) · EA · GW

The tldr I guess is:

Maybe it's the case that being emotionally healthy is only valuable insofar as it translates into the global good (if you assume moral realism, which I don't).

But, even in that case, it often seems that being emotionally healthy requires, among other things, not treating your emotional health as a necessary evil that you indulge.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T21:29:04.438Z · score: 1 (1 votes) · EA · GW

Hmm, I think they need about the same amount of hype. I do think Impact Prizes aren't any harder to scale – Certificates of Impact already depend on something like Impact Prizes eventually existing.

Actually, I think of Impact Prizes as "a precise formulation of how one might scale the hype and money necessary for Certificates to work."

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T21:05:31.157Z · score: 8 (3 votes) · EA · GW

Meanwhile, my previously written thoughts on this topic, not quite addressing your claims but covering a lot of related issues, are here. Crossposting for ease of reference; warning that it includes some weird references that may not be relevant.

Context: Responding to Zvi Mowshowitz, who argues that you should be wary of organizations/movements/philosophies that encourage you to give them all your resources (even your favorite political cause – yes, yours – yes, even effective altruism)

Point A: The Sane Response to The World Being On Fire (While Human)
Myself, and most EA folk I talk to extensively (including all the leaders I know of) seem to share the following mindset:
The set of ideas in EA (whether focused on poverty, X-Risk, or whatever), do naturally lead one down a path of "sacrifice everything because do you really need that $4 Mocha when people are dying the future is burning everything is screwed but maybe you can help?"
But, as soon as you've thought about this for any length of time, clearly, stressing yourself out about that all the time is bad. It is basically not possible to hold all the relevant ideas and values in your head at once without going crazy or otherwise getting twisted/consumed-in-a-bad-way.
There are a few people who are able to hold all of this in their head and have a principled approach to resolving everything in a healthy way. (Nate Soares is the only one who comes to mind, see his "replacing guilt" series). But for most people, there doesn't seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
You can resolve this by saying "well then, the obvious-implications-of-EA-thinking must be wrong", or "I guess maybe I don't need to live healthily".
But, like, the world is on fire and you can do something about it and you do obviously need to be healthy. And part of being healthy is not just saying things like "okay, I guess I can indulge things like not spending 100% of my resources on saving the world in order to remain healthy but it's a necessary evil that I feel guilty about."
AFAICT, the only viable, sane approach is to acknowledge all the truths at once, and then apply a crude patch that says "I'm just going to not think about this too hard, try generally to be healthy, and put whatever bit of resources towards having the world not-be-on-fire that I can do safely."
Then, maybe check out Nate Soares's writing and see if you're able to integrate it in a more sane way, if you are the sort of person who is interested in doing that, and if so, carefully go from there.
Point B: What Should A Movement Trying To Have the World Not Be On Fire Do?
The approach in Point A seems sane and fine to me. I think it is in fact good to try to help the world not be on fire, and that the correct sane response is to proactively look for ways to do so that are sustainable and do not harm yourself.
I think this is generally the mindset held by EA leadership.
It is not out-of-the-question that EA leadership in fact really wants everyone to Give Their All and that it's better to err on the side of pushing harder for that even if that means some people end up doing unhealthy things. And the only reason they say things like Point A is as a ploy to get people to give their all.
But, since I believe Point A is quite sane, and most of the leadership I see is basically saying Point A, and I'm in a community that prioritizes saying true things even if they're inconvenient, I'm willing to assume the leadership is saying Point A because it is true, as opposed to for Secret Manipulative Reasons.
This still leaves us with some issues:
1) Getting to the point where you're on board with Point-A-the-way-I-meant-Point-A-to-be-interpreted requires going through some awkward and maybe unhealthy stages where you haven't fully integrated everything, which means you are believing some false things and perhaps doing harm to yourself.
Even if you read a series of lengthy posts before taking any actions, even if the Giving What We Can Pledge began with "we really think you should read some detailed blogposts about the psychology of this before you commit" (this may be a good idea), reading the blogposts wouldn't actually be enough to really understand everything.
So, people who are still in the process of grappling with everything end up on EA forum and EA Facebook and EA Tumblr saying things like "if you live off more than $20k a year that's basically murder". (And also, you have people on Dank EA Memes saying all of this ironically except maybe not except maybe it's fine who knows?)
And stopping all this from happening would be pretty time consuming.
2) The world is in fact on fire, and people disagree on what the priorities should be on what are acceptable things to do in order for that to be less the case. And while the Official Party Line is something like Point A, there's still a fair number of prominent people hanging around who do earnestly lean towards "it's okay to make costs hidden, it's okay to not be as dedicated to truth as Zvi or Ben Hoffman or Sarah Constantin would like, because it is Worth It."
And present_day_Raemon thinks those people are wrong, but not obviously so wrong that it's not worth talking about and taking seriously as a consideration.

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T21:03:29.706Z · score: 1 (1 votes) · EA · GW
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic.
Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results.
Assuming anti-realism, I don't have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude.
I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that's not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don't think in literal cost-effectiveness terms, but global benefits are still my general goal. It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

(I broke the quoted text into more paragraphs so that I could parse it more easily. I'm thinking about a reply – the questions you're posing here do definitely deserve a serious response. I have some sense that people have already written the response somewhere – Minding Our Way by Nate Soares comes close, although I don't think he addresses the "what if there actually exist moral obligations?" question, instead assuming mostly non-moral-realism)

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T20:56:49.673Z · score: 4 (7 votes) · EA · GW

Thanks for writing this.

I feel an ongoing sense of frustration that even though this has seemed like the common wisdom of most "longterm EA folk" for several years... new people arriving in the community often have to go through a learning process before they can really accept this.

This means that in any given EA space, where most people are new, there will be a substantial fraction of people who haven't internalized this and are still stressing themselves out about it – and who in turn stress out even newer people, who are exposed more often to the "see everything through the utilitarian lens" framing than to posts like this.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T20:49:34.370Z · score: 9 (4 votes) · EA · GW

My impression is that nobody has made it their job (and spent at least a month and preferably a year or two) to make Certificates of Impact work. i.e. money is real because humans have agreed to believe it's real, and because there's a lot of good infrastructure that helps it work. If Certificates of Impact (or Prizes) are to be real someone needs to actually build a thing and hype it continuously. So far it doesn't feel like it's been tried.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T06:27:18.400Z · score: 1 (1 votes) · EA · GW

Part of the point is that, although the prize isn't awarded until 2022, you can still sell your rights to the prize in 2019, to someone who predicts that you will win the prize in 2022.
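To make the mechanism concrete, here's a minimal sketch of how a buyer might price those rights in 2019, assuming simple expected-value discounting; the win probability, prize size, and discount rate below are hypothetical numbers of mine, not part of the proposal:

```python
# Hypothetical pricing of 2019 rights to a prize awarded in 2022.
# All figures are illustrative assumptions, not part of the proposal.

def rights_price(p_win: float, prize: float, annual_discount: float, years: float) -> float:
    """Expected prize payout, discounted back to the purchase date."""
    return p_win * prize / (1 + annual_discount) ** years

# A buyer who estimates a 30% chance of a $100k prize in 3 years,
# discounting at 5%/year, would pay about $25,900 for the rights.
print(round(rights_price(p_win=0.30, prize=100_000, annual_discount=0.05, years=3)))  # 25916
```

The seller gets funding now; the buyer is betting that their prediction of the 2022 judgment is better than everyone else's.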

Comment by raemon on How GiveWell's Research is Evolving · 2019-02-11T00:48:27.840Z · score: 17 (7 votes) · EA · GW

I'm curious how this relates to OpenPhil (I'd been bucketing "OpenPhil as the research team that does harder-to-quantify/justify stuff, and GiveWell as the team that does... not that")

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-23T20:28:28.751Z · score: 7 (4 votes) · EA · GW

(Updated the title of the post, after realizing that people who thought they agreed with me only read the headline and missed the very first point that it's still valuable to donate at least a token amount)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-07T04:45:52.116Z · score: 13 (5 votes) · EA · GW

*nods*

I think the way I'd phrase advice to someone who's already excited to get started donating, is some combination of:

a) try to save at least as much as you donate. (As deluks mentioned elsethread, it is totally possible to both donate and save significantly, so someone who's already chomping at the bit to donate significantly can probably find the budget for both 10% donations and savings)

b) re total runway time, I think a reasonable plan of action is "get at least 6 months of [comfortable] runway, and meanwhile be thinking about your potential longterm plans. A lot of people start out focused on donating but eventually find themselves wishing they had the freedom to start a project, or join a lower-paying job, so at least consider preparing for that sort of possibility."
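As a rough illustration of the timescales involved – the income and expense figures here are hypothetical assumptions of mine, not recommendations:

```python
# Illustrative runway arithmetic: how long does it take to bank
# `target_months` of expenses at a given savings rate?

def months_to_runway(monthly_income: float, savings_rate: float,
                     monthly_expenses: float, target_months: float) -> float:
    monthly_savings = monthly_income * savings_rate
    return target_months * monthly_expenses / monthly_savings

# Take-home pay of $4,000/month with $3,000/month expenses:
print(months_to_runway(4_000, 0.10, 3_000, 6))   # 45.0 months at a 10% savings rate
print(months_to_runway(4_000, 0.20, 3_000, 6))   # 22.5 months at a 20% savings rate
```

Which is part of why I think savings deserve priority early on: at typical savings rates, even 6 months of comfortable runway takes years to accumulate.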

Comment by raemon on Should there be an EA crowdfunding platform? · 2018-12-06T23:19:04.154Z · score: 5 (2 votes) · EA · GW

I like a lot of the directions here. My main concern is that the current implementation details seem like a lot of work, when grant evaluation already seems fairly bandwidth-constrained.

An alternative that I think might strike a middle ground between "everyone pitches ideas randomly to the EA forum / kickstarter etc" and the "highly structured vetting process" described here:

  • Right now, there are several EA grantmaking bodies (CEA, BERI, OpenPhil, EA Funds, etc). My impression is there is some duplication of labor in setting up each grant funnel, and duplication of effort for a given project to submit multiple grants.
  • Some of those orgs actually have different requirements for who they donate to, so it makes sense for them to have different processes
  • But, I'd expect most of the core process to be pretty similar.

So, proposal: create a common application process which includes whatever submission criteria are shared between grantmakers, with whatever additional details are required for specific orgs. This doesn't create any additional obligations on people's time, just streamlines the work that's already being done.

You could potentially also share the application publicly.

There might be additional details to work out to prevent information cascades, and to optimize the epistemics of the system.

Comment by raemon on Should there be an EA crowdfunding platform? · 2018-12-06T23:00:07.091Z · score: 1 (1 votes) · EA · GW

I do think this is a promising idea, but coordination-technology is actually an area where I think it's pretty important to get a bunch of nuances right, where just building a thing is a) unlikely to work, and b) likely to harm future attempts to build the thing.

You don't just need to build tech, you need to get lots of people on board with it at once. And every instance of getting everyone on board with a thing has a large cost, and every failed instance of that makes people less willing to try out the next thing.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-05T05:19:20.978Z · score: 1 (1 votes) · EA · GW

(initial version of the above comment wasn't quite replying to what deluks was saying – I accidentally started writing and then got tunnel vision and forgot the points about agentiness. Reworded a bit to address that)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-12-04T23:02:30.868Z · score: 9 (5 votes) · EA · GW

For completeness' sake, responding more in depth to your 80k comment. (It's plausible this should go in the other 80k post-thread, but it seemed just as much part of this conversation. shrug)

Disclaimer Re: 80k

I haven't read 80k very thoroughly and am not sure whether I endorse their advice or whether my picture of their overall advice is accurate. But what advice I've seen does seem like it's aiming to fill a fairly narrow set of top vacancies. And it does seem pretty alienating if you're not part of their demographic.

This doesn't necessarily mean 80k should change focus – the top career paths are still highly important to fill and they have limited time. But I do think it probably means 80k-style advice shouldn't be the only/primary place we direct newcomers' attention.

My own take on what kind of direct work is advisable is still probably a bit depressing – I don't think there are easy answers on how to help, and it'd be hard to scale across 10,000s of people.

[It's possible 80k actually shares these views, or even that they're listed on the website, I haven't checked]

My take:

[edit: updated because I didn't quite address deluks917's points as worded]

I think the issues with getting into EA Direct Work have less to do with how skilled you need to be, and more to do with limitations in network bandwidth.

There is some agentiness needed to get involved, but a) I think agency is a learnable skill, b) the amount required is less than you might think.

If you can successfully get yourself into the EA network, then you can be aware of early stage projects forming. Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy.

I think this is quite achievable for the median EA.

Early stage orgs often have neither money nor time for an extensive hiring process – people just start working with people they know. The bottleneck is more about people knowing each other than about particular skills.

But, new projects and orgs also increase the surface area of EA, adding more places for newcomers to plug into. So if you can help a budding project grow into an institution, you're not just doing direct work, you're helping the overall community scale.

These jobs are lower pay, sure. But that's precisely why I think Earn-to-Save is important.

This is still a bit rate-limited, and couldn't handle an influx of tens of thousands of people. But I think it can handle more than it currently does. And it's definitely not because people aren't top-half-of-Oxford talented.

Meanwhile, although "being agenty enough to found a project yourself" is fairly hard, it's learnable. The path to learning it is a bit circuitous and doesn't necessarily fit directly into EA. But I think most EAs would benefit from taking on a complex project that forces them to grow, learning "hustle" and "networking", etc. This works best when it's a project you already are excited about (doesn't matter much if it's EA related), so it doesn't feel like you're making a sacrifice so much as just exploring something new and cool.

I don't think people know whether they can be agenty until they try, and I currently think a better default path for aspiring EAs goes something like:

  • Start donating a bit as a credible signal
  • Build up runway
  • Do some projects in your spare time, practice thinking seriously about EA, and try a few things to see if some of the direct work stuff is a good fit for you.
  • Depending on how the previous bit goes, do one of:
    • try a low-medium risk plan that could move you into a higher impact path, but fails gracefully (i.e. move to an EA hub for a regular job you'll enjoy, but then explore the network there and see if you can transition)
    • try a high risk plan if you're feeling ambitious
    • or, just try to move into the most lucrative version of whatever your default career was going to be anyway, if the above 2 options don't make sense for you.

All three of which benefit from having enough runway to quit your current job.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-30T01:16:21.718Z · score: 5 (4 votes) · EA · GW

So I have a mixture of agreements and disagreements with your quoted comment. (Minor meta point: I recommend formatting it as a blockquote, to make it easier to see which section is which.)

I'll summarize my own version of that comment in a bit (the tldr of which is "it's not as bad as you describe it, but yeah, it's still pretty bad").

But I don't think the applicability hinges on the specifics of your comment. Instead, I'd argue:

Earn-to-save is relevant to a much broader swath of people. Even if you're ultimately just trying to Earn-to-Give, it's still much more important to seek out higher-paying jobs than to donate while you're at a low-to-mid-paying job. This is relevant even if you're "just" moving from $50k to $80k.

My biggest crux here is that having 2 years of runway is important even for switching jobs at that level, and I think this should dominate even within your framework (at least by my understanding of your position).

Meanwhile, I'd make a more speculative claim: while yes, most people probably won't end up in a Direct Impact career, the people who do still have enough expected value that early EAs should at least be seriously considering that possibility. (I very much don't think you need to be top-half-of-Oxford for direct work to be better than earning to give.)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-29T21:52:06.611Z · score: 1 (1 votes) · EA · GW

One last bit, that I realized I didn't emphasize very hard in the OP: I'm also imagining this being pushed harder than Earning to Give is currently pushed.

The status quo is that if you ask "what do you need to do to be an Effective Altruist?" you get a murky answer, where "donate 10%" is one thing a bunch of people agree qualifies, but so does working at an EA org, and maybe taking time off to learn does too, and if you're a student or poor it's a bit ambiguous.

I would definitely oppose putting uniform pressure on everyone at an EA meetup group to donate 10% – there's too many situations where the blunt instrument of social pressure would do the wrong thing. But I would be pretty comfortable putting uniform pressure on any EA with income to Earn to Save.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-29T21:36:21.956Z · score: 1 (1 votes) · EA · GW

That all said, to be clear, I do also find the survey data you linked in the other comment pretty disappointing. I do think it's often quite possible to be donating 10% and saving 10% (or more). I think this should be encouraged for people who have gotten financially situated and have a rough idea of their longterm plans.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-29T21:02:14.773Z · score: 8 (3 votes) · EA · GW

I do think the post would be much improved if it went into detail with more numbers and cases (I definitely did a low-effort version of the post).

But my core point was actually subtly different from mingyuan's, and I think the numbers that would support my point are a different sort than "how much money can people afford to donate?" (not sure which type of numbers you meant to imply)

Mingyuan's case is one of the things I was trying to solve for. But the more important underlying claim was "it's more important to have at least a year of runway in the bank than it is to get started donating heavily."

(This is essentially what 80k is already recommending, as Ben Todd notes elsewhere in the thread. Their current post argues to donate 1% until you have 6-12 months of runway, where runway includes moving in with parents. I'd argue for a stronger claim – 12-36 months, where living with parents doesn't count – but the basic principle is the same.)

This obviously only makes sense as EA advice if there's a part 2, where you actually do something with the money (be it donate, or actually use the runway to switch jobs or move cities or retrain skills).

My suggested numbers for Earning to Save weren't an attempt to rigorously determine the optimal financial advice. They mostly started from the point of "we currently encourage people to donate 10%, and instead I basically think the upfront advice should redirect that 10% to savings until people have enough runway."

The numbers that'd support this don't have much to do with how much you can easily live on, and instead have more to do with "how strong are the benefits to switching careers, how likely are people to run into financial hardship, how long does it typically take to get a new job, how valuable is it to try and launch a major project or contribute to EA with direct work."

I admittedly don't have a clear set of numbers backing that up, but it is my pretty strong impression both from:

  • Personal experience and anecdotes from friends (when I quit my job with about 6 months of runway, it turned out I needed 18 months, and I ended up homeless for a while; meanwhile, the rationalist houses in NYC that made good community centers only worked because someone had $50k in the bank that could absorb roommates randomly leaving, pay upfront costs for deposits, etc.)
  • 80k and other EA orgs changing their direction a bit. I realize we're currently a bit in a pendulum swing back away from the talent-constraint language, but I think a core concept is still pretty important – that much of the good you can do has to do with things other than donating, and this requires more flexible messaging.
  • Experience in the nonprofit landscape. My time at Agora for Good, where I was much more exposed to the broader nonprofit landscape, radically changed what I thought was important about philanthropy. AMF is basically the leftovers of the malaria landscape – the stuff that the Gates Foundation doesn't get around to. (See this comment for specifics.) This has led me to think that most of the value in EA is in discovering or creating new giving opportunities that the major funders don't believe in yet. (This is a somewhat different take on why I think EA is more talent-constrained than funding-constrained, and why much of the value an EA can generate has more to do with gaining skills and switching careers than with donating.)

Comment by raemon on So you want to do operations [Part one] - which skills do you need? · 2018-11-28T21:14:41.480Z · score: 5 (4 votes) · EA · GW

For me, it was quite important that the project was not just "important" in the sense that it was relevant to the global good, but "important" in the sense that it was meeting all of my own needs, i.e.:

  • I felt like other people in my social circle cared about it
  • The end product was something that tied in with my overall self-narrative/image
  • Many intermediate stages were very creatively stimulating
  • The difficulty of the tasks was roughly at my current skill level
  • There was clearly nobody else who would do the thing if I didn't do it (this is unfortunately in tension with "doing it sustainably". It's possible people need to be forced to learn this skill in somewhat stressful burnout-inducing environments and then later can apply it in healthier environments)

A lot of things had to go right at once, which were pretty situation-dependent (and me-dependent)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-28T21:00:36.847Z · score: 1 (1 votes) · EA · GW

Yeah, I think something similar to this is probably best.

I do think it's often necessary, to get support from others, to initially do some self-funded / volunteer work to demonstrate proof of concept. But it's probably best to get outside feedback as soon as possible.

Comment by raemon on So you want to do operations [Part one] - which skills do you need? · 2018-11-28T20:21:38.095Z · score: 2 (2 votes) · EA · GW

I personally have found it pretty situational – I went through a fairly binary switch from "had not had a project that felt important enough to take heroic responsibility for" to "suddenly found lots of projects where it felt fairly natural to take heroic responsibility for things."

(And then burned out and currently don't take much buck-stops-here responsibility, but feel like I could again if I needed to. I think figuring out how to do this sustainably, both as an individual and at the org level, is pretty tricky)

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-28T19:09:09.902Z · score: 1 (1 votes) · EA · GW

Fair. I do think the underlying point of "don't donate all your money to charity right before asking for a bunch of money for your new charity" is still pretty important. If nothing else, it should still mean that you don't need to pay yourself.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-28T02:20:39.442Z · score: 3 (4 votes) · EA · GW

Hmm. This feels like it's reading more or different things into the post than I intended to convey.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-27T19:13:53.848Z · score: 13 (8 votes) · EA · GW

Yeah. My impression was that "prioritize savings and runway" had gotten a fair amount of traction in the EAsphere, but it hadn't quite hit the point where it was the obvious advice newcomers were getting.

I would note, responding to the article, that I think it's important to have enough runway that you can live without doing things like managing an Airbnb, because part of my motivation here is to make sure people can spend their runway devoting their full cognitive attention to solving important problems.

When I was broke and Airbnb-ing to survive, the Airbnb took a lot of overhead. It was worth it, but only as a backup-backup plan, not as something I'd want as part of my found-a-startup plan.

Similarly, living with my parents for a bit was stressful, and meant that I didn't have immediate access to my usual network for either social support, or cross pollination of ideas, or easy exposure to opportunities.

In "Taking AI Risk Seriously", one of the important points (but buried a bit) is advocating for people to have comfortable runway, not "you can technically live off of Ramen in your parent's basement runway."

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-27T19:10:40.184Z · score: 2 (4 votes) · EA · GW

I do want to note that I very specifically didn't say giving 10% was impossible – I said it was impractical as a default option for all new EAs. This is partly because for some people it can be hard, but more importantly because for many people I think it's inadvisable. I think it's much more important for new EAs to adopt plans with option value, ones that start building them runway.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-27T02:29:52.614Z · score: 9 (6 votes) · EA · GW

I have a few classes of response:

  • My personal experience was that living on $46k was fairly difficult. You can chalk this up to "I was bad at budgeting", but...
  • I think it's actually somewhat bad to emphasize frugality as the way new EAs approach financial problems. It incentivizes an approach to finance that amounts to penny-pinching over amounts that don't matter, and not investing in things that give you additional time or multiply your power. (This is less a point against "try to save 20%" and more a point against immediately jumping to "donate 10%".) Cognitive bandwidth that you spend on "how do I make sure to meet this budget?" is probably better spent on "how do I boost my income by $20k?" (or "how do I solve this problem at the high-impact org I work at now?")
  • Meanwhile, the point here is not to find something "reasonably achievable", but something that basically anyone can do as soon as they get involved with EA. Some people are making $26k instead of $50k. Some people have a lot of student debt. Some people should be taking some time off to think, or recover from burnout, or have a lot of health care costs. Some people have housemates that randomly move out without warning and their rent suddenly doubles.

I still think the GWWC pledge is a good thing for many EAs to adopt, and I think "save 10%, donate 10%" is also pretty good for anyone for whom it's practical (this is basically what I did when I got a higher-paying job). I just don't think it works as a universal default.

Comment by raemon on Earning to Save (Give 1%, Save 10%) · 2018-11-26T23:50:23.575Z · score: 3 (4 votes) · EA · GW

Note: this is phrased as a "rally people around an idea" kind of post.

Ideally I'd have had time to break this into two posts: one that more dispassionately discusses the benefits of prioritizing savings (which would have fit the frontpage criteria), and one that makes the more specific "give 1%, save 10%" claim, which is a bit more arbitrary.

I think there's some cost to jumping straight to the "rally people to your idea" stage (if everyone's doing it, it erodes the epistemic commons). I ended up doing it anyway because I didn't have much time and found this one easier to write, but in the interest of setting a good example on the new forum I wanted to at least acknowledge that.

Earning to Save (Give 1%, Save 10%)

2018-11-26T23:47:58.384Z · score: 64 (38 votes)
Comment by raemon on "Taking AI Risk Seriously" – Thoughts by Andrew Critch · 2018-11-19T22:49:43.469Z · score: 6 (5 votes) · EA · GW

(Off the cuff thoughts, which are very low confidence. Not attributed to Critch at all)

So, this depends quite a bit on how you think the world is shaped (which is a complex enough question that Critch made the recommendation to just think about it for weeks or months). But the four classes of answer that I can think of are:

a) in many possible worlds, the selfish and altruistic answers are just the same. The best way to survive a fast or even moderate takeoff is to ensure a positive singularity, and just pouring your efforts and money into maximizing the chance of that is really all you can do.

b) in some possible worlds (perhaps like Robin Hanson's Age of Em), it might matter that you have skills that can go into shaping the world (such as skilled programming). Though this is realistically only an option for some people.

c) for the purposes of flourishing in the intervening years (if we're in a slowish takeoff over the next 1-4 decades), owning stock in the right companies or institutions might help. (Although some caution: worrying about this during the actual takeoff period may be more of a distraction than a help.)

d) relatedly, simply getting yourself psychologically ready for the world to change dramatically may be helpful in and of itself, and/or be useful to make yourself ready to take on sudden opportunities as they arise.

Comment by raemon on "Taking AI Risk Seriously" – Thoughts by Andrew Critch · 2018-11-19T21:15:46.063Z · score: 11 (5 votes) · EA · GW

Ah. So I'm not sure I can represent Critch here off-the-cuff, but my interpretation of this post is a bit different than what you've laid out here.

This is not a proposal for how the field overall should grow. There should be infrastructural efforts made to onboard people via mentorship, things like AI Safety Camp, things like MIRI Fellows, etc.

This post is an on-the-margin recommendation to some subset of people. I think there were a few intents here:

1. If your basic plan is to donate, consider trying to become useful for direct work instead. Getting useful at direct work probably requires at least some chunk of time for thinking about and understanding the problem, and some chunk of time for learning new skills.

2. The "take time off to think" thing isn't meant to be "do solo work" (like writing papers) It's more specifically for learning about the AI Alignment problem and landscape. From there, maybe the thing you do is write papers (solo or at an org), or maybe it's apply for a managerial or ops position at an org, or maybe it's founding a new project.

3. I think (personal opinion, although I expect Critch to agree), that when it comes to learning skills there are probably better ways to go about it than "just study independently." (Note the sub-sections on taking advantage of being in school). This will vary from person to person.

4. Not really covered in the post, but I personally think there's a "mentorship bottleneck". It's obviously better to have mentors and companions, and the field should try to flesh that out. The filter for people who can work at least somewhat independently and figure things out for themselves is a filter of necessity, not an ideal situation.

5. I think Critch was specifically trying to fill a particular gap on the margin, which is "people who can be trusted to flesh out the middle-tier hierarchy" – people who can be entrusted to launch and run new projects competently without needing to be constantly double-checked. This is necessary to grow the field for people who do still need mentorship or guidance. (My read from recent 80k posts is that the field is still somewhat "management bottlenecked".)

Comment by raemon on "Taking AI Risk Seriously" – Thoughts by Andrew Critch · 2018-11-19T20:01:30.465Z · score: 4 (3 votes) · EA · GW

Fair, but it's fairly easy to fix. Updated, and I added a link to BERI to give people some more context of who Andrew Critch is and why you might care.

"Taking AI Risk Seriously" – Thoughts by Andrew Critch

2018-11-19T02:21:00.568Z · score: 26 (12 votes)
Comment by raemon on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T19:01:10.122Z · score: 1 (1 votes) · EA · GW

It's worth noting that in the previous 80k post, many orgs said they consider hiring skilled managers to be a top priority. So I don't think it's that this isn't happening, just... well, it takes time.

Comment by raemon on Doing vs Talking at EA Events · 2018-09-07T22:37:04.513Z · score: 2 (2 votes) · EA · GW

Re: Research

This is actually what I currently think EA groups should focus on – not because the research itself is likely to be directly important, but because one of the most important things an EA community can do is help its members learn how to think critically through an EA lens.

Research isn't the only way to go about this, but I think trying to answer real questions, while taking into account impact, practicality, neglectedness, etc to help you orient on the right questions, is a good practice.

Comment by raemon on EA Forum 2.0 Initial Announcement · 2018-09-07T22:32:26.340Z · score: 2 (2 votes) · EA · GW

A couple quick responses to some of the more straightforward comments:

  • A lot of the typography is just a first pass "try to give it different fonts than LW". I expect that will improve as more time is put into it.
  • I believe that on the EA Forum, posts start with a strong-upvote from their author (worth 1 karma for a brand-new author, quickly rising to around 3 as they gain some karma, and slowly increasing from there). (This is how LW does it, and AFAICT the EA Forum kept that.)
  • You can't delete posts at the moment but you can move them to your drafts.
  • Apologies for the loading speed, which we're continuing to improve (on both EA and LW forums).
Comment by raemon on Leverage Research: reviewing the basic facts · 2018-08-05T20:41:16.633Z · score: 14 (14 votes) · EA · GW

FYI: a) I struggle to read most of your posts (and I seem like I'm supposed to be in the target audience);

b) the technique I myself use is "write the post the way I'd naturally write it (i.e. long and meandering), and then write a tldr of the post summarizing it with a few bullet points... and then realize that the tldr was all I actually needed to say in the first place."

Comment by raemon on Earning to Give as Costly Signalling · 2017-06-24T17:33:00.052Z · score: 4 (4 votes) · EA · GW

Maybe, but the thing I'm trying to get at here is "a bunch of people saying that rich people should donate to X" is a less credible signal than "a bunch of people saying X thing is important enough that they are willing to donate to it themselves."

Earning to Give as Costly Signalling

2017-06-24T16:43:25.995Z · score: 11 (11 votes)
Comment by raemon on Note: Jeff Bezos posted on Twitter today asking for philanthropy ideas · 2017-06-18T17:29:48.875Z · score: 2 (2 votes) · EA · GW

I responded here

https://twitter.com/Raemon777/status/876489114861830144

GiveWell's made it their mission to find the best nonprofits working in exactly that near-term, urgent-but-high-impact space.

Comment by raemon on Note: Jeff Bezos posted on Twitter today asking for philanthropy ideas · 2017-06-18T17:29:23.696Z · score: 0 (0 votes) · EA · GW

I was about to excitedly list my own contribution here, and then actually clicked yours and... ah. I see. :P

(It is a great idea but not quite in the spirit of the thing I was about to share. lol)

Comment by raemon on Concrete project lists · 2017-03-29T23:09:06.589Z · score: 1 (1 votes) · EA · GW

How could bad research not make it harder to find good research? When you're looking for the research, you have to look through additional material before you find the good research, and good research is fairly costly to recognize in the first place.

Comment by raemon on Concrete project lists · 2017-03-25T22:32:07.056Z · score: 10 (9 votes) · EA · GW

Thanks for doing this!

My sense is that what people are missing is a set of social incentives to get started. Looking at any one of these, they feel overwhelming; they feel like they require skills that I don't have. It feels like if I start working on one, then EITHER I'm blocking someone who's better qualified from working on it, OR someone who's better qualified will do it anyway and my efforts will be futile.

Or, in the case of research, my bad quality research will make it harder for people to find good quality research.

Or, in the case of something like "start one of the charities Givewell wants people to start", it feels like... just, a LOT of work.

And... this is all true. Kind of. But it's also true that the way people get good at things is by doing them. And I think it's sort of necessary for people to throw themselves into projects they aren't prepared for, as long as they can get tight feedback loops that enable them to improve.

I have half-formed opinions about what's needed to resolve that, which can be summarized as "better triaged mentorship." I'll try to write up more detailed thoughts soon.

Comment by raemon on CEA's strategic update for February 2017 · 2017-03-19T20:54:08.541Z · score: 2 (2 votes) · EA · GW

Glad to see the plans laid out.

I think it'd have made more sense to do the "EA Funds" experiment in Quarter 4, where it ties in more with people's annual giving habits.

I do think it may be valuable to try even if the donations are not counterfactual (for purposes of being able to coordinate donations better)

Comment by raemon on Open Thread #36 · 2017-03-17T15:04:41.577Z · score: 1 (1 votes) · EA · GW

Very much agreed. I was pretty worried to see the initial responses saying 'saving for retirement isn't EA'.

Comment by raemon on EA Funds Beta Launch · 2017-03-05T20:53:26.787Z · score: 2 (2 votes) · EA · GW

I currently believe MIRI is the best technical choice for Far Future concerns, but that meta-ish human-capital building orgs like 80k or CFAR are plausibly the second-best choice.

Are those the sorts of things that would fall under "Far Future" or "Movement Building?"

Comment by raemon on What Should the Average EA Do About AI Alignment? · 2017-03-01T17:29:32.859Z · score: 5 (5 votes) · EA · GW

While I didn't elaborate on my thoughts in the OP, essentially I was aiming to say "if you'd like to play a role in advocating for AI safety, the first steps are to gain skills so you can persuade the right people effectively." I think some people jump from "become convinced that AI is an issue" to "immediately start arguing with people on the internet".

If you want to do that, I'd say it's important to:

a) gain a firm understanding of AI and AI safety; b) gain an understanding of common objections and the modes of thought surrounding those objections; c) practice engaging with people in a way that actually has a positive impact (do this practice on lower-stakes issues, not AI). My experience is that positive interactions involve a lot of work and emotional labor.

(I still argue occasionally about AI on the internet and I think I've regretted it basically every time)

I think it makes more sense to aim for high-impact influence, where you cultivate a lot of valuable skills that get you hired at actual AI research firms, where you can then shape the culture in a way that prioritizes safety.

Comment by raemon on What Should the Average EA Do About AI Alignment? · 2017-02-28T20:47:21.026Z · score: 0 (0 votes) · EA · GW

I agree with this concern, thanks. When I rewrite this post in a more finalized form I'll include reasoning like this.

Comment by raemon on What Should the Average EA Do About AI Alignment? · 2017-02-27T05:25:51.488Z · score: 2 (2 votes) · EA · GW

Thanks, fixed. I had gotten partway through updating that to say something more comprehensive, decided I needed more time to think about it, and then accidentally saved it anyway.

Comment by raemon on What Should the Average EA Do About AI Alignment? · 2017-02-26T06:14:27.195Z · score: 2 (2 votes) · EA · GW

Heh, correct. Will update soon, when I have something other than a phone to do it on.

Comment by raemon on What Should the Average EA Do About AI Alignment? · 2017-02-25T22:37:20.959Z · score: 1 (1 votes) · EA · GW

Thanks, fixed.

Actually, is anyone other than DeepMind in London? (The section where I brought this up was on volunteering, which I assume is less relevant for DeepMind than for FHI.)

What Should the Average EA Do About AI Alignment?

2017-02-25T20:07:10.956Z · score: 28 (26 votes)
Comment by raemon on Why I donated to the Environmental Data & Governance Initiative · 2017-01-19T16:21:32.348Z · score: 1 (1 votes) · EA · GW

Sort of related to that – is there a place where this sort of post (and the other recent Tostan post) can get aggregated?

Comment by raemon on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T12:15:00.374Z · score: 1 (1 votes) · EA · GW

I agree that these are tradeoffs, and that that's very sad. I don't have a very strong opinion on the overall net balance of the policy. But (it sounds like we both agree?) they are probably a necessary evil for organizations like this.

Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

2017-01-11T17:45:48.394Z · score: 18 (20 votes)

Meetup : Brooklyn EA Gathering

2015-04-13T00:07:47.159Z · score: 0 (0 votes)