Posts

Raemon's EA Shortform Feed 2019-06-19T22:12:48.966Z · score: 17 (10 votes)
What's the median amount a grantmaker gives per year? 2019-05-04T00:15:57.178Z · score: 21 (5 votes)
You Have Four Words 2019-03-07T00:57:29.273Z · score: 36 (19 votes)
Dealing with Network Constraints (My Model of EA Careers) 2019-02-28T01:34:03.571Z · score: 39 (21 votes)
Earning to Save (Give 1%, Save 10%) 2018-11-26T23:47:58.384Z · score: 66 (40 votes)
"Taking AI Risk Seriously" – Thoughts by Andrew Critch 2018-11-19T02:21:00.568Z · score: 26 (12 votes)
Earning to Give as Costly Signalling 2017-06-24T16:43:25.995Z · score: 11 (11 votes)
What Should the Average EA Do About AI Alignment? 2017-02-25T20:07:10.956Z · score: 29 (26 votes)
Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) 2017-01-11T17:45:48.394Z · score: 18 (20 votes)
Meetup : Brooklyn EA Gathering 2015-04-13T00:07:47.159Z · score: 0 (0 votes)

Comments

Comment by raemon on The Future of Earning to Give · 2019-10-15T05:07:14.413Z · score: 5 (3 votes) · EA · GW

My intuition is that the EA Funds are usually a much better opportunity, in terms of donation impact, than donor lotteries or having one person do independent research themselves (instead of relying almost entirely on recommendations).

My background assumption is that it's important to grow the number of people who can work full-time on grant evaluation.

Remember that GiveWell was originally just a few folks doing research in their spare time.

Comment by raemon on The Future of Earning to Give · 2019-10-15T05:06:01.955Z · score: 3 (2 votes) · EA · GW

My understanding (not confident) is that those people (at least Nick Beckstead) act more as advisors providing a sanity check (or at least that they aren't the ones putting most of the time into the funds).

Comment by raemon on The Future of Earning to Give · 2019-10-14T01:21:57.599Z · score: 15 (7 votes) · EA · GW

I also think there's some potential to re-orient the EA pipeline around this concept.

My experience is that EA meetups struggle a bit with "what do we actually do to maintain community cohesiveness, given that for many of us our core action is something we do a couple times per year, mostly privately?" If a local meetup did a collective donor lottery, then even if only one person wins, they could still solicit help from others to evaluate donation targets, making it a collective group project (while being the sort of project that's okay for some people to flake on).

Comment by raemon on The Future of Earning to Give · 2019-10-14T01:21:13.738Z · score: 2 (1 votes) · EA · GW

(edit: whoops, responded to wrong comment)

Comment by raemon on The Future of Earning to Give · 2019-10-14T01:19:44.313Z · score: 61 (22 votes) · EA · GW

My take: rank-and-file EAs (and most local EA communities) should be oriented around donor lotteries.

Background beliefs:

  • I think EA is vetting constrained
  • Much of the direct work that needs doing is network constrained (i.e. it requires mentorship, in part to help people gain the context they need to form good plans)
  • The Middle of the Middle of the EA community should focus on getting good at thinking.
  • There's only so much space in the movement for direct work, and it's unhealthy to set expectations that direct work is what people are "supposed to" be doing.

I think the "default action" for most EAs should be something that is:

  • Simple, easy, and reasonably impactful
  • Provides a route for people who want to put in more effort to do so, while practicing building actual models of the EA ecosystem.

I don't think it's really worth it for someone donating a few thousand dollars to put a lot of effort into evaluating where to donate. But if 50 people each put $2,000 into a donor lottery, they collectively have $100,000, which is enough to justify at least one person's time spent thinking seriously about where to put it. (It's also enough to angel-invest in a new person or org, allowing the winner to vet new orgs as well as existing ones.)

I think it's probably more useful for one person to put serious effort into allocating $100,000 than for 50 people to put token effort into allocating $2,000 each.
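
For concreteness, here's a minimal sketch of that arithmetic in Python (the $2,000 contribution and 50 participants are just the example figures above):

```python
# Donor lottery arithmetic (illustrative figures from the example above).
contribution = 2_000  # dollars contributed per participant
participants = 50

pool = contribution * participants  # total to allocate: $100,000
p_win = contribution / pool         # chance of winning, proportional to contribution

print(f"Pool: ${pool:,}")                      # Pool: $100,000
print(f"P(win) per participant: {p_win:.0%}")  # P(win) per participant: 2%

# Each participant's expected allocation is unchanged ($100,000 * 0.02 == $2,000),
# but the evaluation effort is concentrated into a single $100,000 decision.
```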

This seems better to me than generic Earning to Give (except for people who earn enough that donating, say, $25,000 or more is realistic).

Comment by raemon on Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) · 2019-10-01T06:28:56.775Z · score: 6 (3 votes) · EA · GW

I asked Critch about this today and he said it seemed fine.

Comment by raemon on Kerry_Vaughan's Shortform · 2019-09-24T01:23:43.997Z · score: 4 (2 votes) · EA · GW

This was quite an interesting point I hadn't considered before. Looking forward to reading more.

Comment by raemon on Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) · 2019-09-20T21:15:47.994Z · score: 1 (2 votes) · EA · GW

My understanding is that it's currently focused on nonprofits (in large part because it's much more logistically and legally complicated to send money to individuals)

Comment by raemon on Effective Altruism and Everyday Decisions · 2019-09-20T20:56:18.517Z · score: 11 (6 votes) · EA · GW

Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: "It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me." Or "I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter."
...
The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to rules like "try to behave in non-wasteful ways", especially when the exception is personally beneficial. And I think each exception can weaken your broader narrative about what you value and who you are.

I was brought up in a family that was very pro-don't-waste, and I've had a lengthy shift towards "actually, 'not wasting' just isn't very important; it's more of a carry-over from a time when a) humanity had a lot less ability to produce stuff, and b) humanity had worse landfill technology than we have now."

Insofar as we do produce too much waste, it's mostly at a corporate/organizational level, rather than something that makes sense for individuals to prioritize.

It's not that I think people should be making exceptions to rules like 'try to behave in non-wasteful ways', it's that I mostly now think that 'don't be wasteful' wasn't that useful a core-rule in the first place.

(Among my cruxes here are a belief that landfill technology has improved since the era when 'don't waste' and 'recycle' memes took off, as well as a shift towards 'thinking broadly about having a high impact is much more important than individual local decisions.'

Past me – and perhaps you – might be suspicious of the claim that landfill technology is actually good enough that this isn't a big deal, perhaps rightly so, because it's a kinda suspiciously-convenient belief. I don't have arguments-at-the-ready that would have convinced past me, so I'm mostly just laying out my current reasoning without expecting it to be that persuasive at the moment.)

Comment by raemon on Leverage Research: reviewing the basic facts · 2019-09-19T23:01:46.909Z · score: 16 (8 votes) · EA · GW

Just wanted to say I super appreciated this writeup.

Comment by raemon on 'Longtermism' · 2019-07-26T00:01:24.389Z · score: 14 (9 votes) · EA · GW

I suspect the goal here is less to deconfuse current EAs and more to make it easier to explain things to newcomers who don't have any context.

(It also seems like good practice to me for people in leadership positions to keep people up to date about how they're conceptualizing their thinking)

Comment by raemon on I find this forum increasingly difficult to navigate · 2019-07-13T22:20:09.357Z · score: 6 (3 votes) · EA · GW

Quick note that if you set All Posts to "sort by new" instead of "sort by Daily" there'll be 50 posts. (The Daily view is a bit weird because it varies a lot depending on forum traffic that week)

Comment by raemon on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-07T20:39:09.722Z · score: 11 (7 votes) · EA · GW

I don't have much to contribute but I appreciated this writeup – I like it when EAs explore cause areas like this.

Comment by raemon on I find this forum increasingly difficult to navigate · 2019-07-05T23:36:24.562Z · score: 6 (4 votes) · EA · GW

For the record I'm someone who works on the forum and thought the OP was expressed pretty reasonably.

Comment by raemon on I find this forum increasingly difficult to navigate · 2019-07-05T23:28:38.720Z · score: 4 (2 votes) · EA · GW

Strong upvoted mostly to make it easier to find this comment.

Comment by raemon on Raemon's EA Shortform Feed · 2019-07-03T07:59:03.280Z · score: 2 (1 votes) · EA · GW

The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they're either young and lacking some core "figure out how to be helpful and actually help" skills, or they're older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.

I think the *End* of the Middle of the funnel is more where "volunteer at EA orgs" makes sense. And people in the Middle of the Middle who think they have the "figure out how to be helpful and help" property should do so if they're self-motivated to. (If they're not self-motivated, they're probably not going to be a good volunteer.)

Comment by raemon on Raemon's EA Shortform Feed · 2019-07-03T07:56:25.875Z · score: 3 (2 votes) · EA · GW

My claim is just that "volunteer at an org" is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn't to say volunteers aren't valuable, or that many EAs shouldn't explore that as an option, or that better coordination tools to improve the situation shouldn't be built.

But I am a bit more pessimistic about it – the last time I checked, many of the times someone had said "huh, it looks like there should be all this free labor available by passionate people, can't we connect these people with orgs that need volunteers?" and tried to build some kind of tool to help with that, it turned out that most people aren't actually very good at volunteering, and that it requires something more domain specific and effortful to get anything done.

My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.

(Again, not arguing that ALLFED shouldn't look for volunteers or that EAs shouldn't volunteer at ALLFED, esp. if my experience doesn't match yours. I'd encourage anyone reading this who's looking for projects to give ALLFED volunteering a look.)

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-30T21:49:44.967Z · score: 2 (1 votes) · EA · GW

Membranes

A membrane is a semi-permeable barrier: things can enter and leave, but it's a bit hard to get in and a bit hard to get out. This allows the system inside to store negentropy, which lets it do more interesting things than its surroundings.

An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle of the funnel to the end, rather than from the beginning of the funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.

(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)

What happens inside the membrane?

  • First, you meet some basic standards for intelligence, good communication, etc. The basics you need in order to accomplish anything on purpose.
  • As noted elsewhere, I think EA needs to cultivate the skill of thinking (as well as gaining agency). There are a few ways to go about this, but all of them require some amount of willingness to put in extra effort and work. Having a space where people can expect that everyone there is interested in putting in that effort helps with motivation and persistence.
  • In time, you can develop conversation norms that foster better-than-average thinking and communication. (i.e. make sure that admitting you were wrong is rewarded rather than punished)

Membranes can work via two mechanisms:

  • Be more careful about who you let in, in the first place
  • Be willing to invest effort in giving feedback, or to expel people from the group.

The first option is easier. Giving feedback and expelling people is quite costly, and painful both for the person being expelled (who may have friends and roots there) and for the person doing the expelling (who may face a stressful fight, with people second-guessing them).

If you're much more careful about who you let in, an ounce of prevention can be more valuable than a pound of cure.

On the other hand, if you put up lots of barriers, you may find your community stagnating. There will also be false positives: people who seemed not super promising, but who would have been fine if you'd given them a chance to grow.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-30T21:49:19.547Z · score: 3 (2 votes) · EA · GW

Notes from a "mini talk" I gave to a couple people at EA Global.

Local EA groups (and orgs, for that matter) need leadership, and membranes.

Membranes let you control who is part of a community, so you can cultivate a particular culture within it. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.

Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it's worth people's effort to overcome the barriers to entry, and/or to help maintain them.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-30T21:22:56.806Z · score: 2 (1 votes) · EA · GW

Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn't scale. There are communities and movements that are designed such that there's lots of volunteer work to be done, such that you can provide 1000 volunteer jobs. But I don't think EA is one of them.

I've heard a few people from orgs express frustration that people come to them wanting to volunteer, but that this feels less like the org receiving a benefit, and more like the org creating a training program (at cost to itself) to provide a benefit to the volunteers.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-27T23:05:54.487Z · score: 5 (3 votes) · EA · GW

Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early stage ideas.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-27T00:24:41.741Z · score: 2 (1 votes) · EA · GW

I'm not yet sure that I'll be doing this for more than 3 months, so I think it makes more sense to focus on generating value in that time.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-23T07:38:25.449Z · score: 3 (2 votes) · EA · GW

I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.

Meanwhile... "sufficiently advanced thinking looks like doing", or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.

I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but in my opinion that mode often doesn't actually rise to the level of "thinking for real." Thinking for real is real work.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-20T19:48:02.587Z · score: 7 (5 votes) · EA · GW

So I actually draw an important distinction within "mid-level EAs", where there are three stages:

"The beginning of the Middle" – once you've read all the basics of EA, the thing you should do is... read more things about EA. There's a lot to read. Stand on the shoulders of giants.

"The Middle of the Middle" – ????

"The End of the Middle" – Figure out what to do, and start doing it (where "it" is probably some kind of ambitious project).

An important facet of the Middle of the Middle is that people don't yet have the agency or context needed to figure out what's actually worth doing, and a lot of the obvious choices are wrong.

(In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why those failures are happening, nor the skills to do a good job of fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative outcome.)

So yes, eventually mid-level EAs should just figure out what to do and do it, but at EA's current scale there are 100s (maybe 1000s) of people who don't yet have the right meta-skills to do that.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-20T19:23:48.276Z · score: 4 (2 votes) · EA · GW

What goals, though?

Comment by raemon on Is preventing child abuse a plausible Cause X? · 2019-06-20T09:16:38.934Z · score: 3 (2 votes) · EA · GW

I didn't write a top level post but I sketched out some of the relevant background ideas here. (I'm not sure if they answer your particular concerns, but you can ask more specific questions there if you have them)

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-19T23:43:53.317Z · score: 8 (2 votes) · EA · GW

Integrity, Accountability and Group Rationality

I think there are particular reasons that EA should strive, not just to have exceptionally high integrity, but exceptionally high understanding of how integrity works.

Some background reading for my current thoughts includes habryka's post on Integrity and my own comment here on competition.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-19T23:41:35.677Z · score: 11 (5 votes) · EA · GW

A few reasons I think competition is good:

  • Diversity of worldviews is better. Two research orgs might develop different schools of thought that lead to different insights. This can lead to more ideas as well as avoiding the tail risks of bias and groupthink.
  • Easier criticism. When there's only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn't get done at all. Multiple orgs can allow people to think more freely about the situation.
  • Competition forces people to shape up a bit. If you're the only org in town doing a thing, there's just less pressure to do a good job.
  • "Healthy" competition enables certain kinds of integrity. (Sort of related to the previous two points.) Say you think Cause X is really important, but there's only one org working on it. If you think Org A isn't being as high-integrity as you'd like, your options are limited (criticize them, publicly or privately, or start your own org, which is very hard). If you think Org A is overall net positive, you might risk damaging Cause X by criticizing it. But if there are multiple orgs A and B working on Cause X, there are fewer downsides to criticizing either one. (An alternate framing: maybe criticism wouldn't actually damage Cause X, but it may still feel that way to a lot of people, so getting a second Org B can be beneficial.) Multiple orgs working on a topic make it easier to reward good behavior.
    • In particular, if you notice that you're running the only org in town and you want to improve your own integrity, you might want to cause there to be more competition. This way, you can help set up a system that creates better incentives for yourself – ones that remain strong even if you gain power (which may be corrupting in various ways).

There are some special caveats here:

  • Some types of jobs benefit from concentration.
    • Communication platforms sort of want to be monopolies so people don't have to check a million different sites and Facebook groups.
    • Research orgs benefit from having a number of smart people bouncing ideas around.
  • This means...
    • See if you can refactor a goal into something that doesn't actually require a monopoly.
    • If it's particularly necessary for a given org to be a monopoly, it should be held to a higher standard – both in terms of operational competence and in terms of integrity.
    • If you want to challenge a monopoly with a new org, there's likewise a particular burden to do a good job.
    • I think "doing a good job" requires a lot of things, but some important ones (whose absence should be a red flag worth thinking about more carefully) include:
      • Having strong leadership with a clear vision
        • Make sure you have a deep understanding of what you're trying to do, and a clear model of how it's going to help
      • Not trying to do a million things at once. I think a major issue facing some orgs is lack of focus.
      • Probably don't have this be your first major project. Your first major project should be something it's okay to fail at. Coordination projects are especially costly to fail at because they make the job harder for the next person.
      • Invest a lot in communication on your team.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-19T23:40:49.383Z · score: 16 (8 votes) · EA · GW

Competition in the EA Sphere

A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.

By now, I think we have the capacity (financial, coordinational, and in human talent) for that to be less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.

I'm interested in chatting with people about the nuts and bolts of how to apply this.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-19T23:09:01.901Z · score: 5 (3 votes) · EA · GW

Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:

  • I currently believe the longterm value of EA is not in scaling up donations to well-vetted charities. This is because vetting charities is sort of anti-inductive: if things are going well (and I think this is quite achievable – it only really takes a couple billionaires to care), charities should get vetted and then quickly afterwards get enough funding. This means the only leftover charities will be ones that aren't well vetted.
    • So the longterm Earn-to-Give options are:
      • Actually becoming pretty good at vetting organizations and people
      • Joining donor lotteries (where you still might have to get good at thinking if you win)
      • Donating to GiveDirectly (which is maybe actually fine but less exciting)
  • The world isn't okay because the problems it faces are actually hard. You need to understand how infrastructure plugs together. You need to understand incentives and unintended consequences. In some cases you need to actually solve unsolved philosophical problems. You need object level domain expertise in whatever field you're trying to help with.
    • I think all of these require a general thinking skill that is hard to come by and really needs practice.

(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-19T23:00:56.632Z · score: 27 (10 votes) · EA · GW

Mid-level EA communities, and cultivating the skill of thinking

I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you've read all the introductory content, but before you're ready to tackle anything real ambitious... what should you do, and what should your local EA community encourage people to do?

My sense is that grassroots EA groups default to "discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory."

I have varying opinions on those things, but even if they were all good ideas... they leave an unsolved problem where there isn't a very good "bread and butter" activity that you can do repeatedly, that continues to be interesting after you've learned the basics.

My current best guess (admittedly untested) is that Mid-Level EAs and Mid-Level EA Communities should focus on practicing thinking. And a corresponding bottleneck is something like "figuring out how to repeatedly have things that are worth thinking about – important enough to try hard on, but where it's okay not to do a very good job, because you're still learning."

I have some preliminary thoughts on how to go about this. Two hypotheses that seem interesting are:

  • LW/EA-Forum Question Answering hackathons (where you pick a currently open question and try to solve it as best you can – this might be via literature reviews, or first-principles thinking)
  • Updating the Cause Prioritization wiki (either this one or this one – I'm not sure whether either has become the Schelling one), and meanwhile posting those updates as EA Forum blogposts.

I'm interested in chatting with local community organizers about it, and with established researchers that have ideas about how to make this the most productive version of itself.

Comment by raemon on Raemon's EA Shortform Feed · 2019-06-19T22:36:16.843Z · score: 5 (3 votes) · EA · GW

Grantmaking and Vetting

I think EA is vetting constrained. It's likely that I'll be involved with a new experimental grant allocation process. There are a few key ingredients here that are worth discussing:

  • Meta Process design. I have some thoughts on designing good grantmaking processes (at the meta level), and I'm interested in hearing from others about what seem like important process elements.
  • Evaluation approach. I haven't done (much) evaluation before, and would be interested in talking to people about what makes for good evaluation approaches.
  • Object-level ideas about organizations worth funding. New orgs, old orgs. (Note: I am specifically interested in things that feed into the x-risk ecosystem somehow. Also, in the near future I will only be able to consider organizations rather than individuals.)

Comment by raemon on There's Lots More To Do · 2019-06-14T23:13:21.340Z · score: 14 (5 votes) · EA · GW

I think if you've read Ben's writings, it's obvious that the prime driver is epistemic health.

Comment by raemon on There's Lots More To Do · 2019-06-11T20:49:08.228Z · score: 5 (3 votes) · EA · GW

I'm also worried about the overall epistemic health of EA – if EA is reliably misleading people, it's much less useful as a source of information.

Comment by raemon on There's Lots More To Do · 2019-06-10T20:19:49.725Z · score: 16 (7 votes) · EA · GW

I'm fairly confident, based on reading other stuff Ben Hoffman has written, that this post has much less to do with Ben wanting to justify a rejection of EA-style giving, and much more to do with Ben being frustrated by what he sees as bad arguments/reasoning/deception in the EA sphere.

Comment by raemon on Is preventing child abuse a plausible Cause X? · 2019-06-01T01:09:55.493Z · score: 11 (4 votes) · EA · GW

I have more thoughts but it's sufficiently off topic for this post that I'll probably start a new thread about it.

Comment by raemon on Is preventing child abuse a plausible Cause X? · 2019-06-01T00:13:39.544Z · score: 14 (7 votes) · EA · GW

Meta note: I feel a vague sense of doom about a lot of questions on the EA forum (contrasted with LessWrong), which is that questions end up focused on "how should EA overall coordinate", "what should be the top causes" and "what should be part of the EA narrative?"

I worry about this because I think it's harder to think clearly about narratives and coordination mechanisms than it is about object-level facts. I also have a sense that the questions are often framed in a way that's trying to tell me the answer rather than help me figure things out.

And often I think the questions could be reframed as empirical questions without the "should" and "we" frames, which a) I think would be easier to reason about, b) would remain approximately as useful for helping people to coordinate.

"Is X a top cause area?" is a sort of weird question. The whole point of EA is that you need to prioritize, and there are only ever going to be a smallish number of "top causes". So the answer to any given "Is this Cause X" is going to be "probably not."

But, it's still useful to curiously explore cause areas that are underexplored. "What are the tractable interventions of [this particular cause]?" is a question that you can explore without making it about whether it's one of the top causes overall.

Comment by raemon on Software: Private sector to non-profits · 2019-05-21T05:54:39.999Z · score: 2 (1 votes) · EA · GW

FYI, Critch in particular is pretty time-constrained. I'm not sure who the best person to reach out to currently is – someone who has both the knowledge and the time to do a good job helping. (I'll ask around; meanwhile, the "apply to MIRI" suggestion is what I got.)

Comment by raemon on Software: Private sector to non-profits · 2019-05-21T05:46:03.117Z · score: 5 (3 votes) · EA · GW

Buck Shlegeris writes (on FB):

I think that every EA who is a software engineer should apply to work at MIRI, if you can imagine wanting to work at MIRI.
It's probably better for you to not worry about whether you're wasting our time. The first step in our interview is the Triplebyte quiz, which I think is pretty good at figuring out who I should spend more time talking to. And I think EAs are good programmers at high enough rates that it seems worth it to me to encourage you to apply.
There is great honor in trying and failing to get a direct work job. I feel fondness in my heart towards all the random people who email me asking for my advice on becoming an AI safety researcher, even though I'm not fast at replying to their emails and most are unlikely to be able to contribute much to AI safety research.
You should tell this to all your software engineer friends too.
EDIT: Sorry, I should have clarified that I meant that you should do this if you're not already doing something else that's in your opinion comparably valuable. I wrote this in response to a lot of people not applying to MIRI out of respect for our time or something; I think there are good places to work that aren't MIRI, obviously.

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-21T01:21:45.337Z · score: 8 (4 votes) · EA · GW

That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts.

Just wanted to make a quick note that I also felt the "overview"-style posts weren't very useful to me (since they mostly encapsulate things I had already thought about).

At some point I was researching some aspects of nuclear war, reading up on a relevant GCRI paper, and what I found myself really wishing was that the paper had just drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.

Comment by raemon on How do we check for flaws in Effective Altruism? · 2019-05-07T02:07:23.912Z · score: 4 (2 votes) · EA · GW

I basically agree with this. I have a bunch of thoughts about healthy competition in the EA sphere I've been struggling to write up.

Comment by raemon on What's the median amount a grantmaker gives per year? · 2019-05-05T20:30:28.194Z · score: 6 (3 votes) · EA · GW

Riceissa answered this on the LessWrong version of this question – the original source being this Facebook post by Vipul Naik:

For three different foundations – the Open Philanthropy Project, the Bill & Melinda Gates Foundation, and the Laura and John Arnold Foundation – I calculated that the total money granted per hour of staff time is approximately $1,000-$3,000. This includes all staff time (obtained by taking the number of people on staff and multiplying by 2,000 hours for a year, then comparing with annual grants).
Is there a reasonable argument that foundations would generally have this ratio of money granted to staff time? For instance, if we break down the cost into direct grant investigation cost + cost of time spent getting familiar with the domain and evaluating strategy, etc., are we bound to arrive at a comparable figure?
One foundation that has a much higher ratio of money granted to staff time in recent years is Atlantic Philanthropies, but they are in spend-down mode right now and I don't have a good picture of their overall spend trajectory and employee counts yet.
Open Philanthropy Project:
Grants in 2016: $50 to $100 million
Staff at year-end: ~20 (+ some shared operational staff with GiveWell)

Laura and John Arnold Foundation:
Grants in 2015: $185 million
Staff in 2016: ~50 listed on their site

Bill & Melinda Gates Foundation:
Grants: ~$4.2 billion
Staff: ~1500
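
As a rough sanity check of those figures, here's a minimal sketch in Python using the approximate numbers quoted above (the 2,000 staff-hours/year figure is Vipul's assumption):

```python
# Dollars granted per staff-hour, from the approximate figures quoted above.
HOURS_PER_YEAR = 2000  # assumed hours per staffer per year

foundations = {
    "Open Philanthropy Project": (75e6, 20),     # midpoint of $50-100M granted, ~20 staff
    "Arnold Foundation":         (185e6, 50),    # $185M granted, ~50 staff
    "Gates Foundation":          (4.2e9, 1500),  # ~$4.2B granted, ~1500 staff
}

for name, (grants, staff) in foundations.items():
    per_hour = grants / (staff * HOURS_PER_YEAR)
    print(f"{name}: ~${per_hour:,.0f} granted per staff-hour")

# Prints roughly $1,875, $1,850, and $1,400 respectively -- all within
# the $1,000-$3,000 range stated above.
```
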
Comment by raemon on Reasons to eat meat · 2019-04-23T00:16:32.935Z · score: 21 (11 votes) · EA · GW

FWIW I'm currently reducetarian (formerly vegetarian), and give around 2% of my income. I don't give more because I don't think it's the strategically correct choice for me at the moment. In the past I've given 10%.

But I consider it *way* easier to give 10% of my income than to change my diet. My income has fluctuated from 50k to 90k and back without really changing my lifestyle all that much. Changing my donations requires basically a one-time change to a monthly auto-payment thingy. Changing my diet requires continuous willpower.

Comment by raemon on Salary Negotiation for Earning to Give · 2019-04-13T20:12:20.799Z · score: 7 (3 votes) · EA · GW

BTW, if you're a tech worker and you feel a vague obligation to learn how to negotiate but it's kinda aversive and/or you're not sure how to go about it...

...even just bothering to do it at all can net you $5k-$10k a year. Like, just saying "hey, that seems a bit low, can you go higher?"

There are various more complicated or effortful things you can do, but "negotiate at all even slightly" is surprisingly effective.

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T20:23:05.852Z · score: 4 (2 votes) · EA · GW

I think that makes sense, but in practice it's something that's better handled through their day jobs. (If they went the route of hiring someone for whom managing the fund was their actual day job, I'd agree that generally higher salaries would be good, for mostly the same reasons they'd be good across the board in EA.)

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T03:19:37.166Z · score: 10 (3 votes) · EA · GW

Part of my thinking here is that this would be a mistake: focus and attention are some of the most valuable things, and splitting your focus is generally not good.

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T02:35:22.401Z · score: 8 (2 votes) · EA · GW

I'm familiar with good things coming out of those places, but not sure why they're the appropriate lens in this case.

Popping back to this:

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

This makes more sense to me when you actually have a company large enough to theoretically have multiple arms. AFAICT there are no arms here – there are just 1-3 people working on a thing. And I'd expect getting to the point where you could have multiple arms to require at least 5-10 years of work.

What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T22:58:01.056Z · score: 6 (3 votes) · EA · GW

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

Is there a particular reason to assume that'd be a good idea?

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T19:33:12.685Z · score: 31 (19 votes) · EA · GW

I have a weird mix of feelings and guesses here.

I think it's good on the margin for people to be able to express opinions without needing to formalize them into recommendations for the reason stated here. I think the overall conversation happening here is very important.

I do still feel pretty sad looking at the comments here – some of the commenters seem to not have a model of what they're incentivizing.

They remind me of the stereotype of a parent whose kid has moved away and grown up, and doesn't call very often. And periodically the kid does call, but the first thing they hear is the parent complaining "why don't you ever call me?", which makes the kid less likely to call home.

EA is vetting constrained.

EA is network constrained.

These are actual hard problems, that we're slowly addressing by building network infrastructure. The current system is not optimal or fair, but progress won't go faster by complaining about it.

It can potentially go faster via improvements in strategy and re-allocation of resources. But each of those improvements comes with tradeoffs. You could hire more grantmakers full-time, but those grantmakers are generally already working full-time on something else comparably important.

This writeup is unusually thorough, and Habryka has been unusually willing to engage with comments and complaints – I think he has a higher-than-average tolerance for dealing with that.

When I imagine future people considering

a) whether to be a grantmaker,

b) whether to write up their reasons publicly, and

c) whether to engage with comments on those reasons,

I predict that some of the comments on this thread will make all of those less likely (in escalating order). It also potentially makes grantees less likely to consent to public discussion of their evaluation, since it might get ridiculed in the comments.

Because EA is vetting constrained, I think public discussion of grant reasoning is particularly important. It's one of the mechanisms that'll give people a sense of what projects will get funded and what goes into a grantmaking process, and make a lot of what's currently "insider knowledge" more publicly accessible.

Comment by raemon on How x-risk projects are different from startups · 2019-04-08T00:39:04.611Z · score: 13 (5 votes) · EA · GW

Just wanted to say I appreciate the nuance you're aiming at here. (Getting that nuance right is real hard)