Comment by raemon on There's Lots More To Do · 2019-06-14T23:13:21.340Z · score: 2 (1 votes) · EA · GW

I think if you've read Ben's writings, it's obvious that the prime driver is concern about epistemic health.

Comment by raemon on There's Lots More To Do · 2019-06-11T20:49:08.228Z · score: 3 (2 votes) · EA · GW

Also worried about the overall epistemic health of EA – if it's reliably misleading people, it's much less useful as a source of information.

Comment by raemon on There's Lots More To Do · 2019-06-10T20:19:49.725Z · score: 6 (4 votes) · EA · GW

I'm fairly confident, based on reading other stuff Ben Hoffman has written, that this post has much less to do with Ben wanting to justify a rejection of EA-style giving, and much more to do with Ben being frustrated by what he sees as bad arguments/reasoning/deception in the EA sphere.

Comment by raemon on Is preventing child abuse a plausible Cause X? · 2019-06-01T01:09:55.493Z · score: 11 (4 votes) · EA · GW

I have more thoughts but it's sufficiently off topic for this post that I'll probably start a new thread about it.

Comment by raemon on Is preventing child abuse a plausible Cause X? · 2019-06-01T00:13:39.544Z · score: 14 (7 votes) · EA · GW

Meta note: I feel a vague sense of doom about a lot of questions on the EA forum (contrasted with LessWrong), namely that questions end up focused on "how should EA overall coordinate?", "what should be the top causes?", and "what should be part of the EA narrative?"

I worry about this because I think it's harder to think clearly about narratives and coordination mechanisms than it is about object-level facts. I also have a sense that the questions are often framed in a way that is trying to tell me the answer rather than help me figure things out.

And often I think the questions could be reframed as empirical questions without the "should" and "we" frames, which a) I think would be easier to reason about, and b) would remain approximately as useful for helping people coordinate.

"Is X a top cause area?" is a sort of weird question. The whole point of EA is that you need to prioritize, and there are only ever going to be a smallish number of "top causes". So the answer to any given "Is this Cause X" is going to be "probably not."

But, it's still useful to curiously explore cause areas that are underexplored. "What are the tractable interventions of [this particular cause]?" is a question that you can explore without making it about whether it's one of the top causes overall.

Comment by raemon on Software: Private sector to non-profits · 2019-05-21T05:54:39.999Z · score: 2 (1 votes) · EA · GW

FYI Critch in particular is pretty time constrained. I'm not sure who the best person to reach out to currently is, i.e. someone who has both the knowledge and the time to do a good job helping. (I'll ask around; meanwhile, the "apply to MIRI" suggestion is what I got.)

Comment by raemon on Software: Private sector to non-profits · 2019-05-21T05:46:03.117Z · score: 5 (3 votes) · EA · GW


Buck Shlegeris writes (on FB):

I think that every EA who is a software engineer should apply to work at MIRI, if you can imagine wanting to work at MIRI.
It's probably better for you to not worry about whether you're wasting our time. The first step in our interview is the Triplebyte quiz, which I think is pretty good at figuring out who I should spend more time talking to. And I think EAs are good programmers at high enough rates that it seems worth it to me to encourage you to apply.
There is great honor in trying and failing to get a direct work job. I feel fondness in my heart towards all the random people who email me asking for my advice on becoming an AI safety researcher, even though I'm not fast at replying to their emails and most are unlikely to be able to contribute much to AI safety research.
You should tell this to all your software engineer friends too.
EDIT: Sorry, I should have clarified that I meant that you should do this if you're not already doing something else that's in your opinion comparably valuable. I wrote this in response to a lot of people not applying to MIRI out of respect for our time or something; I think there are good places to work that aren't MIRI, obviously.

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-21T01:21:45.337Z · score: 8 (4 votes) · EA · GW
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts.

Just wanted to make a quick note that I also felt the "overview" style posts aren't very useful to me (since they mostly encapsulate things I already had thought about)

At some point I was researching some aspects of nuclear war, and reading up on a GCRI paper that was relevant, and what I found myself really wishing was that the paper had just drilled deep into whatever object level, empirical data was available, rather than being a high level summary.

Comment by raemon on How do we check for flaws in Effective Altruism? · 2019-05-07T02:07:23.912Z · score: 4 (2 votes) · EA · GW

I basically agree with this. I have a bunch of thoughts about healthy competition in the EA sphere I've been struggling to write up.

Comment by raemon on What's the median amount a grantmaker gives per year? · 2019-05-05T20:30:28.194Z · score: 6 (3 votes) · EA · GW

Riceissa answered this on the LessWrong version of this question – the original source is this Facebook post by Vipul Naik.

For three different foundations: Open Philanthropy Project, Bill & Melinda Gates Foundation, and the Laura and John Arnold Foundation, I calculated that the total money granted per hour of staff time is approximately $1000 - $3000. This includes all staff time (obtained by taking the number of people on staff and multiplying by 2000 hours for a year, then comparing with annual grants).
Is there a reasonable argument that foundations would generally have this ratio of money granted to staff time? For instance, if we break down the cost into direct grant investigation cost + cost of time spent getting familiar with the domain and evaluating strategy, etc., are we bound to arrive at a comparable figure?
One foundation that has a much higher ratio of money granted to staff time in recent years is Atlantic Philanthropies, but they are in spend-down mode right now and I don't have a good picture of their overall spend trajectory and employee counts yet.
Open Philanthropy Project:
Grants in 2016: $50 to $100 million
Staff at year-end: ~20 (+ some shared operational staff with GiveWell)
Laura and John Arnold Foundation:
Grants in 2015: $185 million
Staff in 2016: ~50 listed on their site
Bill & Melinda Gates Foundation:
Grants: ~$4.2 billion
Staff: ~1500
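
A minimal sketch (in Python, using the rounded figures quoted above and the same assumption of ~2000 staff-hours per person-year) reproduces the $1000 - $3000 per staff-hour range:

```python
# Rough sanity check of the grants-per-staff-hour ratios quoted above.
# Grant totals and staff counts are the rounded figures from the quote;
# Open Phil uses the midpoint of the $50-100 million range.
foundations = {
    "Open Philanthropy Project": (75e6, 20),
    "Laura and John Arnold Foundation": (185e6, 50),
    "Bill & Melinda Gates Foundation": (4.2e9, 1500),
}

for name, (grants, staff) in foundations.items():
    per_hour = grants / (staff * 2000)  # ~2000 staff-hours per person-year
    print(f"{name}: ~${per_hour:,.0f} granted per staff-hour")
```

That comes out to roughly $1,900, $1,850, and $1,400 per staff-hour respectively.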

What's the median amount a grantmaker gives per year?

2019-05-04T00:15:57.178Z · score: 21 (5 votes)
Comment by raemon on Reasons to eat meat · 2019-04-23T00:16:32.935Z · score: 21 (11 votes) · EA · GW

FWIW I'm currently reducetarian (formerly vegetarian), and currently give around 2% of my income. I don't give more because I don't think it's the strategically correct choice for me at the moment. In the past I've given 10%.

But, I consider it *way* easier to give 10% of my income than to change my diet. My income has fluctuated from 50k to 90k and back without really changing my lifestyle all that much. Changing my donations requires basically a one-time change to a monthly auto-payment thingy. Changing my diet requires continuous willpower.

Comment by raemon on Salary Negotiation for Earning to Give · 2019-04-13T20:12:20.799Z · score: 7 (3 votes) · EA · GW

BTW, if you're a tech worker and you feel a vague obligation to learn how to negotiate but it's kinda aversive and/or you're not sure how to go about it...

...even just bothering to do it at all can net you $5k - $10k a year. Like, just saying "hey, that seems a bit low, can you go higher?"

There are various more complicated or effortful things you can do, but "negotiate at all even slightly" is surprisingly effective.

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T20:23:05.852Z · score: 4 (2 votes) · EA · GW

I think that makes sense, but in practice it's something that makes more sense to handle through their day jobs. (If they went the route of hiring someone for whom managing the fund was their actual day job, I'd agree that generally higher salaries would be good, for mostly the same reason they'd be good across the board in EA.)

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T03:19:37.166Z · score: 10 (3 votes) · EA · GW

Part of my thinking here is that this would be a mistake: focus and attention are some of the most valuable things, and splitting your focus is generally not good.

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T02:35:22.401Z · score: 8 (2 votes) · EA · GW

I'm familiar with good things coming out of those places, but not sure why they're the appropriate lens in this case.

Popping back to this:

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

This makes more sense to me when you actually have a company large enough to theoretically have multiple arms. AFAICT there are no arms here, there are just like 1-3 people working on a thing. And I'd expect getting to the point where you could have that requires at least 5-10 years of work.

What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T22:58:01.056Z · score: 6 (3 votes) · EA · GW
What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

Is there a particular reason to assume that'd be a good idea?

Comment by raemon on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T19:33:12.685Z · score: 31 (19 votes) · EA · GW

I have a weird mix of feelings and guesses here.

I think it's good on the margin for people to be able to express opinions without needing to formalize them into recommendations for the reason stated here. I think the overall conversation happening here is very important.

I do still feel pretty sad looking at the comments here — some of the commenters seem to not have a model of what they're incentivizing.

They remind me of the stereotype of a parent whose kid has moved away and grown up, and doesn't call very often. And periodically the kid does call, but the first thing they hear is the parent complaining "why don't you ever call me?", which makes the kid less likely to call home.

EA is vetting constrained.

EA is network constrained.

These are actual hard problems, that we're slowly addressing by building network infrastructure. The current system is not optimal or fair, but progress won't go faster by complaining about it.

It can potentially go faster via improvements in strategy, and re-allocating resources. But each of those improvements will come with a tradeoff. You could hire more grantmakers full-time, but those grantmakers are generally working full-time on something else comparably important.

This writeup is unusually thorough, and Habryka has been unusually willing to engage with comments and complaints. But I think Habryka has higher-than-average willingness to deal with that.

When I imagine future people considering

a) whether to be a grantmaker,

b) whether to write up their reasons publicly

c) whether to engage with comments on those reasons

I predict that some of the comments on this thread will make all of those less likely (in escalating order). It also potentially makes grantees less likely to consent to public discussion of their evaluation, since it might get ridiculed in the comments.

Because EA is vetting constrained, I think public discussion of grant-reasoning is particularly important. It's one of the mechanisms that'll give people a sense of what projects will get funded and what goes into a grantmaking process, and make a lot of what's currently 'insider knowledge' more publicly accessible.

Comment by raemon on How x-risk projects are different from startups · 2019-04-08T00:39:04.611Z · score: 13 (5 votes) · EA · GW

Just wanted to say I appreciate the nuance you're aiming at here. (Getting that nuance right is real hard)

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-30T17:01:02.895Z · score: 3 (2 votes) · EA · GW

Reasonably. That does sound like it’s at a comparable scale.

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-30T07:29:56.703Z · score: 5 (3 votes) · EA · GW

(I ask because there's a big difference between a community of 10-50 people and 200-300 people. I think at the latter scale, you actually need more infrastructure)

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-30T06:20:40.473Z · score: 2 (1 votes) · EA · GW

How big is the London community?

Comment by raemon on Why is the EA Hotel having trouble fundraising? · 2019-03-29T23:56:26.308Z · score: 11 (5 votes) · EA · GW

True, but from what I recall that was largely for reasons that I expect not to apply to EA Hotel.

Comment by raemon on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-03-28T20:17:05.851Z · score: 5 (3 votes) · EA · GW

Thanks. That was indeed much easier.

Comment by raemon on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-03-27T22:17:17.940Z · score: 3 (2 votes) · EA · GW

BTW, I just tried to donate $100 (not much but about what I feel comfortable impulse-donating), and the trivial inconvenience of finding and typing in a credit card threw me off. A PayPal moneypool link would probably have been lower friction for me (not arguing it's lower friction overall, just that having a variety of easy payment types is probably useful for getting marginal donors)

Comment by raemon on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-27T22:13:50.910Z · score: 14 (8 votes) · EA · GW

Relatedly: there is already reasonable infrastructure (with more being built) oriented towards getting EAs to live in a few hub cities.

This is good, but it leaves open an alternate path (living in a cheap place, not optimized for being near Silicon Valley money or Oxford respectability) that is currently very underexplored.

Comment by raemon on Request for comments: EA Projects evaluation platform · 2019-03-23T02:11:34.740Z · score: 9 (3 votes) · EA · GW

Mild formatting note: I found the introduction a bit long as well as mostly containing information I already knew.

I'm not sure how to navigate "accessible" vs "avoid wasting people's time". But I think you could have replaced the introduction with a couple bullet links, like:

....

Building off of:

 What to Do With People?

 Can EA copy Teach For America?

 EA is Vetting Constrained

The EA community has plenty of money and people, but is bottlenecked on a way to scalably evaluate new projects. I'd like to start a platform that:

  • provides feedback on early stage projects
  • estimates what resources would be necessary to start a given project
  • evaluates team and idea fit further along in the project's life-cycle.

...

Or, something like that (I happen to like bullet points, but in any case it seems like it should be possible to cut the opening few paragraphs down to a few lines)

Comment by raemon on How can prediction markets become more trendy, legal, and accessible? · 2019-03-12T20:33:35.563Z · score: 7 (7 votes) · EA · GW

Relevant posts by Zvi Mowshowitz.

Prediction Markets: When Do They Work?

Subsidizing Prediction Markets

Comment by raemon on EA is vetting-constrained · 2019-03-09T21:10:18.689Z · score: 1 (1 votes) · EA · GW
Very rough reply ... the bottleneck is a combination of both of the factors you mention, but the most constrained part of the system is actually something like the time of senior people with domain expertise and good judgement

This makes sense and leads me to somewhat downgrade my enthusiasm for my "Earn to Learn To Vett" comment (although I suspect it's still good on the margin)

Comment by raemon on EA is vetting-constrained · 2019-03-09T02:43:52.393Z · score: 16 (11 votes) · EA · GW

I think this is basically accurate. As I mentioned in another thread, the issue is that the scaling-up-of-vetting is still generally network constrained.

But, this framing (I like this framing) suggests to me that the thing to do is a somewhat different take on Earning to Give.

I had previously believed that Earning-to-Give people should focus on networking their way into hubs where they can detect early stage organizations, vet them, and fund them. And that this was the main mechanism by which their marginal dollars could be complementary to larger funders.

But, the Vetting-Constrained lens suggests that Earners-to-Give should be doing that even harder, not because of the marginal value of their dollars, but because this could allow them to self-fund their own career capital as a future potential grantmaker.

And moreover, this means that whereas before I'd have said it's only especially worth it to Earn-to-Give if you make a lot of money, now I'd more strongly recommend that marginal EAs join donor lotteries. If a hundred people each put $10k into 10 different donor lotteries, you now have 10 people with $100k each, enough to seed fund an org for a year. And this is valuable because it gives them experience thinking about whether organizations are good.
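
As a hypothetical sketch of that arithmetic (the donor names and the equal 10-person split are my own illustration, not a description of any actual lottery mechanism):

```python
import random

# 100 donors each put $10k into one of 10 donor lotteries (10 donors per lottery).
donors = [f"donor_{i}" for i in range(100)]
contribution = 10_000

lotteries = [donors[i:i + 10] for i in range(0, 100, 10)]  # 10 pools of 10 donors

for pool in lotteries:
    pot = contribution * len(pool)  # $100k pooled per lottery
    winner = random.choice(pool)    # the winner allocates the entire pot
    print(f"{winner} now decides how to grant ${pot:,}")
```

Each winner ends up allocating a seed-grant-sized pot, which is where the grantmaking experience comes from.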

There could be some systemization to this to optimize how much experience a person gets and how the org(s) they funded turned out to fare. (Maybe with some prediction markets thrown in)

Comment by raemon on You Have Four Words · 2019-03-08T21:40:05.531Z · score: 1 (1 votes) · EA · GW

(Btw, alternate titles for this post were "you have about 5 words", "you only have 5 words", and "you have less than seven words.") :P

Comment by raemon on You Have Four Words · 2019-03-08T21:17:49.087Z · score: 1 (1 votes) · EA · GW

Nod. My motivation to write the post came in a brief spurt and I wanted to just get it out without subjecting it to a review process, so I erred on the side of wording it the way I endorsed and letting you take credit if you wanted.

Comment by raemon on You Have Four Words · 2019-03-07T21:09:42.741Z · score: 4 (3 votes) · EA · GW

This post was inspired by a conversation (my conversation partner can reveal themselves if they so choose), in which they claimed (I think off-the-cuff) that you have four words if you need to coordinate 100,000 people (resulting in highly simplified strategies).

I updated my own estimate downwards (of the number of people you need to be coordinating to face the four-word limit), after observing that EA only has somewhere on the order of a thousand people involved and important concepts often lose their nuance. (Although, to be fair, this is at least in part because there are multiple nuanced concepts that all need to be kept track of, each of which needs to get boiled down to a simple jargon term.)

Comment by raemon on You Have Four Words · 2019-03-07T00:59:10.688Z · score: 14 (8 votes) · EA · GW

"Donate to Effective Charities."

"AI will kill us."

"Consider Earning to Give."

"EA is Talent Constrained."

You Have Four Words

2019-03-07T00:57:29.273Z · score: 36 (19 votes)
Comment by raemon on What to do with people? · 2019-03-06T22:28:47.689Z · score: 3 (2 votes) · EA · GW

I was very interested in the "city is not a tree" post, but found it juuust confusing/dense enough to bounce off of it. I'd be interested in a link-post or comment that summarizes the key insights there in layman's terms.

Comment by raemon on What to do with people? · 2019-03-06T19:57:00.979Z · score: 7 (4 votes) · EA · GW

Nod. My comment wasn't intended to be an argument against, so much as "make sure you understand that this is the world you're building" (and that, accordingly, you make sure your arguments and language don't depend on the old world)

The traditional EA mindset is something like "find the charities with the heavy tails on the power law distribution."

The Agora mindset (Agora was an org I worked at for a bit, that evolved sort of in parallel to EA) was instead "find a way to cut out the bottom 50% on charities and focus on the top 50%", which at the time I chafed at but I appreciate better now as the sort of thing you automatically deal with when you're trying to build something that scales.

I do think we're *already quite close* to the point where that phase transition needs to happen. (I think people who are very thoughtful about their donations can still do much better than "top 50%", but "be very thoughtful" isn't a part of the thing that scales easily)

Comment by raemon on What to do with people? · 2019-03-06T19:51:45.326Z · score: 4 (4 votes) · EA · GW

A particular risk here is that coordination is one of the most costly things to fail at.

I'm happy to encourage new EAs to tackle a random research project, or to attempt the sort of charity entrepreneurship that, well, Charity Entrepreneurship seems to encourage.

I'm much more cautious about encouraging people to try to build infrastructure for the EA community, if it only works when it's both high quality and everyone gets on board with it at the same time. In particular, it seems like people are too prone to focus on the second part.

Every time you try to coordinate on a new piece of infrastructure and the project flops, it makes people less enthusiastic to try the next piece of coordination infrastructure (and I think there's a variation on this for hierarchical leadership)

But I'm fairly excited about things like AI Safety camp, i.e. building new hubs of infrastructure that other existing infrastructure doesn't rely on until it's been vetted.

(It's still important to make sure something like AI Safety camp is done well, because if it's done poorly at scale it can result in a confusing morass of training tools of questionable quality. This is not a warning not to try it, just to be careful when you do)

Comment by raemon on What to do with people? · 2019-03-06T19:45:31.322Z · score: 15 (6 votes) · EA · GW

FYI, the LessWrong team's take on this underlying problem is "find ways to make intellectual progress in a decentralized fashion, even if it's less efficient than it'd be in a tight knit organization."

The new Questions feature and the upcoming improvements to it are meant to provide a way for the community to keep track of its collective research agenda and allow people to identify important unsolved problems, and solve them.

Comment by raemon on What to do with people? · 2019-03-06T19:43:19.069Z · score: 12 (6 votes) · EA · GW

I'm generally sold on the "you need more hierarchical networks" to get real things done (and even more on the more general claim that you need to expand the network in some way, hierarchical or not).

But, interestingly, the bottleneck on fixing the lack of scalable hierarchical network structures is... still the lack of hierarchical network structure. Identifying the problem doesn't make it go away.

I think most orgs seem to be doing at least a reasonable job of focusing on building out their infrastructure, it's just that they're at the early stages of doing so and it's a necessarily slow process. Scaling too quickly kills organizations. Hierarchy works best when you know exactly what to do, and runs the risk of being too inflexible.

(If you run an org, and aren't already thinking about how to build better infrastructure that expands the surface area of the network, I do think you should spend a fair bit of time thinking about that)

Comment by raemon on What to do with people? · 2019-03-06T19:34:01.891Z · score: 11 (4 votes) · EA · GW

The main thing with scaling Earning to Give is that eventually you have to give up on any clear definition of "effective." Part of the appeal of early-days Earn to Give was that it was so simple. Make money. Give 10%. Choose from a relatively short list of charities.

My sense is that the "well vetted" charities can only handle a few hundred million a year, and the "weird plausibly good unvetted charities that easily fit into EA frameworks" can also only handle a few hundred million a year, and then after that... I dunno, you're back to basically just donating anywhere that seems remotely plausible.

Which... maybe is actually the correct place for EA to go. But it's important to note that it might go in that direction.

(Relatedly, I used to have some implicit belief that EA was better than the Gates Foundation, but nowadays, apart from EA taking X-risk and a few other weird beliefs seriously, EA seems to do basically the same things the Gates Foundation does, and the Gates Foundation is just what it looks like when you scale up by a factor of 10)

Comment by raemon on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T01:44:47.901Z · score: 5 (10 votes) · EA · GW

It sounds like this issue is at least fairly straightforward to address: in subsequent rounds OpenPhil could just include a blurb that more explicitly clarifies how many people they’re sending emails to, or something similar.

(I'll note that this is a bit above/beyond what I think they are obligated to do. I received an email from Facebook once suggesting I apply to their lengthy application process, and I'm not under any illusions this gave me more than a 5-10% chance of getting the job. But the EA world sort of feels like it's supposed to be more personal, and I think it'd make for better overall information-and-resource-flow to include that sort of metadata)

Comment by raemon on Dealing with Network Constraints (My Model of EA Careers) · 2019-03-02T01:05:49.899Z · score: 5 (4 votes) · EA · GW

Nod. BTW, the next post in this pseudo-sequence is going to be called "The Mysterious Old Wizard Bottleneck."

Comment by raemon on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T21:23:17.795Z · score: 3 (3 votes) · EA · GW

Just wanted to flag – I've been surprised and sad about how frequently people delete accounts on the EA forum. This is a totally reasonable comment and I'm confused about why the author would have deleted their account within 40 minutes of posting it (as seems to be the case as-of-the-time I write this)

Comment by raemon on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T21:05:57.740Z · score: 17 (8 votes) · EA · GW

I think it's fine to be a "norm, if you can afford it."

Dealing with Network Constraints (My Model of EA Careers)

2019-02-28T01:34:03.571Z · score: 39 (21 votes)
Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T22:09:27.573Z · score: 17 (4 votes) · EA · GW
It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

On a different note though:

I actually agree with this claim, but it's a weirder claim.

People used to have real communities. And engaging with them was actually a part of being emotionally healthy.

Now, we live in an atomized society where community mostly doesn't exist, or is a pale shadow of its former self. So there exist a lot of people who donate to the local arts club or whatever out of a vague sense of obligation rather than because it's actually helping them be healthy.

And yes, that should be challenged. But not because those people should instead be donating to the global good (although maybe they should consider that). Rather, those people should figure out how to actually be healthy, actually have a community, and make sure to support those things so they can continue to exist.

Sometimes this does mean a local arts program, or dance community, or whatever. If that's something you're actually getting value from.

The rationalist community (and to a lesser extent the EA community) have succeeded in being, well, more of a "real community" than most things do. So there are times when I want to support projects within them, not from the greater-good standpoint, but from the "I want to live in a world with nice things, this is a nice thing" standpoint. (More thoughts here in my Thoughts on the REACH Patreon article)

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T22:03:32.689Z · score: 7 (2 votes) · EA · GW

The tldr I guess is:

Maybe it's the case that being emotionally healthy is only valuable insofar as it translates into the global good (if you assume moral realism, which I don't).

But, even in that case, it often seems to be the case that being emotionally healthy requires, among other things, that you not treat your emotional health as a necessary evil that you indulge.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T21:29:04.438Z · score: 1 (1 votes) · EA · GW

Hmm, I think they need about the same amount of hype. I do think Impact Prizes aren't any harder to scale – Certificates of Impact already depend on something like Impact Prizes eventually existing.

Actually, I think of Impact Prizes as "a precise formulation of how one might scale the hype and money necessary for Certificates to work."

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T21:05:31.157Z · score: 8 (3 votes) · EA · GW

Meanwhile, my previously written thoughts on this topic, not quite addressing your claims but covering a lot of related issues, are here. Crossposting for ease of reference; warning that it includes some weird references that may not be relevant.

Context: Responding to Zvi Mowshowitz, who is arguing that you should be wary of organizations/movements/philosophies that encourage you to give them all your resources (even your favorite political cause, yes, yours, yes, even effective altruism)

Point A: The Sane Response to The World Being On Fire (While Human)
Myself, and most EA folk I talk to extensively (including all the leaders I know of) seem to share the following mindset:
The set of ideas in EA (whether focused on poverty, X-Risk, or whatever), do naturally lead one down a path of "sacrifice everything because do you really need that $4 Mocha when people are dying the future is burning everything is screwed but maybe you can help?"
But, as soon as you've thought about this for any length of time, clearly, stressing yourself out about that all the time is bad. It is basically not possible to hold all the relevant ideas and values in your head at once without going crazy or otherwise getting twisted/consumed-in-a-bad-way.
There are a few people who are able to hold all of this in their head and have a principled approach to resolving everything in a healthy way. (Nate Soares is the only one who comes to mind, see his "replacing guilt" series). But for most people, there doesn't seem to be a viable approach to integrating the obvious-implications-of-EA-thinking and the obvious-implications-of-living-healthily.
You can resolve this by saying "well then, the obvious-implications-of-EA-thinking must be wrong", or "I guess maybe I don't need to live healthily".
But, like, the world is on fire and you can do something about it and you do obviously need to be healthy. And part of being healthy is not just saying things like "okay, I guess I can indulge things like not spending 100% of my resources on saving the world in order to remain healthy but it's a necessary evil that I feel guilty about."
AFAICT, the only viable, sane approach is to acknowledge all the truths at once, and then apply a crude patch that says "I'm just going to not think about this too hard, try generally to be healthy, and put whatever bit of resources towards having the world not-be-on-fire that I can do safely."
Then, maybe check out Nate Soares' writing and see if you're able to integrate it in a more sane way, if you are the sort of person who is interested in doing that, and if so, carefully go from there.
Point B: What Should A Movement Trying To Have the World Not Be On Fire Do?
The approach in Point A seems sane and fine to me. I think it is in fact good to try to help the world not be on fire, and that the correct sane response is to proactively look for ways to do so that are sustainable and do not harm yourself.
I think this is generally the mindset held by EA leadership.
It is not out-of-the-question that EA leadership in fact really wants everyone to Give Their All and that it's better to err on the side of pushing harder for that even if that means some people end up doing unhealthy things. And the only reason they say things like Point A is as a ploy to get people to give their all.
But, since I believe Point A is quite sane, and most of the leadership I see is basically saying Point A, and I'm in a community that prioritizes saying true things even if they're inconvenient, I'm willing to assume the leadership is saying Point A because it is true as opposed to for Secret Manipulative Reasons.
This still leaves us with some issues:
1) Getting to the point where you're on board with Point-A-the-way-I-meant-Point-A-to-be-interpreted requires going through some awkward and maybe unhealthy stages where you haven't fully integrated everything, which means you are believing some false things and perhaps doing harm to yourself.
Even if you read a series of lengthy posts before taking any actions, even if the Giving What We Can Pledge began with "we really think you should read some detailed blogposts about the psychology of this before you commit" (this may be a good idea), reading the blogposts wouldn't actually be enough to really understand everything.
So, people who are still in the process of grappling with everything end up on EA forum and EA Facebook and EA Tumblr saying things like "if you live off more than $20k a year that's basically murder". (And also, you have people on Dank EA Memes saying all of this ironically except maybe not except maybe it's fine who knows?)
And stopping all this from happening would be pretty time consuming.
2) The world is in fact on fire, and people disagree on what the priorities should be, and on what things are acceptable to do in order for that to be less the case. And while the Official Party Line is something like Point A, there's still a fair number of prominent people hanging around who do earnestly lean towards "it's okay to make costs hidden, it's okay to not be as dedicated to truth as Zvi or Ben Hoffman or Sarah Constantin would like, because it is Worth It."
And present_day_Raemon thinks those people are wrong, but not obviously so wrong that it's not worth talking about and taking seriously as a consideration.

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T21:03:29.706Z · score: 1 (1 votes) · EA · GW
What you seem to really be talking about is whether or not we should have final goals besides the global good. I disagree and think this topic should be treated with more rigor: parochial attachments are philosophically controversial and a great deal of ink has already been spilled on the topic.
Assuming robust moral realism, I think the best-supported moral doctrine is hedonistic utilitarianism and moral uncertainty yields roughly similar results.
Assuming anti-realism, I don't have any reason to intrinsically care more about your family, friends, etc (and certainly not about your local arts organization) than anyone else in the world, so I cannot endorse your attitude.
I do intrinsically care more about you as you are part of the EA network, and more about some other people I know, but usually that's not a large enough difference to justify substantially different behavior given the major differences in cost-effectiveness between local actions and global actions. So I don't think in literal cost-effectiveness terms, but global benefits are still my general goal. It's not okay to give money to local arts organizations, go to great lengths to be active in the community, etc: there is a big difference between the activities that actually are a key component of a healthy personal life, and the broader set of vaguely moralized projects and activities that happen to have become popular in middle / upper class Western culture. We should be bolder in challenging these norms.

(I broke the quoted text into more paragraphs so that I could parse it more easily. I'm thinking about a reply – the questions you're posing here do definitely deserve a serious response. I have some sense that people have already written the response somewhere – Minding Our Way by Nate Soares comes close, although I don't think he addresses the "what if there actually exist moral obligations?" question, instead assuming mostly non-moral-realism)

Comment by raemon on You have more than one goal, and that's fine · 2019-02-20T20:56:49.673Z · score: 6 (8 votes) · EA · GW

Thanks for writing this.

I feel an ongoing sense of frustration that even though this has seemed like the common wisdom of most "longterm EA folk" for several years... new people arriving in the community often have to go through a learning process before they can really accept this.

This means that in any given EA space, where most people are new, there will be a substantial fraction of people who haven't internalized this, and are still stressing themselves out about it, and are in turn stressing out even newer people, who are exposed more often to the "see everything through the utilitarian lens" mindset than to posts like this.

Comment by raemon on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T20:49:34.370Z · score: 9 (4 votes) · EA · GW

My impression is that nobody has made it their job (and spent at least a month, and preferably a year or two) to make Certificates of Impact work. I.e. money is real because humans have agreed to believe it's real, and because there's a lot of good infrastructure that helps it work. If Certificates of Impact (or Prizes) are to be real, someone needs to actually build a thing and hype it continuously. So far it doesn't feel like it's been tried.

Earning to Save (Give 1%, Save 10%)

2018-11-26T23:47:58.384Z · score: 66 (40 votes)

"Taking AI Risk Seriously" – Thoughts by Andrew Critch

2018-11-19T02:21:00.568Z · score: 26 (12 votes)

Earning to Give as Costly Signalling

2017-06-24T16:43:25.995Z · score: 11 (11 votes)

What Should the Average EA Do About AI Alignment?

2017-02-25T20:07:10.956Z · score: 29 (26 votes)

Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

2017-01-11T17:45:48.394Z · score: 18 (20 votes)

Meetup : Brooklyn EA Gathering

2015-04-13T00:07:47.159Z · score: 0 (0 votes)