Posts

Name for the larger EA+adjacent ecosystem? 2021-03-18T14:21:10.666Z
Longtermism ⋂ Twitter 2020-06-15T14:19:37.044Z
RyanCarey's Shortform 2020-01-27T22:18:23.751Z
Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z
Tell us how to improve the forum 2017-01-03T06:25:32.114Z
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z
EA Open Thread: October 2015-10-10T19:27:04.119Z
September Open Thread 2015-09-13T14:22:20.627Z
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z
Superforecasters [link] 2015-08-20T18:38:27.846Z
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z
July Open Thread 2015-07-02T13:41:52.991Z
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z
June Open Thread 2015-06-01T12:04:00.027Z
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z
Three new offsite posts 2015-05-18T22:26:18.674Z
May Open Thread 2015-05-01T09:53:47.278Z
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z
April Open Thread 2015-04-01T22:42:48.295Z
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z
GiveWell Updates 2015-03-11T22:43:30.967Z
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z
February Open Thread 2015-02-16T17:42:35.208Z
The AI Revolution [Link] 2015-02-03T19:39:58.616Z
February Meetups Thread 2015-02-03T17:57:04.323Z
January Open Thread 2015-01-19T18:12:55.433Z
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z
I am Samwise [link] 2015-01-08T17:44:37.793Z
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z
January Meetups Thread 2015-01-05T16:08:38.455Z
CFAR's annual update [link] 2014-12-26T14:05:55.599Z
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z
Upcoming Meetups 6 2014-12-08T17:29:00.830Z
Open Thread 6 2014-12-01T21:58:29.063Z
Upcoming Meetups 5 2014-11-24T21:02:07.631Z
Open thread 5 2014-11-17T15:57:12.988Z

Comments

Comment by RyanCarey on [deleted post] 2021-05-08T09:24:31.261Z

It's a separate concept!

Comment by RyanCarey on [deleted post] 2021-05-07T20:08:47.040Z

"Scalably using labour"? Since it's about getting people to do things, not about recruiting them.

Comment by RyanCarey on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-03T15:57:25.712Z · EA · GW

So you've shown that Masrani has made a bunch of faulty arguments. But do you think his argument fails overall? That is, can you refute its central point?

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-22T15:29:33.289Z · EA · GW

Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn't find their approach useful, and quickly switched to working autonomously on starting the EA Forum and EA Handbook v1. For the last 6-7 years, as many can attest, I've discouraged people from working there! So what is the theory exactly?

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T23:43:28.751Z · EA · GW

You cited... prioritization

OK, so essentially you don't own up to strawmanning my views?

You... ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”

This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders' forum. And the leaders' forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.

I’m...stuff like

Yes, Gates has thought about cause prio some, but he's less engaged with it, and especially with its cutting edge, than many others.

You’ve ..."authentic"

You seem to have missed my point. My suggestion is to trust experts to identify the top-priority cause areas, but not on what messaging to use, and instead to authentically present info on those top priorities.

I agree... EA brand? 

You seem to have missed my point again. As I said, "It's [tough] to ask people to switch unilaterally". That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it's tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.

Per my bolded text, I don't get the sense that I'm being debated in good faith, so I'll try to avoid making further comments in this subthread.

Comment by RyanCarey on EA Forum feature suggestion thread · 2021-04-21T20:06:39.811Z · EA · GW

One underlying reason your comment got a lot of upvotes was that the post was viewed many times. Controversy leads to pageviews. Arguably "net upvotes" is an OK metric for post quality (where popularity is important), whereas "net upvotes"/"pageviews" might make more sense for comments.

Side-issue: isn't Karma from posts weighted at 10x compared to Karma in comments? Or at least, I think it once was. And that would help a bit in this particular instance.

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T17:25:26.403Z · EA · GW

A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and the issues with short- and longtermism; I'd just want to see the literature.

I agree that incentives within EA lean (a bit) longtermist. The incentives don't come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden's case, he switched due to a combination of "the force of the arguments" and being impressed with the quality of thought of some longtermists. For example, Holden writes "I've been particularly impressed with Carl Shulman's reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell's." It's reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI have evolved their views in a similar direction, rather than treating the "incentive structure" as something that is monolithic, or that can explain away major (reasonable) changes.

B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they're experts in content selection, then great! But I think authenticity is a strong default.

Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I'm already on-record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an "Introduction to prioritization", and also, online conversation would happen on a "priorities forum", and so on (or something similar). It's tougher to ask people to switch unilaterally.

Comment by RyanCarey on [deleted post] 2021-04-21T02:18:38.179Z

This largely seems reasonable to me. However, I'll just push back on the idea of treating near/long-term as the primary split:

  • I don't see people on this forum writing a lot about near-term AI issues, so does it even need a category?
  • It's arguable whether near-term/long-term is a more fundamental division than technical/strategic. For example, people sometimes use the phrase "near-term AI alignment", and some research applies to both near-term and long-term issues.

One attractive alternative might be just to use the categories "AI alignment" and "AI strategy and forecasting".

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-20T19:50:06.787Z · EA · GW

I was just saying that if you have three interventions whose relative popularity is A<B<C, but whose expected impact, per a panel of EA experts, is C<B<A, then you probably want EA orgs to allocate their resources C<B<A.

Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:

  1. We're presenting introductory material, and the resources are readers attention
  2. B is popular with people who identify with the EA community
  3. B is popular with people who are using logical arguments?

I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation - better to either (A) present the arguments (e.g. arguments against Nick Beckstead's thesis), (B) analyse who the most accomplished experts in this field are, and/or (C) consider how thoughtful people have changed their minds. The EA leaders forum is very longtermist. The most accomplished experts are even more so: Christiano, MacAskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people's views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics - as a relative non-expert, I certainly didn't. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-16T19:45:24.321Z · EA · GW

Let's look at the three arguments for focusing more on shorttermist content:

1. The EA movement does important work in these two causes
I think this is basically a "this doesn't represent members' views" argument: "When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn't represent the views and interests of the movement". Clearly, to some extent, EA messaging has to cater to what current community members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:

  • people demanded EA Handbook 2.0 refocus away from longtermism, or
  • Bostrom's excellent talk on crucial considerations was removed from effectivealtruism.org

it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. The fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that, because the movement is not about us. As JFK once (almost) said: "ask not what EA will do for you but what together we can do for utility"! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it should be having an existential crisis. It will take an awful lot of resources to convince the movement's members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting in favour of reason and evidence. To avoid the crisis, we could train ourselves so that when we hear "this doesn't represent members' views", we hear alarm bells ringing...

2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them / 3. People who get pointed to this feed and don't get interested in longtermism (or aren't a fit for careers in it) might think that the EA movement is not for them.

Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience's attention, so it's necessary to focus on attracting those who can do the most good in priority areas.

Comment by RyanCarey on Resources on the expected value of founding a for-profit start-up? · 2021-04-06T14:27:51.619Z · EA · GW

I looked at some literature on this question, considering various reference classes back in 2014: YC founders, Stanford Entrepreneurs, VC-funded companies.

The essence of the problem in my view is 1) choosing (and averaging over) good reference classes, 2) understanding the heavy tails, and 3) understanding that startup founders are selected to be good at founding (a correlation vs causation issue).

First, consider the first two points:

1. Make very sure that your reference class consists mostly of startups, not less-ambitious family/lifestyle businesses.

2. The returns of startups are so heavy-tailed that you can make a fair estimate based on just the richest <1% of founders in the reference class (based on public valuations and any dilution, or on the likes of Forbes billionaire charts).

For example, in YC, we see that Stripe and Airbnb are worth ~$100B each, and YC has maybe graduated ~2k founders, so each founder might make ~$100M in expectation.

I'd estimated $6M and $10M in expectation for VC-funded founders and Stanford founders, respectively.

A more controversial reference class is "earn-to-give founders". Sam Bankman-Fried has made about $10B from FTX. If 50 people have pursued this path, the expected earnings are $200M.

The YC and "earn-to-give" founder classes are especially small. In aggregate, I think we can say that the expected earnings for a generic early-stage EA founder are in the range of $1-100M, depending on their reference class (including the degree of success and situation). Having said this, 60-90% of companies make nothing (or lose money). With such a failure rate, checking against one's tolerance for personal risk is important.
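
For concreteness, here is a minimal sketch of this style of estimate in Python. The function name is my own, and the figures are only the rough numbers quoted above, not precise data:

```python
# Expected earnings per founder, estimated from the top <1% of outcomes.
# This shortcut is only reasonable because startup returns are heavy-tailed
# enough that the largest few outcomes dominate the total.

def expected_earnings_per_founder(top_outcomes_usd, n_founders):
    """Mean earnings per founder, counting only the largest outcomes."""
    return sum(top_outcomes_usd) / n_founders

# YC reference class: Stripe and Airbnb at ~$100B each, ~2k founders graduated.
yc = expected_earnings_per_founder([100e9, 100e9], 2_000)

# "Earn-to-give founder" reference class: ~$10B from FTX, ~50 people on this path.
etg = expected_earnings_per_founder([10e9], 50)

print(f"YC founder:           ~${yc / 1e6:.0f}M in expectation")
print(f"Earn-to-give founder: ~${etg / 1e6:.0f}M in expectation")
```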

Then, we must augment the analysis by considering the third point:

3. Startup founders are selected to be good at founding (correlation vs causation)

If we intervene to create more EA founders, they'll perform less well than the EAs that already chose to found startups, because the latter are disproportionately suited to startups. How much worse is unclear - you could try to consider more and less selective classes of founders (i.e. make a forecast that conditions on / controls for features of the founders) but that analysis takes more work, and I'll leave it to others.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-04-03T00:00:23.808Z · EA · GW

EA popsci would be fun! 

§1. The past was totally fucked. 

§2. Bioweapons are fucked. 

§3. AI looks pretty fucked. 

§4. Are we fucked? 

§5. Unfuck the world!

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-04-02T23:56:29.988Z · EA · GW

EA popsci would be fun:

§1 the past was totally fucked

§2 bioweapons are still pretty fucked

§3 AI looks fucked

§4 are we fucked?

(...)

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-31T15:23:44.659Z · EA · GW

Good point - this has changed my model of this particular issue a lot (it's actually not something I've spent much time thinking about).

I guess we should (by default) imagine that if at time T you recruit a person, they'll do an activity that you would have valued, based on your beliefs at time T.

Some of us thought that recruitment was even better, in that the recruited people would update their views over time. But in practice, they only update their views a little bit. So the uncertainty-bonus for recruitment is small. In particular, if you recruit people to a movement based on messaging in cause A, you should expect relatively few people to switch to cause B based on their group membership, and there may be a lot of within-movement tension between those that do and those that don't.

There are also uncertainty-penalties for recruitment. While recruiting, you crystallise your own ideas. You give up time that you might've used for thinking, and for reducing your uncertainties.

On balance, recruitment now seems like a pretty bad way to deal with uncertainty.

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-31T13:37:58.549Z · EA · GW

I'm picturing that the original person switches to working on Q when they realise it's more valuable, at least more often than the new recruit does, which matches what I've seen in reality: recruits sometimes see themselves as having been recruited for a narrower purpose than the goal of the person who recruited them.

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-31T02:46:49.142Z · EA · GW

How the Haste Consideration turned out to be wrong.

In The haste consideration, Matt Wage essentially argued that given exponential movement growth, recruiting someone is very important, and that in particular, it's important to do it sooner rather than later. Nine years on, no one in the EA movement seems to believe it anymore, but it feels useful to recap what I view as the three main reasons why:

  1. Exponential-looking movement growth will (almost certainly) level off eventually, once the ideas reach the susceptible population. So earlier outreach really only causes the movement to reach its full size at an earlier point (a toy numerical sketch follows this list). This has been learned from experience, as movement growth was north of 50% around 2010, but has since tapered to around 10% per year as of 2018-2020. And I've seen similar patterns in the AI safety field.
  2. When you recruit someone, they may do what you want initially. But over time, your ideas about how to act may change, and they may not update with you. This has been seen in practice in the EA movement, which was highly intellectual and designed around values, rather than particular actions. People were reminded that their role is to help answer a question, not imbibe a fixed ideology. Nonetheless, members' habits and attitudes crystallised - severely - so that now, when leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn't represent the views and interests of the movement! The same thinking persists several years later. [Edit: this doesn't counter the haste consideration per se. It's just one way that recruitment is less good than one might hope - see AGB's subthread.]
  3. The returns from one person's movement-building activities will often level off. Basically, it's a lot easier to recruit your best friends than the rest of your friends, and much easier to recruit your friends of friends than their friends. It's harder to recruit once you leave university as well. I saw this personally - the people who did the most good in the EA movement with me, and/or due to me, were among my best couple of friends from high school, and some of my best friends from the local LessWrong group. These efforts at recruitment during my university days seem potentially much more impactful than my direct actions. More recent efforts at recruitment and persuasion have also made differences, but they have been more marginal, and seem less impactful than my own direct work.
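
Here is the toy numerical sketch of point 1 mentioned above. The parameters are invented for illustration (not fitted to the growth figures cited), but they show the qualitative claim: under logistic growth, starting outreach earlier shifts when the movement saturates, not its eventual size.

```python
import math

# Toy model: "exponential-looking" growth that saturates once the susceptible
# population (the carrying capacity) has been reached.

def movement_size(t, carrying_capacity=10_000, initial=100, growth_rate=0.5):
    """Logistic movement size at year t."""
    ratio = (carrying_capacity - initial) / initial
    return carrying_capacity / (1 + ratio * math.exp(-growth_rate * t))

for delay in (0, 3):  # start outreach now vs. three years later
    sizes = [round(movement_size(max(t - delay, 0))) for t in range(0, 31, 5)]
    print(f"start delayed by {delay}y:", sizes)
# Both trajectories approach the same ~10,000 ceiling; the delayed one just
# arrives a few years later.
```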

Taking all of this together, I've sometimes recommended university students not spend too much time on recruitment. The advice especially applies to top students, who could become distinguished academics or policymakers later on - as their time may be better spent preparing for that future. My very rough sense is that for some, the optimal amount of time to spend recruiting may be one full-time month. For others, a full-time year. And importantly, our best estimates may change over time!

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-28T12:46:13.322Z · EA · GW

A step that I think would be good to see even sooner is any professor at a top school getting into the habit of giving talks at gifted high schools. At some point, it might be worth a few professors each giving dozens of talks per year, although it wouldn't have to start that way.

Edit: or maybe just people with "cool" jobs. Poker players? Athletes?

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-28T00:07:22.499Z · EA · GW

What kinds of names do you think would convey the notion of prioritised action while being less self-aggrandising?

Comment by RyanCarey on Proposed Longtermist Flag · 2021-03-27T22:15:21.653Z · EA · GW

Same energy as:

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-27T15:30:59.647Z · EA · GW

High impact teachers? (Teaching as Task Y)

The typical view, here, on high-school outreach seems to be that:

  1. High-school outreach has been somewhat effective, uncovering one highly capable do-gooder per 10-100 exceptional students.
  2. But people aren't treating it with the requisite degree of sensitivity: they don't think enough about what parents think, they talk about "converting people", and there have been bad events of unprofessional behaviour.

So I think high-school outreach should be done, but done differently. Involving some teachers would be a useful step toward professionalisation (separating the outreach from the rationalist community would be another).

But (1) also suggests that teaching at a school for gifted children could be a priority activity in itself. The argument is that if a teacher can inspire a bright student to try to do good in their career, then the student might be many times more effective than the teacher themselves would have been had they tried to work directly on the world's problems. And students at such schools are exceptional enough (Z>2) that this could happen many times throughout a teacher's career.

This does not mean that teaching is the best way to reach talented do-gooders. But it doesn't have to be, because it could attract some EAs who wouldn't suit outreach paths. It leads to stable and respected employment, involves interpersonal contact that can be meaningful, and so on (at least, some interactions with teachers were quite meaningful to me, well before EA entered my picture).

I've said that teachers could help professionalise summer schools, and inspire students. I also think that a new high school for gifted altruists could be a high priority. It could gather talented altruistic students together, so that they have more social support, and better meet their curricular needs (e.g. econ, programming, philosophy, research). I expect that such a school could attract great talent. It would be staffed with pretty talented and knowledgeable teachers. It would be advised by some professors at top schools. If necessary, by funding scholarships, it could grow its student base arbitrarily. Maybe a really promising project.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-27T02:07:20.032Z · EA · GW

A friend's "names guy" once suggested calling the EA movement "Unfuck the world"...

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-27T00:35:10.518Z · EA · GW

OK, what names would we expect to promote action-orientation if "GP" wouldn't?

Comment by RyanCarey on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T16:07:42.218Z · EA · GW

Yeah, it's an unfortunate phrasing. Often when people, especially authorities, say that they feel that something is not on the table, they're in effect declaring that it is off the table, while avoiding the responsibility of explaining why. Which probably was not intended, but still came across as a bit uncool. It's like: can't we just figure out whether it's a good idea, and then decide whether to put it on the table?

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-26T15:49:25.018Z · EA · GW

I like this style of thinking, but I don't think it pushes in the direction that you suggest. EA entities with "priorities" in the name disproportionately work on surveys and policy, whereas those with "EA" in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.

On balance, I think "global priorities" connotes more concreteness and action-orientation than "EA", which is more virtue- and identity- oriented. If I was wrong on this, it would partly convince me.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-26T15:30:49.575Z · EA · GW

I kinda think that "I'm an EA/he's an EA/etc" is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature, rather than a bug.

Though you can just say "I'm interested in / I work on global priorities / I'm in the prioritisation community", or anything that you would say about the AI safety community, for example.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-25T02:03:42.880Z · EA · GW

TBC, this feels like a bit of a straw man of my actual view, which is that power and communality jointly contribute to risks of cultishness and manipulativeness.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-25T00:34:01.185Z · EA · GW

Agree that selection effects can be desirable and that dilution effects may matter if we choose a name that is too likable. But if we hold likability fixed, and switch to a name that is more appropriate (i.e. more descriptive), then it should select people more apt for the movement, leading to a stronger core.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-25T00:23:12.266Z · EA · GW

I don't understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is that doing visibly power-seeking and high-status work is one of the most common attractors.

I'm concerned about people seeking power in order to mistreat, mislead, or manipulate others (cult-like stuff), as seems more likely in a social community, and less likely in a group of people who share interests in actually doing things in the world. I'm in favour of people gaining influence, all things equal!

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-24T23:26:50.525Z · EA · GW

Interesting.

1) I'm convinced that a "GP" community would attract somewhat more power-seeking people. But they might be more likely to follow (good) social norms than the current consequentialist crowd. Moreover, we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people. And today's community is older and more BS-resistant with some legibly-trustworthy leaders. But you seem to think there would be a big and harmful net effect - can you explain?

2) Assuming that "GP" is too intrinsically political, can you think of any alternatives that have some of the advantages of "GP" without that disadvantage?

Comment by RyanCarey on Proposed Longtermist Flag · 2021-03-24T17:52:48.164Z · EA · GW

Yeah, this is cool! Although maybe too expansionist - it suggests that we plan to conquer our light cone, which might mean defending it against non-Earth-originating life. Separately, I guess adding a colour gradient is bad, since that's harder to draw, and flags usually don't have them.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-24T17:32:45.934Z · EA · GW

I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.

This sounds very right to me. 

Another way of putting this argument is that the "global priorities" (GP) community is both more likable and more appropriate than the "effective altruism" (EA) community. More likable because it's less self-congratulatory, arrogant, identity-oriented, and ideologically intense.

More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I'd also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: "how ought one to decide what to work on?", or "what are the big problems of our time?" rather than "how much ought one to give?" or "what is the best way to solve problem X?" Moreover, I'd more likely bring up Parfit's catastrophic risks thought experiment, than Singer's shallow pond. A more appropriate name could help reduce bait-and-switch dynamics, and help with recruiting people more suited to the jobs that we need done.

If you have a name that's much more likable and somewhat more appropriate, then you're in a much stronger position introducing the ideas to new people, whether they are highly-susceptible to them, or less so. So I imagine introducing these ideas as "GP" to a parent, an acquaintance, a donor, or an adjacent student group, would be less of an uphill battle than "EA" in almost all cases.

Apart from likability and appropriateness, the other five of Neumeier's naming criteria are:

  • Distinctiveness. EA wins.
  • Brevity. GP wins. It's 16 letters rather than 17, and 6 syllables rather than 7.
  • Easy spelling and punctuation. GP wins. In a word frequency corpus "Global" and "Priorities" feature 93M and 11M times, compared to "Effective" (75M) and "Altruism" (0.4M). Relatedly, "effective altruism" is annoying enough to say that people tend to abbreviate it to "EA", which is somewhat opaque and exclusionary.
  • Extendability. GP wins. It's more natural to use GP than EA to describe non-agents e.g. GP research vs EA research, and "policy prioritisation" is a better extension than "effective policy", because we're more about doing the important thing than just doing something well.
  • Protectability. EA wins, I guess, although note that "global priorities" already leads me exclusively to organisations in the EA community, so probably GP is protectable enough.

Overall, GP looks like a big upgrade. Another thing to keep in mind is that it may be more of an upgrade than it seems from discussions within the existing community, because that community consists only of those who were not repelled by the current "EA" name.

Concretely, what would this mean? Well... instead of EA Global, EA Forum, EA Handbook, EA Funds, EA Wiki, you would probably have GP Summit, GP Forum, (G)P Handbook, (G)P Funds, GP Wiki etc. Obviously, there are some switching costs in regard to the effort of renaming, and of name recognition, but as an originator of two of these things, I think the names themselves are improvements - it seems much more useful to go to a summit, or read resources, about global priorities, rather than ones focused on altruism in the abstract. Orgs like OpenPhil/LongView/80k wouldn't have to change their names at all.

Moreover, while changing the name to GP would break the names of some orgs, it wouldn't always do that. In fact, the Global Priorities Institute was initially going to be the EA Institute, but the name had to be switched to sound more academically respectable. If the community was renamed the Global Priorities Community, then GPI would get to be named after the community that it originated from and be academically respectable at the same time, which would be super-awesome. The fact that prioritisation arises more frequently in EA org names than any phrase except for "EA" itself might be telling us something important. Consider: "Rethink Priorities", "Global Priorities Project", "Legal Priorities Project", "Global Priorities Institute", "Priority Wiki", "Cause Prioritisation Wiki".

Another possible disadvantage would be if it made it harder for us to attract our core audience. But to be honest, I think that the people who are super-excited about utilitarianism and rationality are pretty likely to find us anyway, and that having a slightly larger and more respectable-looking community would help with that in some ways.

Finally, renaming can be an opportunity for re-centering the brand and strategy overall. How exactly we might refocus could be controversial, but it would be a valuable opportunity.

So overall, I'd be really excited about a name change!

Comment by RyanCarey on Proposed Longtermist Flag · 2021-03-24T13:57:48.440Z · EA · GW

That flag is cool, but here's an alternative that uses some of the same ideas. 

The black background represents the vastness of space, and its current emptiness. The blue dot represents our fragile home. The ratio of their sizes represents the importance of our cosmic potential (larger version here).

A "Pale Blue Dot" flag for longtermism

It's also a reference to Carl Sagan's Pale Blue Dot - a photo taken of Earth, from a spacecraft that is now further from Earth than any other human-made object, and that was the first to leave our solar system.

Carl Sagan's Pale Blue Dot

Sagan wrote this famous passage about the image:

Look again at that dot. That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "superstar," every "supreme leader," every saint and sinner in the history of our species lived there-on a mote of dust suspended in a sunbeam.

The Earth is a very small stage in a vast cosmic arena. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot.

Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.

The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.

It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we've ever known.

Comment by RyanCarey on Responses and Testimonies on EA Growth · 2021-03-23T23:14:52.386Z · EA · GW

To be fair, people pivoted hard toward longtermism because they're convinced that it's a much higher priority, which seems correct to me.

Comment by RyanCarey on Responses and Testimonies on EA Growth · 2021-03-23T20:59:19.918Z · EA · GW

I'd agree that on the current margin, "EAs getting harder to find" could be a factor, as well as some combination of things like (#2-4).

Having said that, what seems like an underrated fact is that although EA outreach (CEA/80k/CFAR) deploys less funding than EA research (FHI/MIRI/CSER/...), a priori, I'd expect outreach to scale better - since research has to be more varied, and requires more specific skills. This leads to the question: why don't we yet have a proof of concept for turning ~$100M into high-quality movement growth? Maybe this is the biggest issue. (#2) can explain why CEA hasn't offered this. (#4) is more comprehensive, because it explains why 80k and others haven't.

Comment by RyanCarey on Responses and Testimonies on EA Growth · 2021-03-23T17:49:25.620Z · EA · GW

I agree that it's worth asking for an explanation why growth has - if anything - slowed, while funds have vastly increased. One interesting exercise is to categorise the controversies. Some major categories:

  1. Leverage-people violating social norms (which was a mistake)
  2. CEA under-delivering operationally (mistake)
  3. Re-prioritising toward longtermism (not a mistake imo)
  4. Re-prioritising away from community growth (unclear whether a mistake)

The mistakes:

  • GWWC deprioritised (3,4)
  • EA Ventures (1,2)
  • EA Global 2015 PR (1,3)
  • Pareto Fellowship cultishness (1)
  • EA Funds deprioritised (2)
  • EA Grants under-delivered (2)
  • Community Building Grants under-delivered (2)

But the more interesting question is: "fundamentally, why has growth not sped up?". In my view, (1-3) did not directly slow growth. But (1-3) led EA community leaders to deprioritise community growth (i.e. led to (4)). And the lack of acceleration is basically because we didn't allocate many resources to community growth. At any given time, most community leaders (except perhaps when writing and promoting books) have spent their time on research, business, academia, and grantmaking therein, rather than community growth.

I think that in order to reinvigorate community growth, you really need to work on (4). We need to develop a new vision for what growth of the EA community (or some related community, such as a longtermist one) should look like, and establish that it would be worthwhile, before investing in it. How could it gather elite talent, of the sort that can substantially help with effectively disbursing funds to reduce long-term risks? And so on. 

Comment by RyanCarey on Law school vs MPP in Australia for those who have strong verbal skills but are weak at maths · 2021-03-23T15:32:34.843Z · EA · GW

I figure law is usually more competitive than an MPP, so it would be a better signal of capability, if university quality is held constant - since they seem similarly relevant for policy.

Comment by RyanCarey on Name for the larger EA+adjacent ecosystem? · 2021-03-19T01:07:33.084Z · EA · GW

The FFLARRP ecosystem: forecasting, fact-checking, longtermism, altruism, rationality, reform, and progress! :P

Comment by RyanCarey on Name for the larger EA+adjacent ecosystem? · 2021-03-19T00:50:11.474Z · EA · GW

Makes sense - I guess they're all taking an enlightenment-style worldview and pursuing intellectual progress on questions that matter over longer timescales...

Comment by RyanCarey on Name for the larger EA+adjacent ecosystem? · 2021-03-18T23:09:46.630Z · EA · GW

TBH, it's a question that popped into mind from background consciousness, but I can think of many possible applications:

  • helping people in various parts of the EA-adjacent ecosystem know about the other parts, which they may be better-suited to helping
  • helping people in various parts of this ecosystem understand what thinking (or doing) has already been done in other parts of the ecosystem
  • building kinship between parts of the ecosystem
  • academically studying the overall ecosystem - why have these similar movements sprung up at a similar time?
  • planning for which parts are comparatively advantaged at what different types of tasks

Comment by RyanCarey on Name for the larger EA+adjacent ecosystem? · 2021-03-18T22:02:35.193Z · EA · GW

Yes, and I guess there are a lot of other components that could possibly be added to that list: science reform (reproducibility, open science), fact-checking, governance reform (approval voting or empowerment of the technocracy), which vary from being possible small ingredients of any new "enlightenment" to being unlikely to come to much...

Comment by RyanCarey on Name for the larger EA+adjacent ecosystem? · 2021-03-18T21:55:10.221Z · EA · GW

Just like environmentalism and animal rights intersect with EA, without being a subset of it, the same could be true for longtermism. (I expect longtermism to grow a lot outside of EA, while remaining closer to EA than those other groups.)

Comment by RyanCarey on Name for the larger EA+adjacent ecosystem? · 2021-03-18T14:25:42.256Z · EA · GW

Maybe the "PEARL communities"? (Progress studies, effective altruism, rationality and longtermism)?

Comment by RyanCarey on Certificates of impact · 2021-03-14T02:16:29.581Z · EA · GW

Ohh, I should've made this clearer.

The NFT would be used to represent responsibility for (custodianship of) a particular impactful action. Just as with impact certificates as previously proposed, a person who, for example, ran EA Harvard in 2018 could put responsibility for this impact onto the marketplace. Buyers, when pricing this asset, can then evaluate how well EA Harvard did at creating things (that may be fungible) like the number of EAs produced, or the net effect on wellbeing, and pay accordingly.

I think it's useful to sell responsibility for the impact of a particular action (which is non-fungible), rather than responsibility for some (fungible) quantum N of impact, so that the job of judging the impactfulness of the action can be left to the market.
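
As a very rough sketch of that distinction (the class and field names are hypothetical, purely to illustrate the non-fungible vs. fungible framings):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpactCertificate:
    """Non-fungible: names a particular action; the market prices its impact."""
    action: str              # e.g. "Ran EA Harvard during 2018"
    custodian: str           # current holder of responsibility for that action
    last_sale_price: float   # what the market judged the action to be worth

@dataclass(frozen=True)
class ImpactUnits:
    """Fungible alternative: the issuer has already quantified the impact."""
    quantity: float          # N units of some agreed impact measure
    holder: str

# Under the certificate framing, buyers do the judging when they set the price:
cert = ImpactCertificate("Ran EA Harvard during 2018", "some_buyer", 50_000.0)
```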

Comment by RyanCarey on Certificates of impact · 2021-03-13T17:08:33.314Z · EA · GW

I wonder if it would make sense to sell certificates of Impact as non-fungible tokens (NFTs), given that NFTs are emerging as a lucrative way of publicly representing the "ownership" of non-physical assets like digital artwork.

Comment by RyanCarey on Our plans for hosting an EA wiki on the Forum · 2021-03-07T02:28:38.689Z · EA · GW

I think this Wiki has a decent shot at success, similar to other niche resources like SEP and Wolfram MathWorld, given the clear need, and Pablo's and the engineers' efforts. And it would be super-useful if it succeeds.

Two current bug/feature requests:

  • It's currently very unclear where to find the Wiki content. For example, I expected to find it here and also here. Reading this discussion (by Pablo and Michael), I expected to see the Wiki page being discussed, or at least a link to it, but I couldn't.
  • I think presenting Wiki pages as modified versions of "tags" makes for a very (and fundamentally) confusing user experience (even if that's how they have to be coded). Couldn't users understand wiki pages as first-class entities, with tagging being one way that a Wiki page can be used?

Comment by RyanCarey on How many hits do the hits of different EA sites get each year? · 2021-03-04T19:08:39.185Z · EA · GW

You can start by looking at the Alexa engagement ranking - lower rank is (superlinearly) better:

  • nickbostrom.com: 367k
  • effectivealtruism.org: 187k
  • givewell.org: 140k
  • 80000hours.org: 122k
  • slatestarcodex.com: 91k
  • lesswrong.com: 46k
  • etc.

Comment by RyanCarey on Should I transition from economics to AI research? · 2021-03-02T06:44:14.721Z · EA · GW

I had two follow-up questions: first, do you think there is a big difference in impact between getting a tenure-track position at a top-20 school vs medium-ranked school?

I would guess medium-big, especially if your route to impact is teaching PhD students (or anything that requires a lot of funding), as opposed to governmental advising (or anything that doesn't).

Secondly, why do you think switching subjects reduces those odds a lot? Do you think it's because it's unlikely that I would get accepted into a PhD program in AI or because, even if I'm accepted, I'm less likely to get a tenure-track position in this field? 

We won't really know until we see someone study the question. My guess is that for most switchers, the PhD program would be worse than the current one (AI is more competitive than econ, and age works against you), so they would likely end up in a worse tenure-track position. Plus the impact is delayed, and some of it is foreclosed by retirement. So the cost seems decent-sized.

Comment by RyanCarey on Should I transition from economics to AI research? · 2021-02-28T19:48:59.025Z · EA · GW

My guess is that AI safety papers are more impactful than longtermist econ ones, since they are directly targeted at significant near-term risks. Having said that, there are now a hundred or so people working on various aspects of long-term AI safety, which is more than can be said for longtermist econ, so I don't think the impact-difference is huge. Maybe we're talking about a three-fold difference in impact, per unit time invested by an undifferentiated EA researcher - something that could easily be overridden by personal factors. But many longtermist researchers would argue that the impact difference is much more or less.

My experience in transitioning from medicine to AI is that it was very costly. I feel I was set back by ~5 years in my intellectual and professional development - I had to study for a master's degree and do years of research assisting and junior research work just to get back to my previous level of knowledge and seniority. From an impact standpoint, I clearly had to exit medicine, but it's not clear that moving to AI safety had any greater impact than moving into (for example) biosecurity would have.

For most people in a PhD in any long-term-relevant subject (econ, biology, AI, stats), with a chance of a tenure-track position at a top-20 (worldwide) school, I expect it will make sense to push for that for at least ~3 years, and to postpone worries about pivoting until after that, because switching subjects reduces those odds a lot.

More broadly, as a community, we mostly ought to ask people to pivot their careers when they are young (e.g. as undergraduates), or when the impact differential is large (e.g. medicine to biosecurity) - which I don't think it really is when you're contrasting the right parts of econ with AI safety.

Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's 3k billionaires. What good is an extra one-millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.

Comment by RyanCarey on Open and Welcome Thread: February 2021 · 2021-02-18T04:13:21.387Z · EA · GW

And why is it equal to 1?

Comment by RyanCarey on Should patient investors try to correlate portfolio holdings with potential cause areas? · 2021-02-01T20:02:39.680Z · EA · GW

It also gets referred to as "mission hedging", a term that GPI attributes to Tran's 'Divest, Disregard, or Double Down?' (2017).