Posts

Which non-EA-funded organisations did well on Covid? 2021-06-08T14:19:41.627Z
Name for the larger EA+adjacent ecosystem? 2021-03-18T14:21:10.666Z
Longtermism ⋂ Twitter 2020-06-15T14:19:37.044Z
RyanCarey's Shortform 2020-01-27T22:18:23.751Z
Worldwide decline of the entomofauna: A review of its drivers 2019-07-04T19:06:17.041Z
SHOW: A framework for shaping your talent for direct work 2019-03-12T17:16:44.885Z
AI alignment prize winners and next round [link] 2018-01-20T12:07:16.024Z
The Threat of Nuclear Terrorism MOOC [link] 2017-10-19T12:31:12.737Z
Informatica: Special Issue on Superintelligence 2017-05-03T05:05:55.750Z
Tell us how to improve the forum 2017-01-03T06:25:32.114Z
Improving long-run civilisational robustness 2016-05-10T11:14:47.777Z
EA Open Thread: October 2015-10-10T19:27:04.119Z
September Open Thread 2015-09-13T14:22:20.627Z
Reducing Catastrophic Risks: A Practical Introduction 2015-09-09T22:33:03.230Z
Superforecasters [link] 2015-08-20T18:38:27.846Z
The long-term significance of reducing global catastrophic risks [link] 2015-08-13T22:38:23.903Z
A response to Matthews on AI Risk 2015-08-11T12:58:38.930Z
August Open Thread: EA Global! 2015-08-01T15:42:07.625Z
July Open Thread 2015-07-02T13:41:52.991Z
[Discussion] Are academic papers a terrible discussion forum for effective altruists? 2015-06-05T23:30:32.785Z
Upcoming AMA with new MIRI Executive Director, Nate Soares: June 11th 3pm PT 2015-06-02T15:05:56.021Z
June Open Thread 2015-06-01T12:04:00.027Z
Introducing Alison, our new forum moderator 2015-05-28T16:09:26.349Z
Three new offsite posts 2015-05-18T22:26:18.674Z
May Open Thread 2015-05-01T09:53:47.278Z
Effective Altruism Handbook - Now Online 2015-04-23T14:23:28.013Z
One week left for CSER researcher applications 2015-04-17T00:40:39.961Z
How Much is Enough [LINK] 2015-04-09T18:51:48.656Z
April Open Thread 2015-04-01T22:42:48.295Z
Marcus Davis will help with moderation until early May 2015-03-25T19:12:11.614Z
Rationality: From AI to Zombies was released today! 2015-03-15T01:52:54.157Z
GiveWell Updates 2015-03-11T22:43:30.967Z
Upcoming AMA: Seb Farquhar and Owen Cotton-Barratt from the Global Priorities Project: 17th March 8pm GMT 2015-03-10T21:25:39.329Z
A call for ideas - EA Ventures 2015-03-01T14:50:59.154Z
Seth Baum AMA next Tuesday on the EA Forum 2015-02-23T12:37:51.817Z
February Open Thread 2015-02-16T17:42:35.208Z
The AI Revolution [Link] 2015-02-03T19:39:58.616Z
February Meetups Thread 2015-02-03T17:57:04.323Z
January Open Thread 2015-01-19T18:12:55.433Z
[link] Importance Motivation: a double-edged sword 2015-01-11T21:01:10.451Z
I am Samwise [link] 2015-01-08T17:44:37.793Z
The Outside Critics of Effective Altruism 2015-01-05T18:37:48.862Z
January Meetups Thread 2015-01-05T16:08:38.455Z
CFAR's annual update [link] 2014-12-26T14:05:55.599Z
MIRI posts its technical research agenda [link] 2014-12-24T00:27:30.639Z
Upcoming Christmas Meetups (Upcoming Meetups 7) 2014-12-22T13:21:17.388Z
Christmas 2014 Open Thread (Open Thread 7) 2014-12-15T16:31:35.803Z
Upcoming Meetups 6 2014-12-08T17:29:00.830Z
Open Thread 6 2014-12-01T21:58:29.063Z
Upcoming Meetups 5 2014-11-24T21:02:07.631Z

Comments

Comment by RyanCarey on A Twitter Bot that regularly tweets current top posts from the EA Forum · 2021-07-25T20:35:48.744Z · EA · GW

Yes, there is! It could make more sense to revive it.

Comment by RyanCarey on A Twitter Bot that regularly tweets current top posts from the EA Forum · 2021-07-25T09:23:53.038Z · EA · GW

Would be worth trying out AutoTLDR, to see if it offers good Tweetable summaries.

Comment by RyanCarey on Certificates of impact · 2021-07-23T02:01:48.254Z · EA · GW

If Bob is not providing any money and cannot 'personally lose the cash' and is never 'any worse off' because he just resells it, what is he doing, exactly? Extending Anne some sort of disguised interest-free loan? (Guaranteed and risk-free how?) Why can't he be replaced by a smart contract if there are zero losses?

1. (Coordination). Bob does lose cash off his balance sheet, but his net asset position stays the same, because he's gained an IC that he can resell.

3. (Price discovery). I agree that in cases of repeated events, the issues with price discovery can be somewhat routed around.

2&4. (Philanthropic capital requirement & Incentive for resellers to research). The capitalist IC system gives non-altruistic people an incentive to do altruistic work, scout talent, and research activities' impact, and it rewards altruists for doing the same. Moreover, it reallocates capital to individuals - altruistic or otherwise - who perform these tasks better, which allows them to do more. Nice features, and very standard ones for a capitalist system. I do agree that the ratchet system will allow altruists to fund some talent scouting and impact research, but in a way that is more in line with current philanthropic behaviour. We might ask the question: do we really want to create a truly capitalist strand of philanthropy? So long as prices are somewhat tethered to reality, this kind of strand might be really valuable, especially since it need not totally displace other modes of funding.

Comment by RyanCarey on Certificates of impact · 2021-07-23T00:57:06.131Z · EA · GW

The impact certificate is resold when someone wants to become the owner of it and pays more than the current owner paid for it

Oh, selling is compulsory. 

A certificate can't be sold at a 'loss' by the terms of the smart contract. It just ratchets.

OK. That's what I meant when I said "If you're having only profits accrue to the creator, but not the losses, then all of these concerns except for the last would still hold, and the price discovery mechanism would be even more messed up." I'll call my understanding of Paul's proposal the "capitalist" model and your model the "ratchet" model BTW.

The main thing is to avoid the pathologies of NFTs as collectibles and speculative bubbles

OK.

Re the downsides of the "ratchet" model, here are my responses:

  • (Coordination). If Anne writes a blog post, Bob and Chris may both want Anne to be funded, but not want to have to personally lose the cash. In the capitalist model, Bob can just buy Anne's IC, knowing that he's not any worse off, because he has gained an asset that he can easily sell later. Whereas in the ratchet model, Bob and Chris don't gain any profitable asset.
  • (Capital requirement). Sorry, I was unclear about the fact that I was referencing Paul's quote "The ability to resell certificates makes a purchase less of a commitment of philanthropic capital, and less of a strategic decision; instead it represents a direct vote of confidence in the work being funded." In the capitalist model, talent scouts who buy up undervalued projects can retain and grow their capital, and scout more talent. Not so in the "ratchet" version.
  • (Equilibrium). Price discovery will have problems due to the price not being able to go down. Suppose I do an activity that further investigation will reveal to have had a value of either $0 or $2, with equal probability. Until we figure that out, the price will be $1. If someone discovers that the value was really $0, there is no way for that information to be revealed via the price (which can only increase). Edit: or alternatively, the price never goes up to $1 in the first place. So the price only reaches a level $n when people are sure it really couldn't be worth less than that, and the price will only serve as a lower bound on the EV of the impact (see the sketch after this list).
  • (Incentive for resellers to research).
  • (Selling at a loss). OK, I agree this is not an issue if you ratchet.
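
To make the lower-bound point concrete, here's a minimal sketch of the example above (my own illustration, not part of either proposal):

```python
# An activity whose value turns out to be $0 or $2, each with probability 0.5,
# so the expected-value price before anyone investigates is $1.

true_values = [0.0, 2.0]
prior_price = sum(true_values) / len(true_values)  # $1

def capitalist_price(revealed_value: float) -> float:
    # Resale gains/losses accrue to the reseller, so the price is free to move
    # to whatever the marginal buyer now believes the impact was worth.
    return revealed_value

def ratchet_price(revealed_value: float, last_price: float) -> float:
    # Transfers are only allowed at last_price + N (N > 0), so the quoted
    # price can never fall below what the current owner already paid.
    return max(revealed_value, last_price)

for v in true_values:
    print(f"revealed ${v:.0f}: capitalist ${capitalist_price(v):.0f}, "
          f"ratchet ${ratchet_price(v, prior_price):.0f}")
# When the value is revealed to be $0, the ratchet price stays at $1, so the
# posted price only ever acts as a lower bound on the expected value of the impact.
```
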
Comment by RyanCarey on Certificates of impact · 2021-07-22T23:40:05.709Z · EA · GW

a CoI NFT can be purchased/transferred at any time from its current owner by sending the last price + N ETH to the contract, where the last owner gets the last price as a refund and the creator gets the marginal N ETH as further payment for their impact.

As I understand it, you're having the profits/losses from resale accrue to the creator, rather than the reseller. But then, why would an impact certificate ever be resold? And I see a lot of other potential disadvantages:

  • You lose benefit 3 (coordination)
  • You lose benefit 4 (less commitment of capital required)
  • You lose the incentive for resellers to research the effectiveness of philanthropic activities.
  • No longer will we find that "at equilibrium, the price of certificates of impact on X is equal to the marginal cost of achieving an impact on X."
  • If an impact certificate is ever sold at a loss, then the creator could be in for an unwelcome surprise, so they would always need to account for all impact certificates sold, and store much of the sum in cash (!!)

If you're having only profits accrue to the creator, but not the losses, then all of these concerns except for the last would still hold, and the price discovery mechanism would be even more messed up.

It seems like your main goal is to avoid a scenario where creators sell their ICs for too little, thereby being exploited. But in that case, maybe you could just use a better auction, or have the creator only sell some fraction of any impact certificate, for a period of time, until some price discovery has taken place. Or if you insist, you could interpolate between the two proposals - requiring resellers to donate n% of any profits/losses to the creator - and still preserve some of the good properties. Which would dampen speculation, if you want that.
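
To check my reading of the mechanism quoted at the top of this comment, here's a minimal plain-Python sketch (my own stand-in for the proposed smart contract; the class and parameter names are hypothetical, and I'm assuming the initial sale price also goes to the creator):

```python
class RatchetImpactCertificate:
    """Plain-Python stand-in for the ratchet rule quoted above (hypothetical names)."""

    def __init__(self, creator: str, initial_owner: str, initial_price: float):
        self.creator = creator
        self.owner = initial_owner
        self.last_price = initial_price       # what the current owner paid
        self.paid_to_creator = initial_price  # assumption: the initial sale also goes to the creator

    def transfer(self, new_owner: str, payment: float) -> float:
        # Anyone can take ownership at any time by paying more than the last price.
        if payment <= self.last_price:
            raise ValueError("ratchet: payment must exceed the last price")
        refund_to_old_owner = self.last_price              # the old owner is made whole
        self.paid_to_creator += payment - self.last_price  # the creator gets the margin
        self.owner, self.last_price = new_owner, payment
        return refund_to_old_owner

cert = RatchetImpactCertificate(creator="Anne", initial_owner="Bob", initial_price=1.0)
refund = cert.transfer("Chris", payment=1.5)  # Bob gets his 1.0 back; Anne receives the extra 0.5
# The previous owner only ever breaks even, which is why I ask above why a
# certificate would ever be resold, and what incentive a reseller has to research.
```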

Comment by RyanCarey on RyanCarey's Shortform · 2021-07-21T10:48:40.064Z · EA · GW

EA Highschool Outreach Org (see Catherine's and Buck's posts, my comment on EA teachers)

Running a literal school would be awesome, but seems too demanding of time and organisational resources to do right now. Assuming we did want to do that eventually, what could be a suitable smaller step? Founding an organisation with vetted staff, working full-time on promoting analytical and altruistic thinking to high-schoolers - professionalising in this way increases the safety and reputability of these programs. Its activities should be targeted at top schools, and could include, in increasing order of duration:

  1. One-off outreach talks at top schools
  2. Summer programs in more countries, and in more subjects, and with more of an altruistic bent (i.e. variations on SPARC and Eurosparc)
  3. Recurring classes in things like philosophy, econ, and EA. Teaching by visitors could be arranged by liaising with school teachers, similarly to how external teachers are brought in for chess classes.
  4. After-school, or weekend, programs for interested students

I'm not confident this would go well, given the various reports from Catherine's recap and Buck's further theorising. But targeting the right students, and bringing the right speakers, gives it a chance of success. If you get to (3-4), all is going well, and the numbers of interested teachers and students are rising, then it would be very natural for the org to scale into a school proper.

Comment by RyanCarey on RyanCarey's Shortform · 2021-07-19T13:26:40.813Z · EA · GW

Making community-building grants more attractive

An organiser from Stanford EA asked me today how community building grants could be made more attractive. I have two reactions:

  1. Specialised career pathways. To the extent that this can be done without compromising effectiveness, community-builders should be allowed to build field-specialisations, rather than just geographic ones. Currently, community-builders might hope to work at general outreach orgs like CEA and 80k. But general orgs will only offer so many jobs. Casting the net a bit wider, many activities of Forethought Foundation, SERI, LPP, and FLI are field-specific outreach. If community-builders take on some semi-specialised kinds of work in AI, policy, or econ (in connection with these orgs or independently), then this would aid their prospects of working for such orgs or returning to a more mainstream pathway.
  2. "Owning it". To the extent that community building does not offer a specialised career pathway, the fact that it's a bold move should be incorporated into the branding. The Thiel Fellowship offers $100k to ~2 dozen students per year, to drop out of their programs to work on a startup that might change the world. Not everyone will like it, but it's bold, it's a round, and reasonably-sized number, with a name attached, and a dedicated website. Imagine a  "Macaskill fellowship" that offers $100k for a student from a top university to pause their studies and spend one year focusing on promoting prioritisation and long-term thinking - it'd be a more attractive path.

Comment by RyanCarey on Some thoughts on EA outreach to high schoolers · 2021-07-18T12:13:02.663Z · EA · GW

An EA teaching pathway?

Building effective altruism is currently one of 80,000 Hours' four top cause areas. One of the most promising avenues for doing this is safe and reputable high-school outreach - about a dozen people seem to be pursuing this currently. If at least a couple of people doing this had experience teaching, especially at a magnet school or a gifted program, then their skills and credentials could move the needle on the quality of these summer programs, and they could have an impact via teaching itself. Especially so if one wanted to start a new school for kids interested in public service and EA.

So I think a teaching career, properly designed, could be pretty good by EA standards. Suppose you plan to teach at a gifted school, while helping with EA high-school summer programs, and eventually to make yourself available to work at any EA-leaning school. For such a career, I'd be inclined to update the ratings for teaching in 80,000 Hours' career review as follows:

  • career capital: 1/5  -> 2/5
  • earnings: 2/5 -> 3/5
  • ease of competition: 5/5 -> 4/5
  • direct impact: 2/5 -> 2/5
  • advocacy potential: 2/5 -> 5/5 [assuming you count the effects of training up altruistic kids here]
  • job satisfaction: 4/5 -> 4/5

I'd speculate that this could be a good idea for those EAs who love teaching, and are US-based, and want a career that's not extremely competitive - something like 0.3-1% of the EA community.

Comment by RyanCarey on How to explain AI risk/EA concepts to family and friends? · 2021-07-12T13:33:29.391Z · EA · GW

Explaining AI x-risk directly will excite about 20% of people and freak out the other 80%. Which is fine if you want to be a public intellectual, or chat to people within EA, but not fine for interacting with most family/friends, moving about in academia, etc. The standard approach for the latter is to say you're working on researching safe and fair AI, of which shorter-term risks and longer-term catastrophes are particular examples.

Comment by RyanCarey on Economics PhD application support - become a mentee! · 2021-07-08T15:43:21.799Z · EA · GW

Nice idea. If this works out, we (some part of the AIS community) should do the same with AI safety. Maybe also for other EA-adjacent fields!

Comment by RyanCarey on Opinion: Digital marketing is under-utilized in EA · 2021-06-23T11:31:05.394Z · EA · GW

Ultimately it's the funder who'll judge that. But if I had all of the donors' funds, maybe I'd pay ~$1B to double the size of the EA movement (~3k->~6k) while preserving its average quality?

Comment by RyanCarey on Opinion: Digital marketing is under-utilized in EA · 2021-06-23T10:24:57.299Z · EA · GW

I think it'd be worthwhile to try advertising longtermist websites and books to people (targeting by interests/location to the largest extent possible). I think it was tried a bit (e.g. at the tens-of-thousands-of-dollars scale) a few years ago, and it was already nearly at the threshold for cost-effectiveness. And funding availability has more than doubled since then. What I don't know is what further experiments have been run in the last two years...

Comment by RyanCarey on RyanCarey's Shortform · 2021-06-21T11:34:55.418Z · EA · GW

Agreed that in her outlying case, most of what she's done is tap into a political movement in ways we'd prefer not to. But is that true for high performers generally? I'd hypothesise that elite academic credentials + policy-relevant research + willingness to be political is enough to get people into elite political positions - maybe a tier lower than hers, a decade later - but it'd be worth knowing how all the variables in these different cases contribute.

Comment by RyanCarey on RyanCarey's Shortform · 2021-06-20T14:03:53.783Z · EA · GW

A case of precocious policy influence, and my pitch for more research on how to get a top policy job.

Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (centre-left think tank). Got a JD from Yale in 2017, and published work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, became an associate professor of law at Columbia. This year - 2021 - she was appointed by Biden.

The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind-of makes sense that she should be able to get such a role: she has elite academic credentials that are highly relevant for the role, has ridden the hipster antitrust wave, and has experience of and willingness to work in government.

I think biosec and AI policy EAs could try to emulate this. Specifically, they could try to gather some elite academic credentials, while also engaging with regulatory issues and working for regulators, or more broadly, in the executive branch of government. Jason Matheny's success is arguably a related example.

This also suggests a possible research agenda surrounding how people get influential jobs in general. For many talented young EAs, it would be very useful to know. Similar to how Wiblin ran some numbers in 2015 on the chances of a seat in Congress given a background at Yale Law, we could ask about the White House, external political appointments (such as FTC commissioner), and the judiciary. Also, this ought to be quite tractable: all the names are public, e.g. here [Trump years] and here [Obama years], and most of the CVs are in the public domain - it just needs doing.

Comment by RyanCarey on Forum update: New features (June 2021) · 2021-06-17T20:57:11.231Z · EA · GW

I might've asked this before, but would we be in a better place if posts just counted for 2-3x karma (rather than the previous 10x or the current 1x)?

Comment by RyanCarey on What should CEEALAR be called? · 2021-06-16T00:42:50.421Z · EA · GW

Building: Athena House? Athena Centre? Charity: I guess it should describe that you give people funding and autonomy to focus on their high-priority work, together. Independent Research Centre? Impact Hub?

Comment by RyanCarey on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:05:23.839Z · EA · GW

Ah. If global IFR is worse than rich-countries' IFR, that seems to imply that developing countries had lower survival rates, despite their more favourable demographics, which would be sad.

Comment by RyanCarey on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T11:00:45.332Z · EA · GW

Was the prediction for infection fatality rate (IFR) or case fatality rate (CFR)? And high-income or all countries? Globally, the CFR is 2% (3.7M/173M), but the IFR is <0.66%, because <1/3 of cases were detected.

Comment by RyanCarey on Max_Daniel's Shortform · 2021-06-08T10:28:37.170Z · EA · GW

I think PAI exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so. Whereas GPAI is a diplomatic apparatus, for Trudeau and Macron to influence the conversation surrounding AI.

Comment by RyanCarey on Draft report on existential risk from power-seeking AI · 2021-06-02T09:42:01.798Z · EA · GW

The upshot seems to be that Joe, 80k, the AI researcher survey (2008), and Holden (2016) are all at about a 3% estimate of AI risk, whereas AI safety researchers now are at about 30%. The latter is a bit lower (or at least differently distributed) than Rob expected, and seems higher than among Joe's advisors.

The divergence is big, but pretty explainable, because it accords with the direction that apparent biases point in. For the 3% camp, the credibility of one's name, brand, or field benefits from making lowball estimates. Whereas the 30% camp is self-selected for severe concern. And risk perception all-round has increased a bit in the last 5-15 years due to deep learning.

Comment by RyanCarey on [deleted post] 2021-06-01T12:44:04.620Z

I think things like "collective rationality", "collective epistemics", or "quality of public discourse" would be reasonable though.

Comment by RyanCarey on Should EA Buy Distribution Rights for Foundational Books? · 2021-05-31T22:41:59.071Z · EA · GW

A related idea would be to buy copies of e.g. The Precipice for university libraries...

Comment by RyanCarey on Propose and vote on potential EA Wiki entries · 2021-05-31T16:13:42.157Z · EA · GW

Yeah, the ultra-pedantic+playful parenthetical is a very academic thing. "Psychology of effective altruism" seems to cover giving/x-risk/speciesism/career choice - i.e. it covers everything we want.

Comment by RyanCarey on Should EA Buy Distribution Rights for Foundational Books? · 2021-05-28T16:23:13.881Z · EA · GW

Nice, so we should buy the rights to all the other EA books...

Comment by RyanCarey on EA Survey 2020: Demographics · 2021-05-26T16:39:11.311Z · EA · GW

My personal non-data-driven impression is that things are steady overall. Contracting in SF, steady in NYC and Oxford, growing in London, DC. "longtermism" growing. Look forward to seeing the data!

Comment by RyanCarey on RyanCarey's Shortform · 2021-05-23T10:10:32.011Z · EA · GW

Yeah, I'd revise my view to: moderation seems too stringent on the particular axis of politeness/rudeness. I don't really have any considered view on other axes.

Comment by RyanCarey on RyanCarey's Shortform · 2021-05-18T12:17:38.191Z · EA · GW

Thanks, this detailed response reassures me that the moderation is not way too interventionist, and it also sounds positive to me that the moderation is becoming a bit more public, and less frequent.

Comment by RyanCarey on Our plans for hosting an EA wiki on the Forum · 2021-05-14T17:57:56.339Z · EA · GW

If and when they break, we will replace them with corresponding links to the PDFs.

Ah, sounds great!

"the links could be of the form wiki.effectivealtruism.org/article"

Yeah, this would be more elegant!

Comment by RyanCarey on Our plans for hosting an EA wiki on the Forum · 2021-05-14T16:07:39.192Z · EA · GW

I don't quite see it. For example, where are the pdfs for "Christiano, Paul (2014) Certificates of impact, Rational Altruist, November 15." here? Ideally, a link to an archived version should be continuously available, or at least appear when the link goes down.

Totally separate issue: I wonder if the wiki homepage should have the address wiki.effectivealtruism.org?

Comment by RyanCarey on Our plans for hosting an EA wiki on the Forum · 2021-05-14T14:51:03.451Z · EA · GW

Have you thought of how to address link rot? I could imagine it making sense to automatically store one archived version of each external link using perma.cc or something!

Comment by RyanCarey on RyanCarey's Shortform · 2021-05-12T11:35:17.992Z · EA · GW

Overzealous moderation?

Has anyone else noticed that the EA Forum moderation is quite intense of late?

Back in 2014, I'd proposed quite limited criteria for moderation: "spam, abuse, guilt-trips, socially or ecologically destructive advocacy". I'd said then: "Largely, I expect to be able to stay out of users' way!" But my impression is that the moderators have, at some point after 2017, taken to advising and sanctioning users based on their tone - for example, here (Halstead being "warned" for unsubstantiated true comments), with "rudeness" and "Other behavior that interferes with good discourse" being criteria for content deletion. Generally I get the impression that we need more, not fewer, people directly speaking harsh truths, and that it's rarely useful for a moderator to insert themselves into such conversations, given that we already have other remedies: judging a user's reputation, counterarguing, or voting up and down. Overall, I'd go as far as to conjecture that if moderators did 50% less (by continuing to delete spam, but standing down in the less clear-cut cases) the forum would be better off.

  • Do we have any statistics on the number of moderator actions per year?
  • Has anyone had positive or negative experiences with being moderated?
  • Does anyone else have any thoughts on whether I'm right or wrong about this?

Comment by RyanCarey on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-05-12T11:12:01.836Z · EA · GW

Substantiated true claims are the best, but sometimes merely stating important true facts can also be a public service...

Comment by RyanCarey on [deleted post] 2021-05-08T09:24:31.261Z

It's a separate concept!

Comment by RyanCarey on [deleted post] 2021-05-07T20:08:47.040Z

"Scalably using labour"? Since it's about getting people to do things, not about recruiting them.

Comment by RyanCarey on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-03T15:57:25.712Z · EA · GW

So you've shown that Masrani has made a bunch of faulty arguments. But do you think his argument fails overall? i.e. can you refute its central point?

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-22T15:29:33.289Z · EA · GW

Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn't find their approach useful, and quickly switched to working autonomously, on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I've discouraged people from working there! So what is the theory exactly?

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T23:43:28.751Z · EA · GW

You cited.. prioritization

OK, so essentially you don't own up to strawmanning my views?

You... ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”

This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders' forum. And the leaders' forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.

I’m...stuff like

Yes, Gates has thought about cause prio some, but he's less engaged with it - and especially with its cutting edge - than many others.

You’ve ..."authentic"

You seem to have missed my point. My suggestion is to trust experts on identifying the top-priority cause areas, but not on what messaging to use, and to instead authentically present info on the top priorities.

I agree... EA brand? 

You seem to have missed my point again. As I said, "It's [tough] to ask people to switch unilaterally". That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it's tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.

Per my bolded text, I don't get the sense that I'm being debated in good faith, so I'll try to avoid making further comments in this subthread.

Comment by RyanCarey on EA Forum feature suggestion thread · 2021-04-21T20:06:39.811Z · EA · GW

One underlying reason your comment got a lot of upvotes is that the post was viewed many times. Controversy leads to pageviews. Arguably "net upvotes" is an OK metric for post quality (where popularity is important), whereas "net upvotes"/"pageviews" might make more sense for comments.

Side-issue: isn't Karma from posts weighted at 10x compared to Karma in comments? Or at least, I think it once was. And that would help a bit in this particular instance.

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T17:25:26.403Z · EA · GW

A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short- and longtermism; I'd just want to see the literature.

I agree that incentives within EA lean (a bit) longtermist. The incentives don't come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden's case, he switched due to a combination of "the force of the arguments" and being impressed with the quality of thought of some longtermists. For example, Holden writes "I've been particularly impressed with Carl Shulman's reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell's." It's reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI have evolved their views in a similar direction, rather than treating the "incentive structure" as something that is monolithic, or that can explain away major (reasonable) changes.

B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they're experts in content selection, then great! But I think authenticity is a strong default.

Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I'm already on-record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an "Introduction to prioritization", and also, online conversation would happen on a "priorities forum", and so on (or something similar). It's tougher to ask people to switch unilaterally.

Comment by RyanCarey on [deleted post] 2021-04-21T02:18:38.179Z

This largely seems reasonable to me. However, I'll just push back on the idea of treating near/long-term as the primary split:

  • I don't see people on this forum writing a lot about near-term AI issues, so does it even need a category?
  • It's arguable whether near-term/long-term is a more fundamental division than technical/strategic. For example, people sometimes use the phrase "near-term AI alignment", and some research applies to both near-term and long-term issues.

One attractive alternative might be just to use the categories AI alignment and AI strategy and forecasting.

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-20T19:50:06.787Z · EA · GW

I was just saying that if you have three interventions whose relative popularity is A<B<C, but whose expected impact, per a panel of EA experts, is C<B<A, then you probably want EA orgs to allocate their resources C<B<A.

Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:

  1. We're presenting introductory material, and the resources are readers attention
  2. B is popular with people who identify with the EA community
  3. B is popular with people who are using logical arguments?

I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation - better to either (A) present the arguments (e.g. arguments against Nick Beckstead's thesis), (B) analyse who are the most accomplished experts in this field, and/or (C) consider how thoughtful people have changed their minds. The EA leaders' forum is very longtermist. The most accomplished experts are even more so: Christiano, Macaskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people's views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics - as a relative non-expert, I certainly didn't. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.

Comment by RyanCarey on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-16T19:45:24.321Z · EA · GW

Let's look at the three arguments for focusing more on shorttermist content:

1. The EA movement does important work in these two causes
I think this is basically a "this doesn't represent members' views" argument: "When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement". Clearly, to some extent, EA messaging has to cater to what current community members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:

  • people demanded EA Handbook 2.0 refocus away from longtermism, or
  • Bostrom's excellent talk on crucial considerations was removed from effectivealtruism.org

it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. Because the fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that, as the movement is not about us. As JFK once (almost) said: "ask not what EA will do for you but what together we can do for utility"! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it should have reached an existential crisis. Because it will take an awful lot of resources to convince the movement's members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting in favour of reason and evidence. To avoid the crisis, we could train ourselves, so that when we hear "this doesn't represent members' views", we hear alarm bells ringing...

2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them / 3. People who get pointed to this feed and don't get interested in longtermism (or aren't a fit for careers in it) might think that the EA movement is not for them.

Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience's attention, so it's necessary to focus on attracting those who can do the most good in priority areas.

Comment by RyanCarey on Resources on the expected value of founding a for-profit start-up? · 2021-04-06T14:27:51.619Z · EA · GW

I looked at some literature on this question, considering various reference classes back in 2014: YC founders, Stanford Entrepreneurs, VC-funded companies.

The essence of the problem in my view is 1) choosing (and averaging over) good reference classes, 2) understanding the heavy tails, and 3) understanding that startup founders are selected to be good at founding (a correlation vs causation issue).

First, consider the first two points:

1. Make very sure that your reference class consists mostly of startups, not less-ambitious family/lifestyle businesses.

2. The returns of startups are so heavy-tailed that you can make a fair estimate based on just the richest <1% of founders in the reference class (based on the public valuation and any dilution, or based on the likes of Forbes billionaire charts).

For example, in YC, we see that Stripe and Airbnb are worth ~$100B each, and YC has maybe graduated ~2k founders, so each founder might make ~$100M on-expectation.

I'd estimated $6M and $10M on-expectation for VC-funded founders and Stanford founders respectively.

A more controversial reference class is "earn-to-give founders". Sam Bankman-Fried has made about $10B from FTX. If 50 people have pursued this path, the expected earnings are $200M.

The YC and "earn-to-give" founder classes are especially small. In aggregate, I think we can say that the expected earnings for a generic early-stage EA founder are in the range of $1-100M, depending on their reference class (including the degree of success and situation). Having said this, 60-90% of companies make nothing (or lose money). With such a failure rate, checking against one's tolerance for personal risk is important.
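
Putting those reference-class numbers in one place, here's a rough back-of-the-envelope sketch (just restating the approximate figures above; nothing here is new data):

```python
# Heavy-tail shortcut from point 2: estimate expected earnings per founder from
# the top <1% of outcomes in each reference class (figures as quoted above).

yc_top_value = 2 * 100e9   # Stripe + Airbnb, ~$100B each
yc_founders = 2_000        # ~2k YC founders graduated
yc_ev = yc_top_value / yc_founders     # ~$100M per founder on-expectation

etg_top_value = 10e9       # Sam Bankman-Fried's ~$10B from FTX
etg_founders = 50          # ~50 earn-to-give founders
etg_ev = etg_top_value / etg_founders  # ~$200M per founder on-expectation

print(f"YC founders:           ~${yc_ev / 1e6:.0f}M")
print(f"Earn-to-give founders: ~${etg_ev / 1e6:.0f}M")
print("VC-funded / Stanford founders (2014 estimates): ~$6M / ~$10M")
# Caveats from the text: 60-90% of companies make nothing (or lose money), and
# point 3 below implies marginal new founders should expect lower numbers still.
```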

Then, we must augment the analysis by considering the third point:

3. Startup founders are selected to be good at founding (correlation vs causation)

If we intervene to create more EA founders, they'll perform less well than the EAs that already chose to found startups, because the latter are disproportionately suited to startups. How much worse is unclear - you could try to consider more and less selective classes of founders (i.e. make a forecast that conditions on / controls for features of the founders) but that analysis takes more work, and I'll leave it to others.

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-04-03T00:00:23.808Z · EA · GW

EA popsci would be fun! 

§1. The past was totally fucked. 

§2. Bioweapons are fucked. 

§3. AI looks pretty fucked. 

§4. Are we fucked? 

§5. Unfuck the world!

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-04-02T23:56:29.988Z · EA · GW

EA popsci would be fun:

§1 the past was totally fucked

§2 bioweapons are still pretty fucked

§3 AI looks fucked

§4 are we fucked?

(...)

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-31T15:23:44.659Z · EA · GW

Good point - this has changed my model of this particular issue a lot (it's actually not something I've spent much time thinking about).

I guess we should (by default) imagine that if at time T you recruit a person, they'll do an activity that you would have valued, based on your beliefs at time T.

Some of us thought that recruitment was even better, in that the recruited people would update their views over time. But in practice, they only update their views a little bit. So the uncertainty-bonus for recruitment is small. In particular, if you recruit people to a movement based on messaging in cause A, you should expect relatively few people to switch to cause B based on their group membership, and there may be a lot of within-movement tension between those that do and those that don't.

There are also uncertainty-penalties for recruitment. While recruiting, you crystallise your own ideas. You give up time that you might've used for thinking, and for reducing your uncertainties.

On balance, recruitment now seems like a pretty bad way to deal with uncertainty.

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-31T13:37:58.549Z · EA · GW

I'm picturing that the original person switches to working on Q when they realise it's more valuable, at least more often than the new recruit does - which describes what I've seen in reality: recruits sometimes see themselves as having been recruited for a narrower purpose than the goal of the person who recruited them.

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-31T02:46:49.142Z · EA · GW

How the Haste Consideration turned out to be wrong.

In The haste consideration, Matt Wage essentially argued that given exponential movement growth, recruiting someone is very important, and that in particular, it’s important to do it sooner rather than later. After the passage of nine years, no one in the EA movement seems to believe it anymore, but it feels useful to recap what I view as the three main reasons why:

  1. Exponential-looking movement growth will (almost certainly) level off eventually, once the ideas reach the susceptible population. So earlier outreach really only causes the movement to reach its full size at an earlier point. This has been learned from experience, as movement growth was north of 50% around 2010, but has since tapered to around 10% per year as of 2018-2020. And I’ve seen similar patterns in the AI safety field.
  2. When you recruit someone, they may do what you want initially. But over time, your ideas about how to act may change, and they may not update with you. This has been seen in practice in the EA movement, which was highly intellectual and designed around values, rather than particular actions. People were reminded that their role is to help answer a question, not imbibe a fixed ideology. Nonetheless, members’ habits and attitudes crystallised - severely - so that now, when leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement! The same thinking persists several years later. [Edit: this doesn't counter the haste consideration per se. It's just one way that recruitment is less good than one might hope -> see AGB's subthread].
  3. The returns from one person’s movement-building activities will often level off. Basically, it’s a lot easier to recruit your best friends than the rest of your friends, and much easier to recruit your friends of friends than their friends. It's harder to recruit once you leave university as well. I saw this personally - the people who did the most good in the EA movement with me, and/or due to me, were among my best couple of friends from high school, and some of my best friends from the local LessWrong group. These efforts at recruitment during my university days seem potentially much more impactful than my direct actions. More recent efforts at recruitment and persuasion have also made a difference, but they have been more marginal, and seem less impactful than my own direct work.

Taking all of this together, I’ve sometimes recommended that university students not spend too much time on recruitment. The advice especially applies to top students, who could become distinguished academics or policymakers later on - their time may be better spent preparing for that future. My very rough sense is that for some, the optimal amount of time to spend recruiting may be one full-time month. For others, a full-time year. And importantly, our best estimates may change over time!

Comment by RyanCarey on RyanCarey's Shortform · 2021-03-28T12:46:13.322Z · EA · GW

A step that I think would be good to see even sooner is any professor at a top school getting into the habit of giving talks at gifted high schools. At some point, it might be worth a few professors each giving dozens of talks per year, although it wouldn't have to start that way.

Edit: or maybe just people with "cool" jobs. Poker players? Athletes?

Comment by RyanCarey on Some quick notes on "effective altruism" · 2021-03-28T00:07:22.499Z · EA · GW

What kinds of names do you think would convey the notion of prioritised action while being less self-aggrandising?