Posts

Starting a Small Charity to Give Grants 2021-10-03T01:52:54.115Z
Sapphire's Shortform 2021-05-08T18:05:04.465Z
Small and Vulnerable 2021-05-03T06:08:22.005Z
Being Inclusive 2021-01-17T08:08:53.630Z
Georgia on my Mind: Effectively Flipping the Senate 2020-11-08T07:24:51.209Z
How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? 2020-09-21T18:42:50.160Z
What is the financial size of the Effective Altruism movement? 2020-08-30T21:17:11.227Z
Animal Rights, The Singularity, and Astronomical Suffering 2020-08-20T20:23:10.229Z
Replaceability Concerns and Possible Responses 2020-08-02T18:19:27.477Z

Comments

Comment by sapphire (deluks917) on Is it still hard to get a job in EA? Insights from CEA’s recruitment data · 2022-07-15T02:43:18.484Z · EA · GW

Bottom line is actually 'CEA is four times as selective'. This was pointed out elsewhere, but it's a big difference.

Comment by sapphire (deluks917) on The Future Might Not Be So Great · 2022-07-05T17:11:39.012Z · EA · GW

I find the following simple argument disturbing:

P1 - Currently, and historically, low-power beings (animals, children, old dying people) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways. Weak benefits for the powerful empirically justify cruelty at scale.
P2 - There is no good reason to be sure the powerful won't have even minor reasons to be cruel to the powerless (e.g. suffering subroutines, or a human CEV that includes spreading earth-like life widely or respect for tradition).
P3 - Inequality between agents is likely to become much more extreme as AI develops.
P4 - The scale of potential suffering will increase by many orders of magnitude.

C1 - We are fucked?

Personal Note - There is also no reason to assume that I or my loved ones will remain relatively powerful beings.

C2 - I'm really fucked!

Comment by sapphire (deluks917) on The Future Might Not Be So Great · 2022-07-04T20:06:54.245Z · EA · GW

In most cases where I am actually familiar with the facts, CEA has behaved very poorly. They have both been way too harsh on good actors and failed to take sufficient action against bad actors (e.g. Kathy Forth). They did handle some very obvious cases reasonably, though (Diego). I don't claim I would do a way better job, but I don't trust CEA to make these judgments.

Comment by sapphire (deluks917) on Critiques of EA that I want to read · 2022-06-28T06:09:37.587Z · EA · GW

There are multiple examples of EA orgs behaving badly that I can't really discuss in public. The community really does not ask for much 'openness'.

Comment by sapphire (deluks917) on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-09T16:40:25.059Z · EA · GW

The story is more complicated, but I can't really get into it in public. Since you work at Rethink you can maybe get the story from Peter. I've maybe suggested too simplistic a narrative before. But you should chat with Peter or Marcus about what happened with Rethink and EA funding.

Comment by sapphire (deluks917) on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-09T14:05:12.046Z · EA · GW

https://forum.effectivealtruism.org/posts/3c8dLtNyMzS9WkfgA/what-are-some-high-ev-but-failed-ea-projects?commentId=7htva3Xc9snLSvAkB 

"Few people know that we tried to start something pretty similar to Rethink Priorities in 2016 (our actual founding was in 2018). We (Marcus and me, the RP co-founders, plus some others) did some initial work but failed to get sustained funding and traction so we gave up for >1 year before trying again. Given that RP -2018 seems to have turned out to be quite successful, I think RP-2016 could be an example of a failed project?"

Seems somewhat misleading to leave this out.

Comment by sapphire (deluks917) on The Strange Shortage of Moral Optimizers · 2022-06-08T20:57:10.399Z · EA · GW

DXE Bay is not very decentralized. It's run by the five people in 'Core Leadership'. The leadership is elected democratically. Though there is a bit of complexity, since Wayne is influential but not formally part of the leadership.

Leadership being replaced over time is not something to lament. I would strongly prefer more, uhhhh, 'churn' in EA's leadership. I endorse the current leadership quite a bit and am glad that several previous 'Core' members lost their elections.

note: I haven't been very involved in DXE since I left California. It's really quite concentrated in the Bay.

Comment by sapphire (deluks917) on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-08T19:20:53.234Z · EA · GW

If I had to guess, I would predict Luke is more careful than various other EA leaders (mostly because of Luke's ties to Eliezer). But you can look at the observed behavior of OpenPhil/80K/etc, and I don't think they are behaving as carefully as I would endorse with respect to the most dangerous possible topic (besides maybe gain-of-function research, which EA would not fund). It doesn't make sense to write leadership a blank check. But it also doesn't make sense to worry about the 'unilateralist's curse' when deciding if you should buy your friend a laptop!

Comment by sapphire (deluks917) on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-08T15:43:38.413Z · EA · GW

This level of support for centralization and deferral is really unusual. I actually don't know of any community besides EA that endorses it. I'm aware it's a common position in effective altruism. But the arguments for it haven't been worked out in detail anywhere I know of.

"Keep in mind that many things you might want to fund are in scope of an existing fund, including even small grants for things like laptops. You can just recommend they apply to these funds. If they don't get any money, I'd guess there were better options you would have missed but should have funded first. You may also be unaware of ways it would backfire, and the reason something doesn't get funded is because others judge it to be net negative." 

I genuinely don't think there is any evidence (besides some theory-crafting around the unilateralist's curse) that this level of second-guessing yourself and deferring is effective. Please keep in mind the history of the EA Funds. Several funds basically never disbursed their money, and the fund managers explicitly said they didn't have time. Of course things can improve, but this level of deferral is really extreme given the community's history.

Suffice to say, I don't think further centralizing resources is good, nor is making things more bureaucratic. I'm also not sure there is actually very much risk of the 'unilateralist's curse' unless you are being extremely careless. I trust most EAs to be at least as careful as the leadership. Probably the most dangerous thing you could possibly fund is AI capabilities. OpenPhil gave $30M to OpenAI, and the community has been pretty accepting of AI capabilities work. This is way more dangerous than anything I would consider funding!

Comment by sapphire (deluks917) on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-08T06:12:59.193Z · EA · GW

That doesn't really engage with the argument. If some other agent is values-aligned and approximately equally capable, why would you keep all the resources? It doesn't really make sense to value 'you being you' so much.

I don't find donor lotteries compelling. I think resources in EA are way too concentrated. 'Deeper investigations' is not enough compensation for making power imbalances even worse.

Comment by sapphire (deluks917) on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-08T04:19:06.427Z · EA · GW

I think the Aumann/outside-view argument for 'giving friends money' is very strong. Imagine your friend is about as capable and altruistic as you. But you have way more money. It just seems rational and efficient to make the distribution of resources more even? This argument does not at all endorse giving semi-random people money.

Comment by sapphire (deluks917) on Moral Weights of Six Animals, Considering Viewpoint Uncertainty - Seeds of Science call for reviewers · 2022-05-26T19:20:42.332Z · EA · GW

Where is the article?

Comment by sapphire (deluks917) on Solving the replication crisis (FTX proposal) · 2022-04-26T03:38:39.653Z · EA · GW

What was the approximate budget? When I read this my first thought was 'did they ask for a super ton of money and get rejected on that basis'?

Comment by sapphire (deluks917) on Some thoughts on vegetarianism and veganism · 2022-03-01T05:52:44.198Z · EA · GW

Effective altruists talk a lot about cooperation, but I actually think it's sort of pathologically uncooperative to eat meat. It seems pretty hard to dismiss the arguments for veganism. Lots of provably well-informed EAs (in the sense that they could score well on a test of EA knowledge) are going to settle on 'don't personally participate in enormous moral outrages'. Why would you personally make the problem worse? And make the social situation worse for vegan EAs? It's a very serious breach of solidarity. (Though maybe using the term solidarity outs me as a leftie.)

Comment by sapphire (deluks917) on Agrippa's Shortform · 2022-02-25T16:57:21.771Z · EA · GW

As a recent counterpoint to some collaborationist messages: https://forum.effectivealtruism.org/posts/KoWW2cc6HezbeDmYE/greg_colbourn-s-shortform?commentId=Cus6idrdtH548XSKZ

"It was disappointing to see that in this recent report by CSET, the default (mainstream) assumption that continued progress in AI capabilities is important was never questioned. Indeed, AI alignment/safety/x-risk is not mentioned once, and all the policy recommendations are to do with accelerating/maintaining the growth of AI capabilities! This coming from an org that OpenPhil has given over $50M to set up."

Comment by sapphire (deluks917) on Agrippa's Shortform · 2022-02-24T05:42:11.443Z · EA · GW

I don't get it. https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang?commentId=o58cMKKjGp87dzTgx 

I won't associate with people doing serious capabilities research.

Comment by sapphire (deluks917) on FTX EA Fellowships · 2021-10-25T06:38:09.567Z · EA · GW

Why is the stipend 10K? How are these numbers chosen? Funds are not exactly tight on this scale. I understand that EA wants to filter for dedicated people. But I feel like these really low pay/stipends should be justified a little more explicitly. Wouldn't it make sense to offer more money, since moving to the Bahamas for 6 months is not exactly a low-cost decision for many people? [I am aware the EA hotel gets people despite a lack of very generous stipends]

Comment by sapphire (deluks917) on Starting a Small Charity to Give Grants · 2021-10-04T12:42:49.835Z · EA · GW

I am aware but the benefit is still quite large.

Comment by sapphire (deluks917) on Starting a Small Charity to Give Grants · 2021-10-04T05:41:27.763Z · EA · GW

Oh wow. This was super informative. Thanks so much.

Comment by sapphire (deluks917) on Starting a Small Charity to Give Grants · 2021-10-03T22:03:41.233Z · EA · GW

This would have been way better than holding everything in my individual account. But it doesn't let you make 'grants' to individuals. We need something like a smaller version of EA Funds.

Comment by sapphire (deluks917) on What would you do if you had half a million dollars? · 2021-07-21T05:07:27.871Z · EA · GW

I would give it, unrestricted, to individual EAs you trust.

Comment by sapphire (deluks917) on Why should we be effective in our altruism? · 2021-05-31T03:07:20.148Z · EA · GW

When I was small I needed help. Instead, I was treated very badly. Many people and animals need help right now. We have to help as many of them as possible. 

Comment by deluks917 on [deleted post] 2021-05-18T00:13:46.398Z

Earlier in our relationship, I told my wife that we should legally marry other people so they could move to the USA. She is usually quite open-minded but she very much hated the plan so we never did it. 

I am very big on living out your values. If you are a citizen of a highly desired country you can help make open borders a reality. I encourage you to consider this in who you legally marry. This is especially relevant if you are poly. There are a lot of versions that differ quite a bit in terms of risk. 

Good luck.

Comment by sapphire (deluks917) on Vitalik Buterin just donated $54M (in ETH) to GiveWell · 2021-05-17T02:52:04.126Z · EA · GW

Groups are not public. Here is an example from 'EAs in crypto'.  The original thread was in 'highly speculative EA investing'. The EAs in crypto thread got the most engagement.

note: Anthony Deluca is my deadname (I wasn't out when Greg wrote this comment; he didn't deadname me). Greg is a well-known EA.

Comment by sapphire (deluks917) on Vitalik Buterin just donated $54M (in ETH) to GiveWell · 2021-05-16T06:18:47.059Z · EA · GW

Multiple people connected to the lesswrong/ea investing groups tried to contact him. We both contacted him directly and got some people closer to Vitalik to talk to him. I am unsure how much influence we had. He donated less than two days after the facebook threads went up.

We definitely tried!

Comment by sapphire (deluks917) on Being Vocal About What Works · 2021-05-11T21:24:05.228Z · EA · GW

I am not sure Effective Altruism has been a net hedonic positive for me. In fact, I think it has not been. 

Recently, in order to save money to donate more, I chose to live in very cheap housing in California. This resulted in many serious problems. Looking back, arguably the biggest problem was the noise. If you cram a bunch of people into a house, it's going to be noisy. This very badly affected my mental health. There were other issues as well. My wife and I could have afforded a much more expensive place. That would have been money very well spent. I was really quite miserable.

During the 2017 crypto bull run, I held a decent amount of ETH. Pretty close to the top I gave away half, since I felt like I had hit a huge windfall. Of course, ETH crashed to around $87 from a high of $1,400. So I ended up not as rich as I thought. It didn't help that I handled the bear market poorly. Maybe it was good that I donated the ETH instead of selling it for far less. But maybe I would have handled the bear market better had I kept more ETH or cashed some out for myself.

In the end, things went fine for me. But the decision to donate so much at the top really haunted me for years. Of course, I did not donate 10%. A 10% donation threshold would mean donating 10% of the ETH I cashed out (potentially zero dollars); until you sell, you don't have any taxable income. I have again donated all the crypto I cashed out. But this time I have donated a much smaller percentage of my bankroll.

I am also quite terrified of the singularity. It has not been easy for me to deal with the 'singularity is near' arguments I hear in the rationality and EA communities.

Of course, I think my involvement with EA has been positive for the world. In addition to donations, I gave some money to some poorer friends. They certainly appreciated it. But effective altruism has not been an easy road.

Comment by sapphire (deluks917) on Should you do a PhD in science? · 2021-05-09T01:09:04.695Z · EA · GW

It is hard for me to think of much advice that has gone worse for rationalists/EAs on average than 'get a PhD'. I know dozens of people in the community who spent at least some time in a PhD program. A huge number of them express strong regret. A small number of people think their PhD went OK. Very few people think their PhD was a great and effective idea. Notably, I am only counting people who have already left grad school and had some time to reflect.

The track record is incredibly bad in the community. The opportunity cost is extremely high. I very strongly urge people to reconsider. 

Another angle is that it is just a very unhealthy environment on average. Here is Ben Kuhn explaining the data:

First I looked for a bigger survey of graduate student depression and anxiety rates. It wasn’t too hard to find one, and the numbers were almost the same: 41% of graduate students had “moderate to severe” anxiety compared to 6% of the general population; 39% had moderate to severe depression compared to 6% of the general public.

Comment by sapphire (deluks917) on Sapphire's Shortform · 2021-05-08T18:05:04.664Z · EA · GW

I don't like when animal advocates are too confident about their approach and are critical of other advocates. We are losing badly; meat consumption is still skyrocketing! Now is the time to be humble and open-minded. Meta-advice: don't be too critical of the critical either!

Comment by sapphire (deluks917) on What does failure look like? · 2021-04-10T02:57:01.156Z · EA · GW

My biggest mistake was not buying, and holding, crypto early. This was an extremely costly mistake. If I bought and held I would have hundreds of millions of dollars that could have been given as grants. I doubt I will ever make such a costly mistake again.

Going to graduate school was a very bad decision too. After 2.5 years I had to take my L and get out. It was very painful to admit I had been wrong but that is life.

Comment by sapphire (deluks917) on Mundane trouble with EV / utility · 2021-04-04T20:36:44.305Z · EA · GW

The problem is real. Though for 'normal' low probabilities I suggest biting the bullet. A practical example is the question of whether to found a company. If you found a startup you will probably fail and make very little or no money. However, right now a majority of effective altruist funding comes from Facebook co-founder Dustin Moskovitz. The tails are very thick.

If you have a high-risk plan with a sufficiently large reward, I suggest going for it even if you are overwhelmingly likely to fail. Taking the risk is the most altruistic thing you can do. Most effective altruists are unwilling to take on the personal risk.
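To make the thick-tails point concrete, here is a toy expected-value sketch. Every number in it (success probability, payout, salary donations) is hypothetical, chosen only to show the structure of the argument:

```python
# Toy EV comparison: fat-tailed founder path vs. steady earning-to-give.
# All numbers are made up for illustration.
p_big_success = 0.01             # assumed chance the startup hits big
donation_if_success = 100e6      # assumed founder-scale windfall donated
ev_startup = p_big_success * donation_if_success  # failure contributes ~0

steady_donation_per_year = 50e3  # assumed donations from a safe job
years = 10
ev_safe_job = steady_donation_per_year * years

print(f"EV of founder path:  ${ev_startup:,.0f}")   # $1,000,000
print(f"EV of safe-job path: ${ev_safe_job:,.0f}")  # $500,000
```

On these made-up inputs the 1% tail dominates, which is the shape of the claim above; in reality the inputs are deeply uncertain.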

Comment by sapphire (deluks917) on What Makes Outreach to Progressives Hard · 2021-03-21T20:02:32.918Z · EA · GW

Really cool to learn about Resource Generation. These fellows are hardcore. I promote the following to EA-type people:
-- Donate at least 10% of pre-tax income (I am above this)
-- Be as frugal as you can. Certainly don't spend more than could be supported by the median income in your city. 
-- Once you have at least ~500K net worth, give away all additional income. In my opinion, 500K is enough to fund a lean retirement if you are willing to accept a little risk (see the quick arithmetic after this list).

--If you get a big windfall, I suggest either putting it in a trust or just earmarking it for charity instead of immediately donating the whole thing; your cause prioritization may change. (I regret how I donated a big windfall during the first crypto bull market.)
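As a quick sanity check on the ~500K figure, here is the standard safe-withdrawal-rate arithmetic (the 3-4% rule of thumb is my framing, not something the comment spells out):

```python
# Lean-retirement arithmetic via the common 3-4% safe-withdrawal heuristic.
# The rates are a rule of thumb, not a guarantee.
net_worth = 500_000

for rate in (0.03, 0.04):
    print(f"At {rate:.0%} withdrawal: ${net_worth * rate:,.0f}/year")
# At 3% withdrawal: $15,000/year
# At 4% withdrawal: $20,000/year
```

That is roughly lean-retirement territory in much of the US, consistent with 'willing to accept a little risk'.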

I don't think people should have to work if they don't want to, so I think it's reasonable to 'save yourself'. But don't strive for too much security, and keep your spending lean. I was objectively raised in a far-from-top-10% household and have not received much money from my parents. For example, they contributed zero dollars to my college. But anyone who is able to 'speedrun to 500K while donating' (or even seriously consider it) must be very privileged somehow.

If you actually take my advice seriously it is quite strict. But RG seems a lot more hardcore than that. 

Comment by sapphire (deluks917) on What Makes Outreach to Progressives Hard · 2021-03-14T22:19:45.321Z · EA · GW

From this perspective, a corporate lawyer who went to Harvard is not a class traitor. They are just acting in their own class interests.

Comment by sapphire (deluks917) on What Makes Outreach to Progressives Hard · 2021-03-14T06:31:06.184Z · EA · GW

I think of the intersectionality/social justice/anti-oppression cluster as being a bit more specific than just 'progressive' so I will only discuss the specific cluster. Through activism, I met many people in this cluster. I myself am quite sympathetic to the ideology. 

But I have to ask: how do you hold this ideology while attending Harvard Law? From this perspective, Harvard Law is a seat of the existing oppressive power structure, and you are choosing to become part of this power structure by attending. The privileges that come from attending Harvard Law are enormous. Harvard Law graduates earn extremely high salaries (even the starting salaries are high) and often end up with very high net worths. Harvard Law is also obviously strongly connected to many parts of the neoliberal capitalist system.

From a certain perspective, being a leftist at Harvard Law can be viewed as trying to become some sort of 'class traitor' to the neoliberal elite. This does not seem like the obvious thing to do from a leftist perspective. Much leftist analysis would suggest that it's much more likely you just end up part of the neoliberal power structure instead of subverting it.

In your experience how do these people resolve the contradiction?

Comment by sapphire (deluks917) on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-07T17:17:16.941Z · EA · GW

Parasitic wasps may be the most diverse group of animals (beating out beetles). In some environments, a shocking fraction of prey insects are parasitized.

If you value 'life', you should probably keep humans around so we can spread life beyond earth. The expected amount of life in the galaxy seems much higher if humans stick around. IMO the other logical position is 'blow up the sun': don't just take out the humans, take out the wasps too. The earth is full of really horrible suffering, and if the humans die out then wasp parasitism will probably go on for hundreds of millions of additional years.

Of course, humans literally spread parasitic wasps as a form of 'natural' pest control, so maybe the life spread by humans will be unusually terrible? I suppose one could hold that 'life on earth is net good, but life specifically spread by humans will be net bad'. It is worth noting humans might create huge amounts of digital life. Robin Hanson's 'Age of Em' makes me wonder about their quality of life.

Just killing all humans but leaving the rest of the biosphere intact seems like it's 'threading the needle'. Maybe you can clarify more what you are valuing specifically.

Of course, don't do anything crazy. Give the absurdity heuristic a little respect.

Comment by sapphire (deluks917) on Money Can't (Easily) Buy Talent · 2021-01-24T03:27:42.346Z · EA · GW

"The 99th percentile probably isn't good enough either." If you more than 99th percentile talented maybe you can give yourself a chance to earn a huge amount of money if you are willing to take on risk. Wealth is extremely fat-tailed so this seems potentially worthwhile.

If Dustin had not been a Facebook co-founder, EA would have something like one-third of its current funding. Sam Bankman-Fried strikes me as quite talented. He originally worked at Jane Street and quit to work at a major EA org. Instead, he ended up founding the crypto exchange FTX. FTX is now valued at around a billion dollars. I am quite happy he decided against 'direct work'.

It seems difficult but not impossible to replace top talent with multiple less talented people at many EA jobs (for example charity evaluation). It seems basically impossible to replace a talented cofounder with a less talented one without decimating your odds of success. However, it is plausible that top talent should directly work on AI issues.

It is also important to note most people are not 'top talent' and so they need to follow different advice.

Comment by sapphire (deluks917) on Careers Questions Open Thread · 2020-12-14T03:57:14.298Z · EA · GW

You should take the quant role imo. Optionality is valuable (though not infinitely so). Quant trading gives you vastly more optionality. If trading goes well but you leave the field after five years, you will have still gained a large amount of experience and donated/saved a large amount of capital. It's not unrealistic to try for 500K donated and 500K+ saved in that timeframe, especially since firms think you are unusually talented. If you have five hundred thousand dollars, or more, saved you are no longer very constrained by finances. Five hundred thousand dollars is enough to stochastically save over a hundred lives (at GiveWell-style estimates of a few thousand dollars per life saved). There are several high-impact EA orgs with a budget of around a million dollars a year (Rethink Priorities comes to mind). If trading goes very well you could personally fund such an org.

How are you going to feel if you decide to do the PhD and after five years you decide that it was not the best path? You will have left approximately a million dollars and a huge amount of earning potential on the table. You could have been free to work for no compensation if you wanted. You would have been able to bankroll a medium-sized project if you had kept trading.

There are a lot of ways to massively regret turning down the quant job. It is plausible that the situation is so dire that you need to drop other paths and work on AI safety right now. But you need to be confident in a very detailed world model to justify giving up so much optionality. There are a lot of theories on how to do the most good. Stay upstream.

Comment by sapphire (deluks917) on Georgia on my Mind: Effectively Flipping the Senate · 2020-11-08T07:21:32.000Z · EA · GW

There are many plausible EA reasons to prefer that the Democrats have control of the Senate. One example is that if the Republicans have a majority, it is extremely unlikely we will get a serious climate bill. Of course, even if the Democrats have 50 seats and the tie-breaker, they may not get rid of the filibuster and will have serious trouble maintaining 100% of their caucus. But the expected value still seems high to many EAs.

Political races do not tend to be very neglected. And the EA community is not the only group of people who are very interested in winning these seats. However, it is unclear to me what the most effective way to influence the races is. The candidates are likely going to be flooded with donations. Who should people donate to? Is there any plausible case that some EAs should go to Georgia? As I understand it, the deadline for registering new voters is Dec 7th.

This is definitely a 'speculative' cause but it is empirically on the mind of many EAs I know.

Comment by sapphire (deluks917) on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-17T03:53:15.022Z · EA · GW

I am a rather strong proponent of publishing credible accusations and calling out community leadership if they engage in abuse-enabling behavior. I published a long post on Abuse in the Rationality/EA Community. I also publicly disclosed details of a smaller incident. People have a right to know what they are getting into. If community processes are not taking abuse seriously in the absence of public pressure, then information has to be made public. Though anyone doing this should be careful.

Several people are discussing allegations of DXE being abusive and/or a cult. I joined in early 2020. I have not personally observed or heard any credible accusations of abusive or abuse-enabling behavior by the leadership of DXE during the time I have been a member. It is hard for me to know what happened in 2016 or 2017.

Given my history in the rationality community, you should trust that if I had evidence I could post about systematic abuse within DXE, I would post it. Even if I did not have the consent of victims to share evidence, I would still publicly state that I knew of abuse. I will note it is highly plausible DXE is acting badly behind closed doors. If this becomes clear to me I will certainly let people know.

(This is explicitly not a claim that there is no evidence I find concerning. But I think you should be quite critical of most organizations and keep your eyes open for signs of abusive behavior.)

Comment by sapphire (deluks917) on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-23T18:14:40.859Z · EA · GW

Good point that Open Phil makes all donations public. I found a CSV on their site and added up the donations dated 2018/2019/2020.

2018: $190,477,938

2019: $273,279,362

2020 so far: $145,405,362

This is a really useful answer.
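For anyone who wants to reproduce the tally, here is a minimal sketch of the summing step. The filename and the column names ("Date", "Amount") are assumptions; check the actual export from Open Phil's site:

```python
import pandas as pd

# Load Open Phil's public grants CSV (filename and columns assumed).
df = pd.read_csv("openphil_grants.csv")

# Clean dollar strings ("$1,234,567" -> 1234567.0) and extract the year.
df["Amount"] = (
    df["Amount"].astype(str).str.replace(r"[$,]", "", regex=True).astype(float)
)
df["Year"] = pd.to_datetime(df["Date"]).dt.year

# Sum grant dollars for the years of interest.
totals = df[df["Year"].isin([2018, 2019, 2020])].groupby("Year")["Amount"].sum()
print(totals)
```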

Comment by sapphire (deluks917) on What is the financial size of the Effective Altruism movement? · 2020-08-31T05:06:35.363Z · EA · GW

https://www.givewell.org/about/impact is something I already found.

Comment by sapphire (deluks917) on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-07T03:36:00.463Z · EA · GW

I am a member of DXE and have interacted with Wayne. I think if you care about animals, the QALY gains would be massive. In general, Wayne has always seemed like a careful, if overly optimistic, thinker to me. He always tries to follow good leadership practices. Even if you are not concerned with animal welfare, I think Wayne would be very effective at advancing good policies.

Wayne being mayor would result in huge improvements for climate change policy. Having a city with a genuine green policy is worth a lot of QALYs. My only real complaint about Wayne is that he is too optimistic, but that isn't the most serious issue for a mayor.

Comment by sapphire (deluks917) on Replaceability Concerns and Possible Responses · 2020-08-02T22:01:39.172Z · EA · GW

Why do you think orgs labelled 'effective altruist' get so much talent applying but those orgs don't? How big do you think the difference is? I am somewhat informed about the job market in animal advocacy. It does not seem nearly as competitive as the EA market. But I am not sure of the magnitude of the difference for the purposes of the replaceability analysis.

Comment by sapphire (deluks917) on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-17T14:38:48.365Z · EA · GW

Really good article. I have been critical of 80,000 Hours in the past, but this article caused me to substantially update my views. I am happy to hear you will be at 80,000 Hours.

Comment by sapphire (deluks917) on What to do with people? · 2019-03-06T19:52:53.613Z · EA · GW

I think we are pretty far from exhausting all the good giving opportunities. And even if all the highly effective charities are filled, something like GiveDirectly can be scaled up. It is possible in the future we will eventually get to the point where there are so few people in poverty that cash transfers are ineffective. But if that happens there is nothing to be sad about. The marginal value of donations will go down as more money flows into EA. That is an argument for giving more now. A future where marginal EA donations are ineffective is a very good future.

Comment by sapphire (deluks917) on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-05T23:04:52.701Z · EA · GW

Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement's current bottlenecks?)

Promoting donations or Earning to Give seems fine. I think we should stop promoting 'EA is talent constrained'. There is a sense in which EA is 'talent constrained'. But the current messaging around 'EA is talent constrained' consistently misleads people, even very informed people such as the OP and some of the experts who gave him advice. On the other hand, EA can certainly absorb much more money. Many smaller orgs are certainly funding constrained. And at a minimum, people can donate to GiveDirectly if the other giving opportunities are filled.

Comment by sapphire (deluks917) on Simultaneous Shortage and Oversupply · 2019-01-26T20:46:45.420Z · EA · GW

At least some people at OpenAI are making a ton of money: https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html. Of course not everyone is making that much, but I doubt salaries at OpenAI/DeepMind are low. I think the obvious explanation is the best one: these companies want to hire top talent, and top talent is hard to find.

The situation is different for organizations that cannot afford high salaries. Let me link to Nate's explanation from three years ago:

I want to push back a bit against point #1 ("Let's divide problems into 'funding constrained' and 'talent constrained'.) In my experience recruiting for MIRI, these constraints are tightly intertwined. To hire talent, you need money (and to get money, you often need results, which requires talent). I think the "are they funding constrained or talent constrained?" model is incorrect, and potentially harmful. In the case of MIRI, imagine we're trying to hire a world-class researcher for $50k/year, and can't find one. Are we talent constrained, or funding constrained? (Our actual researcher salaries are higher than this, but they weren't last year, and they still aren't anywhere near competitive with industry rates.)
Furthermore, there are all sorts of things I could be doing to loosen the talent bottleneck, but only if I knew the money was going to be there. I could be setting up a researcher stewardship program, having seminars run at Berkeley and Stanford, and hiring dedicated recruiting-focused researchers who know the technical work very well and spend a lot of time practicing getting people excited -- but I can only do this if I know we're going to have the money to sustain that program alongside our core research team, and if I know we're going to have the money to make hires. If we reliably bring in only enough funding to sustain modest growth, I'm going to have a very hard time breaking the talent constraint.
And that's ignoring the opportunity costs of being under-funded, which I think are substantial. For example, at MIRI there are numerous additional programs we could be setting up, such as a visiting professor + postdoc program, or a separate team that is dedicated to working closely with all the major industry leaders, or a dedicated team that's taking a different research approach, or any number of other projects that I'd be able to start if I knew the funding would appear. All those things would lead to new and different job openings, letting us draw from a wider pool of talented people (rather than the hyper-narrow pool we currently draw from), and so this too would loosen the talent constraint -- but again, only if the funding was there. Right now, we have more trouble finding top-notch math talent excited about our approach to technical AI alignment problems than we have raising money, but don't let this fool you -- the talent constraint would be much, much easier to address with more money, and there are many things we aren't doing (for lack of funding) that I think would be high impact.

source: https://forum.effectivealtruism.org/posts/k6bBgWFdHH5hgt9RF/peter-hurford-thinks-that-a-large-proportion-of-people#DvKfX3iN5Z8kuaFs7

Comment by sapphire (deluks917) on Earning to Save (Give 1%, Save 10%) · 2018-12-12T05:22:27.812Z · EA · GW

Great comment. Thanks for the detailed explanation. This was especially useful for me to understand your model:

Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy.
I think this is quite achievable for the median EA.

Comment by sapphire (deluks917) on Earning to Save (Give 1%, Save 10%) · 2018-11-30T00:30:08.809Z · EA · GW

I feel like this post illustrates large inferential gaps. In my experience, trying to work in EA works out for a rather small number of people. I certainly don't recommend it. Let me quote something I posted on the 80,000 Hours thread:

80K Hours' advice seems aimed, perhaps implicitly, at extremely talented people. I would roughly describe the level of success/talent as 'top half of Oxford'. If you do not have that level of ability, then the recommended career paths are going to be long shots at best. Most people are not realistically capable of getting a job at Jane Street (I am certainly not). It is also very hard to get a job at a well-regarded EA organization.
Unless someone has a very good track record of success I would advise them not to follow 80K-style advice. Trying to get a 'high impact job' has led to failure for every rationalist I know who was not 'top half of Oxford' talented. In some cases they made it to the 'work sample' stage or got an internship, but they still failed to land a job. Many of these rationalists are well regarded and considered quite intelligent. These people are fairly talented and in many cases make low six figures.
80K is very depressing to read. Making 'only' 200K and donating 60K a year is implicitly treated like a failure. We at least need advice for people who are 'only' Google-programmer levels of talented. And ideally we need advice for EAs of all skill levels. But the fact that our standard advice is not even applicable to 'normal Google programmer' levels of talent is extremely depressing.

Maybe there are talent constraints but they don't seem to me like talent constraints that are satisfied by pushing more EAs into trying to work in EA. I think that mostly works if you are unusually talented or extremely dedicated and 'agenty'. I do think you can probably find a way to work on an EA cause if you are willing to accept low wages and hustle.

EA is really not set up to handle an influx of people trying to work in the field. Maybe this is a crux?

Comment by sapphire (deluks917) on Earning to Save (Give 1%, Save 10%) · 2018-11-29T16:21:44.335Z · EA · GW

I feel like your post would be harder to misunderstand if it included some hard numbers. In particular, hard numbers on income.