Comment by deluks917 on Simultaneous Shortage and Oversupply · 2019-01-26T20:46:45.420Z · score: 18 (7 votes) · EA · GW

At least some people at OpenAI are making a ton of money: https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html. Of course not everyone is making that much, but I doubt salaries at OpenAI/DeepMind are low. I think the obvious explanation is the best one: these companies want to hire top talent, and top talent is hard to find.

The situation is different for organizations that cannot afford high salaries. Let me link to Nate's explanation from three years ago:

I want to push back a bit against point #1 ("Let's divide problems into 'funding constrained' and 'talent constrained'."). In my experience recruiting for MIRI, these constraints are tightly intertwined. To hire talent, you need money (and to get money, you often need results, which requires talent). I think the "are they funding constrained or talent constrained?" model is incorrect, and potentially harmful. In the case of MIRI, imagine we're trying to hire a world-class researcher for $50k/year, and can't find one. Are we talent constrained, or funding constrained? (Our actual researcher salaries are higher than this, but they weren't last year, and they still aren't anywhere near competitive with industry rates.)
Furthermore, there are all sorts of things I could be doing to loosen the talent bottleneck, but only if I knew the money was going to be there. I could be setting up a researcher stewardship program, having seminars run at Berkeley and Stanford, and hiring dedicated recruiting-focused researchers who know the technical work very well and spend a lot of time practicing getting people excited -- but I can only do this if I know we're going to have the money to sustain that program alongside our core research team, and if I know we're going to have the money to make hires. If we reliably bring in only enough funding to sustain modest growth, I'm going to have a very hard time breaking the talent constraint.
And that's ignoring the opportunity costs of being under-funded, which I think are substantial. For example, at MIRI there are numerous additional programs we could be setting up, such as a visiting professor + postdoc program, or a separate team that is dedicated to working closely with all the major industry leaders, or a dedicated team that's taking a different research approach, or any number of other projects that I'd be able to start if I knew the funding would appear. All those things would lead to new and different job openings, letting us draw from a wider pool of talented people (rather than the hyper-narrow pool we currently draw from), and so this too would loosen the talent constraint -- but again, only if the funding was there. Right now, we have more trouble finding top-notch math talent excited about our approach to technical AI alignment problems than we have raising money, but don't let this fool you -- the talent constraint would be much, much easier to address with more money, and there are many things we aren't doing (for lack of funding) that I think would be high impact.

source: https://forum.effectivealtruism.org/posts/k6bBgWFdHH5hgt9RF/peter-hurford-thinks-that-a-large-proportion-of-people#DvKfX3iN5Z8kuaFs7

Comment by deluks917 on Earning to Save (Give 1%, Save 10%) · 2018-12-12T05:22:27.812Z · score: 2 (2 votes) · EA · GW

Great comment. Thanks for the detailed explanation. This was especially useful for me in understanding your model:

Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy.
I think this is quite achievable for the median EA.
Comment by deluks917 on Earning to Save (Give 1%, Save 10%) · 2018-11-30T00:30:08.809Z · score: 14 (5 votes) · EA · GW

I feel like this post illustrates large inferential gaps. In my experience, trying to work in EA works out for only a rather small number of people. I certainly don't recommend it. Let me quote something I posted on the 80K Hours thread:

80K Hours' advice seems aimed, perhaps implicitly, at extremely talented people. I would roughly describe the level of success/talent as 'top half of Oxford'. If you do not have that level of ability, then the recommended career paths are going to be long shots at best. Most people are not realistically capable of getting a job at Jane Street (I am certainly not). It is also very hard to get a job at a well regarded EA organization.

Unless someone has a very good track record of success I would advise them not to follow 80K style advice. Trying to get a 'high impact job' has led to failure for every rationalist I know who was not 'top half of Oxford' talented. In some cases they made it to the 'work sample' stage or got an internship, but they still failed to land a job. Many of these rationalists are well regarded and considered quite intelligent. These people are fairly talented and in many cases make low six figures.

80K is very depressing to read. Making 'only' 200K and donating 60K a year is implicitly treated like a failure. We at least need advice for people who are 'only' Google-programmer levels of talented. And ideally we need advice for EAs of all skill levels. But the fact that our standard advice is not even applicable to 'normal Google programmer' levels of talent is extremely depressing.

Maybe there are talent constraints, but they don't seem to me like constraints that are satisfied by pushing more EAs into trying to work in EA. I think that mostly works if you are unusually talented or extremely dedicated and 'agenty'. I do think you can probably find a way to work on an EA cause if you are willing to accept low wages and hustle.

EA is really not set up to handle an influx of people trying to work in the field. Maybe this is a crux?

Comment by deluks917 on Earning to Save (Give 1%, Save 10%) · 2018-11-29T16:21:44.335Z · score: 1 (1 votes) · EA · GW

I feel like your post would be harder to misunderstand if it included some hard numbers. In particular, hard numbers on income.

Comment by deluks917 on Earning to Save (Give 1%, Save 10%) · 2018-11-29T13:58:28.198Z · score: 28 (8 votes) · EA · GW

I feel like you are generalizing from a small sample of very dedicated EAs. In my opinion, the data does not support the claim that 'EAs have often prioritized giving 10% and living frugally *too* heavily'. See the data here: https://forum.effectivealtruism.org/posts/S2ypk8fsHFrQopvyo/ea-survey-2017-series-donation-data.

The median donation percentage among EAs who reported 10K+ income was only 4.28%. The following example you give is not typical: 'For example, I've heard from some of the early Australian EAs that when EA was just starting out they all lived illegally in a hallway and ate out of the garbage. That was probably not good for their productivity or their physical or mental health.'

My post involved a specific salary number for NYC. And the claim you quote gives a specific condition: 'make at least as much as median local household income'. Conversely, Ray's post has no numbers in it at all. You will also notice that in the concrete budget I posted, I included 200 dollars/week in consumption. My stated advice does not support living off garbage.

I can support more nuanced advice that tells young or very dedicated EAs not to harm themselves in order to donate 10%. But I think most EAs should actually donate more. So I am pretty skeptical of advice that suggests donating less unless it comes with appropriate concrete caveats. And I really do think the caveats need to be concrete. It's very easy to implicitly treat luxuries as necessary. Several people I talked to seemed skeptical that it was possible to find rent in NYC for 1K (I was able to quickly point out places).

Comment by deluks917 on Earning to Save (Give 1%, Save 10%) · 2018-11-27T10:30:41.109Z · score: 9 (5 votes) · EA · GW

55K is, rather surprisingly, more than the median household income in NYC. 46K is 9K less than 55K. And the hypothetical person making 55K was only donating + saving 11K a year. Though I still think that if you are making 46K, you could afford to donate and save substantially more than the suggested 10% saved / 1% of discretionary given.

The bigger crux is that I want to push back on the idea that the average individual making more than the local median household income, and living in one of the richest societies on the planet, cannot afford to be generous.

Comment by deluks917 on Earning to Save (Give 1%, Save 10%) · 2018-11-27T02:03:41.031Z · score: 7 (3 votes) · EA · GW

I feel like these numbers are way too low for general advice aimed at EAs/rationalists. You don't give any threshold at which you should shift to loftier goals. If things are going reasonably well economically, you should be able to save 10% and donate 10% of your gross income. Let me give an example that demonstrates approximately how much you need to make in NYC to hit 10%/10% of gross income.

A 55K salary in NYC translates to about $2,970/month take-home after taxes and bare-bones healthcare (you should also expect to get a tax refund). This ~$3K monthly take-home figure is based on an actual person, not theory. If you want to save/donate 20%, you can use a monthly budget of:

Rent/Util - $1130

Donate + Save - $920

Unlimited Metro Card - $120

Living Expenses / Leisure - $800

$800 is $200 per week. It does not pay for fancy vacations, but it's a fine amount to buy food and clothes and to go to a bar with friends. I personally spend less than $200/week on misc expenses. I understand not everyone can get a 55K+ job, and not everyone can afford to skimp on healthcare. But this budget assumes you live in NYC, and making 55K in NYC is fairly reasonable for many rationalists. If you live in a cheaper area your income may be lower, but so is the share you spend on rent. One should not feel bad if they legitimately cannot hit 10/10. But it is achievable for many people with relatively normal rationalist salaries.
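For anyone who wants to check the arithmetic, here is a minimal sketch in Python. All figures are the illustrative numbers from this comment; the take-home amount is the reported figure for one actual person, not the output of a tax model:

```python
# Sanity check of the illustrative NYC budget above.
# The take-home figure is as reported, not computed from a tax model.
gross_annual = 55_000
take_home_monthly = 2_970  # after taxes + bare-bones healthcare

budget = {
    "rent/utilities": 1_130,
    "donate + save": 920,
    "unlimited MetroCard": 120,
    "living expenses / leisure": 800,
}

# The budget lines should exactly exhaust the monthly take-home pay.
assert sum(budget.values()) == take_home_monthly

donate_save_annual = budget["donate + save"] * 12
print(f"Donate + save: ${donate_save_annual:,}/yr "
      f"({donate_save_annual / gross_annual:.1%} of gross)")
# Donate + save: $11,040/yr (20.1% of gross)
```

So $920/month works out to $11,040/year, which is slightly over the 20%-of-gross (10% saved + 10% donated) target.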

Caveats:

This logic may not apply if you have dependents.

Of course choosing to donate less than 10% is different from being unable to do so. I can certainly understand prioritizing savings over donations if you are not especially financially secure.

Comment by deluks917 on Towards Better EA Career Advice · 2018-11-21T17:40:42.972Z · score: 21 (14 votes) · EA · GW

80K Hours' advice seems aimed, perhaps implicitly, at extremely talented people. I would roughly describe the level of success/talent as 'top half of Oxford'. If you do not have that level of ability, then the recommended career paths are going to be long shots at best. Most people are not realistically capable of getting a job at Jane Street (I am certainly not). It is also very hard to get a job at a well regarded EA organization.

Unless someone has a very good track record of success I would advise them not to follow 80K style advice. Trying to get a 'high impact job' has led to failure for every rationalist I know who was not 'top half of Oxford' talented. In some cases they made it to the 'work sample' stage or got an internship, but they still failed to land a job. Many of these rationalists are well regarded and considered quite intelligent. These people are fairly talented and in many cases make low six figures.

80K is very depressing to read. Making 'only' 200K and donating 60K a year is implicitly treated like a failure. We at least need advice for people who are 'only' Google-programmer levels of talented. And ideally we need advice for EAs of all skill levels. But the fact that our standard advice is not even applicable to 'normal Google programmer' levels of talent is extremely depressing.

Comment by deluks917 on EA Hotel with free accommodation and board for two years · 2018-06-04T19:21:26.790Z · score: 9 (11 votes) · EA · GW

Is this real life? This is mindblowingly cool. I wish I had the option to study here when I was younger.

Comment by deluks917 on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T20:47:15.424Z · score: 10 (14 votes) · EA · GW

You made an extremely long list of suggestions. Implementing such a huge list would mean radically overhauling the EA community. Is that a good idea?

I think it's important to keep in mind that the EA community has been tremendously successful. GiveWell and OpenPhil now funnel large amounts of money towards effective global poverty reduction efforts. EA has also made substantial progress at raising awareness of AI risk and promoting animal welfare. There are now many student groups in universities around the world. EA has achieved these things in a rather short timeframe.

It's rather rare for a group to have comparable success to the current EA community. Hence I think it's very dangerous to overhaul our community and its norms. We are doing very well. We could be doing better, but we are doing well. Making changes to the culture of a high-performance organization is likely to reduce performance. So I think you should be very careful about which changes you suggest.

In addition to being long, your list of changes contains many rather speculative suggestions. Here are some examples:

-- You explicitly say we should be more welcoming towards things like "dog rescue". Does this not risk diluting EA into just another ineffective community?

-- You say that using the term "AI" without explanation is too much jargon. Is that really a reasonable standard? AI is not an obscure term. If you want us to avoid the term "AI", your standards of accessibility seem rather extreme.

-- You claim we should focus on making altruistic people effective instead of effective people altruistic. However, Toby Ord claims he initially had the same intuition, but his experience is that the latter is actually easier. How many of your intuitions are you checking empirically? (This has been mentioned by other commenters.)

In general I think you should focus on a much smaller list of core suggestions. It is easier to argue rigorously for a more conservative set of changes. And as I said earlier, EA is doing quite well, so we should be skeptical of dramatic culture shifts. Obviously we should be open to new norms, but those norms should be vetted carefully.