Posts

"Neglectedness" is a potentially confusing simplification of true impact 2022-08-18T17:58:23.432Z

Comments

Comment by JoshYou on I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him? · 2022-09-29T16:22:16.930Z · EA · GW

What makes someone good at AI safety work? How does he get feedback on whether his work is useful, makes sense, etc.?

Comment by JoshYou on It’s not effective to call everything effective and how (not) to name a new organisation · 2022-09-15T23:18:04.590Z · EA · GW

see also

Comment by JoshYou on Earn To Give $1M/year or Work Directly? · 2022-08-29T16:28:11.165Z · EA · GW

For the big-buck EtGers, what sort of donation percentages is this advice assuming? I imagine that if you're making $1M and even considering direct work then you're giving >>10% (>50%?) but I'm not sure.

Comment by JoshYou on Grantees: how do you structure your finances & career? · 2022-08-05T02:07:13.799Z · EA · GW

I actually have no idea how people do this either; curious to see answers!

Also, the questions seem to assume that grantees don't have another (permanent, if not full-time) job. I'm not sure how common that is.

Comment by JoshYou on Reducing nightmares as a cause area · 2022-07-19T13:07:27.346Z · EA · GW

Melatonin supplements can increase the vividness of dreams, which seems counterproductive here. But maybe there is a drug with the opposite effect?

Comment by JoshYou on What is the top concept that all EAs should understand? · 2022-07-06T01:22:23.247Z · EA · GW

The margin/marginal value.  

Anyone trying to think about how to do the most good will be very quickly and deeply confused if they aren't thinking at the margin. E.g. "if everyone buys bednets, what happens to the economy?" 
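
To make that concrete, here's an illustrative formalization (my framing, not anything from the original question): with diminishing returns, the value of an extra dollar is the derivative of impact at the current funding level, not the average impact per dollar so far.

```latex
% Illustrative: I(x) = total impact of an intervention at funding level x,
% with diminishing returns (I'(x) > 0, I''(x) < 0).
\[
\underbrace{I'(x_0)}_{\text{value of your marginal dollar}}
\;\neq\;
\underbrace{\frac{I(x_0)}{x_0}}_{\text{average impact per dollar so far}}
\]
% A small donor comparing options A and B should give to A iff
% I_A'(x_A) > I_B'(x_B), regardless of which has the higher average.
```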

Comment by JoshYou on The dangers of high salaries within EA organisations · 2022-06-10T16:19:19.183Z · EA · GW

It might help to put some rough numbers on this. Most of the EA org non-technical job postings I have seen recently have been in the $60-120k/year range or so. I don't think those are too high, even at the higher end of that range. But value-alignment concerns (and maybe PR and other considerations) seem like a good reason not to offer, say, $300k or more for non-executive, non-technical roles at EA orgs.

Comment by JoshYou on Jobs at EA-organizations are overpaid, here is why · 2022-06-08T17:40:02.775Z · EA · GW

I think EA orgs generally pay higher salaries than other non-profits, but below market for the EA labor pool (many of whom have software, consulting, etc. as alternatives). I don't think they're anywhere close to "impact value", based on anecdotal reports of how much EA orgs value labor. I believe 80k did a survey on this (Edit: it's here).

Comment by JoshYou on Former high school debaters: Share this VBI EA scholarship with your old team! · 2022-06-03T17:11:15.518Z · EA · GW

Whoa, I used to teach there back in the day. This is cool!

Comment by JoshYou on Some potential lessons from Carrick’s Congressional bid · 2022-05-18T13:20:34.737Z · EA · GW

Fundraising is particularly effective in open primaries, such as this one. From the linked article:

But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.

Basically, said Darrell West, vice president and director of governance studies at the Brookings Institution, advertising is useful for making voters aware that a candidate or an issue exists at all. Once you’ve established that you’re real and that enough people are paying attention to you to give you a decent chunk of money, you reach a point of diminishing returns (i.e., Paul Ryan did not have to spend $13 million to earn his seat). But a congressperson running in a close race, with no incumbent — or someone running for small-potatoes local offices that voters often just skip on the ballot — is probably getting a lot more bang for their buck.

Comment by JoshYou on US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk · 2022-05-06T16:41:06.604Z · EA · GW

Note that large funders such as SBF can and do support political candidates with large donations via PACs, which can advertise on behalf of a candidate but are not allowed to coordinate with them directly. But direct donations are probably substantially more cost-effective than PAC money, because campaigns have more options for how to spend the money (door-knocking, events, etc., not just ads) and it would look bad if a candidate were supported exclusively by PACs.

Comment by JoshYou on Should you try to be a straight-A college student as a utilitarian? · 2022-02-20T15:34:26.807Z · EA · GW

If you're not planning to go to grad school (and maybe even if you are), getting straight As in college probably means a lot of unnecessary effort.

Comment by JoshYou on Where are you donating in 2021, and why? · 2021-12-16T15:03:53.197Z · EA · GW

I gave most of my donations to the EA Funds Donor Lottery because I felt pretty uncertain about where to give. I am still undecided on which cause to prioritize, but I have become fairly concerned about existential risk from AI and I don't think I know enough about the donation opportunities in that space. If I won the lottery, I would then take some more time to research and think about this decision.

I also donated to Wild Animal Initiative and Rethink Priorities because I still want to keep a regular habit of making donation decisions. I think they are the two best organizations working on wild-animal welfare, which is potentially a highly cost-effective cause area because of the very large number of wild animals in existence. I also donated to GiveWell's maximum impact fund.

Comment by JoshYou on What stops you doing more forecasting? · 2021-11-16T01:58:36.037Z · EA · GW

I did Metaculus for a while, but I wasn't quite sure how to assess how well I was doing, and I lost interest. I know the Brier score isn't the greatest metric. Just try to accumulate points?
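
For anyone else wondering how to self-assess, here's a minimal sketch of the Brier score (the standard formula; the example forecasts and baseline comparison are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better; always guessing 0.5 scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: three binary questions, forecast probabilities vs. resolutions.
print(brier_score([0.8, 0.3, 0.9], [1, 0, 1]))  # ~0.047 -- beats the baseline
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25 -- the "no information" baseline
```

One rough self-check is whether your score beats that 0.25 baseline (and, better, beats the community median) on the same set of questions.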

Comment by JoshYou on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T00:22:06.816Z · EA · GW

What does "consequentialist" mean in this context?

Comment by JoshYou on We’re Rethink Priorities. Ask us anything! · 2021-11-15T17:28:02.645Z · EA · GW

A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you've hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?

Comment by JoshYou on What Makes Outreach to Progressives Hard · 2021-03-17T01:39:59.116Z · EA · GW

Longtermism isn't just AI risk, but concern with AI risk is associated with an Elon Musk-technofuturist-technolibertarian-Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.

Comment by JoshYou on Ask Rethink Priorities Anything (AMA) · 2020-12-14T17:18:26.125Z · EA · GW

How's having two executive directors going?

Comment by JoshYou on Ask Rethink Priorities Anything (AMA) · 2020-12-14T17:16:40.318Z · EA · GW

How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?

Comment by JoshYou on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T02:40:10.294Z · EA · GW

My description was based on Buck's correction (I don't have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don't believe. I don't mean to imply anything stronger than what Buck claimed about Leverage.

I invoked white nationalists not as a hypothetical stand-in for ideologies I don't like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage.) I think the neo-reactionary community and its adjacency to rationalist networks are a clear example.

Comment by JoshYou on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T02:34:07.838Z · EA · GW

I also agree that it's ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I'm not talking about conservatives, or the "IDW", or people who don't like the BLM movement or think racism is no big deal. I'd be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (on par with the views in Jonah Bennett's leaked emails; I don't think his defense is credible at all, by the way).

I don't think it's accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they're not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don't think that implies that white nationalists will be virtually non-existent in a space just because the majority are left of center.

Comment by JoshYou on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-08T22:39:18.370Z · EA · GW

We've already seen white nationalists congregate in some EA-adjacent spaces. My impression is that spaces (especially online) that don't moderate away or at least discourage such views tend to attract them; it's not the pattern of activity you'd see if white nationalists randomly bounced around places or people organically arrived at those views. I think this is quite dangerous for epistemic norms, both because white nationalist/supremacist views are very incorrect and deter large swaths of potential participants, and because people with those views routinely argue in bad faith, hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It's also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.

Comment by JoshYou on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-26T21:03:11.936Z · EA · GW

This is essentially the premise of microfinance, right?

Comment by JoshYou on Will Three Gorges Dam Collapse And Kill Millions? · 2020-07-26T15:01:35.638Z · EA · GW

From what I understand, since Three Gorges is a gravity dam (it uses the weight of the dam to hold back water rather than its tensile strength), a failure or collapse would not necessarily be a catastrophic one: if some portion falls, the rest will stay standing. That means there's a distribution of severity within failures/collapses; it's not just a binary outcome.

Comment by JoshYou on Longtermism ⋂ Twitter · 2020-06-16T16:40:01.436Z · EA · GW

To me it feels easier to participate in discussions on Twitter than on (e.g.) the EA Forum, even though you're allowed to post a forum comment with fewer than 280 characters. This makes me a little worried that people feel intimidated about offering "quick takes" here because most comments are pretty long. I think people should feel free to offer feedback more detailed than an upvote/downvote without investing a lot of time in a long comment.

Comment by JoshYou on 80,000 Episode Re: Size of Community · 2020-06-16T15:57:42.325Z · EA · GW

Not from the podcast but here's a talk Rob gave in 2015 about potential arguments against growing the EA community: https://www.youtube.com/watch?v=TH4_ikhAGz0

Comment by JoshYou on Notes on how a recession might impact giving and EA · 2020-03-16T23:12:40.172Z · EA · GW

EAs are probably more likely than the general public to keep money they intend to donate invested in stocks, since that's a pretty common bit of financial advice floating around the community. So the large drop in stock prices in the past few weeks (and possible future drops) may affect EA giving more than giving as a whole.

Comment by JoshYou on AMA: Rob Mather, founder and CEO of the Against Malaria Foundation · 2020-01-27T16:46:32.674Z · EA · GW

How far do you think we are from completely filling the need for malaria nets, and what are the barriers left to achieving that goal?

Comment by JoshYou on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T18:08:09.397Z · EA · GW

What are your high-level goals for improving AI law and policy? And how do you think your work at OpenAI contributes to those goals?

Comment by JoshYou on [Link] A new charity evaluator (NYTimes) · 2019-11-27T06:11:04.596Z · EA · GW

Seems like its mission sits somewhere between GiveWell's and Charity Navigator's. GiveWell studies a few charities to find the very highest-impact ones according to its criteria. Charity Navigator attempts to rate every charity, but does so purely on procedural considerations like overhead. ImpactMatters is much broader and shallower than GiveWell but, unlike Charity Navigator, does try to tell you what actually happens as a result of your donation.

Comment by JoshYou on Has any EA oriented organization tried promoting donors on their social media? · 2019-10-29T16:06:49.664Z · EA · GW

I think I would be more likely to share my donations this way than by posting them myself, because it would feel easier and less braggadocious (I currently don't really advertise my donations).

Comment by JoshYou on How do you, personally, experience "EA motivation"? · 2019-08-18T02:03:47.065Z · EA · GW

Among other things, I feel a sense of pride and accomplishment when I do good, the way I imagine that someone who cares about, say, the size of their house feels when they think about how big their house is.

Comment by JoshYou on Four practices where EAs ought to course-correct · 2019-08-03T22:55:19.760Z · EA · GW

Absolutely, EAs shouldn't be toxic, inaccurate, or uncharitable on Twitter or anywhere else. But I've seen a few people communicate effectively about EA issues on Twitter, such as Julia Galef and Kelsey Piper, at a level of fidelity and niceness far above the average for that website. On the other hand, they are briefer, more flippant, and spend more time responding to critics outside the community than they would on other platforms.

Comment by JoshYou on Four practices where EAs ought to course-correct · 2019-07-30T23:36:10.564Z · EA · GW

Yep, though I think it takes a while to learn how to tweet, whom to follow, and whom to tweet at before you can get a consistently good experience on Twitter and avoid the nastiness and misunderstandings it's infamous for.

There's a bit of an extended universe of Vox writers, economists, and "neoliberals" who are interested in EA and sometimes tweet about it, and I think it would be potentially valuable to add some people who are more knowledgeable about EA into the mix.

Comment by JoshYou on Four practices where EAs ought to course-correct · 2019-07-30T22:50:50.638Z · EA · GW

On point 4, I wonder if more EAs should use Twitter. There are certainly many options to do more "ruthless" communication there, and it might be a good way to spread and popularize ideas. In any case it's a pretty concrete example of where fidelity vs. popularity and niceness vs. aggressive promotion trade off.

Comment by JoshYou on What Do Unconscious Processes in Humans Tell Us About Sentience? · 2019-06-15T15:52:22.070Z · EA · GW

This all seems to assume that there is only one "observer" in the human mind, so that if you don't feel or perceive a process, then that process is not felt or perceived by anyone. Have you ruled out the possibility of sentient subroutines within human minds?

Comment by JoshYou on Is preventing child abuse a plausible Cause X? · 2019-05-06T05:09:23.680Z · EA · GW

Sadly, Jiwoon passed away last year.

Comment by JoshYou on [Question] Pros/Cons of Donor-Advised Fund · 2019-04-22T21:16:27.483Z · EA · GW

Some links if you haven't seen them yet:

https://reducing-suffering.org/advanced-tips-on-personal-finance/

https://80000hours.org/2013/06/how-to-create-a-donor-advised-fund/

I don't use a DAF but I've considered it in the past. In my view, the chief advantage is that they allow you to claim the tax deduction when you deposit money into the DAF, before you actually make the donation. They're also exempt from capital gains taxes, though you can avoid those anyway by donating appreciated assets directly to a charity, provided the organization will accept them (not sure how universal this is). DAFs also charge fees, which can be fairly expensive but are cheaper than capital gains taxes in expectation.
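
A rough back-of-the-envelope version of that last comparison, where every number (asset value, cost basis, tax rate, fee rate, holding period) is an assumption of mine and not from the linked posts:

```python
# Hypothetical: compare selling $10k of appreciated stock and donating cash
# vs. donating the stock via a DAF and granting it out a few years later.
value = 10_000         # current market value of the asset
basis = 4_000          # assumed purchase price (cost basis)
cap_gains_rate = 0.15  # assumed long-term capital gains rate
daf_fee = 0.006        # assumed 0.6%/year DAF administrative fee
years_in_daf = 5       # assumed time until the money is granted out

# Selling first, then donating the cash: you pay capital gains tax.
tax_if_sold = (value - basis) * cap_gains_rate          # $900

# Donating the asset via a DAF: no capital gains tax, but fees accrue
# each year until the money is granted (compounding approximation).
daf_fees = value * (1 - (1 - daf_fee) ** years_in_daf)  # ~$296

print(f"capital gains tax avoided: ${tax_if_sold:,.0f}")
print(f"approx. DAF fees over {years_in_daf} years: ${daf_fees:,.0f}")
```

Under these made-up but plausible numbers, the fees come out well below the tax avoided, which is the sense in which they're "cheaper in expectation"; with a low-gain asset or a very long holding period the comparison could flip.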

Comment by JoshYou on Should EA grantmaking be subject to independent audit? · 2019-04-18T02:47:20.761Z · EA · GW

Open Phil would be a good candidate for this, though auditing it is a difficult proposition due to its sheer size. It is a somewhat odd situation that Open Phil moves huge amounts of money around, much of it without any comment from the EA community.

Comment by JoshYou on Why is the EA Hotel having trouble fundraising? · 2019-03-26T23:42:56.798Z · EA · GW

I wonder if the lack of tax deductibility and the non-conventional fundraising platform (GoFundMe) nudge people into not donating or donating less than they would to a more respectable-seeming charity.

(As a tangent, there's a donation swap opportunity for the EA Hotel that most people are probably not aware of).

Comment by JoshYou on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-03-13T00:07:55.997Z · EA · GW

Speaking as someone with an undergrad degree in math, I would have found a non-technical summary of this post helpful, so I expect that applies even more to many other forum readers.

Comment by JoshYou on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T20:14:10.246Z · EA · GW

For one of the work tests I did for Open Phil, the instruction sheet specifically asked that the work test not be shared with anyone. That might have been intended as a temporary restriction (I'm not sure), but I'm not planning on sharing it unless I hear otherwise.

Comment by JoshYou on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-26T15:23:35.448Z · EA · GW

Agreed. I don't see any "poor journalism" in any of the pieces mentioned. A few of them would be "poor intervention reports" if we chose to judge them by that standard.

Comment by JoshYou on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T03:30:54.309Z · EA · GW

It's clear that climate change has at most a small probability (well under 10%) of causing human extinction, but many proponents of working on other x-risks like nuclear war and AI safety would probably give low probabilities of human extinction for those risks as well. I think the positive feedback scenarios you mention (permafrost, wetlands, and ocean hydrates) deserve some attention from an x-risk perspective because they seem to be poorly understood, so the upper bound on their severity may be very high. You cite one simulation in which burning all available fossil fuels would increase temperatures by 10°C, but that isn't necessarily an upper bound, because there are non-fossil-fuel sources of carbon on Earth that could be released into the atmosphere. It would of course also be necessary to estimate the extinction risk conditional on various levels of extreme warming (8°C, 10°C, 15°C, 20°C?).

Regardless, it's a good idea to have a clear view of how big the risk is. You're right that the casual claims about extinction or planetary uninhabitability I hear from many people who are concerned about climate change are not justified, and they seem a bit irresponsible.

Comment by JoshYou on How should large donors coordinate with small donors? · 2019-01-10T03:48:24.082Z · EA · GW

Holden also wrote (by the way, I think your link is broken):

We fully funded things we thought were much better than the "last dollar" (including certain top charities grants) but not things we thought were relatively close when they also posed coordination issues. For this case, fully funding top charities would have had pros and cons relative to splitting: we think the dollars we spent would've done slightly more good, but the dollars spent by others would've done less good (and we think we have a good sense of the counterfactual for most of those dollars). We guessed that the latter outweighed the former.

So an important crux here is the proportion of small-donor money to e.g. GiveWell charities that would be crowded out into much less effective charities or to new projects with high expected value. For reference, GiveWell has moved about $30-40 million a year in small donations. I am not sure what proportion of that comes from people who are not closely aligned/affiliated with the EA community, but I would guess it's the majority.

I would question whether Holden is correct, though. Global health/development is a big space, so if Good Ventures increased funding to GiveWell top charities by a lot, GiveWell would still exist and would move its recommendations over to interventions that aren't fully funded yet. For example, cash transfers could seemingly absorb a lot of money, and the Gates Foundation probably moves more money to global poverty causes every year than Good Ventures will spend per year at its peak. The claim seems to depend on small GiveWell donors being excited by GiveWell's specific top charities right now, such that if the current top charities were fully funded and GiveWell issued new recommendations, they would instead give to charities even less effective than those new top charities. That might be true if donors are really motivated by the headline cost-per-life-saved number rather than by GiveWell's research and methodology. I don't have a strong intuition either way, so I'd be curious if someone more knowledgeable could shed some light.

Comment by JoshYou on EA orgs are trying to fundraise ~$10m - $16m · 2019-01-06T14:56:09.321Z · EA · GW

If we're using these numbers to inform whether EA is funding constrained, it would be good if someone followed up and figured out how much these organizations actually ended up raising.

Comment by JoshYou on Challenges in Scaling EA Organizations · 2018-12-21T23:32:41.091Z · EA · GW

One thing I've wondered about is the optimal rate at which new EA organizations should be founded, and whether founding new organizations is an effective way around growth bottlenecks. For example, Rethink Priorities has grown rapidly this year, and it doesn't seem likely that that growth would have happened within previously existing organizations had Rethink Priorities not been founded.

Comment by JoshYou on Animal Welfare Fund AMA · 2018-12-20T01:29:09.665Z · EA · GW

This fund has seemingly taken a very "hits-based" approach to funding small, international grassroots organizations. How do you plan on evaluating and learning from these grants?

Comment by JoshYou on Long-Term Future Fund AMA · 2018-12-20T01:14:56.644Z · EA · GW

This post contains an extensive discussion on the difficulty of evaluating AI charities because they do not share all of their work due to info hazards (in the "Openness" section as well as the MIRI review). Will you have access to work that is not shared with the general public, and how will you approach evaluating research that is not shared with you or not shared with the public?

Comment by JoshYou on Long-Term Future Fund AMA · 2018-12-20T01:11:00.715Z · EA · GW

Under what conditions would you consider making a grant directed towards catastrophic risks other than artificial intelligence?