Comment by joshyou on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-26T15:23:35.448Z · score: 8 (8 votes) · EA · GW

Agreed. I don't see any "poor journalism" in any of the pieces mentioned. A few of them would be "poor intervention reports" if we chose to judge them by that standard.

Comment by joshyou on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T03:30:54.309Z · score: 9 (9 votes) · EA · GW

It's clear that climate change has at most a small probability (well under 10%) of causing human extinction, but many proponents of working on other x-risks like nuclear war and AI safety would probably give low probabilities of human extinction for those risks as well. I think the positive feedback scenarios you mention (permafrost, wetlands, and ocean hydrates) deserve some attention from an x-risk perspective because they seem to be poorly understood, so the upper bound on how severe they might be may be very high. You cite one simulation in which burning all available fossil fuels would increase temperatures by 10°C, but that isn't necessarily an upper bound, because there are non-fossil-fuel sources of carbon on Earth that could be released into the atmosphere. It would of course also be necessary to estimate how high the extinction risk would be conditional on various levels of extreme warming (8°C, 10°C, 15°C, 20°C?).
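To make that last point concrete, here is a minimal back-of-envelope sketch of how the overall risk decomposes, treating the warming levels as disjoint peak-warming bins; every number in it is a made-up placeholder, not an estimate:

```python
# Decomposing climate extinction risk as a sum over peak-warming scenarios of
# P(reach this warming level) * P(extinction | this warming level).
# All probabilities below are made-up placeholders, not estimates.
scenarios = {
    # peak warming (deg C): (P(reach this level), P(extinction | this level))
    8:  (1e-2, 1e-4),
    10: (1e-3, 1e-3),
    15: (1e-4, 1e-2),
    20: (1e-5, 1e-1),
}

total_risk = sum(p_level * p_ext for p_level, p_ext in scenarios.values())
print(f"Illustrative total extinction risk: {total_risk:.1e}")
```

The point of the decomposition is just that both factors matter: poorly understood feedbacks push up the P(warming) terms, while the conditional-extinction terms are the part that casual "climate change will kill us all" claims tend to skip.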

Regardless, it's a good idea to have a clear view of how big the risk is. You're right that the casual claims about extinction or planetary uninhabitability I hear from many people who are concerned about climate change are not justified, and they seem a bit irresponsible.

Comment by joshyou on How should large donors coordinate with small donors? · 2019-01-10T03:48:24.082Z · score: 4 (3 votes) · EA · GW

Holden also wrote (by the way, I think your link is broken):

We fully funded things we thought were much better than the "last dollar" (including certain top charities grants) but not things we thought were relatively close when they also posed coordination issues. For this case, fully funding top charities would have had pros and cons relative to splitting: we think the dollars we spent would've done slightly more good, but the dollars spent by others would've done less good (and we think we have a good sense of the counterfactual for most of those dollars). We guessed that the latter outweighed the former.

So an important crux here is what proportion of small-donor money to e.g. GiveWell charities would be crowded out into much less effective charities, as opposed to other effective charities or new projects with high expected value. For reference, GiveWell has moved about $30-40 million a year in small donations. I am not sure what proportion of that comes from people who are not closely aligned/affiliated with the EA community, but I would guess it's the majority.

I would question whether Holden is correct, though. Global health/development is a big space, so if Good Ventures increased funding to GiveWell top charities by a lot, GiveWell would still exist and would move its recommendations over to interventions that aren't fully funded yet. For example, cash transfers could seemingly absorb a lot of money, and the Gates Foundation probably moves more to global poverty causes every year than Good Ventures will spend per year at its peak. The claim seems to depend on small GiveWell donors being excited by GiveWell's specific top charities right now, such that if the current top charities were fully funded and GiveWell issued new recommendations, those donors would not follow the new recommendations and would instead give to charities even less effective than the new top charities. That might be true if donors are really motivated by the headline cost-per-life-saved number rather than by GiveWell's research and methodology. I don't have a very strong intuition either way, so I'd be curious if someone more knowledgeable could shed some light.
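As a minimal sketch of the tradeoff Holden describes, the question is whether the gain from Good Ventures' dollars moving up to the top charities outweighs the loss from small-donor dollars being displaced to less effective uses. Every number below is an illustrative placeholder, not an estimate of anyone's actual view:

```python
# Illustrative placeholders only, not estimates.
small_donor_money = 35e6      # rough annual small-donor money moved by GiveWell
top_value = 1.0               # value per dollar at current top charities (normalized)
last_dollar_value = 0.9       # value per dollar of Good Ventures' "last dollar"
displaced_value = 0.5         # value per dollar of where crowded-out donations end up
crowded_out_fraction = 0.6    # share of small-donor money displaced to worse options

# If Good Ventures fully funds the top charities instead of splitting:
gain_from_gv_dollars = small_donor_money * (top_value - last_dollar_value)
loss_from_displaced_dollars = (small_donor_money * crowded_out_fraction
                               * (top_value - displaced_value))

print(f"Gain from Good Ventures' dollars:  {gain_from_gv_dollars:,.0f} units")
print(f"Loss from displaced small donors:  {loss_from_displaced_dollars:,.0f} units")
```

Under these placeholders the displacement loss dominates, which is the direction of Holden's guess; my point above is essentially that the crowded-out fraction and the gap between top charities and the displaced destinations might be much smaller if small donors would simply follow GiveWell's new recommendations.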

Comment by joshyou on EA orgs are trying to fundraise ~$10m - $16m · 2019-01-06T14:56:09.321Z · score: 32 (16 votes) · EA · GW

If we're using these numbers to inform whether EA is funding constrained, it would be good if someone followed up and figured out how much these organizations actually ended up raising.

Comment by joshyou on Challenges in Scaling EA Organizations · 2018-12-21T23:32:41.091Z · score: 3 (3 votes) · EA · GW

One thing I've wondered about is what the optimal rate is at which new EA organizations should be founded, and whether founding new organizations is an effective way around growth bottlenecks. For example, Rethink Priorities has grown rapidly this year, and it doesn't seem likely that that growth would have happened within previously existing organizations had Rethink Priorities not been founded.

Comment by joshyou on Animal Welfare Fund AMA · 2018-12-20T01:29:09.665Z · score: 6 (5 votes) · EA · GW

This fund has seemingly taken a very "hits-based" approach to funding small, international grassroots organizations. How do you plan on evaluating and learning from these grants?

Comment by joshyou on Long-Term Future Fund AMA · 2018-12-20T01:14:56.644Z · score: 15 (7 votes) · EA · GW

This post contains an extensive discussion of the difficulty of evaluating AI charities, since they do not share all of their work due to info hazards (see the "Openness" section as well as the MIRI review). Will you have access to work that is not shared with the general public, and how will you approach evaluating research that is not shared with you or not shared with the public?

Comment by joshyou on Long-Term Future Fund AMA · 2018-12-20T01:11:00.715Z · score: 14 (6 votes) · EA · GW

Under what conditions would you consider making a grant directed towards catastrophic risks other than artificial intelligence?

Comment by joshyou on New web app for calibration training funded by the Open Philanthropy Project · 2018-12-17T05:16:31.475Z · score: 1 (1 votes) · EA · GW

Vague or context-less questions might help you calibrate your views on topics you know very little about?

I am now somewhat better calibrated at claims about European football than I was before, I guess.

Comment by joshyou on 2017 Donor Lottery Report · 2018-11-16T00:30:52.661Z · score: 9 (3 votes) · EA · GW

This is a great writeup and also a good demonstration of the value of donor lotteries. Is CEA planning on running another one anytime soon? Their lotteries page just says "There are currently no active lotteries". I think the lottery experiments have gone well and this should be a regular thing, unless running a lottery consumes a lot of staff time or has some other large cost.

Comment by joshyou on One for the World as a potential vehicle to expand the reach of Effective Altruism · 2018-08-02T02:10:56.397Z · score: 4 (4 votes) · EA · GW

On the flip side, maybe it's a good idea for 1FTW to maintain some distance from the EA community/EA as a concept. If they specialize in promoting effective giving to global poverty to people who are unlikely to embrace EA as a whole, that might be a good way to avoid competing with existing EA outreach.

Comment by joshyou on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T23:11:47.647Z · score: 17 (12 votes) · EA · GW

It seems that Nick has not been able to leverage his position as EA fund manager to outperform his Open Phil grants (or at least meaningfully distinguish his EA fund grants from his Open Phil grants). This means that we can think of donating to the far future and community funds as having similar cost-effectiveness to individual donations to Open Phil earmarked for those causes. This seems like a problem, since the best individual donations should be able to outperform Open Phil, at least when you account for the benefits of not centralizing donations on too few decisionmakers. I don't see anyone calling for Open Phil to accept/solicit money from small donors.

The case for finding another manager seems pretty strong. EA Funds is a fundamentally sound idea: we should be trying to consolidate donation decisions somewhat to take advantage of different levels of expertise and to save small donors' time and mental energy. But the current arrangement doesn't seem like the best way to do that.

Comment by joshyou on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-23T01:35:40.967Z · score: 3 (3 votes) · EA · GW

Lewis announced another round of grants for the Animal Welfare fund on Facebook on June 26, though it's not clear when exactly the grants were paid out or will be paid out. The Animal Welfare fund page has not been updated with this information. This seems surprising since Lewis has already written up an explanation of the grant; it just isn't on the website yet.

Comment by joshyou on Announcing the Effective Altruism Handbook, 2nd edition · 2018-05-03T04:33:37.800Z · score: 2 (4 votes) · EA · GW

Doing Good Better is more accessible and spends a lot more time introducing and defending the basic idea of EA instead of branching out into more advanced ideas. It is also much more focused on global poverty.

Comment by joshyou on How to improve EA Funds · 2018-05-01T05:00:39.367Z · score: 1 (1 votes) · EA · GW

I just noticed that Nick posted updates on the Community Fund and Far Future Fund pages (it's the same update on both pages) on 4/24. I'm commenting here for visibility since I have not seen these updates advertised anywhere.

Comment by joshyou on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T16:27:43.867Z · score: 3 (3 votes) · EA · GW

Support for a cause area isn't bias; that's just having an opinion. Your argument would imply that ACE is biased because it is run by animal activists, or that GiveWell is biased because it advocates for reducing global poverty. These groups aren't necessarily an authority when you're deciding between cause areas, of course. But in deciding which organization is most effective within a given cause area, the "trusted experts" are almost always going to be advocates for that cause area.

More generally, you keep trying to frame your points as politically neutral "meta" considerations, but it definitely feels like you have an axe to grind against the activist left, and that this motivates a lot of what you're saying.

Comment by joshyou on Charity Science Effective Legacies · 2016-12-30T18:34:19.483Z · score: 2 (4 votes) · EA · GW

The title seems a little... harsh.

Comment by joshyou on We Must Reassess What Makes a Charity Effective · 2016-12-24T15:59:53.672Z · score: 4 (6 votes) · EA · GW

These are pretty generic, unoriginal arguments against developing-world charity. I think you should do more research on how these arguments apply to GiveWell charities and engage with the existing arguments GiveWell has made for why its charities are cost-effective. Local mosquito-net industries are clearly not such an important driver of economic growth that harm to them would outweigh the benefit of large reductions in malaria. The second point is just a quote about a bad charity methodology, with almost no explanation of why GiveWell charities do what Easterly criticizes. The third point is just wrong: GiveDirectly gives one-time cash transfers to individuals, not ongoing aid.

Comment by joshyou on Contra the Giving What We Can pledge · 2016-12-05T02:25:48.763Z · score: 1 (1 votes) · EA · GW

I'm still pretty confused about why you think donating 10% has to be time-consuming. People who outsource their donation decisions to, say, GiveWell might only spend a few hours a year (or a few minutes, depending on how literally we interpret "outsourcing") deciding where to donate.

Comment by joshyou on Ethical offsetting is antithetical to EA · 2016-01-06T02:50:11.552Z · score: 0 (0 votes) · EA · GW

"And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them."

I think it would be helpful to distinguish between the claims (1) "given that one has imposed some harm, one is obligated to offset it" and (2) "any imposition of harm is justified if it is offset." This article argues against the first claim, while Scott argues that the second one seems false. It seems pretty easy to imagine someone accepting (1) and rejecting (2), and I'd be pretty skeptical of a causal connection between promoting (1) and more people believing in (2). The reverse seems just as (un)likely: "hey, if I don't have to offset my harms, maybe causing harm doesn't really matter to begin with."

Comment by joshyou on Ideas for new experimental EA projects you could fund! · 2014-12-03T00:40:54.026Z · score: 1 (1 votes) · EA · GW

Hire a full or part-time Personal Assistant for Prof Nick Bostrom

Is there a reason this couldn't be done with FHI funding? If FHI believed that this was the best use of an additional [however much it takes to hire an assistant], then an unrestricted donation of that amount would make it happen. If not, it's much less clear that this would be a good idea.

Comment by joshyou on The new GiveWell recommendations are out: here's a summary of the charities · 2014-12-01T18:41:55.294Z · score: 3 (3 votes) · EA · GW

DtWI has a relatively small funding gap of $1.3 million.

Comment by joshyou on Introduce Yourself · 2014-09-18T18:41:12.125Z · score: 9 (9 votes) · EA · GW

Hi, I'm Josh. I'm a sophomore in college and I'm tentatively planning on EtG through programming. I have been donating to CEA for movement-building purposes, but may switch to ACE and/or ACE-recommended charities in the near future. I became an EA after being heavily exposed to moral philosophy (esp. utilitarianism) through doing debate in high school.

When I'm not doing school work I enjoy playing video games, reading philosophy, working out, and programming.

Comment by joshyou on Open Thread · 2014-09-16T02:26:04.135Z · score: 1 (1 votes) · EA · GW

William MacAskill makes a few good points here about why EA does not rely on utilitarianism. It's true that a lot of EAs are utilitarians, but I've seen plenty of discussion of normative ethics in EA circles, so I wouldn't describe it as silent unanimity.