Comment by jimrandomh on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-28T00:25:25.222Z · score: 15 (6 votes) · EA · GW

Post-mortem donation is fine, but being asked to sign up for kidney donation would be severely trust-destroying for me.

Comment by jimrandomh on EA is vetting-constrained · 2019-03-09T02:34:31.994Z · score: 22 (10 votes) · EA · GW

This happens to posts by accounts which have never posted before; established accounts (at least one post or comment) don't have to wait. This was instituted on both LW and EA Forum because of a steady stream of bot-generated spam.

Comment by jimrandomh on Bounty: Guide To Switching From Farmed Fish To Wild-Caught Fish · 2019-03-04T04:43:22.340Z · score: 4 (3 votes) · EA · GW

That doesn't seem especially relevant to the question of whether first-world consumers should buy farmed or wild-caught fish; the amount caught from fisheries is set by regulations, not by demand, so consumer demand does not, on the margin, increase or decrease overfishing.

Comment by jimrandomh on Bounty: Guide To Switching From Farmed Fish To Wild-Caught Fish · 2019-02-23T03:13:28.503Z · score: 25 (14 votes) · EA · GW

I doubt this makes a difference. Most of the market treats farmed and wild-caught fish as close substitutes, the supply of wild-caught fish is inelastic, and the supply of farmed fish is highly elastic. So if you switch from farmed to wild-caught fish, you are probably affecting market prices in a way which causes one other person to make the opposite change.
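To make the displacement mechanism concrete, here is a toy model; the specific numbers and the linear demand form are illustrative assumptions, not estimates. Wild-caught supply is fixed by quota, farmed supply is perfectly elastic, and one consumer's switch to wild-caught raises the wild price just enough to push one other consumer back to farmed:

```python
# Toy model of the substitution argument. All parameters are made up
# for illustration; only the qualitative conclusion matters.
WILD_SUPPLY = 100.0  # wild-caught quantity, fixed by regulation (quota)

def others_wild_demand(price):
    """Everyone else's demand for wild-caught fish (assumed linear form)."""
    return 150.0 - 10.0 * price

def equilibrium_price(my_demand):
    """Solve others_wild_demand(p) + my_demand == WILD_SUPPLY for p."""
    return (150.0 + my_demand - WILD_SUPPLY) / 10.0

p_before = equilibrium_price(0.0)  # you buy farmed fish
p_after = equilibrium_price(1.0)   # you switch one unit to wild-caught

others_before = others_wild_demand(p_before)  # 100 units
others_after = others_wild_demand(p_after)    # 99 units

# Total wild consumption is pinned at WILD_SUPPLY either way; your
# one-unit switch displaced exactly one unit of someone else's
# wild-fish consumption onto (elastically supplied) farmed fish.
```

Under these assumptions the net effect on fish welfare is zero: the quota binds, so every wild-caught fish you buy is a wild-caught fish someone else doesn't.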

Comment by jimrandomh on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-26T08:28:08.582Z · score: 5 (2 votes) · EA · GW

There are three additional premises required here. The first is that your own use of funds from investments must be significantly better than that of other shareholders of the companies you invest in. The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding. The third is that the growth potential of AI companies isn't already priced in, in a way that reduces your expected returns to be no better than index funds.

The first of these premises is probably true. The second is probably false. The third is definitely false.

Comment by jimrandomh on [deleted post] 2019-01-01T23:42:02.408Z

Test

Comment by jimrandomh on Burnout: What is it and how to Treat it. · 2018-11-08T00:25:47.113Z · score: 4 (3 votes) · EA · GW

I based this mainly on a combination of a model and personal experience/self-experimentation, but hadn't previously looked for data to quantify it. I've significantly downgraded my confidence in the correct quantity of extra food to eat being meal-sized, but am uncertain since none of the studies measure quite the thing I care about.

This study measured energy expenditure as a result of an all-nighter, in subjects whose food intake was controlled (ie not allowed to eat extra), and found that

Missing one night of sleep had a metabolic cost of ∼562 ± 8.6 kJ (∼134 ± 2.1 kcals) over 24 h, which equates to a ∼7% higher 24 h EE

This (134kcal) is smaller than I was expecting; on the other hand, not being able to eat extra calories puts a pretty sharp limit on ability to spend extra calories. From a different angle, this paper measured sleep and wake energy expenditure and found a ratio of 1.67:1 (in nonobese controls), which would imply that converting sleep hours to wake hours would increase TDEE by ~15%. A study which measured next-day intake rather than metabolic expenditure found a 22% increase; but it's possible subjects overcompensated by eating more extra than they had expended.
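The ~15% figure from the 1.67:1 ratio can be checked with a quick back-of-envelope calculation, assuming an 8-hour sleep schedule as the baseline:

```python
# Back-of-envelope check: if waking energy expenditure is 1.67x sleeping
# EE (the nonobese-control ratio from the cited paper), how much does
# converting 8 h of sleep into wakefulness raise total daily expenditure?
SLEEP_RATE = 1.0   # arbitrary energy units per hour asleep
WAKE_RATE = 1.67   # per hour awake, relative to sleep

baseline = 8 * SLEEP_RATE + 16 * WAKE_RATE  # normal night: 34.72 units
all_nighter = 24 * WAKE_RATE                # all sleep hours converted: 40.08

increase = (all_nighter - baseline) / baseline
print(f"{increase:.1%}")  # prints "15.4%"
```

So the ~15% TDEE figure follows directly from the measured ratio, given the assumed 8h/16h split.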

Comment by jimrandomh on Burnout: What is it and how to Treat it. · 2018-11-07T19:45:53.776Z · score: 2 (2 votes) · EA · GW

Nutrition problems tend to disguise themselves as other kinds of stress; being hungry makes people emotionally brittle, which creates a thousand red herrings when you're trying to figure out what's wrong.

Comment by jimrandomh on Burnout: What is it and how to Treat it. · 2018-11-07T19:36:46.376Z · score: 4 (4 votes) · EA · GW

Nutrition! Skipping meals or eating substandard meals pushes people into burnout-like symptoms fast (no pun intended). For an organization, that means making sure lunch is built into people's daily schedule in such a way that it won't get skipped under pressure. An all-nighter or a day of physical activity requires an extra meal to make up for the energy expenditure; most people don't realize this, which amplifies the detrimental effects.

Comment by jimrandomh on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-23T05:31:51.188Z · score: 10 (10 votes) · EA · GW

A review of the OpenPhil grants database shows that since the EA Funds were founded, Nick Beckstead was the grant investigator for OpenPhil grants in both of these areas that were larger than either of these funds; for example $2.7M to CEA and $3.75M to MIRI. These are good grants, and there are more good ones in the grants database.

When the EA Funds were first announced, I wrote:

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck.

Basically, it looks like Nick Beckstead is in charge of two sources of funding designated for the same cause areas, EA Funds are the smaller of the two sources, and they aren't really needed.

Comment by jimrandomh on EA Hotel with free accommodation and board for two years · 2018-06-06T03:45:58.896Z · score: 4 (4 votes) · EA · GW

And if it's the latter, it's unclear to me why this idea would be better than just funding poor EAs directly and letting them decide where to live

That would cost much more per person. With that cost would come an expectation of filtering and grant proposals, which would keep out a lot of people who might otherwise use this to do good things.

Comment by jimrandomh on [deleted post] 2018-01-13T23:01:35.433Z

In this model, what is the probability that the initiative (which I see is modeled as costing $6-39M) is successful? Or is it assumed that in the case where it isn't going to succeed, the cost is limited to the cost of polling ($50-300k)?

Comment by jimrandomh on Introducing the EA Funds · 2017-02-10T18:32:02.585Z · score: 4 (4 votes) · EA · GW

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck. One of the major constraints on OpenPhil's giving has been wanting charities to have diverse sources of funding; this appears to reduce funding diversity, by converting donations from individual small donors into donations from OpenPhil. What reason do donors have to think they aren't just crowding out donations from OpenPhil's main fund?

Comment by jimrandomh on Introducing the EA Funds · 2017-02-09T00:55:03.040Z · score: 7 (7 votes) · EA · GW

What will be these funds' policy on rolling funds over from year to year, if the donations a fund gets exceed the funding gaps the managers are aware of?

(This seems particularly important for funds whose managers are also involved with OpenPhil, given that OpenPhil did not spend its entire budget last year.)

Comment by jimrandomh on Anonymous EA comments · 2017-02-08T01:03:43.928Z · score: 2 (2 votes) · EA · GW

The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling.

I'm confused by the bit about this not being reflected in organizations' public faces? Early in 2016 OpenPhil announced they would be making AI risk a major priority.

Comment by jimrandomh on Proposed methodology for leafleting study · 2017-02-06T17:09:48.880Z · score: 5 (5 votes) · EA · GW

The questions about diet before and after the change seem to push people strongly into claiming to be, or to have been, some sort of vegetarian; the only option there that isn't somehow anti-meat is "Other", which requires typing.

A better version of this question would have a no-dietary-restrictions option first, and a few options that aren't animal-welfare related like "low carb" and "Mediterranean".

Comment by jimrandomh on Proposed methodology for leafleting study · 2017-02-06T17:05:34.484Z · score: 5 (5 votes) · EA · GW

Statistics nitpick: I believe you should be using a two-sided test, as it is also possible for leafleting to reduce the rate of people going vegetarian if the leaflets alienate people somehow.
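As a concrete sketch of what the two-sided version looks like, here is a minimal stdlib-only two-proportion z-test; the counts are hypothetical, and a real analysis would more likely use an exact test or an off-the-shelf statistics library:

```python
from math import sqrt, erf

def two_sided_two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions, e.g. the
    vegetarian-conversion rate in leafleted vs. control groups."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via the error function.
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
    # Two-sided p-value: probability of |Z| >= |z| under the null.
    return z, 2 * (1 - phi(abs(z)))

# Hypothetical data: 30/1000 went vegetarian with leaflets, 20/1000 without.
z, p = two_sided_two_proportion_z(30, 1000, 20, 1000)
```

The two-sided p-value counts deviations in either direction, so it is symmetric in the two groups: a leaflet that alienates people shows up just as readily as one that persuades them.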

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-25T21:03:12.562Z · score: 7 (7 votes) · EA · GW

using EA to justify their belief in technology as the supreme power and discredit spirituality.

Huh? I am genuinely confused as to what you mean by that.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-25T18:16:16.675Z · score: 4 (6 votes) · EA · GW

Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?

The issue in this case is not that he's in the EA community, but that he's trying to act as the EA community's representative to people outside the community who are not well placed to make that judgment themselves.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-25T17:49:38.086Z · score: 2 (2 votes) · EA · GW

Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, whose birth and growth are attributable in significant part to SingInst and CFAR projects.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T17:13:16.127Z · score: 8 (8 votes) · EA · GW

You're right, I missed that. I'll edit the parent post to fix the error.

(Given the history, I'm curious to find out what "reviewed the script and provided a high-resolution copy of their logo" means, and in particular whether they saw the entire script, and therefore knew they were being featured next to InIn, or whether they only reviewed the portion that was about themselves.)

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T17:11:55.134Z · score: 9 (9 votes) · EA · GW

Gleb, Intentional Insights board meeting, 9/21/16 at 22:05:

"We certainly are an EA meta-charity. We promote effective giving, broadly. We will just do less activities that will try to influence the EA movement itself. This would include things like writing articles for the EA forum about how to do more effective marketing. We will still do some of that, but to a lesser extent because people are right now triggered about Intentional Insights. There's a personalization of hostility associated with Intentional Insights, so we want to decrease some of our visibility in central EA forums, while still doing effective altruism. We are still an effective altruist meta-charity. So focusing more on promoting effective giving to a broad audience."

(https://www.youtube.com/watch?v=WbBqQzM7Rto)

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T16:54:24.738Z · score: 2 (6 votes) · EA · GW

The problem is that Gleb is manufacturing false affiliations in the eyes of outsiders, and outsiders who only briefly glance at lengthy, polite documents like this one are unlikely to realize that's what's happening.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T16:51:49.671Z · score: 5 (9 votes) · EA · GW

EDIT: Comment here was about a video by InIn, where I incorrectly speculated that they might've misused trademarks to signal affiliation with several other EA orgs. At least one of those orgs has confirmed that they did review the video prior to publication, so in fact there was not an issue. I apologize; it was wrong to speculate about that when it wasn't true, and without adequately investigating first.

Comment by jimrandomh on Ask MIRI Anything (AMA) · 2016-10-12T02:48:29.696Z · score: 8 (8 votes) · EA · GW

In 2013, MIRI announced it was shifting to do less outreach and more research. How has that shift worked out, and what's the current balance between these two priorities?

Comment by jimrandomh on GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics · 2016-05-23T03:59:22.162Z · score: 0 (0 votes) · EA · GW

For example, last year when Stanford Effective Altruism was considering making donations to charity, we preferred the Schistosomiasis Control Initiative over AMF because we believed that GiveWell gave too much significance to the “GiveWell view” of population ethics and not enough to the total view.

I'm confused about how the differences between SCI and AMF connect to population ethics. Neither charity seems like it would have obvious effects on the birth rate. Both schistosomiasis and malaria do harm through a mix of killing people and lowering their subsequent quality of life, but I guess it's a different mix, and the demographics of the people affected are different? It would help a lot to lay out specifically what those differences are.

Comment by jimrandomh on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-19T18:17:46.580Z · score: 5 (11 votes) · EA · GW

Ok, I admit I didn't think to check there. Arguing the semantics about what "currently spends" means would be pointless, and I recognize that this remark was in the context of estimating how MIRI's future budget would be affected, but I do think that in the context of a discussion about evaluating past performance, it's important not to anchor people's expectations on a budget they don't have yet.

Comment by jimrandomh on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-19T17:16:15.761Z · score: 2 (6 votes) · EA · GW

MIRI currently spends around $2 million dollars a year - including some highly skilled labour that is probably underpriced

Their 2014 financials on https://intelligence.org/transparency/ say their total expenditures in 2014 were $948k. Their 2015 financials aren't up yet, and I think they did expand in 2015, but I don't think you can claim this unremarked. This is not a neutral error; if you make them look twice as big as they are, then you also make them look half as efficient.

Comment by jimrandomh on I am Nate Soares, AMA! · 2015-06-11T22:15:35.412Z · score: 1 (1 votes) · EA · GW

There are different inputs needed to advance AI safety: money, research talent, executive talent, and others. How do you see the tradeoff between these resources, and which seems most like a priority right now?