Comments

Comment by jimrandomh on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T20:33:30.816Z · EA · GW

How does XR weigh costs and benefits?
Does XR consider tech progress default-good or default-bad?

The core concept here is differential intellectual progress. Tech progress can be bad if it worsens the ordering of technological developments, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs bad, which, when zoomed in on, reveal disagreements about AI timelines and takeoff forecasts, and about the feasibility of particular AI-safety research directions.

Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there's a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).

On a more gut/emotional level, I would plug my own Petrov Day ritual as attempting to capture the range of it: it's a mixed bag with a lot of positive bits, and some terrifying bits, and the core message is that you're supposed to be thinking about both and not trying to oversimplify things.

What would moral/social progress actually look like?

This seems like a good place to mention Dath Ilan, Eliezer's fictional universe which is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which has some research pointing in that general direction.

What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?

I don't think I know enough to speak about the XR community broadly here, but as for me personally: mostly frustrated that their thinking isn't granular enough. There's a huge gulf between saying "social media is toxic" and saying "it is toxic for the closest thing to a downvote button to be reply/share", and I try to tune out/unfollow the people whose writings say things closer to the former.

Comment by jimrandomh on Being Vocal About What Works · 2021-05-08T17:41:40.089Z · EA · GW

I think the common factor, among forms of advice that people are hesitant to give, is that they involve some risk. So if, for example, I recommend a supplement and it causes a health problem, or I recommend a stock and it crashes, there's some worry about blame. If the supplement helps, or the stock rises, there's some possibility of getting credit; but, in typical social relationships, the risk of blame looms larger than the possibility of credit, which makes people more hesitant than is optimal.

Comment by jimrandomh on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T07:56:29.624Z · EA · GW

I was somewhat confused by the scale using Categorizing Variants of Goodhart's Law as an example of a 100mQ paper, given that the LW post version of that paper won the 2018 AI Alignment Prize ($5k), which makes a pretty strong case for it being "a particularly valuable paper" (1Q, the next category up). I also think this scale significantly overvalues research agendas and popular books relative to papers. I don't think these aspects of the rubric wound up impacting the specific estimates made here, though.

Comment by jimrandomh on When to get a vaccine in the Bay Area as a young healthy person · 2021-03-14T16:42:02.312Z · EA · GW
  • From people I know that have gotten vaccines in the Bay, it sounds like appointments have been booked quickly after being posted / there aren’t a bunch of openings.

This was true in February, but I think it's no longer true, due to a combination of the Johnson & Johnson vaccine being added and the currently-eligible groups being mostly done. Berkeley Public Health sent me this link, which shows hundreds of available appointment slots over the next few days at a dozen different Bay Area locations.

(EDIT: See below, the map I linked to may be mixing vaccine and PCR-test appointments together in a way that confused me.)

Comment by jimrandomh on [deleted post] 2021-02-25T23:41:40.988Z

The core thesis here seems to be:

I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact. 

There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:

  1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn't grade on a curve.)
  2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we're doing pretty well.)
  3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren't important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
  4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.

(4) is the interesting version of this claim, and I think there's some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.

I think part of the problem is that people get triggered into defensiveness: when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, and that mechanism tells them they're doing the wrong thing, their anticipations put a lot of weight on the possibility that they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, when an organization reports poor results and then pivots, the reaction is usually positive about the pivot, at least from the people who matter. But getting people who've internalized a prediction of doom and shame to surface those models, and to do things that would make the outcome legible, is very hard.

(Meta: Before writing this comment I read your post in full. I have previously read and sat with most, but not all, of the posts linked to here. I did not reread them during the same sitting I read this comment.)

Comment by jimrandomh on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-09T19:21:36.296Z · EA · GW

Should competent EAs be pursuing local political offices?

Comment by jimrandomh on Support AMF in tab 4 a cause so it reaches its goal. · 2019-07-29T07:07:59.448Z · EA · GW

Looking at ads and introducing ads into your environment is not free, it's mildly harmful. If you offered me 1 cent per ad to display ads in my browser, I would refuse. The money going to charity doesn't change that.

Comment by jimrandomh on I find this forum increasingly difficult to navigate · 2019-07-05T22:56:16.872Z · EA · GW

LessWrong has a sidebar which makes the link to All Posts much more prominent; it looks like EA Forum hasn't adopted that yet, but it would probably help.

Comment by jimrandomh on The most cost-efficient way to convert money into personal health · 2019-05-06T19:38:16.613Z · EA · GW

Were you under the impression that I was disagreeing with the sodium-reduction guidelines because I was merely unaware that they existed? This is an area of considerable controversy.

Comment by jimrandomh on The most cost-efficient way to convert money into personal health · 2019-05-03T18:36:13.870Z · EA · GW
Quitting smoking, alcohol, salt, and sugar is also hard–they are quite addictive.

For most people, cutting salt intake is harmful, not helpful. Salt isn't new to human diets, and it isn't a matter of addiction; it's just a necessary nutrient.

Sugar can be harmful, but only insofar as it crowds out other calorie sources which are better. When people try to cut sugar, they often fail (and mildly harm themselves) because they neglect to replace it.

Comment by jimrandomh on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-28T00:25:25.222Z · EA · GW

Post-mortem donation is fine, but being asked to sign up for kidney donation would be severely trust-destroying for me.

Comment by jimrandomh on EA is vetting-constrained · 2019-03-09T02:34:31.994Z · EA · GW

This happens to posts by accounts which have never posted before; established accounts (at least one post or comment) don't have to wait. This was instituted on both LW and EA Forum because of a steady stream of bot-generated spam.

Comment by jimrandomh on Bounty: Guide To Switching From Farmed Fish To Wild-Caught Fish · 2019-03-04T04:43:22.340Z · EA · GW

That doesn't seem especially relevant to the question of whether first-world consumers should buy farmed or wild-caught fish; the amount caught from fisheries is set by regulation, not by demand, so consumer demand does not, on the margin, increase or decrease overfishing.

Comment by jimrandomh on Bounty: Guide To Switching From Farmed Fish To Wild-Caught Fish · 2019-02-23T03:13:28.503Z · EA · GW

I doubt this makes a difference. Most of the market treats farmed and wild-caught fish as close substitutes, the supply of wild-caught fish is inelastic, and the supply of farmed fish is highly elastic. So if you switch from farmed to wild-caught fish, you are probably affecting market prices in a way which causes one other person to make the opposite change.
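
To make the accounting explicit, here's a toy sketch (the numbers are made up; the assumptions are a binding catch quota and perfectly elastic farmed supply):

```python
# Toy model: wild-caught supply is capped by a regulatory quota and always
# sells out; farmed supply is perfectly elastic, absorbing whatever demand
# the wild catch can't satisfy.
def farmed_production(total_demand: int, wild_quota: int) -> int:
    """Farmed fish produced = demand left over after the fixed wild catch."""
    return max(total_demand - wild_quota, 0)

before = farmed_production(total_demand=1500, wild_quota=1000)
# You switch one meal from farmed to wild-caught. Total demand is unchanged
# and the quota still binds, so one other consumer gets bumped from wild to
# farmed, leaving net farmed production unaffected.
after = farmed_production(total_demand=1500, wild_quota=1000)
print(before, after)  # 500 500
```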

Comment by jimrandomh on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-26T08:28:08.582Z · EA · GW

There are three additional premises required here. The first is that your own use of funds from investments must be significantly better than that of the other shareholders of the companies you invest in. The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and to lost opportunities for compounding. The third is that the growth potential of AI companies isn't already priced in, in a way that reduces your expected returns to be no better than index funds.

The first of these premises is probably true. The second is probably false. The third is definitely false.
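
To illustrate how the second premise cashes out, here's a toy calculation; the 7% return and 9% cost-growth figures are illustrative assumptions, not forecasts:

```python
# Give-now vs. invest-then-give, in units of good done per dollar today.
years = 20
investment_growth = 0.07  # assumed annual real return on the portfolio
cost_growth = 0.09        # assumed annual growth in the marginal cost of
                          # doing good (low-hanging fruit getting picked,
                          # lost compounding of early impact)

give_now = 1.0
give_later = ((1 + investment_growth) / (1 + cost_growth)) ** years

print(f"Investing {years} years first: {give_later:.2f}x as much good")
# Prints ~0.69x: whenever cost_growth exceeds investment_growth, waiting loses.
```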

Comment by jimrandomh on [deleted post] 2019-01-01T23:42:02.408Z

Test

Comment by jimrandomh on Burnout: What is it and how to Treat it. · 2018-11-08T00:25:47.113Z · EA · GW

I based this mainly on a combination of a model and personal experience/self-experimentation, but hadn't previously looked for data to quantify it. I've significantly downgraded my confidence in the correct quantity of extra food to eat being meal-sized, but am uncertain since none of the studies measure quite the thing I care about.

This study measured energy expenditure as a result of an all-nighter, in subjects whose food intake was controlled (ie not allowed to eat extra), and found that

Missing one night of sleep had a metabolic cost of ∼562 ± 8.6 kJ (∼134 ± 2.1 kcals) over 24 h, which equates to a ∼7% higher 24 h EE

This (134kcal) is smaller than I was expecting; on the other hand, not being able to eat extra calories puts a pretty sharp limit on the ability to spend extra calories. From a different angle, this paper measured sleep and wake energy expenditure and found a ratio of 1.67:1 (in nonobese controls), which would imply that converting sleep hours to wake hours would increase TDEE by ~15%. A study which measured next-day intake rather than metabolic expenditure found a 22% increase; but it's possible subjects overcompensated, eating more extra than they actually expended.
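
To spell out the arithmetic behind that ~15% figure (assuming 8 hours of sleep per night, my assumption rather than a number from the paper):

```python
# Effect on TDEE of converting all sleep hours to wake hours, given a
# wake:sleep energy-expenditure ratio of 1.67:1 (sleep EE normalized to 1).
sleep_hours = 8
wake_hours = 24 - sleep_hours
ratio = 1.67

baseline_tdee = wake_hours * ratio + sleep_hours * 1.0  # ~34.7 units
all_wake_tdee = 24 * ratio                              # ~40.1 units

print(f"{(all_wake_tdee - baseline_tdee) / baseline_tdee:.1%}")  # ~15.4%
```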

Comment by jimrandomh on Burnout: What is it and how to Treat it. · 2018-11-07T19:45:53.776Z · EA · GW

Nutrition problems tend to disguise themselves as other kinds of stress; being hungry makes people emotionally brittle, which creates a thousand red herrings when you're trying to figure out what's wrong.

Comment by jimrandomh on Burnout: What is it and how to Treat it. · 2018-11-07T19:36:46.376Z · EA · GW

Nutrition! Skipping meals or eating substandard meals pushes people into burnout-like symptoms fast (no pun intended). For an organization, that means making sure lunch is built into peoples' daily schedule in such a way that it won't get skipped under pressure. An all-nighter or a day of physical activity requires an extra meal to make up for the energy expenditure, which most people don't realize, amplifying the detrimental effects.

Comment by jimrandomh on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-23T05:31:51.188Z · EA · GW

A review of the OpenPhil grants database shows that, since the EA Funds were founded, Nick Beckstead was the grant investigator for OpenPhil grants in both of these areas that were larger than either of these funds; for example $2.7M to CEA and $3.75M to MIRI. These are good grants, and there are more good ones in the grants database.

When the EA Funds were first announced, I wrote:

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck.

Basically, it looks like Nick Beckstead is in charge of two sources of funding designated for the same cause areas, EA Funds are the smaller of the two sources, and they aren't really needed.

Comment by jimrandomh on EA Hotel with free accommodation and board for two years · 2018-06-06T03:45:58.896Z · EA · GW

And if it's the latter, it's unclear to me why this idea would be better than just funding poor EAs directly and letting them decide where to live

That would cost much more per person. With that cost would come an expectation of filtering and grant proposals, which would keep out a lot of people who might otherwise use this to do good things.

Comment by jimrandomh on [deleted post] 2018-01-13T23:01:35.433Z

In this model, what is the probability that the initiative (which I see is modeled as costing $6-39M) is successful? Or is it assumed that in the case where it isn't going to succeed, the cost is limited to the cost of polling ($50-300k)?

Comment by jimrandomh on Introducing the EA Funds · 2017-02-10T18:32:02.585Z · EA · GW

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck. One of the major constraints on OpenPhil's giving has been wanting charities to have diverse sources of funding; this appears to reduce funding diversity, by converting donations from individual small donors into donations from OpenPhil. What reason do donors have to think they aren't just crowding out donations from OpenPhil's main fund?

Comment by jimrandomh on Introducing the EA Funds · 2017-02-09T00:55:03.040Z · EA · GW

What will be these funds' policy on rolling funds over from year to year, if the donations a fund gets exceed the funding gaps the managers are aware of?

(This seems particularly important for funds whose managers are also involved with OpenPhil, given that OpenPhil did not spend its entire budget last year.)

Comment by jimrandomh on Anonymous EA comments · 2017-02-08T01:03:43.928Z · EA · GW

The fact that this seems to have happened more in private among the people who run key organizations than in those organizations' public faces is particularly troubling.

I'm confused by the bit about this not being reflected in organizations' public faces? Early in 2016 OpenPhil announced they would be making AI risk a major priority.

Comment by jimrandomh on Proposed methodology for leafleting study · 2017-02-06T17:09:48.880Z · EA · GW

The questions about diet before and after the change seem to push people strongly into claiming to be, or to have been, some sort of vegetarian; the only option there that isn't somehow anti-meat is "Other", which requires typing.

A better version of this question would have a no-dietary-restrictions option first, and a few options that aren't animal-welfare related like "low carb" and "Mediterranean".

Comment by jimrandomh on Proposed methodology for leafleting study · 2017-02-06T17:05:34.484Z · EA · GW

Statistics nitpick: I believe you should be using a two-sided test, as it is also possible for leafleting to reduce the rate of people going vegetarian if the leaflets alienate people somehow.
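
For concreteness, here's a minimal sketch of the two-sided version, using statsmodels and placeholder counts; the key difference is the alternative parameter:

```python
# Two-sided z-test on vegetarian-conversion rates, leafleted vs. control.
# The counts below are hypothetical placeholders, not real study data.
from statsmodels.stats.proportion import proportions_ztest

converted = [12, 7]    # leafleted group, control group
sampled = [500, 500]

# alternative='two-sided' also detects leaflets *reducing* conversion;
# a one-sided alternative='larger' would miss that case entirely.
stat, p_value = proportions_ztest(converted, sampled, alternative='two-sided')
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```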

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-25T21:03:12.562Z · EA · GW

using EA to justify their belief in technology as the supreme power and discredit spirituality.

Huh? I am genuinely confused as to what you mean by that.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-25T18:16:16.675Z · EA · GW

Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own?

The issue in this case is not that he's in the EA community, but that he's trying to act as the EA community's representative to people outside the community who are not well placed to make that judgment themselves.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-25T17:49:38.086Z · EA · GW

Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, and the movement's birth and growth are attributable in significant part to SingInst and CFAR projects.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T17:13:16.127Z · EA · GW

You're right, I missed that. I'll edit the parent post to fix the error.

(Given the history, I'm curious to find out what "reviewed the script and provided a high-resolution copy of their logo" means, and in particular whether they saw the entire script, and therefore knew they were being featured next to InIn, or whether they only reviewed the portion that was about themselves.)

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T17:11:55.134Z · EA · GW

Gleb, Intentional Insights board meeting, 9/21/16 at 22:05:

"We certainly are an EA meta-charity. We promote effective giving, broadly. We will just do less activities that will try to influence the EA movement itself. This would include things like writing articles for the EA forum about how to do more effective marketing. We will still do some of that, but to a lesser extent because people are right now triggered about Intentional Insights. There's a personalization of hostility associated with Intentional Insights, so we want to decrease some of our visibility in central EA forums, while still doing effective altruism. We are still an effective altruist meta-charity. So focusing more on promoting effective giving to a broad audience."

(https://www.youtube.com/watch?v=WbBqQzM7Rto)

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T16:54:24.738Z · EA · GW

The problem is that Gleb is manufacturing false affiliations in the eyes of outsiders, and outsiders who only briefly glance at lengthy, polite documents like this one are unlikely to realize that's what's happening.

Comment by jimrandomh on Concerns with Intentional Insights · 2016-10-24T16:51:49.671Z · EA · GW

EDIT: Comment here was about a video by InIn, where I incorrectly speculated that they might've misused trademarks to signal affiliation with several other EA orgs. At least one of those orgs has confirmed that they did review the video prior to publication, so in fact there was not an issue. I apologize; it was wrong to speculate about that when it wasn't true, and without adequately investigating first.

Comment by jimrandomh on Ask MIRI Anything (AMA) · 2016-10-12T02:48:29.696Z · EA · GW

In 2013, MIRI announced it was shifting to do less outreach and more research. How has that shift worked out, and what's the current balance between these two priorities?

Comment by jimrandomh on GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics · 2016-05-23T03:59:22.162Z · EA · GW

For example, last year when Stanford Effective Altruism was considering making donations to charity, we preferred the Schistosomiasis Control Initiative over AMF because we believed that GiveWell gave too much significance to the “GiveWell view” of population ethics and not enough to the total view.

I'm confused about how the differences between SCI and AMF connect to population ethics. Neither charity seems like it would have obvious effects on the birth rate. Both schistosomiasis and malaria do harm through a mix of killing people and lowering their subsequent quality of life, but I guess it's a different mix and the demographics of the people affected are different? It would help a lot to lay out specifically what those differences are.

Comment by jimrandomh on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-19T18:17:46.580Z · EA · GW

Ok, I admit I didn't think to check there. Arguing the semantics about what "currently spends" means would be pointless, and I recognize that this remark was in the context of estimating how MIRI's future budget would be affected, but I do think that in the context of a discussion about evaluating past performance, it's important not to anchor people's expectations on a budget they don't have yet.

Comment by jimrandomh on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-19T17:16:15.761Z · EA · GW

MIRI currently spends around $2 million dollars a year - including some highly skilled labour that is probably underpriced

Their 2014 financials on https://intelligence.org/transparency/ say their total expenditures in 2014 were $948k. Their 2015 financials aren't up yet, and I think they did expand in 2015, but I don't think you can claim this unremarked. This is not a neutral error; if you make them look twice as big as they are, then you also make them look half as efficient.

Comment by jimrandomh on I am Nate Soares, AMA! · 2015-06-11T22:15:35.412Z · EA · GW

There are different inputs needed to advance AI safety: money, research talent, executive talent, and others. How do you see the tradeoff between these resources, and which seems most like a priority right now?