Comment by ofer on The Case for the EA Hotel · 2019-04-10T10:31:57.799Z · score: 1 (1 votes) · EA · GW

Yes, thanks.

Comment by ofer on The Case for the EA Hotel · 2019-04-01T05:09:56.761Z · score: 14 (12 votes) · EA · GW

There's an additional argument in favor of the EA Hotel idea that I find very compelling (I read it on this forum in a comment I couldn't find at first; EDIT: it was this comment by the user Agrippa; what follows is not at all a precise description of the original comment and contains extra points that Agrippa might not agree with):

A lot of people are optimizing to get money as an instrumental goal, and funders don't always have a great way to evaluate how "EA-aligned" a person asking for money is (for any reasonable definition of that term).

The willingness to travel and live for a while in a building with people who are excited about EA probably correlates with "being EA-aligned".

So supporting people by funding their residency in a place like the EA Hotel provides an implicit, weak vetting mechanism that doesn't exist when funding people directly.

Comment by ofer on Severe Depression and Effective Altruism · 2019-03-30T14:51:23.138Z · score: 3 (2 votes) · EA · GW

Just an additional point to consider:

If you (and therefore other people similar to you) decide to act in a way that causes a lot of harm/suffering to yourself or your family, and you wouldn't have acted that way had you never heard about EA, then that would create a causal link between "Alice learns about EA" and "Alice or her family suffer". From a utilitarian perspective, such a causal link seems extremely harmful (e.g. it makes it less likely that a random talented/rich person would end up being involved in EA-related efforts).

So this is an argument in favor of NOT making such decisions.

Comment by ofer on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-29T10:54:13.322Z · score: 1 (1 votes) · EA · GW
To verify I'm a real person that will in fact award $100, find me on FB here.

The link appears to be broken.

(My interest here is in finding/popularizing ways for users of this forum to easily prove their identity to other users, should they wish to.)

Comment by ofer on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-16T15:26:58.098Z · score: 1 (1 votes) · EA · GW
For sure, forecasters who devoted more effort to it tended to make more accurate predictions. It would be surprising if that wasn't true!

I agree. But I am not referring to extra effort that makes a person provide a better forecast (e.g. by spending more time looking for arguments), but rather to extra effort that allows forecasters to improve their average daily Brier scores simply by using new public information that was not available when the question was first presented (e.g. new poll results).
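The mechanism above can be sketched numerically. Below is a minimal illustration using the simple binary Brier score, (p − outcome)²; the GJP papers use a multi-outcome variant that just doubles these numbers, and the day counts and probabilities here are made up for illustration:

```python
def brier(p, outcome):
    """Binary Brier score: squared error of the forecast probability."""
    return (p - outcome) ** 2

def avg_daily_brier(daily_forecasts, outcome):
    """Average of the daily scores; a forecast counts for every day it stands."""
    return sum(brier(p, outcome) for p in daily_forecasts) / len(daily_forecasts)

# Hypothetical 10-day question that resolves "yes" (outcome = 1).
# Both forecasters open at 0.6; new public information (say, a poll)
# arrives after day 5 and justifies moving to 0.9.
static  = [0.6] * 10               # never revisits the question
updater = [0.6] * 5 + [0.9] * 5    # revises after the new information

print(round(avg_daily_brier(static, 1), 3))   # 0.16
print(round(avg_daily_brier(updater, 1), 3))  # 0.085
```

The updater's initial forecast was no better, yet their average daily score is roughly halved purely because they kept incorporating new information, which is the confound in question.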

Comment by ofer on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-16T10:38:01.285Z · score: 2 (2 votes) · EA · GW

Thank you for writing this.

Is the one-hour training module publicly available?

One might worry that training improves accuracy by motivating the trainees to take their jobs more seriously. Indeed it seems that the trained forecasters made more predictions per question than the control group, though they didn’t make more predictions overall. Nevertheless it seems that the training also had a direct effect on accuracy as well as this indirect effect.

I could not find results like the ones in Table 4 in which the Brier scores are based only on the first answer that forecasters provide. Allowing forecasters to update their forecasts as frequently as they want (while reporting average daily Brier scores) plausibly gives an advantage to forecasters who are willing to invest more time in the task.

The paper from which Table 4 is taken states that "Training was a significant predictor of average number of forecasts per question for year 1 and the number of forecasts per question was also significant predictor of accuracy (measured as mean standardized Brier score)". Consider Table 10 in the paper, which shows "Forecasts per question per user by year". Notice that in year 3 the forecasters who received training made 4.27 forecasts per question, while forecasters who did not receive training made only 1.90. The paper includes additional statistical analyses of this issue (unfortunately, I don't have the combination of time and background in statistics needed to understand them all).

Comment by ofer on Three Biases That Made Me Believe in AI Risk · 2019-02-14T06:15:08.234Z · score: 34 (26 votes) · EA · GW
If people here would appreciate it, I would be happy to write one or more posts on object-level arguments as to why I am now sceptical of AI risk. Let me know in the comments.

I would like to read about these arguments.

Comment by ofer on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T11:21:15.595Z · score: 4 (4 votes) · EA · GW

When planning how to donate, it seems very important to consider the impact of market returns increasing due to progress in AI. But I think more considerations should be taken into account before drawing the conclusion in the OP.

For each specific cause, we should estimate the curve over time of EV-per-additional-dollar-invested-in-2019-and-used-now (given an estimate of market returns over time). As Richard pointed out, in the case of reducing AI x-risk it is not obvious that we will have time to use the money we invest today effectively if we wait too long (so "the curve" for AI safety might decrease sharply).

Here is another consideration I find relevant for AI x-risk: in slow takeoff worlds more people are likely to become worried about x-risk from AI (e.g. after they see that the economy has doubled in the past 4 years and that lots of weird things are happening). In such worlds, it might be the case that a very small fraction of the money that will be allocated for reducing AI x-risk would be donated by people who are currently worried about AI x-risk. This consideration might make us increase the weight of fast takeoff worlds.

On the other hand, maybe in slow takeoff worlds there is generally a lot more that could be done for reducing x-risk from AI (especially if slow takeoff correlates with longer timelines), which suggests we increase the weight of slow takeoff worlds.

If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (Larks' 2018 review (a) is a good starting point here).

I just wanted to note that some research directions for reducing AI x-risk, including ones that seem relevant in fast takeoff worlds, lie outside the technical AI alignment field (for example, governance/policy/strategy research).

Comment by ofer on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-23T15:29:17.183Z · score: 6 (6 votes) · EA · GW

In this FLI podcast episode, Andrew Critch suggested treating a potentially dangerous idea like a software update rollout, in which the update is distributed gradually rather than to all customers at once:

... I would tell you the same thing I would tell anyone who discovers a potentially dangerous idea, which is not to write a blog post about it right away.

I would say, find three close, trusted individuals that you think reason well about human extinction risk, and ask them to think about the consequences and who to tell next. Make sure you’re fair-minded about it. Make sure that you don’t underestimate the intelligence of other people and assume that they’ll never make this prediction.


Then do a rollout procedure. In software engineering, you developed a new feature for your software, but it could crash the whole network. It could wreck a bunch of user experiences, so you just give it to a few users and see what they think, and you slowly roll it out. I think a slow rollout procedure is the same thing you should do with any dangerous idea, any potentially dangerous idea. You might not even know the idea is dangerous. You may have developed something that only seems plausibly likely to be a civilizational scale threat, but if you zoom out and look at the world, and you imagine all the humans coming up with ideas that could be civilizational scale threats.


If you just think you’ve got a small chance of causing human extinction, go ahead, be a little bit worried. Tell your friends to be a little bit worried with you for like a day or three. Then expand your circle a little bit. See if they can see problems with the idea, see dangers with the idea, and slowly expand, roll out the idea into an expanding circle of responsible people until such time as it becomes clear that the idea is not dangerous, or you manage to figure out in what way it’s dangerous and what to do about it, because it’s quite hard to figure out something as complicated as how to manage a human extinction risk all by yourself or even by a team of three or maybe even ten people. You have to expand your circle of trust, but, at the same time, you can do it methodically like a software rollout, until you come up with a good plan for managing it. As for what the plan will be, I don’t know. That’s why I need you guys to do your slow rollout and figure it out.