Posts

SiebeRozendal's Shortform 2020-10-06T10:13:10.157Z · score: 5 (1 votes)
Four components of strategy research 2020-01-30T19:08:37.244Z · score: 19 (13 votes)
Eight high-level uncertainties about global catastrophic and existential risk 2019-11-28T14:47:31.695Z · score: 80 (36 votes)
A case for strategy research: what it is and why we need more of it 2019-06-20T20:18:09.025Z · score: 53 (27 votes)

Comments

Comment by sieberozendal on SiebeRozendal's Shortform · 2020-10-06T10:13:10.657Z · score: 7 (4 votes) · EA · GW

I have a concept of paradigm error that I find helpful.

A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a long-termist intervention under deep uncertainty.

Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.

It is related to what I see as two lower-level types of error:

  • parameter errors (= the value of parameters being inaccurate)
  • model errors (= wrong model structure or wrong/missing parameters)

Paradigm errors are one level higher: they are the wrong type of model.


Relevance to EA

I think a sometimes-valid criticism of EA is that it approaches problems with a paradigm that is not well-suited for the problem it is trying to solve.

Comment by sieberozendal on jackmalde's Shortform · 2020-10-06T10:01:23.375Z · score: 3 (3 votes) · EA · GW

I agree with this: a lot of the argument (and related questions in population ethics) depends on the zero level of well-being. I would be very interested to see more effort put into figuring out what/where this zero level is.

Comment by sieberozendal on Open and Welcome Thread: October 2020 · 2020-10-04T10:16:59.749Z · score: 13 (9 votes) · EA · GW

I have recently been toying with a metaphor for vetting EA-relevant projects: that of a mountain-climbing expedition. I'm curious whether people would find it interesting to hear more, because then I might turn it into a post.

The goal is to find the highest mountains and climb them, and a project proposal consists of a plan + an expedition team. To evaluate a plan, we evaluate

  • the map (Do we think the team perceives the territory accurately? Do we agree that the territory looks promising for finding large mountains?) and
  • the route (Does the strategy look feasible?)

To evaluate a team, we evaluate

  • their navigational ability (Can they find & recognise mountains? Can they find & recognise crevasses, i.e. disvalue?)
  • their executive ability (Can they execute their plan well & adapt to surprising events? Can they go the distance?)

Curious to hear what people think. It's got a bit of overlap with Cotton-Barratt's Prospecting for Gold, but I think it might be sufficiently original.

Comment by sieberozendal on Founders Pledge Report: Psychedelic-Assisted Mental Health Treatments · 2020-10-01T12:43:10.056Z · score: 10 (4 votes) · EA · GW

Great report! I have two questions for you:

1. On the following:

There are already many ongoing and upcoming high-quality studies on psychedelic-assisted mental health treatments, and there are likely more of those to follow, given the new philanthropic funding that has recently come into the area. (p. 45-46)

Based on the report itself, my impression is that high-quality academic research into microdosing and into flow-through effects* of psychedelic use is much more funding-constrained. Have you considered those?


2. Did you consider more organisations than Usona and MAPS? It seems somewhat unlikely that these are the only two organisations lobbying for drug approval.


*The flow-through effects I'm most excited about are a reduction in meat consumption, creative problem solving, and an improvement in good judgment (esp. for high-impact individuals). Effects on long-term judgment seem very hard to research, though.

Comment by sieberozendal on Founders Pledge Report: Psychedelic-Assisted Mental Health Treatments · 2020-10-01T12:35:19.527Z · score: 3 (2 votes) · EA · GW

I was confused about the usage of the term drug development as it sounds to me like it's about the discovery/creation of new drugs, which clearly does not seem to be the high-value aspect here. But from the report:

Drug development is a process that covers everything from the discovery of a brand new drug for treatment to this drug being approved for medical use.

Comment by sieberozendal on Founders Pledge Report: Psychedelic-Assisted Mental Health Treatments · 2020-10-01T12:26:58.617Z · score: 8 (2 votes) · EA · GW

I speculate that the particulars of the psychedelic experience may drive rescaling like this in an intense way.

I also think that the psychedelic experience, as well as practices like meditation, affects well-being in ways that might not be captured easily. I'm not sure it's rescaling per se. I feel that meditation has not made me happier in the hedonistic sense, but I strongly believe it has made me optimize less for hedonistic well-being, and in addition has given me more stability, resilience, better judgment, etc.

Comment by sieberozendal on How have you become more (or less) engaged with EA in the last year? · 2020-09-16T17:03:51.931Z · score: 3 (2 votes) · EA · GW

I recently moved to a (nearby) EA hub to live temporarily with some other EAs (and some non-EAs) while figuring out the next steps in my life/career.

This has considerably increased my involvement. Being able to talk about EA over lunch and dinner, and to join meetups that are five minutes away, makes a big difference, as does finding nice people I connect with socially/emotionally.

I suppose COVID had somewhat of a positive influence here too: I am less likely to attend a wide range of events, because I don't know people's approaches to safety. This leaves more time for EA.

Comment by sieberozendal on Use resilience, instead of imprecision, to communicate uncertainty · 2020-08-25T06:16:12.777Z · score: 9 (3 votes) · EA · GW

Although communicating the precise expected resilience conveys more information, in most situations I prefer to give people ranges. I find it a good compromise between precision and communicating uncertainty, while remaining concise and understandable for lay people and not losing all those weirdness credits that I prefer to spend on more important topics.

This also helps me epistemically: sometimes I cannot represent my belief state with a precise number, because multiple numbers feel equally justified or no number feels justified. However, there are often bounds beyond which I think it's unlikely (<20% or <10% for my rough estimates) that I would end up estimating, even with an order of magnitude more effort.

In addition, I think preserving resilience information is difficult in probabilistic models, but easier with ranges. Of course, resilience can be translated into ranges. However, a mediocre model builder might make the mistake of discarding the resilience if precise estimates are the norm.

Comment by sieberozendal on EA Focusmate Group Announcement · 2020-08-17T15:03:29.099Z · score: 1 (1 votes) · EA · GW

Just to clarify: Focusmate isn't meant for talking about your work, so most people don't try to find partners with in-depth knowledge. I mostly don't explain things in detail and don't feel like I need to. It's more an accountability thing and a way to share general progress (e.g. "I wanted to get 3 tasks done: write an email, draft an outline for a blog post, and solve a technical issue for my software project. I got 2 of them done, and realized I need to ask a colleague about #3, so I did that instead.").

Comment by sieberozendal on CEA's Plans for 2020 · 2020-05-03T12:27:35.068Z · score: 23 (6 votes) · EA · GW

Thanks for the elaborate reply!

I think there's a lot of open space in between sending out surveys and giving people binding voting power. I'm not a fan of asking people to vote on things they don't know about. However, I have in mind something like "inviting people to contribute to a public conversation and decision-making process". Final decision power would still be with CEA, but input is more than one-off, the decision-making is more transparent, and a wider range of stakeholders is involved. Obviously, this does not work for all types of decisions - some are too sensitive to discuss publicly. Then again, it may be tempting to classify many decisions as "too sensitive". In any case, an organisation's "opening up" should be an incremental process, and I would definitely recommend experimenting with more democratic procedures.

Comment by sieberozendal on CEA's Plans for 2020 · 2020-04-26T11:23:27.409Z · score: 22 (10 votes) · EA · GW

Hi Max, good to read an update on CEA's plans.

Given CEA's central and influential role in the EA community, I would be interested to hear more about the approach to democratic/communal governance of CEA and the EA community. As I understand it, CEA consults extensively with a variety of stakeholders, but mostly anonymously and behind closed doors (correct me if I'm wrong). I see a lack of democracy and a lack of community support for CEA as substantial risks to the EA community's effectiveness and existence.

Are there plans to make CEA more democratic, including in its strategy-setting?

Comment by sieberozendal on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-20T09:34:00.060Z · score: 1 (1 votes) · EA · GW

Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?

I assume this question sits in between "the best lesson to learn" and "the lesson most likely to be learned". We probably want to push a lesson that is useful to learn and where our push actually helps bring it into policy.

Comment by sieberozendal on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-20T09:28:25.017Z · score: 6 (3 votes) · EA · GW

Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?

Comment by sieberozendal on EA should wargame Coronavirus · 2020-03-16T15:08:40.964Z · score: 2 (2 votes) · EA · GW

:(

Comment by sieberozendal on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-16T15:05:52.481Z · score: 1 (1 votes) · EA · GW

Not a funding opportunity, but I think a grassroots effort to employ social norms to enforce social distancing could be effective in countries at an early stage where authorities are not enforcing it, e.g. the Netherlands, the UK, the US, etc.

Activists (student EAs?) could stand with signs in public places, asking people non-aggressively to please go home.

Comment by sieberozendal on State Space of X-Risk Trajectories · 2020-02-08T16:26:07.995Z · score: 6 (4 votes) · EA · GW

I think this article very nicely undercuts the following piece of common-sense research ethics:

If your research advances the field more towards a positive outcome than it moves the field towards a negative outcome, then your research is net-positive

Whether research is net-positive instead depends on the field's current position relative to both outcomes (assuming that once either outcome is achieved, the other can no longer be achieved). The article replaces the heuristic above with another:

To make a net-positive impact with research, move the field towards the positive outcome and towards the negative outcome in a ratio at least as large as the ratio distance-to-positive : distance-to-negative.

If we add uncertainty to the mix, we could calculate how risk-averse we should be (where risk aversion should be larger when the research step is larger, as small projects probably carry much less risk of accidentally making a big step towards UAI).

The ratio and risk aversion could lead to some semi-concrete technology policy. For example, if the distances to FAI and UAI are (100, 10), technology policy could prevent funding any project that either has a distance-ratio (for lack of a better term) lower than 10 or has a 1% or higher probability of taking a 10d step towards UAI.
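
To make this concrete, here is a minimal sketch of such a funding filter. The function name, parameter names, and thresholds are my own illustrative assumptions (using the (100, 10) distances from the example), not anything from the article:

```python
def passes_funding_policy(step_towards_fai, step_towards_uai, p_large_uai_step,
                          dist_to_fai=100.0, dist_to_uai=10.0,
                          max_p_large_uai_step=0.01):
    """Toy filter: fund a project only if (1) its expected progress towards FAI vs. UAI
    is at least in the ratio dist_to_fai : dist_to_uai, and (2) the probability of an
    accidental UAI step of size >= dist_to_uai stays below the allowed maximum."""
    required_ratio = dist_to_fai / dist_to_uai  # 10 in the example above
    if step_towards_uai > 0 and step_towards_fai / step_towards_uai < required_ratio:
        return False
    if p_large_uai_step >= max_p_large_uai_step:
        return False
    return True

# A project expected to move the field 5 units towards FAI and 1 towards UAI,
# with a 0.5% chance of a large accidental step towards UAI:
print(passes_funding_policy(5, 1, 0.005))  # False: ratio 5 is below the required 10
```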

Of course, the real issue is whether such a policy can be plausibly and cost-effectively enforced or not, especially given that there is competition with other regulatory areas (China/US/EU).

Without policy, the concepts can still be used for self-assessment. And when a researcher/inventor/sponsor assesses the risk-benefit profile of a technology themselves, they should discount for their own bias as well, because they are likely to have an overly optimistic view of their own project.

Comment by sieberozendal on Comparing Four Cause Areas for Founding New Charities · 2020-01-24T17:08:00.133Z · score: 4 (4 votes) · EA · GW

I really love Charity Entrepreneurship :) A remark and a question:

1. I notice that one strength you mention for family planning is "Strong funding outside of EA" - I think this is a very interesting and important factor that is somewhat neglected in EA analyses because it goes beyond cost-effectiveness. We are not just asking 'given our resources, how can we spend them most effectively?' but the more general (and more relevant) 'how can we do the most good?' I'd like to see 'how much funding is available outside of EA for this intervention/cause area?' as a standard question in EA cost-effectiveness analyses :)

2. Is there anything you can share about expanding to two of the other cause areas: long-termism and meta-EA?


Comment by sieberozendal on Final update on EA Norway's Operations Project · 2020-01-13T21:29:31.622Z · score: 1 (1 votes) · EA · GW

A consulting organisation aimed at EA(-aligned) organisations, as far as I'm aware: https://www.goodgrowth.io/.

Mark McCoy, mentioned in this post, is the Director of Strategy for it.

Comment by sieberozendal on Thoughts on doing good through non-standard EA career pathways · 2020-01-11T12:59:02.740Z · score: 8 (5 votes) · EA · GW

This might just be restating what you wrote, but regarding learning unusual and valuable skills outside of standard EA career paths:

I believe there is a large difference in the context of learning a skill. Two 90th-percentile quality historians with the same training would come away with very different usefulness for EA topics if one learned the skills keeping EA topics in mind, while the other only started thinking about EA topics after their training. There is something about immediately relating and applying skills and knowledge to real topics that creates more tailored skills and produces useful insights during the whole process, which cannot be recreated by combining EA ideas with the content knowledge/skills at the end of the learning process. I think this relates to something Owen Cotton-Barratt said somewhere, but I can't find where. As far as I recall, his point was that 'doing work that actually makes an impact' is a skill that needs to be trained, and you can't just first get general skills and then decide to make an impact.

Personally, even though I did a master's degree in Strategic Innovation Management with long-termist ideas in mind, I didn't have enough context and engagement with ideas on emerging technology to apply the things I learned to EA topics. In addition, I didn't have the freedom to apply the skills: besides the thesis, all grades were based on either group assignments or exams. So some degree of freedom is also an important thing to look for in non-standard careers.

Comment by sieberozendal on Thoughts on doing good through non-standard EA career pathways · 2020-01-11T12:41:58.820Z · score: 12 (8 votes) · EA · GW

Can I add the importance of patience and trust/faith here?

I think a lot of non-standard career paths involve doing a lot of standard stuff to build skill and reputation, while maintaining a connection with EA ideas and values and keeping an eye open for unusual opportunities. It may be 10 or 20 years before someone transitions into an impactful position, but I see a lot of people disengaging from the community after 2-3 years if they haven't gotten into an impactful position yet.

Furthermore, trusting that one's commitment to EA and self-improvement is strong enough to lead to an impactful career 10 years down the line can create a self-fulfilling prophecy where one views their career path as "on the way to impact" rather than "failing to get an EA job". (I'm not saying it's easy to build, maintain, and trust one's commitment though.)

In addition, I think having good language for this is really important for keeping these people motivated and involved. We have "building career capital" and Tara MacAulay's term "journeymen", but these are not catchy enough, I'm afraid.

Comment by sieberozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:46:33.102Z · score: 5 (4 votes) · EA · GW

(Off-topic @JPAddison/@AaronGertler/@BenPace:)

Is tagging users going to be a feature on the Forum someday? It'd be quite useful, especially for asking a question of non-OPs where the answer can be shared and would be publicly useful.

Comment by sieberozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:43:54.008Z · score: 2 (2 votes) · EA · GW

(@Meta Fund:)

Will any changes be made to the application and funding process in light of how this project went? I can imagine that it would be valuable to plan a go/no-go decision for projects with medium-to-large uncertainty or downside risk, and perhaps to add a question or two (e.g. 'what information would you need to learn to make a go/no-go decision?') if that does not bloat the application process too much. I think this could be very valuable for exploring riskier funding opportunities. For example, a two-stage funding commitment could be made where the involved parties pre-agree on the conditions that would decide the go/no-go, making follow-up funding much more efficient than going through a complete new funding round.

Comment by sieberozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:42:52.485Z · score: 1 (1 votes) · EA · GW

(@Mark McCoy:)

I wonder what is currently happening with Good Growth and how it relates to this current, so-far nameless operations project. It seems like an unfunded merging of the two projects? Could you briefly elaborate on the plans and funding situation for the project?

Comment by sieberozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:42:02.920Z · score: 3 (3 votes) · EA · GW

Props for making a no-go decision and switching the focus of the project - I think that is very commendable!

I am very curious about what is going to happen further, and have a few questions:

@EA Norway: Do you have any ideas/opinions on addressing operations bottlenecks in other places where this might also be highly impactful, such as at

a) organisations doing highly impactful work but not explicitly branded as EA (e.g. top charities, research labs), and

b) other EA projects, such as large local/national groups and early-stage projects?


Comment by sieberozendal on Long-term investment fund at Founders Pledge · 2020-01-11T10:57:37.221Z · score: 19 (9 votes) · EA · GW

This is a really interesting idea and I'm glad you are taking this up! Some considerations off the top of my head:

1. This set-up would probably not only 'take away' money that would otherwise have been donated directly; there is some percentage of 'extra' money this set-up would attract. So the discussion should not be decided solely by 'would the money be better invested or donated now?'

2. There is probably a formal set-up for this (optimization) problem, and I think some economist or computer scientist would find it a worthwhile and publishable research question to work on. I'm sure there is related work somewhere, but I suppose the problem is somewhat new with the assumptions of 'full altruism', time-neutrality, and letting go of the fixed-resource assumption.

3. There is a difference between investing money for a) later opportunities that seem high-value and can be found by careful evaluation, and b) later opportunities that seem high-value and require a short time frame to respond. I hope this fund would address both, and I think the case for b) might be stronger than for a). One option for b) would be a global catastrophic response fund: as far as I am aware, there is no coordinated protocol to respond to global catastrophes or catastrophic crises, and the speed of funding can play a crucial role. A non-governmental fund would be much faster than trying to coordinate an international response. Furthermore, I think a) and b) play substantially different roles in the optimization problem.


Comment by sieberozendal on Managing risk in the EA policy space · 2019-12-12T02:18:01.369Z · score: 3 (3 votes) · EA · GW

Sam, this is a good post on an important topic! I believe EA's policy thinking is very underdeveloped, and I'm glad you're taking the lead here! I look forward to seeing more posts and discussions on effective policy.

Is there an active network/meeting point for people to learn more about policy from an EA perspective?

Comment by sieberozendal on Shapley values: Better than counterfactuals · 2019-12-12T01:30:50.684Z · score: 1 (1 votes) · EA · GW

Thanks! Late replies are better than no replies ;)

I don't think this type of efficiency deals with the practical problem of impact-credit allocation, though, because there the problem appears to be that it's difficult to find a common denominator for people's contributions. You can't just use person-hours, and I don't think the market value of those hours would do much better (although it goes in the right direction).

Comment by sieberozendal on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-12T01:22:26.197Z · score: 7 (3 votes) · EA · GW

Hey Matt, good points! This all relates to what Avin et al. call the spread mechanism of global catastrophic risk. If you haven't read it already, I'm sure you'll like their paper!

For some of these we actually do have an inkling of knowledge, though! Nuclear winter is more likely to affect the northern hemisphere, given that practically every nuclear target is located there. And my impression is that in biosecurity, geographical containment is a big issue: an extra case in the same location is much less threatening than an extra case in a new country. As a result, there are checks for hazardous diseases at borders where one might expect a disease to arrive (e.g. currently the borders with the Democratic Republic of the Congo).

Comment by sieberozendal on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-04T14:27:36.478Z · score: 4 (4 votes) · EA · GW

Yes, s-risks are definitely an important concept there! I mention them only at point 7, but not because I thought they weren't important :)

Comment by sieberozendal on Eight high-level uncertainties about global catastrophic and existential risk · 2019-11-29T09:20:09.342Z · score: 4 (4 votes) · EA · GW

Yes, the first point is what I'm referring to by timelines. And we should also discount the risk of a particular hazard by the probability of achieving invulnerability.

Comment by sieberozendal on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T15:49:25.608Z · score: 20 (16 votes) · EA · GW

Not sure why only the initials are provided. For the sake of clarity to other readers: EY = Eliezer Yudkowsky.

Comment by sieberozendal on EA Community Building Grants - Recent Grants Made and Changes to the Application Process · 2019-11-19T14:13:01.813Z · score: 3 (3 votes) · EA · GW

Thanks for the elaborate response! Allow me to ask some follow-up questions; the topic is close to my heart :)

I expect that making relatively fewer grants will leave more capacity for trying things such as exploring different mechanisms of supporting community builders and different types of projects to fund. I expect this to increase the community’s collective understanding of how to do community building more than increasing the number of grants.

Am I right to take away from this that the EA CB Grants Programme is capacity-constrained? I believe this would be important information for other funders. I'm afraid there is a dynamic where CB efforts have trouble finding non-CEA funding because alternative funders believe CEA has got all the good opportunities covered. I believe we should in general be skeptical that a small set of funders leads to an efficient allocation of resources. The grants programme being capacity-constrained would be evidence that there are impactful opportunities for other funders. How does the programme approach this coordination with other funders?

Relatedly, does CEA prefer to be a large part (>50%) or a smaller part of a community's funding? Say a community-building effort raises funding for ~1 FTE for 1 year from its own community; would this affect the likelihood of it being funded by CEA?

Comment by sieberozendal on Which Community Building Projects Get Funded? · 2019-11-14T17:24:28.851Z · score: 12 (6 votes) · EA · GW

I liked this post, but given the title I had expected a different one.

The post only describes the locations of the projects, but not so much what they are doing. I think it would be very valuable to see which types of projects are getting funded. What are, e.g., EA Oxford and EA Geneva doing that warrants more support relative to other projects?

I have the intuition that what they are primarily being funded for is more likely to be network-building (increasing the community's connections to influential people, including making community members more influential) than community-building (a longer-term investment into tight networks that facilitate mutual support). I am not sure about how funding is actually distributed between these two types and what the optimal allocation would be though. Without more information it's hard to discuss.

Comment by sieberozendal on EA Community Building Grants - Recent Grants Made and Changes to the Application Process · 2019-11-14T16:02:26.564Z · score: 6 (5 votes) · EA · GW

Hi Harri, I have two questions for you.

We think that there is a large amount of variance in the impact of individual grants that we’ve made.

What makes you believe this? What kind of criteria are used to evaluate and compare the impact of individual grants?

After evaluating the grants made over the course of 2018 we also think that we now have a better understanding of which kinds of grantmaking opportunities will be most impactful.

Could you elaborate on this? Which kinds of opportunities do you think will be most impactful? This seems highly valuable information for aspiring community builders.

Furthermore, community building seems like a long-term project, so I am quite surprised by the decision to focus so much on just a few opportunities, and by the confidence about which types of projects are valuable. I would think that exploration is enormously valuable at such an early stage of our international community. Is this because you believe there are large potential downsides?

Comment by sieberozendal on Which Community Building Projects Get Funded? · 2019-11-14T15:49:49.978Z · score: 16 (7 votes) · EA · GW

Yes, there are steps to mitigate it. But community building is by its very nature location-constrained: a tech firm can move to a particular hub; a community cannot.

Furthermore, if I recall correctly, the VC landscape was not as efficient as it could be, and VCs were over-reliant on their networks. Organizations like Y Combinator stepped into that market gap by being more approachable. This is a step that CB grantmakers can also take.

Comment by sieberozendal on Only a few people decide about funding for community builders world-wide · 2019-10-23T08:22:52.354Z · score: 11 (6 votes) · EA · GW

My understanding is that there is a blurry line between "community groups" and EA projects in general. And there do seem to be different approaches among groups.

Comment by sieberozendal on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-22T12:26:40.667Z · score: 2 (2 votes) · EA · GW

The scale importance of a problem is the maximal point that the curve meets on the y-axis - the higher up the y-axis you can go, the better it is. Neglectedness tells you where you are on the x-axis at present. The other factors that bear on tractability tell you the overall shape of the curve.

I think this is the core of the issue, and it explains why we don't need to talk about neglectedness as a separate factor from tractability! I have found this a useful and understandable visual interpretation of the ITN framework.
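
As a toy illustration of that visual interpretation (the saturating curve and the numbers below are my own assumptions, not anything from the post): scale is the curve's ceiling, the other tractability factors give its shape, and neglectedness only tells you where on the x-axis to evaluate the slope, i.e. the marginal cost-effectiveness.

```python
import math

def total_good(resources, scale=100.0, k=10.0):
    """Toy returns curve: total good approaches `scale` as resources grow,
    with `k` (the 'shape') controlling how quickly returns diminish."""
    return scale * (1 - math.exp(-resources / k))

def marginal_value(resources, eps=1e-4):
    """Marginal cost-effectiveness = slope of the curve at the current level
    of investment (which is what neglectedness points to)."""
    return (total_good(resources + eps) - total_good(resources)) / eps

print(marginal_value(1.0))   # neglected problem: steep slope, high marginal value
print(marginal_value(50.0))  # crowded problem: nearly flat slope, low marginal value
```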

One thing I worry about with the ITN framework is that it seems to assume smooth curves: that returns diminish as more (homogeneous) resources are invested. I think this is much more applicable to funding decisions than to career decisions. Dollars are more easily comparable than workers, and problems need a portfolio of skills. If I want to assess the value I could have by working on a particular problem, I'd do better to ask whether I can fill a gap in that area than to ask how tractable the problem is for generic, homogeneous human resources.

Comment by sieberozendal on Shapley values: Better than counterfactuals · 2019-10-22T11:56:05.321Z · score: 2 (2 votes) · EA · GW

I'd like to hear more about this if you have the time. It seems to me that it's hard to find a non-arbitrary way of splitting players.

Say a professor and a student work together on a paper. Each of them spends 30 hours on it and the paper would counterfactually not have been written if either of them had not contributed this time. The Shapley values should not be equivalent, because the 'relative size' of the players' contributions shouldn't be measured by time input.
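
For reference, the textbook Shapley calculation on this example gives a 50/50 split, precisely because only coalition values (not hours, seniority, or opportunity cost) enter the formula. A minimal sketch, where the characteristic function (the paper is worth 1 and only exists if both contribute) is my own reading of the example:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley values for a cooperative game with characteristic function v
    (a function from a frozenset of players to the value that coalition creates)."""
    n = len(players)
    result = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))  # weighted marginal contribution
        result[p] = total
    return result

# The paper (value 1) is only written if both the professor and the student contribute.
def v(coalition):
    return 1.0 if {"professor", "student"} <= coalition else 0.0

print(shapley_values(["professor", "student"], v))
# {'professor': 0.5, 'student': 0.5} - the 30 hours each never enter the calculation
```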

Similarly, in the India vaccination example, players' contribution size is determined by the money they spent. But this is sensitive to efficiency: one should not be able to get a higher Shapley value just by spending money inefficiently, right? Or should one, because this worry is addressed by Shapley cost-effectiveness?

(This issue seems structurally similar to how we should allocate credence between competing hypotheses in the absence of evidence. Just because the two logical possibilities are A and ~A does not mean a 50/50 credence is non-arbitrary. Cf. the Principle of Indifference.)

Comment by sieberozendal on My experience on a summer research programme · 2019-09-25T08:31:36.771Z · score: 25 (12 votes) · EA · GW

You have impressive outputs, Jaime!

I would like to add that I believe summer research fellowships/internships at non-EA-branded organisations may be more valuable than those at EA-branded ones. I believe there are some very high-quality programs out there, although I haven't looked for them thoroughly. Reasons why I believe these could be better:

  • More dedicated training and supervision. EA-branded organizations are young and often run these programs without much prior experience.
  • Unique network. There are benefits to you personally and to the EA community (!) from building good professional networks outside of EA. These are especially valuable if you have academic ambitions, because EA research institutes cannot currently support PhDs, nor would they be as well regarded as the top institutes in a field.

These would be especially beneficial to people who have academic ambitions, people who are not in the top 20% of 'self-directedness', and people who are relatively secure in their EA motivation (this limits the risk of value drift).

Drawbacks of researching at these non-EA institutes for a summer would be limited freedom and fewer EA-minded people around. (Although it's probably a good opportunity to learn to work with non-EA's while 'remaining EA' - a valuable and possibly rare skill!)

Comment by sieberozendal on Are we living at the most influential time in history? · 2019-09-15T11:59:15.055Z · score: 3 (3 votes) · EA · GW

Must is a strong word, so that's one reason I don't think it's true. What do you mean by "civilization goes extinct"? Because

1) There might be complex societies beyond Earth

2) New complex societies made up of intelligent beings can arise even after Homo sapiens goes extinct

Comment by sieberozendal on Are we living at the most influential time in history? · 2019-09-15T11:52:31.061Z · score: 12 (10 votes) · EA · GW

Upvote for using graphics to elucidate discussion on the Forum. Haven't seen it often and it's very helpful!

Comment by sieberozendal on Are we living at the most influential time in history? · 2019-09-15T11:49:50.435Z · score: 5 (4 votes) · EA · GW

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

Some ideas: "Leverage", "temporal leverage", "path-dependence", "moment" (in relation to the concept from physics), "path-criticality" (meaning how many paths are closed off by decisions in the current time). Anyone else with ideas?

Comment by sieberozendal on Are we living at the most influential time in history? · 2019-09-15T11:40:53.189Z · score: 9 (6 votes) · EA · GW

Exactly! It reminds me a lot of the Polymath Project, in which maths problems were solved collaboratively. I really wish EA made more use of this - I think Will's recent choice to post his ideas to the Forum is turning out to be an excellent one.

Comment by sieberozendal on Ask Me Anything! · 2019-09-09T07:10:11.050Z · score: 3 (2 votes) · EA · GW

If anyone decides to work on this, please feel free to contact me! There is a small but non-negligible probability that I'll work on this question, and if I don't, I'd be happy to help out with some contacts I've made.

Comment by sieberozendal on Movement Collapse Scenarios · 2019-08-28T10:48:07.535Z · score: 13 (9 votes) · EA · GW

This is a very cool question that I had hoped to think about more. Here are the six scenarios I came up with (in a draft that I'm unlikely to finish for various reasons), without further exploration of what they would look like:

1. Collapse. The size and quality of the group of people that identify as community members reduces by more than 50%

2. Splintering. Most people identify themselves as '[cause area/faction] first, EA second or not at all'.

3. Plateau/stunted growth. Influence and quality stagnate (i.e. size and quality change by -50% to +100%)

4. Harmless flawed realization. EA becomes influential without really making a decidedly positive impact

5. Harmful flawed realization. EA becomes influential and has a significantly negative impact.

6. 'Extinction'. No one identifies as part of the EA community anymore

I also asked Will MacAskill about "x-risks to EA"; he said:

  1. The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism.)
  2. A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate).
  3. Fizzle - it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion.

Anyway, if you want to continue with this, you could pick yours (or a combination of risks with input from the community) and run a poll asking people's probability estimates for each risk.

Comment by sieberozendal on EAGxNordics 2019 Postmortem · 2019-08-28T10:44:31.496Z · score: 1 (1 votes) · EA · GW

Hmm, I find this a surprising result, even though it seems roughly in line with the outcomes of EAGxNetherlands 2018.

I really hope EAGx conferences will continue to be organized (in Europe and elsewhere), perhaps in an improved form. (Fewer talks, more workshops maybe? More coaching?) I am afraid these events will be cancelled when impact is i) hard to see directly, and ii) heavily skewed. For example, few people made big changes after EAGxNetherlands, but the seed was planted for the Happier Lives Institute, which might not have formed otherwise.

Comment by sieberozendal on Current Estimates for Likelihood of X-Risk? · 2019-08-27T12:53:04.415Z · score: 12 (5 votes) · EA · GW

Hi Carl, is there any progress on this end in the past year? I'd be very interested to see x-risk relevant forecasts (currently working on a related project).

Comment by sieberozendal on Current Estimates for Likelihood of X-Risk? · 2019-08-27T12:51:11.831Z · score: 2 (2 votes) · EA · GW

Shouldn't the 1% be "1000 or more"?

Comment by sieberozendal on Ask Me Anything! · 2019-08-21T09:12:45.276Z · score: 3 (4 votes) · EA · GW

Why do his beliefs imply extremely high confidence? Why do the higher estimates from other people not imply that? I'm curious what's going on here epistemologically.

Comment by sieberozendal on What's the most effective organisation that deals with mental health as an issue? · 2019-08-20T11:32:52.126Z · score: 8 (4 votes) · EA · GW

I think we are currently very uncertain about this, so there is a large value of information to be gained from supporting the evaluation of interventions and charities in this space, as the Happier Lives Institute is doing. If you additionally believe they will do good research, supporting them is probably more impactful than supporting charities doing direct work right now.

(Disclaimer: I was involved in (only) the very early stages of setting up this institute)

If you believe current mental health charities are ineffective, you might also want to investigate supporting the founding of a new mental health charity (assuming you believe mental health charities can be more effective), though this is harder to do as a small donor. You could potentially support Charity Entrepreneurship if they decide to focus on mental health.