Posts

How much do you (actually) work? 2021-05-20T20:04:14.296Z
SiebeRozendal's Shortform 2020-10-06T10:13:10.157Z
Four components of strategy research 2020-01-30T19:08:37.244Z
Eight high-level uncertainties about global catastrophic and existential risk 2019-11-28T14:47:31.695Z
A case for strategy research: what it is and why we need more of it 2019-06-20T20:18:09.025Z

Comments

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T12:45:14.338Z · EA · GW

Oops! Sorry Peter, not my intention at all!

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T12:43:14.746Z · EA · GW

I think this is an excellent contribution to the forum: strong upvote! ;)

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T12:42:43.032Z · EA · GW

Retracting my comment because it's unclear what kind of event (game, ritual, experiment) this is.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T12:03:16.922Z · EA · GW

Yeah, my comments should be read as [in-game] comments, not as [ritual] comments, and I mean it all in good nature!

Damn, the social complexity of this event, combined with the uncertainty about what it even is, quickly made it feel more like a social minefield than a game.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:58:32.512Z · EA · GW

Er... I'm reading Khorton's post now, and apparently people are viewing this game/event thing very differently, so given that meta-uncertainty I am unwilling to ruthlessly strategize.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:53:18.680Z · EA · GW

Also, the reference class of launches doesn't fully represent the current situation: the last launch was more of a self-destruct. This time, launching would harm another website/community, which seems more prohibitive. So I think the prior is lower than 40%.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:50:27.796Z · EA · GW

There is a chance to remove MAD by invalidating Peter's launch codes, per my request.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:46:23.384Z · EA · GW

I have also used my strong downvote capability to reduce the signal of Peter's message. I hereby apologize for any harm outside of this game (to Peter's total karma), but I saw no other way.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:44:25.182Z · EA · GW

I motion to 

  1. remove Peter Wildeford's launch codes from the list of valid launch codes for both this forum and LessWrong. Reason: he clearly does not understand that this precommitment is unlikely to deter any of the 'trusted' LW users from pressing the button (see David Mannheim's comment and the discussion below)
  2. evaluate our method of choosing 'trusted users'. We may want to put specific users who take dangerous actions like these on a blacklist for future instances of Petrov Day.

I would ask how users are chosen, but I imagine that making that knowledge more available would increase the risk that it is misused by nefarious actors.

Comment by SiebeRozendal on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:34:22.386Z · EA · GW

Everyone cares about something, so maybe we should precommit to something more... deterring? It should likely be something that's not really bad, but still somewhat uncomfortable for the person to experience. (I realize that going down this path of thinking might produce actual outside-game harm.)

Comment by SiebeRozendal on Introducing TEAMWORK - an EA Coworking Space in Berlin · 2021-09-27T11:06:36.000Z · EA · GW

I'm curious: how much are you spending on this on a yearly basis, roughly? It seems like a very effective way to develop a really tight and collaborative community.

Comment by SiebeRozendal on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-21T09:54:39.340Z · EA · GW

Linking to an EA Slack is definitely not advertising ;)

Comment by SiebeRozendal on When pooling forecasts, use the geometric mean of odds · 2021-09-07T12:15:01.349Z · EA · GW

Interesting! Seems intuitively right.

I wonder though: how would this affect expected value calculations? Doesn't this have far-reaching consequences? 

One thing I have always wondered about is how to aggregate predicted values that differ by orders of magnitude. E.g. person A's best guess is that the value of x will be 10, person B's guess is that it will be 10,000. Saying that the expected value of x is ~5,000 seems to lose a lot of information. For simple monetary betting, this seems fine. For complicated decision-making, I'm less sure.
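
To make my worry concrete, here is a minimal sketch in Python (my own illustration, not from the post; the post itself is about pooling odds rather than point values):

    import math

    # Two best guesses that differ by three orders of magnitude.
    guesses = [10, 10_000]

    # Arithmetic mean: dominated by the larger guess.
    arith = sum(guesses) / len(guesses)  # 5005.0

    # Geometric mean: the midpoint on a log scale.
    geo = math.exp(sum(math.log(g) for g in guesses) / len(guesses))  # ~316.2

    print(arith, geo)

The two aggregates differ by more than an order of magnitude, which is exactly the kind of information I'm afraid gets lost in a single expected value.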

Comment by SiebeRozendal on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T14:54:12.292Z · EA · GW

(highly speculative and I see a lot of flaws, but I can see it scaling)

EA training institute/alternative university. Kind of like creating Navy SEALs: highly selective, high dropout rate, but produces the most effective people (with a certain goal) in the world.

Comment by SiebeRozendal on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T14:45:25.710Z · EA · GW

let's add a high school/prep school to it ;-)

Seriously though, I think having an institute more separate from academia than GPI would not be great for disseminating research and gaining reputation. It would be nice, though, for training up EA students.

Comment by SiebeRozendal on Example syllabus "Existential Risks" · 2021-07-03T10:53:52.173Z · EA · GW

"2. Judgement calibration test

The Judgement Calibration test is supposed to do two things: first, make sure that students have really read the material and know its content; and second, test whether they can properly calibrate their confidence regarding the truth of their own answers."

This is really cool, Simon, and awesome that you actually got permission to assign actual grades through this mechanism. Curious how it works out in practice!

Comment by SiebeRozendal on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2021-06-06T11:21:53.889Z · EA · GW

On 2: I know very little about the Chernobyl meltdown and meltdowns in general, but those numbers seem to be referring to the actual consequences of the meltdown. My understanding is that there was a substantial emergency reaction that limited the severity of the meltdown. I'm not sure, but I can imagine a completely unmanaged meltdown being substantially worse?

Also on 1: I have no idea how hard it is to turn a nuclear power plant off, but I doubt that it's very easy for outsiders with no knowledge (and who are worried about survival, so don't have time to research how to do it safely?)

Comment by SiebeRozendal on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T11:56:45.789Z · EA · GW

Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change? (vs. the cost of that choice)" and the answer to that seems to be "yes."

Is your issue more along the following?

  1. Humans are bad at estimating very small percentages accurately, and can be orders of magnitude off (and the same goes for astronomical values in the long-term future)
  2. Arguments for the cost-effectiveness of x-risk reduction rely on estimating very small percentages (and the same goes for astronomical values in the long-term future)
  3. (Conclusion) Arguments for the cost-effectiveness of x-risk reduction cannot be trusted.

If so, I would reject 2, because I believe we shouldn't try to quantify things at those levels of precision. This does get us to your question "How does XR weigh costs and benefits?", which I think is a good question to which I don't have a great answer. It would be something along the lines of "there's a grey area where I don't know how to make those tradeoffs, but most things do not fall into the grey area, so I'm not worrying too much about this. If I wouldn't fund something that supposedly reduces x-risk, it's either because I think it might increase x-risk, or because I think there are better options available for me to fund". Do you believe that many more choices fall into that grey area?

Comment by SiebeRozendal on Retrospective on Catalyst, a 100-person biosecurity summit · 2021-06-01T11:49:52.963Z · EA · GW

That sounds like a better title to me :) Kudos on the adaptation.

Comment by SiebeRozendal on Retrospective on Catalyst, a 100-person biosecurity summit · 2021-05-28T14:07:59.486Z · EA · GW

Thanks for the highly detailed post! Seems like it was a cool event.

Nitpicking: this is the second time I've seen an evaluation described as a "postmortem", and it throws me off. To me, "postmortem" suggests the project was overall a failure, while this clearly wasn't! "Evaluation" seems like a better word?

Comment by SiebeRozendal on MSc in Risk and Disaster Science? (UCL) - Does this fit the EA path? · 2021-05-26T11:28:41.435Z · EA · GW

I wrote some thoughts on risk analysis as a career path in my shortform here, which might be somewhat helpful. I echo people's concern that this program focuses too heavily on non-anthropogenic risk.

I also know an EA who did this course - I'll send her details in a PM. :)

Comment by SiebeRozendal on New Job: Manager at Giving Green · 2021-05-18T16:52:00.764Z · EA · GW

Giving Green was fortunate enough to receive a grant from the EA Infrastructure fund, with the express purpose of addressing this criticism, by bringing our methods closer in line to that of the EA community and implementing other suggestions.

This is really interesting! I am happy to see that the cooperative nature of that disagreement is being continued, and I look forward to the progress of the person who ends up taking this role. It sounds like the role requires a very high level of qualifications (good researcher, good ops skills, communications, management...), so I hope you're able to find someone!

Comment by SiebeRozendal on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-16T15:53:29.441Z · EA · GW

I think it stands for "depersonalisation" and "derealisation"

Comment by SiebeRozendal on SiebeRozendal's Shortform · 2021-01-15T09:39:40.439Z · EA · GW

This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!

I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and it's an especially valuable career path for people with an engineering background.

Why I think risk analysis is useful:

EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods well enough to judge when they are useful or not (my past self included). Risk analysis expands this toolkit tremendously, and teaches stuff like the proper use of priors, underlying assumptions of different models, and common mistakes in risk models.

The field of Risk Analysis

Risk analysis is a pretty small field, and most of it is focused on risks of limited scope and risks that are easier to quantify than the risks EAs commonly look at. There is a Society for Risk Analysis (SRA), which publishes the journal Risk Analysis (the main journal of the field). I found most of their study topics not very interesting, but it was useful to get an overview of the field, and there were some useful contacts to make (1). The EA-aligned org GCRI is active and well-established in SRA, but no other EA orgs are.

Topics & advisers

I hoped to work on GCR/x-risk directly, which substantially reduced my options. It would have been useful to just invest in learning a method very well, but I was not motivated to research something not directly relevant. I think it's generally difficult to make an academic career as a general x-risk researcher, and it's easier to research one specific risk. However, I believe this leaves open a number of cross-cutting issues.

I have a shortlist of potential supervisors I considered/contacted/was in conversation with, including in public policy and philosophy. I can provide this list privately on request.

Best grad programs:

The best background for grad school seems to be mathematics or, more specifically, engineering. (I did not have this, which excluded a lot of options.) The following 2 programs seemed most promising, although I only investigated PRGS in depth:

-- 


(1) For example, I had a nice conversation with the famous psychology researcher Paul Slovic, who now does research into the psychology involved in mass atrocities. https://psychology.uoregon.edu/profile/pslovic/

Comment by SiebeRozendal on Key points from The Dead Hand, David E. Hoffman · 2020-12-16T12:11:25.122Z · EA · GW

Good points! I broadly agree with your assessment, Michael! I'm not at all sure how to judge whether Sagan's alarmism was intentionally exaggerated or the result of unintentional poor methodology. Also, I think we need to admit that he was making the argument in a (supposedly) pretty impoverished research landscape on topics such as this. It's only expected that researchers in a new field make mistakes that seem naive once the field is further developed.

I stand by my original point to celebrate Sagan > Petrov, though. I'd rather celebrate (and learn from) someone who acted pretty effectively, even if imperfectly, in a complex situation, than someone who happened to be in the right place at the right time. I'm still incredibly impressed by Petrov though! It's just... hard to replicate his impact.

Comment by SiebeRozendal on Please Take the 2020 EA Survey · 2020-12-02T18:04:00.266Z · EA · GW

Ah yes, that makes sense; I hadn't thought of that.

Comment by SiebeRozendal on Please Take the 2020 EA Survey · 2020-11-21T09:08:42.474Z · EA · GW

Have you considered running different question sets to different people (randomly assigned)?

It could expand the range of questions you can ask.

Comment by SiebeRozendal on SiebeRozendal's Shortform · 2020-10-06T10:13:10.657Z · EA · GW

I have a concept of paradigm error that I find helpful.

A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention when there is deep uncertainty.

Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.

It is related to what I see as

  • parameter errors (= the value of parameters being inaccurate)
  • model errors (= wrong model structure or wrong/missing parameters)

Paradigm errors are one level higher: they are the wrong type of model.


Relevance to EA

I think a sometimes-valid criticism of EA is that it approaches problems with a paradigm that is not well-suited for the problem it is trying to solve.

Comment by SiebeRozendal on jackmalde's Shortform · 2020-10-06T10:01:23.375Z · EA · GW

I agree with this: a lot of the argument (and related things in population ethics) depends on the zero level of well-being. I would be very interested to see more work on figuring out what/where this zero level is.

Comment by SiebeRozendal on Open and Welcome Thread: October 2020 · 2020-10-04T10:16:59.749Z · EA · GW

I have recently been toying with a metaphor for vetting EA-relevant projects: that of a mountain climbing expedition. I'm curious if people find it interesting to hear more about it, because then I might turn it into a post.

The goal is to find the highest mountains and climb them, and a project proposal consists of a plan + an expedition team. To evaluate a plan, we evaluate

  • the map (Do we think the team perceives the territory accurately? Do we agree that the territory looks promising for finding large mountains?) and
  • the route (Does the strategy look feasible?)

To evaluate a team, we evaluate

  • their navigational ability (Can they find & recognise mountains? Can they find & recognise crevasses, i.e. disvalue?)
  • their executive ability (Can they execute their plan well & adapt to surprising events? Can they go the distance?)

Curious to hear what people think. It's got a bit of overlap with Cotton-Barratt's Prospecting for Gold, but I think it might be sufficiently original.

Comment by SiebeRozendal on Founders Pledge Report: Psychedelic-Assisted Mental Health Treatments · 2020-10-01T12:43:10.056Z · EA · GW

Great report! I have two questions for you:

1. On the following:

There are already many ongoing and upcoming high-quality studies on psychedelic-assisted mental health treatments, and there are likely more of those to follow, given the new philanthropic funding that has recently come into the area.​ (p. 45-46)

Based on the report itself, my impression is that high-quality academic research into microdosing and into flow-through effects* of psychedelic use is much more funding-constrained. Have you considered those?


2. Did you consider more organisations than Usona and MAPS? It seems a little bit unlikely that these are the only two organisations lobbying for drug approval?


*The flow-through effects I'm most excited about are a reduction in meat consumption, creative problem solving, and an improvement in good judgment (esp. for high-impact individuals). Effects on long-term judgment seem very hard to research, though.

Comment by SiebeRozendal on Founders Pledge Report: Psychedelic-Assisted Mental Health Treatments · 2020-10-01T12:35:19.527Z · EA · GW

I was confused about the usage of the term drug development as it sounds to me like it's about the discovery/creation of new drugs, which clearly does not seem to be the high-value aspect here. But from the report:

Drug development is a process that covers everything from the discovery of a brand new drug for treatment to this drug being approved for medical use.

Comment by SiebeRozendal on Founders Pledge Report: Psychedelic-Assisted Mental Health Treatments · 2020-10-01T12:26:58.617Z · EA · GW

I speculate that the particulars of the psychedelic experience may drive rescaling like this in an intense way.

I also think that the psychedelic experience, as well as things like meditation, affects well-being in ways that might not be captured easily. I'm not sure if it's rescaling per se. I feel that meditation has not made me happier in the hedonistic sense, but I strongly believe it's made me optimize less for hedonistic wellbeing, and in addition given me more stability, resilience, better judgment, etc.

Comment by SiebeRozendal on How have you become more (or less) engaged with EA in the last year? · 2020-09-16T17:03:51.931Z · EA · GW

I recently moved to a (nearby) EA hub to live temporarily with some other EAs (and some non-EAs), while figuring out my next steps in life/career.

This has considerably increased my involvement. The ability to talk about EA over lunch and dinner, and to join meetups that are 5 minutes away, makes a big difference, as does finding nice people I connect with socially/emotionally.

I suppose COVID had somewhat of a positive influence here too: I am less likely to attend a wide range of events, because I don't know people's approaches to safety. This leaves more time for EA.

Comment by SiebeRozendal on Use resilience, instead of imprecision, to communicate uncertainty · 2020-08-25T06:16:12.777Z · EA · GW

Although communicating the precise expected resilience conveys more information, in most situations I prefer to give people ranges. I find it a good compromise between precision and communicating uncertainty, while remaining concise and understandable for lay people and not losing all those weirdness credits that I prefer to spend on more important topics.

This also helps me epistemically: sometimes I cannot represent my belief state in a precise number because multiple numbers feel equally justified or no number feels justified. However, there are often bounds beyond which I think it's unlikely (i.e. <20% or <10% for my rough estimates) that my estimate would land, even with an order of magnitude more effort.

In addition, I think preserving resilience information is difficult in probabilistic models, but easier with ranges. Of course, resilience can be translated into ranges. However, a mediocre model builder might make the mistake of discarding the resilience if precise estimates are the norm.

Comment by SiebeRozendal on EA Focusmate Group Announcement · 2020-08-17T15:03:29.099Z · EA · GW

Just to clarify: Focusmate isn't meant for talking about your work, so most people don't try to find people with in-depth knowledge. I mostly don't explain things in detail and don't feel like I need to. It's more an accountability thing and a way to share general progress (e.g. "I wanted to get 3 tasks done: write an email, draft an outline for a blog post, and solve a technical issue for my software project. I got 2 of them done, and realized I need to ask a colleague about #3, so I did that instead.").

Comment by SiebeRozendal on CEA's Plans for 2020 · 2020-05-03T12:27:35.068Z · EA · GW

Thanks for the elaborate reply!

I think there's a lot of open space in between sending out surveys and giving people binding voting power. I'm not a fan of asking people to vote on things they don't know about. However, I have something in mind like "inviting people to contribute to a public conversation and decision-making process". Final decision power would still be with CEA, but input is more than one-off, the decision-making is more transparent, and a wider range of stakeholders is involved. Obviously, this does not work for all types of decisions - some are too sensitive to discuss publicly. Then again, it may be tempting to classify many decisions as "too sensitive". In any case, an organisation's "opening up" should be an incremental process, and I would definitely recommend experimenting with more democratic procedures.

Comment by SiebeRozendal on CEA's Plans for 2020 · 2020-04-26T11:23:27.409Z · EA · GW

Hi Max, good to read an update on CEA's plans.

Given CEA's central and influential role in the EA community, I would be interested to hear more about the approach to democratic/communal governance of CEA and the EA community. As I understand it, CEA consults plenty with a variety of stakeholders, but mostly anonymously and behind closed doors (correct me if I'm wrong). I see a lack of democracy and a lack of community support for CEA as substantial risks to the EA community's effectiveness and existence.

Are there plans to make CEA more democratic, including in its strategy-setting?

Comment by SiebeRozendal on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-20T09:34:00.060Z · EA · GW

Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?

I assume this question sits in between "the best lesson to learn" and "the lesson most likely to be learned". We probably want to push a lesson that's useful to learn, and one where our push actually helps bring it into policy.

Comment by SiebeRozendal on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-20T09:28:25.017Z · EA · GW

Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?

Comment by SiebeRozendal on EA should wargame Coronavirus · 2020-03-16T15:08:40.964Z · EA · GW

:(

Comment by SiebeRozendal on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-16T15:05:52.481Z · EA · GW

Not a funding opportunity, but I think a grassroots effort to employ social norms to enforce social distancing could be effective in countries in the early stages where authorities are not enforcing it, e.g. the Netherlands, UK, US, etc.

Activists (student EAs?) could stand with signs in public places asking people non-aggressively to please go home.

Comment by SiebeRozendal on State Space of X-Risk Trajectories · 2020-02-08T16:26:07.995Z · EA · GW

I think this article very nicely undercuts the following piece of common-sense research ethics:

If your research advances the field more towards a positive outcome than it moves the field towards a negative outcome, then your research is net-positive

Whether research is net-positive depends on the field's current position relative to both outcomes (assuming that once either outcome is achieved, the other can no longer be achieved). The article replaces the heuristic above with another one:

To make a net-positive impact with research, the ratio of how much you move the field toward the positive outcome versus toward the negative outcome must be at least the ratio distance-to-positive : distance-to-negative.

If we add uncertainty to the mix, we could calculate how risk-averse we should be (where risk aversion should be larger when the research step is larger, as small projects probably carry much less risk of accidentally making a big step towards UAI).

The ratio and risk aversion could lead to some semi-concrete technology policy. For example, if the distances to FAI and UAI are (100, 10), technology policy could prevent funding any projects that either have a distance-ratio (for lack of a better term) lower than 10 or that have a 1% or higher probability of taking a 10d step towards UAI.
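
To illustrate the distance-ratio test (a minimal sketch; the function and numbers are my own, not from the article):

    def is_net_positive(d_pos, d_neg, step_pos, step_neg):
        # d_pos, d_neg: current distances to the positive and negative outcomes.
        # step_pos, step_neg: how far this research step moves the field toward each.
        if step_neg == 0:
            return step_pos > 0  # pure progress toward the positive outcome
        # Net-positive under the heuristic: the step's progress ratio must be
        # at least the current distance ratio.
        return step_pos / step_neg >= d_pos / d_neg

    # With distances (100, 10), the required ratio is 100/10 = 10:
    print(is_net_positive(100, 10, 20, 1))  # True: 20 >= 10
    print(is_net_positive(100, 10, 5, 1))   # False: 5 < 10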

Of course, the real issue is whether such a policy can be plausibly and cost-effectively enforced or not, especially given that there is competition with other regulatory areas (China/US/EU).

Without policy, the concepts can still be used for self-assessment. And when a researcher/inventor/sponsor assesses the risk-benefit profile of a technology themselves, they should discount for their own bias as well, because they are likely to have an overly optimistic view of their own project.

Comment by SiebeRozendal on Comparing Four Cause Areas for Founding New Charities · 2020-01-24T17:08:00.133Z · EA · GW

I really love Charity Entrepreneurship :) A remark and a question:

1. I notice one strength you mention for family planning is "Strong funding outside of EA" - I think this is a very interesting and important factor that's somewhat neglected in EA analyses because it goes beyond cost-effectiveness. We are not just asking 'given our resources, how can we spend them most effectively?' but the more general (and more relevant) 'how can we do the most good?' I'd like to see 'how much funding is available outside of EA for this intervention/cause area?' as a standard question in EA's cost-effectiveness analyses :)

2. Is there anything you can share about expanding to two of the other cause areas: long-termism and meta-EA?


Comment by SiebeRozendal on Final update on EA Norway's Operations Project · 2020-01-13T21:29:31.622Z · EA · GW

A consulting organisation aimed at EA(-aligned) organisations, as far as I'm aware: https://www.goodgrowth.io/.

Mark McCoy, mentioned in this post, is the Director of Strategy for it.

Comment by SiebeRozendal on Thoughts on doing good through non-standard EA career pathways · 2020-01-11T12:59:02.740Z · EA · GW

This might just be restating what you wrote, but regarding learning unusual and valuable skills outside of standard EA career paths:

I believe there is a large difference in the context of learning a skill. Two 90th-percentile quality historians with the same training would come away with very different usefulness for EA topics if one learned the skills keeping EA topics in mind, while the other only started thinking about EA topics after their training. There is something about immediately relating and applying skills and knowledge to real topics that creates more tailored skills and produces useful insights during the whole process, which cannot be recreated by combining EA ideas with the content knowledge/skills at the end of the learning process. I think this relates to something Owen Cotton-Barratt said somewhere, but I can't find where. As far as I recall, his point was that 'doing work that actually makes an impact' is a skill that needs to be trained, and you can't just first get general skills and then decide to make an impact.

Personally, even though I did a master's degree in Strategic Innovation Management with longtermism ideas in mind, I didn't have enough context and engagement with ideas on emerging technology to apply the things I learned to EA topics. In addition, I didn't have the freedom to apply the skills. Besides the thesis, all grades were based on either group assignments or exams. So some degree of freedom is also an important aspect to look for in non-standard careers.

Comment by SiebeRozendal on Thoughts on doing good through non-standard EA career pathways · 2020-01-11T12:41:58.820Z · EA · GW

Can I add the importance of patience and trust/faith here?

I think a lot of non-standard career paths involve doing a lot of standard stuff to build skill and reputation, while maintaining a connection with EA ideas and values and keeping an eye open for unusual opportunities. It may be 10 or 20 years before someone transitions into an impactful position, but I see a lot of people disengaging from the community after 2-3 years if they haven't gotten into an impactful position yet.

Furthermore, trusting that one's commitment to EA and self-improvement is strong enough to lead to an impactful career 10 years down the line can create a self-fulfilling prophecy where one views their career path as "on the way to impact" rather than "failing to get an EA job". (I'm not saying it's easy to build, maintain, and trust one's commitment though.)

In addition, I think having good language is really important for keeping these people motivated and involved. We have "building career capital" and Tara MacAulay's term "Journeymen", but these are not catchy enough, I'm afraid.

Comment by SiebeRozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:46:33.102Z · EA · GW

(Off-topic @JPAddison/@AaronGertler/@BenPace:)

Is tagging users going to be a feature on the Forum someday? It'd be quite useful! Especially for asking a question of non-OPs where the answer can be shared and would be useful publicly.

Comment by SiebeRozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:43:54.008Z · EA · GW

(@Meta Fund:)

Will any changes be made to the application and funding process in light of how this project went? I can imagine that it would be valuable to plan a go/no-go decision for projects with medium to large uncertainty/downside risk, and perhaps add a question or two (e.g., 'what information would you need to learn to make a go/no-go decision?') if that does not bloat the application process too much. I think this could be very valuable for exploring more risky funding opportunities. For example, a two-stage funding commitment can be made where the involved parties pre-agree to a number of conditions that would decide the go/no-go decision, making follow-up funding much more efficient than going through a completely new funding round.

Comment by SiebeRozendal on Final update on EA Norway's Operations Project · 2020-01-11T11:42:52.485Z · EA · GW

(@Mark McCoy:)

I wonder what is currently happening with Good Growth and how it relates to this current, so-far nameless operations project. It seems like it is an unfunded merging of the two projects? Could you briefly elaborate on the plans and funding situation for the project?