Posts

$3M XPRIZE for a company fighting Malaria 2021-06-28T15:24:28.880Z
A new proposal for regulating AI in the EU 2021-04-26T17:25:07.032Z
Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization 2021-03-04T16:57:44.352Z
List of Under-Investigated Fields - Matthew McAteer 2021-01-30T10:26:32.896Z
Impact of Charity Evaluations on Evaluated Charities' Effectiveness 2021-01-25T13:24:59.265Z
Is Earth Running Out of Resources? 2021-01-02T20:08:59.452Z
Requests on the Forum 2020-12-22T10:42:51.574Z
What are some potential coordination failures in our community? 2020-12-12T08:00:25.858Z
On Common Goods in Prioritization Research 2020-12-10T10:25:10.275Z
Does Qualitative Research improve drastically with increasing expertise? 2020-12-05T18:28:55.162Z
Summary of "The Most Good We Can Do or the Best Person We Can Be?" - a Critique of EA 2020-11-28T07:41:28.010Z
Proposal for managing community requests on the forum 2020-11-24T11:14:18.168Z
Prioritization in Science - current view 2020-10-31T15:22:07.289Z
What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? 2020-10-21T04:44:57.757Z
Criteria for scientific choice I, II 2020-07-29T10:21:30.000Z
Small Research Grants Program in EA Israel - Request for feedback 2020-07-21T08:35:16.729Z
A bill to massively expand NSF to tech domains. What's the relevance for x-risk? 2020-07-12T15:20:21.553Z
EA is risk-constrained 2020-06-24T07:54:09.771Z
Workshop on Mechanism Design requesting Problem Pitches 2020-06-02T06:28:04.538Z
What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z
What is the size of the EA community? 2019-11-19T07:48:31.078Z
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z
Off-Earth Governance 2019-09-06T19:26:26.106Z
edoarad's Shortform 2019-08-16T13:35:05.296Z
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z

Comments

Comment by EdoArad (edoarad) on Testing Newport's "Digital Minimalism" at CEEALAR · 2021-07-25T08:33:44.981Z · EA · GW

Cool! Looking forward to the results and your takeaways :)

Comment by EdoArad (edoarad) on Propose and vote on potential EA Wiki entries · 2021-07-19T15:24:27.632Z · EA · GW

What about posts that discuss personal career choice processes (like this)?

Comment by EdoArad (edoarad) on The act of giving itself has positive impact · 2021-07-18T11:24:52.581Z · EA · GW

Ah! Thanks, this makes more sense to me :) 

I'd be interested in more information about what the positive impact is and how large it is. I'm assuming you're thinking less of the effects of giving on happiness and more of some cultural change that generally makes people more moral?

Comment by EdoArad (edoarad) on Should someone start a grassroots campaign for USA to recognise the State of Palestine? · 2021-07-18T11:13:02.613Z · EA · GW

I only just saw this post. We definitely want to look more into the Israel-Palestine conflict in EA Israel. I'm personally a bit skeptical about the potential tractability and neglectedness of this cause in general, and this space is politically hazardous, but I think we may be able to find good opportunities for people in Israel interested in working on this.

Comment by EdoArad (edoarad) on The act of giving itself has positive impact · 2021-07-17T18:31:51.262Z · EA · GW

Why do you think giving by itself might have a negative impact? 

Comment by EdoArad (edoarad) on The case against “EA cause areas” · 2021-07-17T15:34:08.214Z · EA · GW

Also, The fidelity model of spreading ideas

Comment by EdoArad (edoarad) on Arne's Shortform · 2021-07-16T15:21:38.935Z · EA · GW

Related - The Upper Limit of Value

Comment by EdoArad (edoarad) on Intervention report: Agricultural land redistribution · 2021-07-14T21:21:36.762Z · EA · GW

It's really exciting for me to see this thorough investigation into a neglected area that I'd never heard of, even though the intervention turns out to be unlikely to be cost-effective.

I'm curious, what prompted you to start this investigation? How did you discover How Asia Works, or how did you otherwise learn about this suggested intervention?

Also, how excited would you be about further research into Land Reform (both more on Land Redistribution and on Land Tenure Reforms)?

Comment by EdoArad (edoarad) on [linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." · 2021-07-09T06:19:56.577Z · EA · GW

Thanks for making this :)

Some technical suggestions for these posts: 

  1. Instead of writing "[linkpost]", use the link feature.
  2. It'd be easier for forum readers if the title were shorter. So, maybe just use something like Narration: "name of post".
  3. It would be nice if the posts were all together in a sequence.
  4. I suggest using pretty much the same tags as the original post (not "forum prize", though).
  5. I've suggested to JP (the developer of the forum) that posts tagged with audio should have an icon.

Comment by edoarad on [deleted post] 2021-07-08T17:27:47.208Z

psychology of giving? 

Comment by EdoArad (edoarad) on edoarad's Shortform · 2021-07-05T08:32:03.752Z · EA · GW

GiveWell got about $33.5M in Ethereum donations and $3.5M in Bitcoin donations

Comment by EdoArad (edoarad) on [Link] Reading the EA Forum; audio content · 2021-07-05T07:42:19.218Z · EA · GW

One important difference is that the EA Forum is a continuous stream, and people probably mostly read posts via the frontpage feed rather than looking directly for information (which is probably more the case for the skills profiles).

Comment by EdoArad (edoarad) on List of EA-related organisations · 2021-07-05T07:39:16.828Z · EA · GW

Turns out that 80k just published a talk with Max Roser (who leads OWID). He seems to be at least well acquainted with EA, and funded by EAs:

Max Roser: But still, I think we should do it. And I also saw on some effective altruism forums online that people are discussing that question, like how good of an idea is it to donate to Our World in Data. And they were relying on some of the information that was publicly available, but I think we could do a better job, when we have some time, to provide more of the information that those people discussed. And some of them also ended up donating. We got several grants in the last few years from effective altruist-aligned donors.

Comment by EdoArad (edoarad) on List of EA-related organisations · 2021-07-05T07:24:23.140Z · EA · GW

I was surprised to see Our World In Data on this list. Which of the criteria holds?

  • Have explicitly aligned themselves with EA
  • Are currently recommended by GiveWell or Animal Charity Evaluators
  • Were incubated by Charity Entrepreneurship
  • Have engaged with the EA community (e.g. by posting on the EA Forum or attending EA Global)

Comment by EdoArad (edoarad) on Big List of Cause Candidates · 2021-07-05T07:10:02.120Z · EA · GW

Related - Problem areas beyond 80,000 Hours' current priorities (Jan 2020).

From there, at least Migration Restrictions and Global Public Goods seem to be missing from this list

Comment by EdoArad (edoarad) on Which EA forum posts would you most like narrated? · 2021-07-03T07:44:54.638Z · EA · GW

Ah, I see! Yea, the way it's sorted makes it very confusing (it's based on the tag upvotes, which is rather irrelevant here)

Comment by EdoArad (edoarad) on Which EA forum posts would you most like narrated? · 2021-07-03T04:52:29.415Z · EA · GW

The Forum Prize is ongoing; the most recent is for March (and I guess the April edition should be out soon).

Comment by EdoArad (edoarad) on Which EA forum posts would you most like narrated? · 2021-07-02T06:00:44.104Z · EA · GW

How about the posts that won the Forum Prize?

Comment by EdoArad (edoarad) on EA needs consultancies · 2021-07-01T08:28:47.452Z · EA · GW

Do you, or anyone else, have some more insight into the consultancy work that's needed around statistics and data science?

Comment by EdoArad (edoarad) on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-07-01T07:32:25.144Z · EA · GW

But I'm not that concerned about that! I'm sure that you can handle the upcoming strategic considerations and prioritization. It was mostly important for me to add that comment for readers who might make the mistake of taking your results as a prioritization within IIDM

Comment by EdoArad (edoarad) on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-07-01T07:28:55.223Z · EA · GW

Yea, mostly 😊 There might be a problem in the other direction, where people who took the survey had in mind "which topics are high priority", and that may have caused either early elimination of potentially relevant topics or, more likely I think, a scope creep where somewhat related topics that seem important might find their way in.

Comment by EdoArad (edoarad) on EA needs consultancies · 2021-06-28T15:41:12.689Z · EA · GW

The EA Infrastructure Fund seems like the go-to place to support such projects if anyone reading this is up for it (other than perhaps Open Phil, where lukeprog works; he may give more information if that's relevant). They are actively encouraging people to apply, and you can apply at any time.

So if you think that you may be a good fit for setting up a project or service along these lines, now would be a great opportunity to do that!

Comment by EdoArad (edoarad) on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-06-27T17:56:57.567Z · EA · GW

I like the move from IIDM to Effective Institutions :)

Comment by EdoArad (edoarad) on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-06-27T17:56:00.691Z · EA · GW

Ian David Moss mentioned some overlap between IIDM and "improving science", which is something I've been thinking about a bit lately. From this survey, I think that overlap exists in at least:

  1. Funding mechanisms.
  2. Designing mechanisms to align incentives more with social or scientific merit.
  3. Maybe anything else that makes academic institutions more efficient.

Comment by EdoArad (edoarad) on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-06-27T17:43:31.278Z · EA · GW

Thanks for publishing the results of the survey! I found it very interesting and informative. 

When addressing the scope of IIDM, I think that the survey might conflate which topics can be considered in scope (given the approaches and goals taken) with which topics should be prioritized when we want to improve decision making. So, for example, I wouldn't take these results to mean that "Whole-institution governance, leadership, and culture" should be the top-priority topic within IIDM.

Comment by EdoArad (edoarad) on Intactivism as a potential Effective Altruist cause area? · 2021-06-26T07:53:10.301Z · EA · GW

Thanks for an interesting new cause area! I found myself feeling uneasy about a potential controversy here, so here are my 2 cents on the matter.

The large elephant that remains unaddressed in this analysis is that circumcision is done in large part for religious reasons. In Israel, at least, any policy or procedural change toward intactivism is likely to be very hard and to encounter a lot of resistance.

More broadly, taking action on causes that are explicitly against other people's moral agenda is risky for the reputation of the people involved, or of the EA movement if it's done under that name. If this cause does in fact appear promising under further investigation, I recommend that whoever takes action on this consult with CEA.

That said, I think it is very important to figure out the most promising causes and how to do the most good, even if the results might clash with other people's beliefs. So again, thanks for raising and arguing for a potentially controversial cause.

Comment by EdoArad (edoarad) on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-14T14:58:07.693Z · EA · GW

Great points! Re peer review, I think that your argument makes sense, but I feel like most of the impact on quality from better peer review would actually come from raising standards for the field as a whole, rather than from the direct impact on the papers that didn't pass peer review. I'd love to have a much clearer analysis of the whole situation :)

Comment by EdoArad (edoarad) on MichaelA's Shortform · 2021-06-13T07:22:44.118Z · EA · GW

Hey schethik, did you make progress with this?

Comment by EdoArad (edoarad) on A proposal for a small inducement prize platform · 2021-06-06T11:09:46.209Z · EA · GW

Somewhat related, and potentially relevant if someone sets this up:

  1. The Nonlinear Fund wrote up why they use RFPs (Requests For Proposals). 
  2. Certificates of Impact.
  3. There is an upcoming project platform for EAs, designed to coordinate projects with volunteers. A forum post should be out soon, but meanwhile you can see a prototype here.

Comment by EdoArad (edoarad) on What is meta Effective Altruism? · 2021-06-02T07:43:37.723Z · EA · GW

Is it common to include GPR when talking about Meta-EA?

Comment by EdoArad (edoarad) on ESG investing needs thoughtful trade-offs · 2021-05-27T09:41:57.821Z · EA · GW

I'm very excited to see your blog on this topic! The Social Impact / ESG Investing communities seem like strong movements that could be very impactful. I think that the points you raise here and in your previous post could be very influential if implemented. 

Do you want your blog posts to be shared outside of the EA community already, or would you prefer to wait on that?

Comment by EdoArad (edoarad) on saulius's Shortform · 2021-05-25T08:40:32.980Z · EA · GW

Interesting! 

Other analogies might be human rights and carbon emissions, as used in politics. Say Party A cares about reducing emissions; then the opposing Party B has an incentive to appear as though they don't care about it at all, and even to propose actions that would increase emissions, so that they can trade "not doing that" for some concession from Party A. I'm sure that we could find lots of real-world examples of that.

Similarly, some (totalitarian?) regimes might have an incentive to make major parts of the population politically perceived as unworthy and let them have a very poor lifestyle, so that other countries who care about that population would be open to trades in which helping those people counts as a benefit for those other countries.

Comment by edoarad on [deleted post] 2021-05-24T07:06:22.262Z

Recommender systems:

https://forum.effectivealtruism.org/posts/sX6mkaNN7mEKuWdRi/looking-for-collaborators-after-last-80k-podcast-with

https://forum.effectivealtruism.org/posts/CHfuH58thMHPN8zHX/is-there-evidence-that-recommender-systems-are-changing

https://forum.effectivealtruism.org/posts/xzjQvqDYahigHcwgQ/aligning-recommender-systems-as-cause-area

https://forum.effectivealtruism.org/posts/zAsL6P4jPncK9zwBH/is-the-youtube-algorithm-radicalizing-you-it-s-complicated

https://forum.effectivealtruism.org/posts/E4gfMSqmznDwMrv9q/are-social-media-algorithms-an-existential-risk 

misinformation proper:

https://forum.effectivealtruism.org/posts/pYHZ8dhZWPCSZ66dX/i-knew-a-bit-about-misinformation-and-fact-checking-in-2017

https://forum.effectivealtruism.org/posts/ixLPyMNCLH2Jg7aBc/ea-philly-s-infodemics-event-part-1-jeremy-blackburn and https://forum.effectivealtruism.org/posts/qsiFQyihEuQEeNsfJ/ea-philly-s-infodemics-event-part-2-aviv-ovadya 

https://forum.effectivealtruism.org/posts/SYzqYssnPTj7iMg8Y/day-one-project-technology-policy-accelerator 

https://forum.effectivealtruism.org/posts/vj3yGmc46nT4YY2K4/my-preliminary-research-on-the-adtech-marketplace 

sort of related:

https://forum.effectivealtruism.org/posts/Xjo23zhn6CPoijLSo/a-love-letter-to-civilian-osint-and-possibilities-as-a-tool

https://forum.effectivealtruism.org/posts/b4DC6o4vuPrncczQ2/are-we-doomed-memos#April_15___Cyber___Memos___Issue__9___jamesallenevans_AreWeDoomed__github_com_ 

https://forum.effectivealtruism.org/posts/SitudkgqA5Gnwfxdz/ea-considerations-regarding-increasing-political

Comment by edoarad on [deleted post] 2021-05-23T15:24:17.005Z

Ah, I was thinking of Aligning Recommender Systems. I will find more relevant posts tomorrow

Comment by edoarad on [deleted post] 2021-05-23T14:25:43.051Z

How about something like misinformation (Cause Area)? There are several posts on the topic and it appears under 80K's list of potential cause areas. 

This would be a subset of all forms of "improving collective epistemics", but I think it's a widely enough discussed topic that it makes sense to have it as a tag by itself.

Comment by EdoArad (edoarad) on MichaelA's Shortform · 2021-05-22T14:49:05.401Z · EA · GW

Ah, right! There still might be a need outside of longtermist research, but I definitely agree that it'd be very useful to reach out to them to learn more.

For further context for people who might potentially go ahead with this, BERI is a nonprofit that supports researchers working on existential risk. I guess that Sawyer is the person to reach out to.

Comment by EdoArad (edoarad) on MichaelA's Shortform · 2021-05-22T06:12:21.126Z · EA · GW

One idea that comes to mind is to set up an organization that offers RAs-as-a-service. Say, a nonprofit that works with multiple EA orgs and employs several RAs, some full-time and others part-time (think: a student job). This org could then handle recruiting, basic training, employment, and some of the management. RAs could work on multiple projects with perhaps multiple different people, and tasks could be delegated to the organization as a whole, which would find the right RA for each task.

A financial model could be something like: EA orgs pay 25-50% of the relevant salaries for projects they recruit RAs for, and the rest is covered by donations to the nonprofit itself.

Comment by EdoArad (edoarad) on MichaelA's Shortform · 2021-05-22T06:04:38.548Z · EA · GW

Regarding the third bullet point, there might be a nontrivial boost to senior researchers' productivity and well-being.

Doing grunt work can be disproportionately tiring and demotivating relative to the time it takes, and most people have some type of work that they dislike or just aren't good at, which could perhaps be delegated. Additionally, having a (strong and motivated) RA might just be more fun and help make personal research projects more social and meaningful.

Regarding the salary, I've quickly checked GiveWell's salaries on Glassdoor.

So from that I'd guess that an RA could cost about 60% as much as a senior researcher. (I'm sure that there is better and more relevant information out there)

Comment by EdoArad (edoarad) on edoarad's Shortform · 2021-05-22T05:18:15.126Z · EA · GW

A useful meme: 

Comment by edoarad on [deleted post] 2021-05-21T15:59:10.954Z

I think that this should be merged with https://forum.effectivealtruism.org/tag/immigration-reform

Comment by EdoArad (edoarad) on EA Survey 2020: How People Get Involved in EA · 2021-05-20T07:31:08.410Z · EA · GW

Thanks, looking forward to the full post :)

Comment by EdoArad (edoarad) on EA Survey 2020: How People Get Involved in EA · 2021-05-20T07:30:26.640Z · EA · GW

Thanks, yea! 

Going over this, I don't see anything particularly interesting. It looks like the ratio of highly engaged to not highly engaged people for each factor is about the same for males and females in almost all categories. Some of the slight differences that I could find:

Males who rated EAG as important were about twice as likely to be not highly engaged compared to non-males (though the error is high here).

The share of not highly engaged non-males who had 'personal connection' as an important factor for involvement was slightly higher than the male counterpart. This slightly reduces the gap between males and non-males in how important 'personal connection' is for involvement among people who are highly engaged.

Comment by EdoArad (edoarad) on EA Survey 2020: How People Get Involved in EA · 2021-05-19T18:45:14.814Z · EA · GW

Is it possible to get the 'importance for involvement' results for all four combinations of Gender × Engagement? I'd like to understand whether the high engagement of people who marked 'personal contact' or 'group' as important for their involvement could be partially explained by their gender, or something of this sort. Doing the same with Race could also be interesting.

Comment by EdoArad (edoarad) on My attempt to think about AI timelines · 2021-05-19T06:01:14.583Z · EA · GW

Thanks for sharing the full process and your personal takeaways! 

Comment by edoarad on [deleted post] 2021-05-17T19:33:18.472Z

Hey DanF, I assume that this is intended to go on the recent AMA?

To participate in that and ask questions, go to the post and copy/paste this question to the comment section there:

Comment by EdoArad (edoarad) on [Link] 80,000 Hours Nov 2020 annual review · 2021-05-17T10:10:33.195Z · EA · GW

I'm trying to make a rough estimate of the value of doing local career advice (and writing this as I think it through).

The leading metric used by 80k is the DIPY (Discounted, Impact-adjusted Peak Year). This table estimates how many DIPYs are produced per employee working on advising (not necessarily the people doing the advising themselves), measured per FTE (full-time equivalent for a year).

One DIPY is roughly the (time discounted) value we expect someone to produce each year during the most productive few decades (the ‘peak’) of their career if they’re about as promising as someone taking an effective altruist approach to improving the world who is currently in a position such as: 

  • a postdoc focused on a top problem area at a top 30 world university, 
  • one of the 20 most important operations roles in our top problem areas, 
  • an ML PhD at a top 3 university with the aim of working on AI Safety or policy.

The results of 80k's career advice per year are:

  • 2017: 10.5 DIPY/FTE
  • 2018: 3.6 DIPY/FTE
  • 2019: 4.0 DIPY/FTE

(2016 had 41.5 DIPY/FTE! But they only list 0.2 FTE for that year, compared to 2.5-3 in later years, while they had about 100 advising calls rather than about 250 in later years.)

2017 is addressed as an interesting anomaly, with possible explanations given here. They write:

Our guesses as to why:

  1. Web engagement hours in 2017 were 6x higher than in 2015, which may have let us reach a new audience and uncover low-hanging fruit.
  2. Peter McIntyre (predominant 2017 advisor) might be unusually effective.
  3. In 2017, advising was more forceful in encouraging AI safety work, which might have driven more plan changes. The downside is that we probably pushed some people too hard, as reported in our 2018 annual review.
  4. We wrote the career guide and some of our most popular articles in 2015–2016 but didn’t generate large amounts of traffic until 2017. 2017 website plan changes therefore likely relied particularly heavily on work from earlier years. 
  5. The apparent pattern could also be partly due to errors in our estimates.

These numbers are good, but not astoundingly good, so overall it makes sense to me, even if it's less than I'd hoped for. Anything greater than 1 DIPY/FTE should be considered a success if 80k employees are valued at 1 DIPY/year (although both Michelle and Habiba are potentially valued at more than that).

So now we need to understand how local career advice compares to 80k's. 

From this section of the report:

The aim of the one-on-one team is to:

  1. Identify the most promising new readers who haven’t yet engaged in person.
  2. Activate them by providing — over the course of 1–2 calls and follow-up emails — (i) introductions to people and jobs, (ii) encouragement (e.g. making it clear that EA has a place for them, or that they have a good career plan), and (iii) basic checks on their plan (e.g. adding options, recommending further reading).

...

We analysed recent plan changes and concluded that the top three value adds are (in order): (i) introductions to people and jobs, (ii) encouragement (as defined above), and (iii) basic checks on their plan and information (as defined above).

[Ben/Michelle, is that analysis available?]

Let's see how these three value adds apply to local groups. I expect introductions to be of much lower value (as the available network is likely much smaller), encouragement to be of potentially about equal value (this can, and should, be practiced and done well), and basic checks to be somewhat less valuable (as these demand skill and a good framework - this probably varies a lot between groups). Also, I expect 80k to appear higher status to advisees than local groups do, which probably gives more weight to their advice.

First, this is prescriptive for local groups. They should:

  1. Make an effort to expand their network.
  2. Put more effort into making introductions. Even if the advisor is not familiar with anyone relevant, they should help promising people reach out to people in the broad EA network (cold emails or sending a direct message on EAHub should work well most of the time).
  3. Become more familiar with top local job opportunities.
  4. Be more encouraging, and practice doing it well.
  5. Recommend that they also apply for 80k coaching (or AAC / Effective Thesis).

Also, I recommend this guide for conducting career consultation.

Furthermore, in Appendix A 80k offers two "alternative visions of advising":

EA welcoming committee

  • Minimal selection and prep, and speak to a larger number of people
  • Focus on being encouraging and welcoming; de-emphasise providing specific advice

EA mentoring

  • Work with a smaller number of people but have lots of meetings until they’re fully up-to-speed with EA.

For local groups it might make sense to focus on the first when doing career advice. I think it could be harder for them to recognize and target the most promising people, and mentoring requires more EA expertise. More importantly, I think that local groups in general should be welcoming and less elitist, and offering career advice services is a great way to do that (and to signal to the relevant audience). That said, I do think that doing "EA mentoring" is very important (in fact, we are doing something similar in EA Israel and it seems to be successful); I just recommend it in addition to the career advice.

Secondly, this makes me expect that, overall, career consultation services done locally can perform quite well compared to 80k's: say, 2-10 times less value, with most of the loss coming from a poorer network. However, I also expect local groups to have much more overhead, because they aren't as skilled and have poorer infrastructure: say, they take a lot of time to prepare and write a follow-up email, 1-1 sessions take longer to be fruitful, or more time is spent building a network to make introductions. So all in all I'd guess something like 4-30 times less effective.

However, the people who apply for local career consultation are a different demographic. They are probably less promising than people who get advised by 80k (at least within longtermism) and might be more closely connected to people in the local group (which could help with setting long-term accountability but might have some complicated dynamics).

I think that this probably lowers the total efficiency substantially (say 2-3 times, but it could be much more). So in total, I'd guess local career advice to be about 10-100 times less effective than 80k coaching. This is highly uncertain; I'd like to know a lot more about 80k's process and the analysis of how different factors contribute to its success (and how dependent that is on the advisor). Also, there might be more important indicators of success, like involvement with the local group / EA community, or having a better understanding of (and warmer feelings toward) EA.
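
As a quick sanity check on the interval arithmetic above, here is a minimal sketch (the combine helper is just for illustration, and treating the three factors as independent multiplicative penalties is a simplifying assumption):

```python
def combine(*factors):
    """Multiply interval estimates (lo, hi) elementwise."""
    lo = hi = 1.0
    for f_lo, f_hi in factors:
        lo *= f_lo
        hi *= f_hi
    return lo, hi

value_loss = (2, 10)    # 2-10x less value, mostly from a poorer network
overhead = (2, 3)       # extra overhead: less skill, poorer infrastructure
demographics = (2, 3)   # less promising applicant pool

print(combine(value_loss, overhead))                # (4.0, 30.0), i.e. "4-30x"
print(combine(value_loss, overhead, demographics))  # (8.0, 90.0), i.e. roughly "10-100x"
```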

If that's the case, that still feels like a good use of the local group's resources. I expect most people to have a hard time doing work that's 10-100 times as good as what Michelle and Habiba can do. Also, for some (most?) people, giving career advice is a highly rewarding experience in itself and could be a valuable learning experience for people early on in their careers. At least for me (and at least at this time in my life, where I'm recovering from burnout), it seems to take less energy than other work.

Comment by EdoArad (edoarad) on Why should we *not* put effort into AI safety research? · 2021-05-16T06:30:33.160Z · EA · GW

A related talk by Ben Garfinkel that raises some specific questions about AI Safety and calls for further investigation - How Sure Are We About This AI Stuff?

Comment by EdoArad (edoarad) on HIPR: A new EA-aligned policy newsletter · 2021-05-12T11:41:36.846Z · EA · GW

I agree. Generally, we are not at a point where anyone should be concerned with cluttering the forum - the Karma and tag systems help take care of that.

Comment by EdoArad (edoarad) on Should you do a PhD in science? · 2021-05-09T09:42:57.857Z · EA · GW

See also http://www.shouldyouphd.com/

Comment by EdoArad (edoarad) on The EA Forum Editing Festival has begun! · 2021-05-06T11:55:01.562Z · EA · GW

Well, I'm out... I did basically nothing, sorry! 😊 (It's a good thing that editing the tag-wiki is still available after the Festival.)