Posts

What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z · score: 12 (7 votes)
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z · score: 16 (8 votes)
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z · score: 28 (20 votes)
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z · score: 39 (23 votes)
What is the size of the EA community? 2019-11-19T07:48:31.078Z · score: 24 (8 votes)
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z · score: 51 (30 votes)
Off-Earth Governance 2019-09-06T19:26:26.106Z · score: 11 (5 votes)
edoarad's Shortform 2019-08-16T13:35:05.296Z · score: 3 (2 votes)
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z · score: 21 (9 votes)
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z · score: 11 (7 votes)
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z · score: 9 (6 votes)
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z · score: 1 (2 votes)
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z · score: 12 (5 votes)
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z · score: 8 (4 votes)
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z · score: 12 (6 votes)

Comments

Comment by edoarad on Research proposals? · 2020-05-26T11:15:45.650Z · score: 2 (2 votes) · EA · GW

Jaime Sevilla gave detailed advice on how to generate research proposals, which might also be useful.

Comment by edoarad on When can I eat meat again? · 2020-05-25T15:18:42.390Z · score: 3 (2 votes) · EA · GW

I was a bit surprised to read what you wrote about Cultivated Meat. I am not an expert, but I've looked into this topic, and my understanding is that there are fundamental technical challenges still to be solved, at least in cell expansion, the rate and specificity of cell growth, and the creation of thick cuts of any tissue. I'm sure that these can be solved in the end, but they seem very difficult (considering that cell expansion is also needed for making blood cells and other non-tissue cell types in the much more heavily funded biomedical field, which is also less bottlenecked by medium cost).

I understand that today it may be possible to make some hybrid products, but that these won't really be similar to the real thing. Is this similar to your view?

Comment by edoarad on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-18T07:33:29.242Z · score: 5 (4 votes) · EA · GW

Regarding the possibility of Extinction-level agents, there have been at least two species extinctions that likely resulted from pathogens (here, or on Sci-Hub).

Also, the Taíno people were driven nearly to extinction, and that may have been mostly the result of disease, though this seems contested:

In thirty years, between 80% and 90% of the Taíno population died.[76] Because of the increased number of people (Spanish) on the island, there was a higher demand for food. Taíno cultivation was converted to Spanish methods. In hopes of frustrating the Spanish, some Taínos refused to plant or harvest their crops. The supply of food became so low in 1495 and 1496, that some 50,000 died from the severity of the famine.[77] Historians have determined that the massive decline was due more to infectious disease outbreaks than any warfare or direct attacks.[78][79] By 1507, their numbers had shrunk to 60,000. Scholars believe that epidemic disease (smallpox, influenza, measles, and typhus) was an overwhelming cause of the population decline of the indigenous people,[80] and also attributed a "large number of Taíno deaths...to the continuing bondage systems" that existed.[81][82] Academics, such as historian Andrés Reséndez of the University of California, Davis, assert that disease alone does not explain the total destruction of indigenous populations of Hispaniola.

These two cases actually lower my fear of naturally occurring pandemics, because if extinction-level pathogens were common I'd expect to find more evidence of cases like these. This in turn also slightly lowers my credence in the plausibility of engineered pandemics.

I'm sure that other people here are much more knowledgeable than I am, and this brief analysis might be misleading.

Comment by edoarad on Long-Term Future Fund and EA Meta Fund applications open until June 12th · 2020-05-15T15:04:33.391Z · score: 1 (1 votes) · EA · GW

Yes, thank you

Comment by edoarad on Long-Term Future Fund and EA Meta Fund applications open until June 12th · 2020-05-15T13:07:22.887Z · score: 5 (4 votes) · EA · GW

Are the grants decided by taking the top applications or by passing some bar? 

Comment by edoarad on Modelers and Indexers · 2020-05-13T06:01:27.111Z · score: 3 (2 votes) · EA · GW

This reminded me of the Birds and Frogs distinction among mathematicians.

In a very shallow literature search, I found this review of the cognitive diversity literature. The closest thing there is diversity in problem-solving style, which covers only the Adaptors-Innovators distinction; that may be slightly correlated, but it is a different thing.

Comment by edoarad on Forecasting Newsletter: April 2020 · 2020-05-06T13:56:22.374Z · score: 6 (2 votes) · EA · GW

I've written this interactive notebook on the Foretold prediction platform. It is meant to be completely beginner-friendly and takes about 2 hours to go through. I've used it as the basis for a workshop, and the accompanying slides can be found at the bottom of the notebook.

From the notebook:

In this interactive notebook, our goal is to actively try out forecasting and learn several basic tools. After this, you will be able to more easily use forecasts in your daily life and decision making, understand broadly how forecasters go about predicting stuff, and you should know if this is something you want to dive into deeper and how to go about that. We have 5 sections:

  1. We will start immediately with several examples.
  2. Then go on to understand what probabilities feel like, and how to be more calibrated.
  3. Work on the technique of outside view and inside view reasoning.
  4. Briefly discuss several interesting techniques - research, combining models and changing scope.
  5. Try out some actual forecasts from start to finish!

Comment by edoarad on A list of EA-related podcasts · 2020-05-03T09:48:23.242Z · score: 1 (1 votes) · EA · GW

I like that podcast a lot! I suggest skipping directly to 31:20, the second part where Singer comes in, unless you are interested in half an hour of discussion about typography :)

Comment by edoarad on Reducing long-term risks from malevolent actors · 2020-05-01T06:12:03.635Z · score: 6 (5 votes) · EA · GW

Thanks for a very thorough and interesting report! 

It seems plausible that institutional mechanisms that prevent malevolent use of power may already work well in today's democracies. I think that this comparison is very important for understanding the value of the suggested interventions. You have briefly touched on this:

Overall, it seems plausible that many promising political interventions to prevent malevolent humans from rising to power have already been identified and implemented—such as, e.g., checks and balances, the separation of powers, and democracy itself. After all, much of political science and political philosophy is about preventing the concentration of power in the wrong hands.[26] We nevertheless encourage interested readers to further explore these topics.

If these mechanisms are actually working quite well today, this somewhat lowers the importance of the suggested interventions. The analysis given above is mostly for non-modern institutions, but perhaps the court system, democracy, and transparency have evolved so that malevolent actors cannot really do much harm (or so that it will be harder for them to gain power).

Also, the major alternative to reducing the influence of malevolent actors may lie in improving institutional decision-making itself, or in structural interventions. AI governance as a field seems to mostly go that route, for example.

That said, I think that efforts going into your suggested interventions are largely orthogonal to these alternatives (and the two might actually support one another). Also, I intuitively find your arguments quite compelling.

Comment by edoarad on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-25T11:38:01.404Z · score: 2 (2 votes) · EA · GW

make sure to put in a random salt 🤠
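
(For anyone puzzled by this exchange: the idea is a hash commitment. Below is a minimal sketch in Python; it is purely illustrative, assumes nothing about how the actual hash in question was produced, and the guess string and function names are hypothetical. Publishing only the hash of a short guess lets others brute-force it, which is why the random salt matters; revealing the guess and salt later proves the commitment.)

```python
import hashlib
import secrets

def commit(guess: str) -> tuple[str, str]:
    """Return (commitment, salt). Publish the commitment; keep guess and salt private."""
    salt = secrets.token_hex(16)  # random salt, so short guesses can't be brute-forced
    commitment = hashlib.sha256((salt + guess).encode()).hexdigest()
    return commitment, salt

def verify(guess: str, salt: str, commitment: str) -> bool:
    """Check a later-revealed (guess, salt) pair against the published commitment."""
    return hashlib.sha256((salt + guess).encode()).hexdigest() == commitment

commitment, salt = commit("my guessed result")        # hypothetical guess
print(commitment)                                     # safe to post publicly
print(verify("my guessed result", salt, commitment))  # True
```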

Comment by edoarad on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-25T06:21:48.164Z · score: 2 (2 votes) · EA · GW

Is this a hash on a guessed result or something like that?

Comment by edoarad on A central directory for open research questions · 2020-04-25T06:04:26.689Z · score: 2 (2 votes) · EA · GW

I'm very interested in the work you are doing at READI, and it would be great to discuss ideas and collaborate. 

(by the way, what does READI stand for?)

Comment by edoarad on Does Utilitarian Longtermism Imply Directed Panspermia? · 2020-04-24T18:23:56.460Z · score: 2 (2 votes) · EA · GW

Panspermia (from Ancient Greek πᾶν (pan), meaning 'all', and σπέρμα (sperma), meaning 'seed') is the hypothesis that life exists throughout the Universe, distributed by space dust,[1] meteoroids,[2] asteroids, comets,[3] planetoids,[4] and also by spacecraft carrying unintended contamination by microorganisms.[5][6][7] Distribution may have occurred spanning galaxies, and so may not be restricted to the limited scale of solar systems.[8][9]

From Wikipedia. 

Sounds interesting! The article on directed panspermia includes an ethical objection from a welfarist perspective:

A third argument against engaging in directed panspermia derives from the view that wild animals do not —on the average— have lives worth living, and thus spreading life would be morally wrong. Ng supports this view,[36] and other authors agree or disagree, because it is not possible to measure animal pleasure or pain. In any case, directed panspermia will send microbes that will continue life but cannot enjoy it or suffer. They may evolve in eons into conscious species whose nature we cannot predict. Therefore, these arguments are premature in relation to directed panspermia.

I have not read about the subject deeply. Is panspermia close to being plausible?

Comment by edoarad on edoarad's Shortform · 2020-04-24T13:42:22.339Z · score: 1 (1 votes) · EA · GW

Perhaps some EA orgs can distribute "impact shares" of their organization/project to volunteers based on their success, where 'impact prizes' are given by a third party, perhaps much later on. That may have much more motivational value than paying a similar (comparably very small) amount of money, and the records of such shares might enable some sort of better vetting mechanism.

Comment by edoarad on The Case for Impact Purchase | Part 1 · 2020-04-23T20:46:21.502Z · score: 1 (1 votes) · EA · GW

Regarding impactpurchase.org, there is some discussion in this comment thread.

Comment by edoarad on The Case for Impact Purchase | Part 1 · 2020-04-23T20:43:47.132Z · score: 1 (1 votes) · EA · GW

This is very similar to this notion of impact prizes. The main difference there seems to be that there is a specific allotted sum of money for a variety of possible projects, which share that allotted amount proportionally to their estimated impact.

I think that the downside of impact prizes compared to Conditional Impact Finance is mainly that they are much more volatile for investors, both because of dependencies between different projects and, to some extent, because of the continuum of possible values of estimated impact. They are also much harder on the donors. There is also the problem that it may become clear that other competing projects are closing in on something much better (a competitor with 9x the impact is enough to limit the prize to 10% of the original amount), as well as competing interests between projects.

The major upside of impact prizes seems to be that a project's incentives are better aligned with maximizing impact, because it gets a prize that scales roughly linearly with impact (unless it is enormously successful).
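
(To make the proportional-sharing arithmetic concrete, here is a toy sketch in Python; the numbers and the `prize_shares` helper are my own illustration, not part of the impact prizes proposal itself.)

```python
# Toy model: a fixed prize pool is shared in proportion to estimated impact.
def prize_shares(pool: float, impacts: dict[str, float]) -> dict[str, float]:
    total = sum(impacts.values())
    return {name: pool * impact / total for name, impact in impacts.items()}

# A competing project with 9x the estimated impact leaves the original
# with 1 / (1 + 9) = 10% of the pool, hence the volatility for investors.
print(prize_shares(100_000, {"original": 1.0, "competitor": 9.0}))
# -> {'original': 10000.0, 'competitor': 90000.0}
```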

Comment by edoarad on Why Don’t We Use Chemical Weapons Anymore? · 2020-04-23T09:10:35.080Z · score: 2 (2 votes) · EA · GW

Yea, this can be confusing. Posts can be divided into three categories: personal blogposts, frontpage, and community. All posts start as personal blogposts, and can then be moved to frontpage by moderators.

As you can now see, your post has been moved to frontpage (which broadly means that it "is relevant to doing good effectively and doesn't require background knowledge of the EA community").

The following is an excerpt from the about page:

Personal blog posts

By default, your posts will be published to your personal blog on your profile page. Other users can follow your page to see notifications when you post.

Frontpage and Community posts

If you're writing about ideas relevant to doing the most good, and which might be useful even to people who aren't closely involved with the EA community, your post will be moved to the "Frontpage" section and be visible on the front page of the forum.

If you’re writing about the EA community itself, giving an organizational update, or discussing strategies for community building, your post will be moved to the “Community” section, which can be accessed from the forum's sidebar menu.

For more on this distinction, see this post.

Comment by edoarad on A central directory for open research questions · 2020-04-20T17:06:49.514Z · score: 3 (3 votes) · EA · GW

One related thing I have thought about trying is to take on a small-scale research problem and set up an open call for global collaboration on it. To make it successful, we could get something formal showing that some organisation is interested in the result (better yet, it could possibly supply a prize, which doesn't have to be monetary) and coordinate with local groups to assemble an initial team.

That could be fun and engaging, but I'm not sure how scalable this is or how much impact we can expect from it (an uncertainty probably worth testing out). I've tried to start a small ALLFED-directed research group locally, as part of our research team, but that also didn't work out. I think that going global might possibly work, though.

Comment by edoarad on A central directory for open research questions · 2020-04-20T16:57:57.760Z · score: 3 (3 votes) · EA · GW

My current model is something like this. #BetterWrongThanVague

It is difficult to make a noticeable research contribution. Even small incremental steps can be intimidating and time-consuming.

It is hard to motivate oneself to work alone on someone else's problems. I think that most people probably have their own passions and model of what's important, and it's unclear why subquestion 3.5.1 should be the single thing that they focus on.
Three of the main motivators that might mitigate that here are recognition for completing the work well and presenting something interesting, better career capital (learning something new or displaying skills), and socializing/partnering.

Comment by edoarad on Rejecting Supererogationism · 2020-04-20T16:35:04.966Z · score: 7 (5 votes) · EA · GW

"In ethics, an act is supererogatory if it is good but not morally required to be done. It refers to an act that is more than is necessary, when another course of action—involving less—would still be an acceptable action. " - Wikipedia 😊

Comment by edoarad on A central directory for open research questions · 2020-04-20T07:28:48.068Z · score: 6 (5 votes) · EA · GW

For calibration, so far no one has contacted me to take on one of the research projects in the list of concrete researchy projects. And even in 1-1s with people who are interested in joining EA Israel and in taking on a research project, going over this list and thinking together about possible research questions has had very limited success.

Comment by edoarad on What are some good online courses relevant to EA? · 2020-04-14T19:28:24.199Z · score: 1 (1 votes) · EA · GW

As for the MIT courses, there has been this recent (glowing!) review

I actually don't know how good Singer's course is 😊 This makes me curious about its impact. It's probably more in outreach than in getting ahead.

Comment by edoarad on A list of EA-related podcasts · 2020-04-14T08:25:04.934Z · score: 2 (2 votes) · EA · GW

A new podcast transcription of Nate Soares' Replacing Guilt - https://anchor.fm/guilt

Comment by edoarad on edoarad's Shortform · 2020-04-13T20:00:07.060Z · score: 1 (1 votes) · EA · GW

Note to self: read on moral enhancement some day

Comment by edoarad on EA and suffering reduction. · 2020-04-11T06:46:25.269Z · score: 9 (4 votes) · EA · GW

EA as a whole tends to aim at maximizing welfare (and you can see relevant discussion in the proposed definition of EA here). While suffering and well-being are possibly not simply opposites, something which I'm currently trying to understand, the analyses are arguably similar with the tools we have today. So ACE and GiveWell should be pretty safe bets.

Thinking about the long term, the Center on Long Term Risk is working with a suffering-focused ethics approach. This view can result in different cause prioritization

Comment by edoarad on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2020-04-07T10:33:05.681Z · score: 1 (1 votes) · EA · GW

The link has an extra '.' - https://www.causal.app/

Looks neat, good luck!

Comment by edoarad on A cause can be too neglected · 2020-04-04T10:09:35.413Z · score: 3 (2 votes) · EA · GW

Maybe here :)

Comment by edoarad on Charity Entrepreneurship’s 2020 research plans · 2020-04-03T13:59:02.614Z · score: 5 (4 votes) · EA · GW

Really impressed by your work so far, thanks for sharing this. 

I'm curious about how you are using multiple researchers for this. Most steps can be done in parallel, but I wonder: how much do you rely on multiple views of the same analysis, and how do you go about it?

Also, is there anything that the EA community can do to assist the research process? If so, what could be the most valuable? (I'm interested specifically in small volunteer research projects that non-experts can take on without your explicit direction, perhaps reviewing reports or rechecking ideas that did not successfully pass through the funnel.)

Comment by edoarad on AMA Patrick Stadler, Director of Communications, Charity Entrepreneurship: Starting Charities from Scratch · 2020-04-03T13:11:20.978Z · score: 2 (2 votes) · EA · GW

A Charity Entrepreneur School would be amazing. Thanks, and looking forward to the release of the handbook!

Comment by edoarad on AMA Patrick Stadler, Director of Communications, Charity Entrepreneurship: Starting Charities from Scratch · 2020-03-31T15:11:15.513Z · score: 5 (3 votes) · EA · GW

How much do you think CE can effectively grow? Are the limits for growth in promising applicants, outreach, seed funding, charity ideas, diminishing returns for training, or something else entirely?

Comment by edoarad on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-29T19:40:31.450Z · score: 9 (5 votes) · EA · GW

This is great! I find this extremely important, and I agree that we have a lot of room to improve. Thank you for the clear explanation and the great suggestions.

Further ideas:

  1. A global research agenda / roadmap.
  2. Bounties for specific requests. 
    1. Perhaps someone can set a (capped) 1-1 matching for individual requesters. 
    2. Better, give established researchers or organizations credit to use for their requests. 
  3. A peer review mechanism in the forum. A concrete suggestion:
    1. Users submitting a "research post" can request peer review, which is displayed in the post [a big blue "waiting for reviewers"].
    2. Reviewers volunteer to review and present their qualifications (and a statement of no conflict of interest) to a dedicated board of "EA experts", which can approve them to review.
    3. There are strict-ish guidelines on what is expected from a good post, and a guide for reviewers. 
    4. The reviewers submit their review anonymously and publicly. 
    5. They can accept the post [a big green "peer reviewed"].
    6. They can also ask to fix some errors and improve clarity [a big yellow "in revision"].
    7. They can decide that it is just not good enough or irrelevant [a big red "rejected"].
  4. (The above is problematic in several ways. The reviewer is not randomized, so there is inherent bias. The incentive for reviewing is not clear. It can be tough to be rejected.)
  5. Better norms for linking to previous research and asking for it. Better norms for suitable exposition. These norms don't have to be strict on "non-research" posts. 
  6. The forum itself can contain many further innovations (Good luck, JP!):
    1. Polls and embedded prediction tools. 
    2. Community editable wiki posts. 
    3. Suggested templates. 
    4. Automated suggestion for related posts while editing (like in stackexchange). 
    5. An EA tag on lesswrong/alignment forum (or vice versa) with which posts can be displayed on both sites (like the LW/AF workflow). 
    6. A mechanism for highlighting and commenting like in Medium. (Not sure I like it)
    7. Suggestions that appear (only) to the editor like in google docs. 
    8. There is some great stuff already on the way, too :)
  7. Regarding a wiki, Viktor Petukhov wrote a post about it with some discussion following it on the post and in private communication.  
  8. More research mentorships. Better support for researchers at the start of their path.
  9. Better expository and introductory materials, and guides to the literature. 
  10. Better norm and infrastructure for partnering.
  11. A supportive infrastructure to coordinate projects globally, between communities. This can make it easier to set up large-scale, volunteer-led projects for better epistemic institutions. The importance of local communities here is as a vetting mechanism.

Comment by edoarad on What posts do you want someone to write? · 2020-03-28T08:14:33.668Z · score: 3 (2 votes) · EA · GW

Do you mind expanding a bit on CNS Imaging, Entropy for Intentional content, and Graph Traversal?

Comment by edoarad on What posts do you want someone to write? · 2020-03-26T06:17:14.114Z · score: 1 (1 votes) · EA · GW

No, the analysis does not seem to contain what I was going for. 

Curious about what you think is weird in the framing?

Comment by edoarad on What posts do you want someone to write? · 2020-03-25T19:44:11.587Z · score: 1 (1 votes) · EA · GW

This is not quite what I was going for, even though it is relevant. This problem profile focuses on existing institutions and on methods for collective decision making. I was thinking more in the spirit of market design, where the goal is to generate new institutions with new structures and rules so that people are selfishly incentivised to act in a way which maximizes welfare (or something else).

Comment by edoarad on What posts do you want someone to write? · 2020-03-24T16:12:22.821Z · score: 18 (11 votes) · EA · GW

Governance innovation as a cause area

Many people are working on new governance mechanisms from an altruistic perspective. There are many sub-categories, such as charter cities, space governance, decentralized governance, and the RadicalXChange agenda.

I'm uncertain as to the marginal value in such projects, and I'd like to see a broad analysis that can serve as a good prior and analysis framework for specific projects.

Comment by edoarad on What posts do you want someone to write? · 2020-03-24T15:59:44.109Z · score: 9 (6 votes) · EA · GW

An analysis of how knowledge is constructed in the EA community, and how much weight we should assign to ideas "supported by EA". 

The recent question on reviews by non-EA researchers is an example of that. There might be great opportunities to improve EA intellectual progress.

Comment by edoarad on synthetic indices used to rate all charities: What kind of star ratings exist? · 2020-03-21T16:22:41.363Z · score: 1 (1 votes) · EA · GW

What do you mean by "star ratings"?

And are you perhaps looking for GuideStar?

Comment by edoarad on Insomnia with an EA lens: Bigger than malaria? · 2020-03-04T08:20:54.131Z · score: 7 (6 votes) · EA · GW

Really appreciate the work that you are putting into this app, and this write-up. I'm excited by your app, and hope that it will help a lot of people solve their sleeping problems! John Halstead also wrote a post on CBT-i a while ago, and while I assume that you've reached it independently, it's great to see attempts at real-world solutions and impact assessments.

There are two points that I think are missing from your analysis. First, regarding Tractability, I'm curious as to what would cause people with insomnia to seek help and find the CBT app. That is, even if CBT is very effective, it might still be very hard to reach people and to put the treatment into practice.

Second, I'd like to see an assessment of the marginal contribution of Slumber over existing efforts. There seem to be other apps for CBT-i.

Thanks again! I've suggested Slumber for a friend to try out :)

Comment by edoarad on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-02T13:58:25.928Z · score: 4 (3 votes) · EA · GW

Thanks for this post, fascinating read!

Considering the hypothesis given here, I'm curious as to why we don't see more takeovers today. There are countries and small corporations involved in internal conflicts that I expect (following this post) a small but powerful organisation (or a large nation) could take over. Some reasons that we may not see this:

  1. International laws, or one of the world's leading nations, might punish such a takeover attempt.
  2. People in positions of power may not want to take that kind of risk.
  3. There is not that much economic value to gain.
  4. Takeovers may be quiet (say by blackmail).
  5. Conspiratorially, the relevant opportunities are getting picked up by the more powerful nations/corporations.

Comment by edoarad on How to estimate the EV of general intellectual progress · 2020-02-28T14:22:35.354Z · score: 3 (2 votes) · EA · GW

This seems relevant, and I want to look into it more deeply: Back to Basics: Basic Research Spillovers, Innovation Policy and Growth

Comment by edoarad on edoarad's Shortform · 2020-02-23T19:06:44.835Z · score: 1 (1 votes) · EA · GW

This 2015 post by Rob Wiblin (one of the top-voted that year) is a nice example of how the community is actively cohesive.

Comment by edoarad on evelynciara's Shortform · 2020-02-23T10:51:33.465Z · score: 1 (1 votes) · EA · GW

The talk is here

Comment by edoarad on edoarad's Shortform · 2020-02-23T10:50:14.698Z · score: 6 (2 votes) · EA · GW

[a brief note on altruistic coordination in EA]

  1. EA as a community has a distribution over people of values and world-views (which themselves are uncertain and can bayesianly be modeled as distributions).
  2. Assuming everyone has already updated their values and world-view by virtue of epistemic modesty, each member of the community should want all the resources of the community to go a certain way.
    • That can include desires about the EA resource allocation mechanism.
  3. The differences between individuals undoubtedly cause friction and resentment.
  4. It seems like the EA community is incredible in its cooperative norms and low levels of unneeded politics.
    • There are concerns about how steady this state is.
    • Many thanks to anyone working hard to keep this so!

There's bound to be massive room for improvement: a clear goal of what would be the best outcome considering a distribution as above, a way of measuring where we're at, an analysis of where we are heading under the current status (an implicit parliamentary model perhaps?), and suggestions for better mechanisms and norms that result from the analysis.

Comment by edoarad on Request for feedback on my career plan for impact (560 words) · 2020-02-21T14:46:26.415Z · score: 1 (1 votes) · EA · GW

This is interesting. Do you have a specific example in mind where this could be applied to an EA cause?

Comment by edoarad on My personal cruxes for working on AI safety · 2020-02-16T18:18:32.489Z · score: 2 (2 votes) · EA · GW

This reminds me of the discussion around the Hinge of History Hypothesis (and the subsequent discussion between Rob Wiblin and Will MacAskill).

I'm not sure that I understand the first point. What sort of prior would be supported by this view?

I definitely agree with the second point, and with the general point of being extra careful about how to use priors :)

Comment by edoarad on My personal cruxes for working on AI safety · 2020-02-16T17:50:18.108Z · score: 5 (3 votes) · EA · GW

Jaime Sevilla wrote a long (albeit preliminary) and interesting report on the topic

Comment by edoarad on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T08:40:26.339Z · score: 1 (1 votes) · EA · GW

right, sorry 😊

Comment by edoarad on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T05:52:38.698Z · score: 6 (4 votes) · EA · GW

EAHub has a large and growing list of resources collected and written for local groups.

Comment by edoarad on How to estimate the EV of general intellectual progress · 2020-02-11T03:38:25.863Z · score: 3 (2 votes) · EA · GW

I think so. While the main value of research lies in its value of information, the problem here seems to be about how to go about estimating the impact, and not so much about the modeling.

Comment by edoarad on On Demopisty · 2020-02-11T01:23:08.369Z · score: 2 (2 votes) · EA · GW

Thanks. I'd be very excited to see a full post considering this set of ideas as a cause area proposal, possibly using the ITN framework, if you or anyone else is up to it.

I think that the discourse in EA is too thin on these topics, and that perhaps some posts exploring the basics while considering the effects of marginal contribution might be the way to see whether we should consider them worthwhile. I think this makes this post somewhat premature, although I appreciate the suggested terminology and the succinct but informative writing.