Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z · score: 27 (19 votes)
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z · score: 31 (19 votes)
What is the size of the EA community? 2019-11-19T07:48:31.078Z · score: 24 (8 votes)
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z · score: 49 (28 votes)
Off-Earth Governance 2019-09-06T19:26:26.106Z · score: 11 (5 votes)
edoarad's Shortform 2019-08-16T13:35:05.296Z · score: 3 (2 votes)
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z · score: 21 (9 votes)
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z · score: 11 (7 votes)
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z · score: 9 (6 votes)
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z · score: 1 (2 votes)
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z · score: 12 (5 votes)
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z · score: 8 (4 votes)
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z · score: 12 (6 votes)


Comment by edoarad on edoarad's Shortform · 2020-02-23T19:06:44.835Z · score: 1 (1 votes) · EA · GW

This 2015 post by Rob Wiblin (one of the top-voted posts that year) is a nice example of how the community is actively cohesive.

Comment by edoarad on evelynciara's Shortform · 2020-02-23T10:51:33.465Z · score: 1 (1 votes) · EA · GW

The talk is here

Comment by edoarad on edoarad's Shortform · 2020-02-23T10:50:14.698Z · score: 6 (2 votes) · EA · GW

[a brief note on altruistic coordination in EA]

  1. EA as a community has a distribution of values and world-views across its members (which are themselves uncertain and can be modeled Bayesianly as distributions).
  2. Assuming everyone has already updated their values and world-views by virtue of epistemic modesty, each member of the community should want all of the community's resources to go a certain way.
    • That can include desires about the EA resource-allocation mechanism.
  3. The differences between individuals undoubtedly cause friction and resentment.
  4. It seems like the EA community is incredible in its cooperative norms and low levels of unneeded politics.
    • There are concerns about how steady this state is.
    • Many thanks to anyone working hard to keep it so!

There's bound to be massive room for improvement: a clear goal for what the best outcome would be given a distribution as above, a way of measuring where we're at, an analysis of where we're heading under the current status quo (an implicit parliamentary model, perhaps?), and suggestions for better mechanisms and norms that follow from the analysis.

Comment by edoarad on Request for feedback on my career plan for impact (560 words) · 2020-02-21T14:46:26.415Z · score: 1 (1 votes) · EA · GW

This is interesting. Do you have a specific example in mind where this can be applied to an EA cause?

Comment by edoarad on My personal cruxes for working on AI safety · 2020-02-16T18:18:32.489Z · score: 2 (2 votes) · EA · GW

This reminds me of the discussion around the Hinge of History Hypothesis (and the subsequent discussion between Rob Wiblin and Will MacAskill).

I'm not sure that I understand the first point. What sort of prior would be supported by this view?

The second point I definitely agree with, and the general point of being extra careful about how to use priors :)

Comment by edoarad on My personal cruxes for working on AI safety · 2020-02-16T17:50:18.108Z · score: 5 (3 votes) · EA · GW

Jaime Sevilla wrote a long (albeit preliminary) and interesting report on the topic.

Comment by edoarad on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T08:40:26.339Z · score: 1 (1 votes) · EA · GW

right, sorry 😊

Comment by edoarad on How much will local/university groups benefit from targeted EA content creation? · 2020-02-16T05:52:38.698Z · score: 6 (4 votes) · EA · GW

EAHub has a large and growing list of resources collected and written for local groups.

Comment by edoarad on How to estimate the EV of general intellectual progress · 2020-02-11T03:38:25.863Z · score: 3 (2 votes) · EA · GW

I think so. While the main value of research lies in its value of information, the problem here seems to be about how to go about estimating the impact, not so much about the modeling.

Comment by edoarad on On Demopisty · 2020-02-11T01:23:08.369Z · score: 2 (2 votes) · EA · GW

Thanks. I'd be very excited to see a full post considering this set of ideas as a cause area proposal, possibly using the ITN framework, if you or anyone else is up to it.

I think that the discourse in EA is too thin on these topics, and that perhaps some posts exploring the basics while considering the effects of marginal contribution might be the way to see whether we should consider them worthwhile. I think this makes this post somewhat premature, although I appreciate the suggested terminology and the succinct but informative writing.

Comment by edoarad on On Demopisty · 2020-02-10T08:20:32.391Z · score: 7 (2 votes) · EA · GW

This feels useful. Do you mind expanding on the relevance to the current EA framework?

Comment by edoarad on edoarad's Shortform · 2020-02-09T14:40:54.596Z · score: 7 (6 votes) · EA · GW

MIT has a new master's program in Development Economics.

It is taught by Esther Duflo and Abhijit Banerjee, the recent Nobel Laureates. Seems cool :)

Comment by edoarad on Differential progress / intellectual progress / technological development · 2020-02-07T16:36:39.747Z · score: 5 (4 votes) · EA · GW

I appreciate, again, the clear writing and the clarification of terms.

A minor quibble:

Differential progress also includes slowing risk-increasing progress.

I don't think that should count as progress (unless it was some sort of "progress" that led to the slowdown). You may still have Differential Actions, which could either increase safety or lower risk. I guess I'm not sure what counts as progress.

Comment by edoarad on Prioritizing among the Sustainable Development Goals · 2020-02-07T05:37:38.613Z · score: 1 (1 votes) · EA · GW

From the FAQ:

Q: What exactly did you ask the experts? A: We presented the experts with 117 options - which we had distilled from the 169 official SDG targets - and asked them to identify the first 20 that should be tackled in a multi-year effort to fulfill all of the SDGs. We then asked them to put the 20 they selected into the proper sequence, such that doing each facilitated the tackling of subsequent options.

Regarding the criteria the experts used, they were free to choose their own. The experts were then asked how they came up with their rankings. Their answers were coded as individualistic vs. institutional perspective (about a 2:1 ratio), and process vs. urgency (1:1). Link.

Comment by edoarad on Four components of strategy research · 2020-01-31T10:29:51.764Z · score: 8 (5 votes) · EA · GW

Thanks for the write-up. I think it's important to have specified methods for conducting any research, and this post does so clearly (at least as clearly as possible for such abstract work).

Have you looked at the literature on similar/analogous prioritisation research?

I intuitively think that the hard work will be in modelling causality. Have you done any work on that component?

Comment by edoarad on A Local Community Course That Raises Mental Wellbeing and Pro-Sociality · 2020-01-31T09:23:34.759Z · score: 2 (2 votes) · EA · GW

Seems interesting! I have some newbie questions:

Assuming that these results hold, do you know how good its outcomes are relative to similar interventions (say, MBSR or group therapy)?

They also seem to be worried about the meaning of having no change in biomarkers. How important is that?

Comment by edoarad on Announcing A Volunteer Research Team at EA Israel! · 2020-01-31T05:17:49.555Z · score: 3 (2 votes) · EA · GW

Thanks! We do have that as a possible project on our project list, but it wasn't on my mind as one of the first things to go for. And you are right that especially for people without much experience this really makes a lot of sense. Added to our workflow :)

Comment by edoarad on evelynciara's Shortform · 2020-01-30T18:19:41.519Z · score: 3 (2 votes) · EA · GW

Shafi Goldwasser at Berkeley is currently working on some definitions of privacy and their applicability to law. See this paper or this talk. In a talk she gave last month, she discussed how to relate some aspects of law to cryptographic concepts in order to formalize "the right to be forgotten". The recording is not up yet, but in the meantime I paste below my (dirty/partial) notes from the talk. I feel somewhat silly for not realizing the possible connection earlier, so thanks for the opportunity to discover connections hidden in plain sight!

Shafi is working directly with judges, and this whole program is looking potentially promising. If you are seriously interested in pursuing this, I can connect you to her if that would help. Also, we have someone in our research team at EA Israel doing some work into this (from a more tech/crypto solution perspective) so it may be interesting to consider a collaboration here.

The notes-

"What Crypto can do for the Law?" - Shafi Goldwasser 30.12.19:

  • There is a big language barrier between Law and CS, stemming from a knowledge barrier.
  • People in law study the laws governing algorithms, but there is not enough participation from computer scientists in legal work.
  • But, CS can help with designing algorithms and formalizing what these laws should be.
  • Shafi suggests a crypto definition for "the right to be forgotten". This should help:
    • Privacy regulation like CCPA and GDPR have a problem - how to test whether one is compliant?
    • Do our cryptographic techniques satisfy the law?
      • that requires a formal definition
        • A first suggestion:
          • after deletions, the state of the data collector and the history of the interaction with the environment should be similar to the case where the information was never changed. [this is clearly inadequate - Shafi aims at starting a conversation]
  • Application of cryptographic techniques
    • History Oblivious Data Structure
    • Data Summarization using Differential Privacy leaves no trace
    • ML Data Deletion
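The "Data Summarization using Differential Privacy leaves no trace" bullet can be illustrated with a toy sketch. This is my own minimal example (not from the talk), using the standard Laplace mechanism: a count query has sensitivity 1, so adding Laplace(1/ε) noise to it satisfies ε-differential privacy, and only the noisy summary, not the underlying records, needs to be retained.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via inverse transform sampling."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count: the count query has sensitivity 1
    (adding or removing one record changes it by at most 1), so
    Laplace(1/epsilon) noise gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 55, 19, 44]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)  # true count is 4
```

Publishing only such noisy summaries is one way a data collector can later "delete" a record without its trace surviving in the released statistics.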
Comment by edoarad on evelynciara's Shortform · 2020-01-29T14:38:10.533Z · score: 3 (2 votes) · EA · GW


Comment by edoarad on How to estimate the EV of general intellectual progress · 2020-01-27T14:58:39.103Z · score: 11 (5 votes) · EA · GW

That's a great question! I don't have any good answer, but I've looked online and found some interesting papers so I'll just post some stuff I've got so far.

It seems like there has recently been a shift toward "societal-impact-focused research", as opposed to "quality-focused", driven mostly by the need to calculate Return on Investment. I think this biases the current metrics/evaluators to be more short-term and focused on health/security/tech innovations.

Here, the authors ask research evaluators how they think about assessing societal impact. They identified five dimensions:

1. The Importance of the Underpinning Research in Evaluating Impact.

For more quality-focused evaluators, the importance of underpinning research when evaluating impact was driven by an underlying value system depicting a strong link between scientific and societal impact.

2. The Value of the Impact Versus the Value of the “Right” Impact.

For some evaluators, the necessity for research of a high quality to underpin societal impact was guided by the assumption that impact referred to ‘good impact’, as opposed to ‘negative’ societal impact.

3. Impact as Linear, Controllable or Serendipitous

A major underpinning factor influencing evaluators’ opinions was related to whether to view impact as related to ‘outside factors’ separate to the research, or something that was viewed rationally, therefore related to the quality of the research.

Towards the quality-focused extreme, evaluators envisaged a ‘pipeline’ from high quality research to societal impact – “a sort of translational pipeline is the okay term that tends to get used for taking a scientific discovery and pushing it towards some sort of laboratory test, new drug, or whatever, which, I guess, many people would view as some sort of impact”(P1OutImp5). Thus, the relationship between scientific and societal impact hinged upon the idea that “impact requires that you generate the evidence and then that you, in turn you get into guidelines and the people start using that information to change their practice”

4. Push Factors and Assessing Impact

Towards the quality-focused evaluator extreme, the assessment of societal impact was influenced by a belief that a researcher’s role in ensuring societal impact was limited solely to providing high quality research, whereas it was the responsibility of other, non-researchers to use this as evidence to pursue societal impact.

5. Measurable Impact Outcomes Versus Unmeasurable Impact Journeys

The final factor which influenced the evaluation scale was whether evaluators valued societal impact as a single, measurable outcome, or as a process or journey that, in many cases, is impossible to measure.

Comment by edoarad on Doing good is as good as it ever was · 2020-01-27T04:36:20.586Z · score: 1 (1 votes) · EA · GW

This is interesting. Do you feel that motivation is the bigger factor behind this advice, as opposed to increasing the variance of efforts to do good as a way of doing more good?

I am not sure in what contexts you give this advice, but I worry that in some cases it might be inappropriate - say, cases where people's gut feelings and immediate intuitions are clearly guiding them in non-effectively-altruistic directions.

I'd prefer a norm where people interested in doing the most good would initially delegate their decisions to people who have thought long and hard on this topic, and if they want to try something else they should elicit feedback from the community. At least, as long as the EA community also has a norm for being open to new ideas.

Comment by edoarad on [deleted post] 2020-01-24T19:36:50.629Z

It's also posted here on the forum - :)

Comment by edoarad on Coordinating Commitments Through an Online Service · 2020-01-18T09:21:03.288Z · score: 1 (1 votes) · EA · GW

Not what I was talking about, but a specific application of this idea for science -

Comment by edoarad on Improving the Effective Altruism Network · 2020-01-17T17:22:54.573Z · score: 3 (2 votes) · EA · GW

Just encountered this - I highly resonate with most of the points given, and especially with the conclusion.

Comment by edoarad on edoarad's Shortform · 2020-01-13T13:22:30.508Z · score: 7 (2 votes) · EA · GW

Basic Research vs Applied Research

1. If we are at the Hinge of History, it is less reasonable to focus on long-term knowledge building via basic research, and vice versa.

2. If we have identified the most promising causes well, then targeted applied research is promising.

Comment by edoarad on Space governance is important, tractable and neglected · 2020-01-10T07:12:01.697Z · score: 10 (5 votes) · EA · GW

I wrote a little bit about space governance, but was demotivated by exactly these kinds of concerns.

Comment by edoarad on Pablo_Stafforini's Shortform · 2020-01-09T18:07:51.910Z · score: 7 (3 votes) · EA · GW

Note this post on the Community / Frontpage distinction.

I agree that the term 'Community Favorites' is confusing as well 😵

Comment by edoarad on edoarad's Shortform · 2020-01-09T17:55:12.625Z · score: 3 (3 votes) · EA · GW

I think that some causes may have increasing marginal utility. Specifically, I think this may be true for some types of research that are expected to generate insights about their own domain.

Testing another idea for a cancer treatment probably has decreasing marginal utility (because the low-hanging fruit is being picked), but basic research in genetics may have increasing marginal utility (because even if others may work on the best approaches, you could still improve their productivity by giving them further insights).

This is not true if the progress in a field relies on progressing along a single "dimension" (say, a specific research direction that everyone attempts), or if researchers in that field can easily and productively change their projects and expertise.

It is true if there are multiple dimensions available, and progress along a different dimension yields insight for others to use.
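As a toy numerical sketch of the distinction (my own illustration, with made-up utility curves, not a claim about any actual field): if total utility grows like the square root of effort, each successive unit of effort is worth less; if insights compound so that total utility grows superlinearly, each successive unit is worth more.

```python
def marginal_returns(total_utility, steps=5):
    """Utility gained by each successive unit of research effort."""
    return [total_utility(n + 1) - total_utility(n) for n in range(steps)]

# Decreasing marginal utility: the low-hanging fruit is picked first,
# so total utility grows like sqrt(effort) (a made-up curve).
diminishing = marginal_returns(lambda n: n ** 0.5)

# Increasing marginal utility: each insight also raises everyone else's
# productivity, so total utility grows superlinearly (also made up).
compounding = marginal_returns(lambda n: n ** 1.5)
```

On these toy curves, the first list is strictly decreasing and the second strictly increasing - the two regimes described above.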

Comment by edoarad on How Fungible Are Interests? · 2020-01-08T08:32:30.487Z · score: 1 (1 votes) · EA · GW

Hmm, I was using "The Way Things Work Here" sarcastically (I think in a similar tone to how I was addressing status in the previous comment). So my takeaway is that on the internet, no one knows you're a troll, or something like that 😊

I appreciate the clarifications.

Comment by edoarad on How Fungible Are Interests? · 2020-01-07T17:17:49.017Z · score: 1 (1 votes) · EA · GW

Oh, now I see that what I wrote is a bit off what I intended, sorry. I was mainly explaining that this was my way of showing conformity to (how I interpret) The Way Things Work Here, not that one should do that to achieve higher status or that I think that status in EA is very important to achieve.

As you say, and for the same reasons, I agree that it is very helpful for people to read up on similar EA content. However, I am not sure how important it is for people to also link to relevant sources and explain the connections, which is more what I was going for.

Comment by edoarad on Coordinating Commitments Through an Online Service · 2020-01-05T05:34:54.109Z · score: 1 (1 votes) · EA · GW

I recall that something exactly like what you mention exists, but I can't find it! I think it's quite recent.

Comment by edoarad on Which banks are most EA-friendly? · 2019-12-26T07:36:38.736Z · score: 1 (4 votes) · EA · GW

Can you expand a bit on what kind of information would be relevant? And why that would perhaps be more important than charities? I'm not very fin-savvy 😇

Comment by edoarad on Genetic Enhancement as a Cause Area · 2019-12-25T12:13:22.337Z · score: 6 (4 votes) · EA · GW

Thanks for this post! Without knowing much, genetic enhancement feels to me exactly like the kind of cause we should look into deeply.

The paper by Shulman and Bostrom is from 2013 and focused on policy. I guess that advances in biological techniques and the "CRISPR babies" story have made decision-makers take it more seriously. Is there anything close to an accepted global ethical convention around it? Or major conferences/initiatives that do good work on the policy side?

Also, how mature is the concept of Iterated Embryo Selection?

Comment by edoarad on Peaceful protester/armed police pictures · 2019-12-23T05:22:55.861Z · score: 3 (2 votes) · EA · GW

Ben previously posted computer-generated EA art (using style transfer).

Comment by edoarad on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2019-12-17T09:02:19.022Z · score: 2 (2 votes) · EA · GW

Do you have reports on mental health? What are the types of interventions you are mainly aiming towards?

Comment by edoarad on How Fungible Are Interests? · 2019-12-16T16:02:17.497Z · score: 5 (3 votes) · EA · GW


I just listened to the latest episodes of Global Optimum, and it makes me think that maybe what I was actually saying is that in order to gain more status in the EA community, one should display an understanding of the "EA Canon". (And of course, writing that comment supposedly signals higher status..)

I still think that it is important to engage with previous discussions whenever explaining something new, but also I want to clarify that it can absolutely be a valuable and reasonable choice to do something completely new.

Comment by edoarad on How Fungible Are Interests? · 2019-12-16T09:22:00.970Z · score: 9 (4 votes) · EA · GW

This is very well written! It's clear that a lot of thought and effort went into this, and I like your upshots. I think that for this to be more useful to the community, you should have more engagement with current literature and writing in the community. Here, I feel almost obliged to add a link to an 80K article on this matter.

Comment by edoarad on Effective Altruism Sweden plans for 2018 · 2019-12-13T09:54:52.845Z · score: 1 (1 votes) · EA · GW

Hey! Curious whether you know if there is anyone actively working on EA Fact Check or something like it.

Comment by edoarad on But exactly how complex and fragile? · 2019-12-13T09:09:43.607Z · score: 3 (2 votes) · EA · GW

Some thoughts:

  • Not really knowledgeable here, but wasn't the project of coding values into AI attempted in some way by machine ethicists? That could serve as a starting point for guessing how much time it would take to specify human values.

  • I find it interesting that you are alarmed by current non-AI agents/optimization processes. I think that if you take Drexler's CAIS seriously, that might make that sort of analysis more important.

  • I think that Friendship is Optimal's depiction of a Utopia is relevant here.

    • Not much of a spoiler, but beware - it seems like the possibility of a future civilization living lives that are practically very similar to ours (autonomy, the possibility of doing something important, community, food,.. 😇) but better in almost every aspect is incredible. There is some weird stuff there, some of it horrible, so I'm not that certain about that.
  • Regarding the intuition from ML learning faces, I am not sure this is a great analogy, because the module that tries to understand human morality might get totally misinterpreted by other modules. Reward hacking, overfitting, and adversarial examples are some ways this can go wrong that pop to mind. My intuition here is that any maximizer would find "bugs" in its model of human morality to exploit (because it is complex and fragile).

  • It seems like your intuition is mostly based on the possibility of self correction, and I feel like that is indeed where a major crux for this question lies.

Comment by edoarad on Community vs Network · 2019-12-12T20:08:19.924Z · score: 12 (10 votes) · EA · GW

This feels very important, and the concepts of the EA Network, EA as coordination and EA as an incubator should be standard even if this will not completely transform EA. Thanks for writing it so clearly.

I mainly want to suggest that this relates strongly to the discussion in the recent 80k podcast about sub-communities in EA. And mostly the conversation between Rob and Arden at the end.

Robert Wiblin: [...] And it makes me wonder like sometimes whether one of these groups should like use the term EA and the other group should maybe use something else?

Like perhaps the people who are focused on the long-term should mostly talk about themselves as long-termists, and then they can have the kind of the internal culture that makes sense given that focus.

Peter Singer: That’s a possibility. And that might help the other groups that you’re referring to make their views clear.

So that certainly could help. I do think that actually there’s benefits for the longtermists too in having a successful and broad EA movement. Because just as you know, I’ve seen this in the animal movement. I spoke earlier about how the animal welfare movement, when I first got into it was focused on cats and dogs and people who were attracted to that.

And I clearly criticized that, but at the same time, I have to recognize that there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement. That is that people can get attracted to EA through the idea of helping people in extreme poverty.

And then they’re part of a community that will hear arguments about long-termism. And maybe you’ll be able to recruit more talented people to do that research that needs to be done if there’s a broad and successful EA movement.

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-11T04:24:43.231Z · score: 6 (4 votes) · EA · GW

Some questions on Stack Overflow and other Stack Exchange sites are marked as community wiki. This means that anyone (above a minimum reputation/Karma) can edit the question or the answers, that there is no "main author" anymore but instead a mix of authors defined by percentage of contribution, and that no one gets reputation/Karma for any of it.

I think that the loss of authorship is important, so that anyone would feel comfortable editing the question/answers to make them a better source of up-to-date knowledge.

Comment by edoarad on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-10T20:43:23.768Z · score: 2 (2 votes) · EA · GW

I think that HowieL did not close the square bracket (but then edited, so that it now looks fine).

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T20:41:18.461Z · score: 3 (2 votes) · EA · GW

Like a community wiki on stackexchange? Sounds valuable. (I think suggestions should be a default)

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T17:07:15.618Z · score: 2 (2 votes) · EA · GW

I actually did not give that enough thought. I think using MediaWiki or Wikidot might be fine for a start, and I am very fond of Roam. Notion might be great here as well. All of them require getting used to because the syntax is not straightforward, but that suffices for textual edits if there are people who go over and fix design problems. Roam is more difficult because it is... different.. and because it is less mature. Roam being in its starting phases might actually be a good thing, because its development can probably shift toward the needs of the EA community if the EA Wiki is hosted there (Roam Research received a grant from the Long Term Future Fund).

That is all to say that I think a basic wiki infrastructure might be fine for a start, if there is a good roadmap and support from the community. I assume that markets and fancy prizes can wait for later or be hacked into existence, but maybe they should be in the design from the start 🤷‍♂️

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T06:36:26.408Z · score: 2 (2 votes) · EA · GW

Re GitHub-like structures: I think that Google Docs can be sufficient for most cases. Instead of branches, you have unpublished docs. And using a wiki page instead of issues might be fine.

I agree with your analysis of knowledge bases, thanks for clarifying that! I take back the suggestion of doubling down on the forum mostly because it seems difficult to properly keep the information updated and to have a clear consensus.

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T06:14:56.600Z · score: 3 (2 votes) · EA · GW

I'm surprised that you think the bottleneck is funding; I guess that means I overestimate the ease and desirability of using existing tools.
Interested in your take on it :)

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-09T08:52:24.700Z · score: 2 (2 votes) · EA · GW

  • Also, I found that I tend to access Wikipedia mostly via search results, and sometimes go deeper if there are inner links that interest me. This means that we only need the information to be accessible by search, and to be good at referencing further material. This could possibly be implemented adequately on the forum (but it requires better search, a better norm for writing information, and a better norm of referencing other materials, perhaps in the comments).

  • And this is an interesting experiment in a mechanism designed to improve incentives for collective knowledge production.

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-09T08:23:19.961Z · score: 16 (8 votes) · EA · GW

Some thoughts:

In summary, the empirical results paint a somewhat different picture of sustained contribution than originally hypothesized. Specifically, sustained contributors appear to be motivated by a perception that the project needs their contributions (H1); by abilities other than domain expertise (H2); by personal rather than social motives (H3 & 4); and by intrinsic enjoyment of the process of contributing (H7) rather than extrinsic factors such as learning (H6). In contrast, meta-contributors seem to be motivated by social factors (H3 & 4), as well as by intrinsic enjoyment (H7).

  • I think that we should strategically plan how to incentivize possible contributors. Ideally, people should contribute based on what would be the most valuable, which is something that may be achievable through prizes (possibly "Karma" or money, but perhaps better is something like certificates of impact), bounties, peer support and acknowledgment, and requests and recognition from leaders of the community.
  • I think that it would take a big effort to bootstrap something new. The effort going into EA Hub seems to me like a good place to start a centralized knowledge base.
  • I'd like something like a top-down/bottom-up research agenda on "how to do the most good" that ends with concrete problems (like these). Something that can help us be more strategic in our resource allocation, and through which we can more easily focus experts where they can help the most (and have a good infrastructure for moral trade).
  • It seems that something like Roam could be great, because it is designed to make it easy to create pages, has backlinks to support exploration, and has other neat stuff. It is still not mature enough, though.

Comment by edoarad on What is the size of the EA community? · 2019-12-09T06:35:42.985Z · score: 1 (1 votes) · EA · GW

Thanks! This is helpful, and some of it was really surprising :)

Comment by edoarad on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T20:35:53.594Z · score: 4 (3 votes) · EA · GW

Sorry, yes.

There are two ways to use "risk averse" here.

Reducing the risk of giving wrong advice, or giving advice toward safer career paths.

I meant the first - what are the things you would say if you didn't fear giving wrong advice?