Posts

A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z · score: 29 (17 votes)
What is the size of the EA community? 2019-11-19T07:48:31.078Z · score: 23 (7 votes)
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z · score: 49 (28 votes)
Off-Earth Governance 2019-09-06T19:26:26.106Z · score: 10 (4 votes)
edoarad's Shortform 2019-08-16T13:35:05.296Z · score: 3 (2 votes)
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z · score: 21 (9 votes)
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z · score: 11 (7 votes)
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z · score: 9 (6 votes)
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z · score: 1 (2 votes)
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z · score: 12 (5 votes)
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z · score: 8 (4 votes)
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z · score: 12 (6 votes)

Comments

Comment by edoarad on Effective Altruism Sweden plans for 2018 · 2019-12-13T09:54:52.845Z · score: 1 (1 votes) · EA · GW

Hey! I'm curious whether you know of anyone actively working on EA Fact Check or something like it.

Comment by edoarad on But exactly how complex and fragile? · 2019-12-13T09:09:43.607Z · score: 3 (2 votes) · EA · GW

Some thoughts:

  • Not really knowledgeable, but wasn't the project of coding values into AI attempted in some way by machine ethicists? That could serve as a starting point for guessing how much time it would take to specify human values.

  • I find it interesting that you are alarmed by current non-AI agents/optimization processes. I think that if you take Drexler's CAIS seriously, that might make that sort of analysis more important.

  • I think that Friendship is Optimal's depiction of a Utopia is relevant here.

    • Not much of a spoiler, but beware: it seems incredible that a future civilization could live lives practically very similar to ours (autonomy, the possibility of doing something important, community, food... 😇) but better in almost every aspect. There is some weird stuff there, some of which is horrible, so I'm not that certain about that.
  • Regarding the intuition from ML learning faces, I am not sure that this is a great analogy, because the module that tries to understand human morality might get totally misinterpreted by other modules. Reward hacking, overfitting, and adversarial examples are some ways this could go wrong that come to mind. My intuition here is that any maximizer would find "bugs" in its model of human morality to exploit (because it is complex and fragile).

  • It seems like your intuition is mostly based on the possibility of self-correction, and I feel like that is indeed where a major crux for this question lies.

Comment by edoarad on Community vs Network · 2019-12-12T20:08:19.924Z · score: 11 (9 votes) · EA · GW

This feels very important, and the concepts of the EA Network, EA as coordination, and EA as an incubator should become standard even if this does not completely transform EA. Thanks for writing it so clearly.

I mainly want to suggest that this relates strongly to the discussion in the recent 80k podcast about sub-communities in EA, mostly the conversation between Rob and Arden at the end.

Robert Wiblin: [...] And it makes me wonder like sometimes whether one of these groups should like use the term EA and the other group should maybe use something else?

Like perhaps the people who are focused on the long-term should mostly talk about themselves as long-termists, and then they can have the kind of the internal culture that makes sense given that focus.

Peter Singer: That’s a possibility. And that might help the other groups that you’re referring to make their views clear.

So that certainly could help. I do think that actually there’s benefits for the longtermists too in having a successful and broad EA movement. Because just as you know, I’ve seen this in the animal movement. I spoke earlier about how the animal welfare movement, when I first got into it was focused on cats and dogs and people who were attracted to that.

And I clearly criticized that, but at the same time, I have to recognize that there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement. That is that people can get attracted to EA through the idea of helping people in extreme poverty.

And then they’re part of a community that will hear arguments about long-termism. And maybe you’ll be able to recruit more talented people to do that research that needs to be done if there’s a broad and successful EA movement.

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-11T04:24:43.231Z · score: 6 (4 votes) · EA · GW

Some questions on Stack Overflow or other Stack Exchange sites are marked as community wiki. This means that anyone (above a minimum reputation/karma threshold) can edit the question or the answers, that there is no "main author" anymore but instead a mix of authors defined by percentage of contribution, and that no one gets reputation/karma for anything.

I think that the loss of authorship is important, so that anyone would feel comfortable editing the question/answers to make it a better source of up-to-date knowledge.

Comment by edoarad on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-10T20:43:23.768Z · score: 2 (2 votes) · EA · GW

I think that HowieL did not close the square bracket (but then edited it so that it now looks fine).

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T20:41:18.461Z · score: 3 (2 votes) · EA · GW

Like a community wiki on Stack Exchange? Sounds valuable. (I think suggestions should be a default.)

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T17:07:15.618Z · score: 1 (1 votes) · EA · GW

I actually did not give that enough thought. I think using MediaWiki or Wikidot might be fine for a start, and I am very fond of Roam. Notion might be great here as well. All of them take getting used to because the syntax is not straightforward, but they suffice for textual edits if there are people who go over and fix design problems. Roam is more difficult because it is... different... and because it is less mature. Roam being in its early phases might actually be a good thing, because its development can probably shift toward the needs of the EA community if the EA Wiki is hosted there (Roam Research received a grant from the Long-Term Future Fund).

That is all to say that I think a basic wiki infrastructure might be fine for a start, if there is a good roadmap and support from the community. I assume that markets and fancy prizes can wait for later or be hacked into existence, but maybe they should be in the design from the start 🤷‍♂️

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T06:36:26.408Z · score: 2 (2 votes) · EA · GW

Re GitHub-like structures, I think that Google Docs can be sufficient for most cases. Instead of branches, you have unpublished docs. And using a wiki page instead of issues might be fine.

I agree with your analysis of knowledge bases, thanks for clarifying that! I take back the suggestion of doubling down on the forum mostly because it seems difficult to properly keep the information updated and to have a clear consensus.

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-10T06:14:56.600Z · score: 3 (2 votes) · EA · GW

I'm surprised that you think the bottleneck is funding; I guess that means I overestimate the ease and desirability of using some existing tools.
Interested in your take on it :)

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-09T08:52:24.700Z · score: 2 (2 votes) · EA · GW
  • Also, I've found that I tend to access Wikipedia mostly via search results, and sometimes go deeper if there are internal links that interest me. This means that we only need the information to be accessible by search and to be good at referencing further material. This could possibly be implemented adequately on the forum (but it would require better search, a better norm for writing up information, and a better norm of referencing other materials, perhaps in the comments).

  • And this is an interesting experiment in a mechanism designed to improve incentives for collective knowledge production.

Comment by edoarad on Should we use wiki to improve knowledge management within the community? · 2019-12-09T08:23:19.961Z · score: 14 (7 votes) · EA · GW

Some thoughts:

In summary, the empirical results paint a somewhat different picture of sustained contribution than originally hypothesized. Specifically, sustained contributors appear to be motivated by a perception that the project needs their contributions (H1); by abilities other than domain expertise (H2); by personal rather than social motives (H3 & 4); and by intrinsic enjoyment of the process of contributing (H7) rather than extrinsic factors such as learning (H6). In contrast, meta-contributors seem to be motivated by social factors (H3 & 4), as well as by intrinsic enjoyment (H7).

  • I think that we should strategically plan how to incentivize possible contributors. Ideally, people should contribute based on what would be the most valuable, which is something that may be achievable through prizes (possibly "Karma" or money, but perhaps better is something like certificates of impact), bounties, peer support and acknowledgment, and requests and recognition from leaders of the community.
  • I think that it would take a big effort to bootstrap something new. The efforts going into EA Hub seem to me like a good place to start a centralized knowledge base.
  • I'd like something like a top-to-bottom research agenda on "how to do the most good" that ends with concrete problems ([like these](https://forum.effectivealtruism.org/posts/2zcBy7eDXjEti9Sw7/a-collection-of-researchy-projects-for-aspiring-eas)). Something that can help us be more strategic in our resource allocation, and through which we can more easily focus experts on where they can help the most (and have a good infrastructure for moral trade).
  • It seems that something like Roam could be great, because it is designed to make it easy to create pages, has backlinks to support exploration, and has other neat stuff. It is still not mature enough, though.

Comment by edoarad on What is the size of the EA community? · 2019-12-09T06:35:42.985Z · score: 1 (1 votes) · EA · GW

Thanks! This is helpful, and some of it was really surprising :)

Comment by edoarad on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T20:35:53.594Z · score: 4 (3 votes) · EA · GW

Sorry, yes.

There are two ways to use "risk averse" here.

Reducing the risk of giving wrong advice, or recommending safer career paths.

I meant the first: what are things you would say if you didn't fear giving wrong advice?

Comment by edoarad on A collection of researchy projects for Aspiring EAs · 2019-12-04T15:24:53.209Z · score: 2 (2 votes) · EA · GW

That's cool! (I had also missed that you were talking about a specific comment earlier, which is fantastic.)

I'd really like to eventually scale these up, to make it trivial for aspiring EAs to find a project while keeping things easy to manage. So I really like these kinds of collections, at least until there is a good platform to combine it all :)

Comment by edoarad on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-04T12:42:38.797Z · score: 10 (9 votes) · EA · GW

Regarding GPI, I guess it could have ended up different from how it currently is. What were some major decisions that shaped how GPI is currently structured?

Comment by edoarad on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-04T12:37:34.650Z · score: 7 (7 votes) · EA · GW

What are some things you learned on the job that helped you become better at giving career advice?

Comment by edoarad on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-04T12:36:01.177Z · score: 4 (4 votes) · EA · GW

I'm assuming that you are somewhat risk averse in the 80K career advice, in that you avoid suggesting speculative cause areas and speculative career paths and other speculative suggestions.

Is that the case? If so, what are some examples of career advice that you intuitively guess you probably should give? (Perhaps advice that someone else should give.)

Comment by edoarad on A collection of researchy projects for Aspiring EAs · 2019-12-02T18:17:49.311Z · score: 3 (3 votes) · EA · GW

Thanks :) I've been thinking a lot specifically about GPI's research agenda, and talking to some of your collaborators there. I'm currently under the impression that it would be very difficult to actually help advance the research agenda without something at least equivalent to a PhD. Ideally, I'd want many more concrete projects coming out of EA org research agendas that are small and self-contained enough for non-experts to work on.

So far, the "what can I do to help" suggestions I've gathered related to GPI's research agenda (and those of similar academic institutions), for people without adequate background knowledge, are mostly:

  1. explain basic papers and concepts.
  2. conduct a literature review, and collect information relevant to one topic.

Ideally, researchers would collect small problems on the side, or write up stuff they'd want someone else to explain. I'm not sure whether that is worth the researcher's time, both to write up the request and to correct errors or poor explanations.

Comment by edoarad on A collection of researchy projects for Aspiring EAs · 2019-12-02T15:58:09.638Z · score: 2 (2 votes) · EA · GW

Thanks, I missed that! There are plenty of great resources in the post and the comments, so I'll definitely look into them.

Comment by edoarad on edoarad's Shortform · 2019-11-25T22:54:05.464Z · score: 1 (1 votes) · EA · GW

Nightdreaming on different aspects of Capacity Building for EA:

Community Building, in the sense of getting more people who are better engaged and with good supporting communities.

Increasing Prestige and normalizing EA-Weirdness in academia, governments and elsewhere.

More money for EA as a whole. Securing sources for the future of the movement, perhaps using some sort of donor advised fund.

Better infrastructure for Altruistic Coordination. Implementations that can increase "liquidity" in moral trade, from donations to knowledge transfer to volunteering opportunities.

Improving research and general productivity. Institutionally or individually.

Better Tools and Frameworks for figuring out what is the most good. Say, the discussions around ITN.

Displaying that we are actually doing good right now. I just realized that pretty much anything can help build better capacity, but the question is which is best.

Comment by edoarad on Updates from Leverage Research: history, mistakes and new focus · 2019-11-25T14:36:27.389Z · score: 4 (3 votes) · EA · GW

Great, this helps me understand my confusion regarding what counts as early stage science. I come from a math background, and I feel that the cluster of attributes above represents a lot of how I see some of the progress there. There are clear examples where the language, intuitions, and background facts are understood to be very far from grasping an observed phenomenon.

Instruments and measurement tools in math can be anything from the intuitions of experts, to familiar simplifications, to technical tools that help (graduate students) tackle subcases (which would themselves be considered "observations").

Different researchers may be in complete disagreement about which tools (in the above sense) and directions are relevant to solving the problem. There is a constant feeling of progress even though it may be completely unrelated to the goal. Some tools require deep expertise in a specific subbranch of mathematics, which makes it harder to collaborate and reach consensus.

So I'm curious if intellectual progress which is dependent on physical tools is really that much different. I'd naively expect your results to translate to math as well.

Comment by edoarad on How do we create international supply chain accountability? · 2019-11-24T20:30:25.783Z · score: 0 (2 votes) · EA · GW

Downvoted, because the concepts are not clear and should be explained, because the topic is removed from the current conversation on EA cause areas, and because I doubt that there is a simple answer to this.

I'd definitely be interested in a post that tries to explain the problem, gives a preliminary analysis of why it may be a good cause area, and presents several possible solutions.

Comment by edoarad on Updates from Leverage Research: history, mistakes and new focus · 2019-11-23T08:23:30.012Z · score: 17 (8 votes) · EA · GW

Thank you for writing this. I was very curious about Leverage and I'm excited to see more clearly what you are going for.

Some off-the-bat skepticism: it seems, a priori, that the research on early stage science is motivated by early stage research directions and tools in psychology. I'm wary of motivated reasoning when coming to conclusions regarding the resulting models of early stage science, especially as it seems to me that this kind of research (like historical research) is very malleable and can inadvertently be argued toward almost any conclusion one is initially inclined to.

What's your take on it?

Also, I'm not quite sure where you draw the line on what counts as early stage research. To take some familiar examples: Einstein's theory of relativity, Turing's cryptanalysis research on the Enigma (with new computing tools), Wiles's proof of Fermat's last theorem, EA's work on longtermism, current research on string theory - are these early stage scientific research?

Comment by edoarad on Updates from Leverage Research: history, mistakes and new focus · 2019-11-23T07:46:43.716Z · score: 1 (1 votes) · EA · GW

Win-. on Windows (that's the Windows key + dot) 😊

Comment by edoarad on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-21T16:55:50.989Z · score: 1 (1 votes) · EA · GW

Well, I imagine that many people are thinking "EA is great, I wish I were a more dedicated person, but I currently need to do X or learn Y or get Z". For example, I'd assume that if there were ten times as many EA orgs, they would mostly be filled by people who were only moderately engaged. Or perhaps we should just wait for them to gain enough career capital.

Comment by edoarad on edoarad's Shortform · 2019-11-21T06:20:40.351Z · score: 1 (1 votes) · EA · GW

Also, what do you think of karma as a measure of a post's contribution to the community? I realize that I am conflating this with a measure of trust, but these are not the same.

When I upvote, I usually think about how useful the post is for the community. Say, I might downvote a post because it was a waste of my time.

Comment by edoarad on edoarad's Shortform · 2019-11-21T06:12:19.871Z · score: 1 (1 votes) · EA · GW

Yea, makes sense. I guess that I'm just piggybacking on the karma to make trade easier.

Maybe you could have Total gained Karma and Unused Karma, where the site's trust is based on the Total, you can only pay using the Unused, but any gain goes to both. This still leaves an option for two members to just transfer each other karma and artificially increase their trust level. This is not that bad, as it only amounts to about twice as large, and I do not really think that people on the forum would do that.
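
A minimal sketch of that two-counter idea (hypothetical names, not an actual Forum feature): trust is read from the lifetime total, payments draw only on the unused balance, and anything received credits both.

```python
class KarmaAccount:
    def __init__(self):
        self.total = 0    # lifetime karma earned; what the site's trust level reads
        self.unused = 0   # spendable balance for transfers/prizes

    def earn(self, amount: int) -> None:
        # Any gain credits both counters, as described above.
        self.total += amount
        self.unused += amount

    def pay(self, other: "KarmaAccount", amount: int) -> None:
        # Payments can only draw on the unused balance, so trust is not spent away.
        if amount > self.unused:
            raise ValueError("not enough unused karma")
        self.unused -= amount
        other.earn(amount)

# Example: a 100-karma transfer spends from the payer's unused balance
# but leaves their total (trust) untouched.
alice, bob = KarmaAccount(), KarmaAccount()
alice.earn(500)
alice.pay(bob, 100)
print(alice.total, alice.unused, bob.total, bob.unused)  # 500 400 100 100
```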

Comment by edoarad on Stefan_Schubert's Shortform · 2019-11-20T15:16:09.804Z · score: 2 (2 votes) · EA · GW

The paper was also posted here on the forum.

Comment by edoarad on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T04:34:30.975Z · score: 4 (3 votes) · EA · GW

Thanks! I'm sorry to hear about your health problems, but I'm glad it's better now :)

Comment by edoarad on edoarad's Shortform · 2019-11-19T20:11:40.017Z · score: 2 (2 votes) · EA · GW

And having great self-directed infrastructure: coaching and psychological assistance, the best learning materials and methods, easier funding for individuals.

Comment by edoarad on edoarad's Shortform · 2019-11-19T19:42:01.910Z · score: 2 (2 votes) · EA · GW

I sometimes think about what would happen if EAs were completely aligned with one another, there was absolute trust and familiarity, and moral trade was easy and comprehensive. A world in which information flows easily and updates the "EA Worldview". A world in which, if someone finds a project which seems like the most important one, it would be extremely simple to use one another to make it happen. A world in which people in EA work on what they can contribute most to, irrespective of their favored cause. You may say I'm a dreamer, but I'm not the only one 😇

Comment by edoarad on edoarad's Shortform · 2019-11-19T16:30:59.446Z · score: 8 (3 votes) · EA · GW

How about an option to transfer karma directly to posts/comments? Perhaps the transfer could be public (part of the karma information of the comment). This may allow some interesting "trades", such as giving prizes for answers (say, as on Stack Exchange) or having people display stronger support for a comment.

Damn... As stated, when people can pay to put karma into posts, there is a problematic "attack" against it. Left as an exercise :)

I still think that karma transfer between people and prizes on comments/posts could be very interesting.

Comment by edoarad on What is the size of the EA community? · 2019-11-19T16:00:44.957Z · score: 11 (6 votes) · EA · GW

Thanks! Note that for the computation to work, you probably should take the mean and not the median. In the survey that's $9,761, so the total (claimed) donations from people who took the survey amount to about $35 million.
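
A rough sketch of why the mean (and not the median) is the figure to multiply here, using made-up donation numbers rather than survey data (the ~3,600 respondent count in the final comment is just what the two figures in this comment imply, not a number taken from the survey):

```python
# Toy, heavily right-skewed donations (hypothetical, not from the EA Survey).
donations = [50, 100, 200, 1_000, 20_000]

n = len(donations)
total = sum(donations)
mean = total / n
median = sorted(donations)[n // 2]

assert mean * n == total        # holds by definition of the mean
print(median * n, "vs", total)  # 1000 vs 21350 -- the median misses the skewed tail

# With the comment's figures: a $9,761 mean implies roughly
# 35_000_000 / 9_761 ≈ 3,600 respondents behind the ~$35 million total.
```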

Also, I see from that same survey that they have (partial) data on contributions to each charity. Note that GiveWell is relatively not that big in total donations.

Comment by edoarad on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T06:09:59.300Z · score: 4 (2 votes) · EA · GW

How do you view the field of Machine Ethics? (I only now heard of it in this AI Alignment Podcast)

Comment by edoarad on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T06:06:32.938Z · score: 15 (5 votes) · EA · GW

It seems like there are many more people who want to get into AI safety, and MIRI's fundamental research, than there is room to mentor and manage them. There are also many independent/volunteer researchers.

It seems that your current strategy is to focus on training, hiring, and reaching out to the most promising talented individuals. Other alternatives might include more engagement with amateurs, and providing more assistance to groups and individuals that want to learn and conduct independent research.

Do you see it the same way? This strategy makes a lot of sense, but I am curious about your take on it. Also, what would change if you had 10 times the management and mentorship capacity?

Comment by edoarad on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T05:47:15.689Z · score: 40 (17 votes) · EA · GW

In the 80k podcast episode with Hilary Greaves she talks about decision theory and says:

Hilary Greaves: Then as many of your listeners will know, in the space of AI research, people have been throwing around terms like ‘functional decision theory’ and ‘timeless decision theory’ and ‘updateless decision theory’. I think it’s a lot less clear exactly what these putative alternatives are supposed to be. The literature on those kinds of decision theories hasn’t been written up with the level of precision and rigor that characterizes the discussion of causal and evidential decision theory. So it’s a little bit unclear, at least to my likes, whether there’s genuinely a competitor to decision theory on the table there, or just some intriguing ideas that might one day in the future lead to a rigorous alternative.

I understand from that that MIRI has little engagement with academia. What is more troubling for me is that it seems that the cases for the major decision theories are looked upon with skepticism by academic experts.

Do you think that is really the case? How do you respond to that? I would personally feel much better if I knew that there are some academic decision theorists who are excited about your research, or if there were a compelling explanation of a systemic failure that accounts for this and applies to MIRI's work specifically.

[The transition to non-disclosed research happened after the interview]

Comment by edoarad on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T05:24:01.510Z · score: 12 (8 votes) · EA · GW

What are some bottlenecks in your research productivity?

Comment by edoarad on edoarad's Shortform · 2019-11-15T15:36:31.896Z · score: 7 (5 votes) · EA · GW

Statistics Without Borders is a volunteer outreach group of the American Statistical Association that provides pro bono services in statistics and data science. Their focus is mostly on developing countries.

They have about 800 Volunteers.

Their Executive Committee consists of volunteers democratically elected from within the volunteer community every two years.

Comment by edoarad on Should animal advocates donate now or later? A few considerations and a request for more. · 2019-11-14T10:29:44.801Z · score: 2 (2 votes) · EA · GW

Great :) I really agree with your first point. Also, it is beneficial to see generic arguments concretely realized in a specific case.

Comment by edoarad on Should animal advocates donate now or later? A few considerations and a request for more. · 2019-11-14T04:52:43.624Z · score: 2 (2 votes) · EA · GW

Thanks for compiling this!

I think that the questions of when we should donate, and whether it should be for research or direct work, are mostly not unique to factory farming. Most of the considerations here apply generally.

One thing that you mention here as being unique to factory farming is that you expect it to decay over time (but first rise, mostly in developing nations). Depending on the time scale, that seems like a reason to prefer acting now more than in other causes, although it doesn't feel like a strong one.

Comment by edoarad on edoarad's Shortform · 2019-11-12T16:16:55.846Z · score: 16 (7 votes) · EA · GW

AMF's cost of nets is decreasing over time due to economies of scale and competition between net manufacturers. https://www.againstmalaria.com/DollarsPerNet.aspx

Comment by edoarad on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-12T05:31:34.758Z · score: 1 (1 votes) · EA · GW

From "Bottlenecks to EA impact":

More dedicated people (e.g. people working at EA orgs, researching AI safety/biosecurity/economics, giving over $1m/year) converted from moderate engagement due to better advanced engagement (e.g. more in-depth discussions about the pros and cons of AI) (note: in the future, we’ll probably avoid giving specific cause areas in our examples)

Is there a consensus that the conversion from moderate engagement should be through better advanced engagement?

Comment by edoarad on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-12T05:17:06.093Z · score: 1 (1 votes) · EA · GW

I'm a bit confused about the "Known Causes" part.

What is approximately the current distribution of resources? Was it displayed/known to respondents?

Even though someone is working within one cause, there are many practical reasons to use a "portfolio" approach (say, risk aversion, moral trade, diminishing marginal returns, making the EA brand more inclusive to people and ideas). It seems that each different approach will lend itself to a different portfolio. I'm not sure how to think about the data here, or what the average means.

Comment by edoarad on The Logic of not eating meat · 2019-11-12T04:35:36.739Z · score: 4 (3 votes) · EA · GW

This post by Peter Hurford presents a somewhat similar case.

Things which I found lacking (both in your post and in Peter's), and which were my cruxes, are the comparison to ACE-recommended charities, and more concrete ways of making the transition easier (it seems that you are not engaging with what makes it difficult).

Comment by edoarad on Some Modes of Thinking about EA · 2019-11-11T21:49:37.299Z · score: 1 (1 votes) · EA · GW

Even though I agree that presenting EA as utilitarianism is alienating and misleading, I think that it is a useful mode of thinking about EA in some contexts. Many practices in EA are rooted in utilitarianism, and many of the people in EA (about half of the respondents to the survey, if I recall correctly) consider themselves utilitarian. So, while Effective Utilitarianism is not the same as EA, I think that outsiders' confusion is sometimes justified.

Comment by edoarad on Forum update: New features (November 2019) · 2019-11-09T07:02:49.286Z · score: 14 (7 votes) · EA · GW

Great features, thanks!

Regarding the pingback, is it maxed at 5? Also, I see that it does not currently ping back to comments/shortform posts - which would also be useful.

I think the pingback might be useful in two ways. First, as an easier way to navigate old posts and the sphere of ideas in the forum. Second, as another proxy measure for the usefulness of a post, similar to academic citations. This should also somewhat encourage people to look through old posts more thoroughly to find previous discussions.

Comment by edoarad on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-11-05T16:13:18.133Z · score: 2 (2 votes) · EA · GW

Sorry, what do you mean by XR?

Regarding earning to give, from reading this it seems that a lot of structure is still needed to maintain motivation and interest. What do you think about that?

Comment by edoarad on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-11-05T06:25:31.866Z · score: 2 (2 votes) · EA · GW

Has there been any further discussion of Task Y? More candidates?

Comment by edoarad on We should choose between moral theories based on the scale of the problem · 2019-11-04T18:20:01.424Z · score: 2 (2 votes) · EA · GW

(And sadly, it's not true that we can assume most other children already have parents looking out for them. Or at least, for your argument to work you need to replace "most other children" with "all other children".)

Comment by edoarad on We should choose between moral theories based on the scale of the problem · 2019-11-04T18:15:42.705Z · score: 1 (1 votes) · EA · GW

Neglectedness is usually taken to be the amount of resources going into a problem. You can measure the resources by "parenting time" (what about orphans, by the way?) but in many cases it is not the most important resource.