Posts

Introducing Sentience Institute 2017-06-02T14:43:45.784Z · score: 18 (24 votes)
Why Animals Matter for Effective Altruism 2016-08-22T16:50:40.800Z · score: 23 (26 votes)
EA Interview Series, January 2016: Perumal Gandhi, Cofounder of Muufri 2016-01-12T17:02:13.987Z · score: 10 (10 votes)
EA Interview Series: Michelle Hutchinson, December 2015 2015-12-22T15:46:58.036Z · score: 20 (22 votes)

Comments

Comment by thebestwecan on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T14:34:35.649Z · score: 0 (0 votes) · EA · GW

Another solution (possibly a bad one, but I want to put it out there) is to list the names of people who downvoted. That of course has downsides, but it would add accountability, especially given my suspicion that a few people are doing a lot of the downvoting against certain people and ideas.

Another is to have downvotes 'cost' karma, e.g. having 500 total karma allows you to cast 50 downvotes.
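To make that mechanism concrete, here's a minimal sketch, assuming the hypothetical 10:1 karma-to-downvote ratio from the example above (the function name and ratio are illustrative, not an actual forum feature):

```python
def downvote_budget(total_karma: int, karma_per_downvote: int = 10) -> int:
    """Number of downvotes a user may cast under a karma-cost rule.

    Each downvote 'costs' karma_per_downvote points of total karma,
    so 500 karma at a 10:1 ratio allows 50 downvotes.
    """
    return max(total_karma, 0) // karma_per_downvote

assert downvote_budget(500) == 50
```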

Comment by thebestwecan on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T14:18:19.970Z · score: 0 (0 votes) · EA · GW

Yeah, I'm totally onboard with all of that, including the uncertainty.

My view on downvoting is less that we need to remove it, and more that the status quo is terrible and we should be trying really hard to fix it.

Comment by thebestwecan on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T14:11:28.124Z · score: 0 (2 votes) · EA · GW

Yeah, I don't think downvotes are usually the best way of addressing bad arguments, in the sense of someone making a logical error, being mistaken about an assumption, missing some evidence, etc. As in this thread, I think downvoting those leads to dogpiling, groupthink, and hostility, in a way that outweighs the benefit of flagging bad arguments when thoughtful people don't have time to flag them via a thoughtful comment.

I think downvotes are mostly just good for bad comments, in the sense of someone purposefully lying, relying on personal attacks instead of evidence, or otherwise not abiding by basic norms of civil discourse. In these cases, I don't think the downvoting comes off nearly as hostile.

If you agree with that, then we must just disagree on whether examples (like my downvoted comment above) are bad arguments or bad comments. I think the community does pretty often downvote stuff it shouldn't.

Comment by thebestwecan on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:28:20.141Z · score: -2 (12 votes) · EA · GW

Another concrete suggestion: I think we should stop having downvotes on the EA Forum. I might not be appreciating some of the downsides of this change, but I think they are small compared to the big upside of mitigating the toxic/hostile/dogpiling/groupthink environment we currently seem to have.

When I've brought this up before, people liked the idea, but it never got discussed very thoroughly or implemented.

Edit: Even this comment seems to be downvoted due to disagreement. I don't think this is helpful.

Comment by thebestwecan on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:25:18.414Z · score: -1 (1 votes) · EA · GW

For what it's worth, I think if you had instead commented with: "As a newcomer to this community, I see very little evidence that EA prizes accuracy more than average. This seems contrary to its goals, and makes me feel sad and unwelcome," (or something similar that politely captures what you mean) that would have been a valuable contribution to the discussion.

That being said, you might have still gotten downvoted. People's downvoting behavior on this forum is really terrible and a huge area for improvement in online EA discourse.

Comment by thebestwecan on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:20:53.416Z · score: 0 (2 votes) · EA · GW

I wouldn't concern yourself much with downvotes on this forum. People use downvotes for a lot more than the useful/not useful distinction they're designed for (the most common other reason is just to signal against views they disagree with when they see an opening). I was recently talking to someone about what big improvements I'd like to see in the EA community's online discussion norms, and honestly, if I could either remove bad comment behavior or remove bad liking/voting behavior, it'd actually be the latter.

To put it another way (though I'm still not sure exactly how to explain this): I think no downvotes plus one thoughtful comment explaining why your comment is wrong (and no upvotes on that comment) should do more to change your mind than a large number of downvotes on your comment.

I'm really still in favor of just removing downvotes from this forum, since this issue has been so persistent over the years. I think there would be downsides, but the hostile/groupthink/dogpiling environment that the downvoting behavior facilitates is just really really terrible.

Comment by thebestwecan on Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood · 2017-06-29T15:26:19.596Z · score: 2 (2 votes) · EA · GW

That pragmatic approach makes sense and helps me understand your view better. Thanks! I do feel like the consequences of suggesting objectivism for consciousness are more significant than for "living things," "mountains," and even for terms that are themselves very important, like "factory farming."

The consequences are things like (i) whether we get wrapped up in the ineffability/hard problem/etc. such that we get distracted from the key question (for subjectivists) of "What are the mental things we care about, and which beings have those?" and (ii), in the particular case of small minds (e.g. insects, simple reinforcement learners), whether we try to figure out their mental lives based on objectivist speculation (which, for subjectivists, is misguided) or force ourselves to decide what the mental things we care about are and then thoughtfully evaluate small minds on that basis. I think evaluating small minds is where the objective/subjective difference really starts to matter.

Also, to a lesser extent, (iii) how much we listen to "expert" opinion outside of just people who are very familiar with the mental lives of the being in question, and (iv) unknown unknowns and keeping a norm of intellectual honesty, which seems to apply more to discussions of consciousness than of mountains/etc.

Comment by thebestwecan on Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood · 2017-06-28T21:30:13.456Z · score: 2 (2 votes) · EA · GW

I think Tomasik's essay is a good explanation of objectivity in this context. The most relevant brief section:

Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.

If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:

I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don't think there's a great alternative to this for Type A folks, but what Tomasik does is frequently qualify that when he says something like 5% consciousness for fruit flies, it's only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).

I do worry that this is a bad thing for advocating for small/simple-minded animals, given it makes people think "Oh, I can just assign 0% to fruit flies!" but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.

Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)

Comment by thebestwecan on Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood · 2017-06-28T17:24:29.849Z · score: 4 (4 votes) · EA · GW

Thanks for doing this AMA. I'm curious for more information on your views about the objectivity of consciousness, e.g. Is there an objectively correct answer to the question "Is an insect conscious?" or does it just depend on what processes, materials, etc. we subjectively choose to use as the criteria for consciousness?

The Open Phil conversation notes with Brian Tomasik say:

Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism

(For readers, roughly speaking, Type A physicalism is the view that consciousness lacks an objective definition. Tomasik's well-known analogy is that there's no objective definition of a table, e.g. if you eat on a rock, is it a table? I would add that even if there's something we can objectively point to as our own consciousness (e.g. the common feature of the smell of a mushroom, the emotion of joy, seeing the color red), that doesn't give you an objective definition, in the same way that knowing one piece of wood on four legs is a table, or even having several examples, doesn't give you an objective definition of a table.)

However, in the report, you write as though there is an objective definition (e.g. in the "Consciousness, innocently defined" section), and I feel most readers of the report will get that impression, e.g. that there's an objective answer as to whether insects are conscious.

Could you elaborate on your view here and the reasoning behind it? Perhaps you do lean towards Type A (no objective definition), but think it's still useful to use common sense rhetoric that treats it as objective, and you don't think it's that harmful if people incorrectly lean towards Type B. Or you lean towards Type A, but think there's still enough likelihood of Type B that you focus on questions like "If Type B is true, then is an insect conscious?" and would just shorthand this as "Is an insect conscious?" because e.g. if Type A is true, then consciousness research is not that useful in your view.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-26T20:49:45.251Z · score: 0 (0 votes) · EA · GW

Thanks, Andy. That table had the values of the previous table for some reason. We updated the page.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-14T12:33:58.081Z · score: 4 (4 votes) · EA · GW

I'd like to ask the people who downvoted this post to share their concerns in comments if possible. I know animal content tends to get downvoted by some people on the EA Forum, so this might just be another instance of that, rather than for more specific reasons.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-14T02:00:54.002Z · score: 1 (1 votes) · EA · GW

I (Jacy) was asked a good question privately that I wanted to log my answer to here, about how our RCT approach compares with that of academic social science RCTs, which I also discussed some in my response to Jay.

While there are many features of academic social science research we hope to emulate, e.g. peer review, I think academia also has a lot of issues that we want to avoid. For example, some good science practices, e.g. preregistration, are still uncommon in academia, and there are strong incentives other than scientific accuracy, e.g. publish or perish, that we hope to minimize. I'd venture a speculative guess that the RCTs run by nonprofit researchers in the EA community, e.g. the Mercy For Animals online ads RCT, are higher-quality than most academic RCTs. The most recurrent issue in EA RCTs is low sample size, which seems like more of a funding issue than a skillset/approach issue. (It could be a skillset/approach issue in some ways, e.g. if EA nonprofits should be running fewer RCTs so they can get higher sample sizes on the same budget, which I tentatively agree with and think is the current trend.)

With our Research Network, we're definitely happy to support high-quality academic research. We'd also be happy to hire academics interested in switching to nonprofit research, though we worry that few would be willing to work for the relatively low salaries.

In terms of communicating our research, our lack of PhDs and academic appointments on staff has been at the top of our list of concerns. Unfortunately there's just not a good fix available. Ideally, once we're able to make our first hire, we'd find a PhD who's willing to work for a nonprofit EA salary, but that seems unlikely. We do already have PhDs/academics in our advisory/review network. I've also considered personally going back to school for a PhD, but everyone I've consulted with thinks this wouldn't be worth the time cost.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-13T12:34:51.528Z · score: 1 (1 votes) · EA · GW

We've been in touch with most EAA orgs (ACE, OPP, ACE top/standout charities) and they have expressed interest. We haven't done many hard pitches so far, like "The research suggests X. We think you should change your tactics to reflect that, by shifting from Y to Z, unless you have evidence we're not aware of." We hope to do that in the future, but we are being cautious and waiting until we have a little more credibility and track record. We have communicated our findings in softer ways to people who seem to appreciate the uncertainty, e.g. "Well, our impression of this social movement is that it's evidence for Z tactics, but we haven't written a public report on that yet and it might change by the time we finish the case study."

I (Jacy) would guess that our research-communication impact will be concentrated in that small group of animal advocacy orgs who are relatively eager to change their minds based on research, and perhaps in an even smaller group (e.g. just OPP and ACE). Their interests do influence us to a large extent, not just because it's where they're more open to changing their minds, but because we see them as intellectual peers. There are some qualifications we account for, such as SI having a longer-term focus (in my personal opinion, not sure they'd agree) than OPP's farmed animal program or ACE. I'd say that the interests of less-impact-focused orgs are only a small factor, since the likelihood of change and potential magnitude of change seem quite small.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-09T13:48:47.981Z · score: 1 (1 votes) · EA · GW

The Foundational Summaries page is our only completed or planned project that was primarily intended to delineate research questions. Because of its fairly exhaustive nature, I (Jacy) think it only has to be done once, and now our future research can just go into that page instead of needing to be repeatedly delineated, if that makes sense.

None of the projects in our research agenda are armchair projects, i.e. they all include empirical, real-world study and aggregation of data. You can also find me personally critiquing other EA research projects for being too much about delineation and armchair speculation, instead of doing empirical research. We have also noted that our niche as Sentience Institute within EAA is foundational research that expands the EAA evidence base. That is definitely our primary goal as an organization.

For all those reasons, I'm not very worried about us spending too much time on delineation. There's also just the question of whether these research questions are so difficult to make concrete progress on that our work will not be cost-effective, even though such progress, if achieved, would be very impactful. That's my second-biggest worry about SI's impact (the biggest is that big decision-makers won't properly account for the research results). I don't think there's much we can do to address that concern besides working hard for the next few months or couple of years and seeing what sort of results we can get. We've also had some foundational research from ACE, Open Phil, and other parties that seems to have been taken somewhat seriously by big EAA decision-makers, so that's promising.

We'd be open to giving grants or scholarships to relevant research projects done by graduate students in the social sciences. I don't think the demand for such funding and the amount of funding we could supply are such that it'd be cost-effective to set up a formal grants program at this time (we only have two staff and would like to get a lot of research done by December), but we'd be open to it. Two concerns that come to mind here are: (i) academic research has a lot of limitations, especially when done by untenured junior researchers who have to worry a lot about publishing in top journals, matching their subject matter with the interests of professors, etc., and (ii) part-time/partially-funded research is challenging, to the point that some EA organizations don't even think it's worth the time to have volunteers. There's a lot of administrative cost that could make it not cost-effective overall, and better to just hire full-time researchers.

Those concerns are mitigated by considerations like: (i) grad students, even at that early stage, could have valuable subject matter expertise. For example, I'm always on the prowl for someone who both knows a lot about the academic social movement literature and also approaches it with an EA perspective. I've found few people who have both features to a significant degree. (ii) some might be willing to do relevant research with only minimal amounts of funding and supervision, and that could be very low-hanging fruit. We have our Research Network for this sort of work, and we do hope to continue trying to capture low-hanging fruit with it.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-06T14:09:34.715Z · score: 2 (2 votes) · EA · GW

We (Kelly and Jacy) weren't working at SP when its agenda was written, but my impression is that it was written to broadly encompass most questions relevant to reducing suffering, though it excluded some of the questions already prioritized by the Foundational Research Institute, another EAF project. I (Jacy) think the old SP agenda reflects EAF's tendency to keep doors open, as well as an emphasis on more foundational questions like the "sentience as a phenomenon itself" ones you mention here.

When we were founding SI, we knew we wanted to have a relatively narrow focus. This was the recommendation of multiple advocacy/EA leaders we asked for advice. We also wanted to have a research agenda that was relatively tractable (though of course we don't expect to have definitive answers to any of the big questions in EA in the near future), so we could have a shorter feedback loop on our research progress. As we improve our process, we'll lean more towards questions with longer feedback loops. We also think that the old SP agenda was not only broad in topic, but broad in the skills/expertise necessary for tackling its various projects. Narrowing the focus to advocacy/social change means there's more transferability of expertise between questions.

Finally, it seemed there had been a lot of talk in EA of values spreading as a distinct EA project, especially moral circle expansion, which arguably lies at the intersection of effective animal advocacy and existential risk/far future work, meaning it's been kind of homeless and could benefit from having its own organization.*

All of this led us to focus SI on moral circle expansion and a more narrow/tractable/concrete/empirical research agenda than that of the old SP.** We've considered keeping the old agenda around as a long-term/broad agenda while still focusing on the new one for day-to-day work. I think it's currently still up in the air what exactly will happen to that document.

*The analogy that comes to mind here is cultured/clean meat, i.e. real meat grown from animal cells without slaughter. People in this field argue it's been heavily neglected relative to other scientific projects because it's at the intersection of food science (because the product is food) and medical science (because that's where tissue engineering has been most popular).

**We think even our current mission/agenda is very broad, which is why we have the even narrower focus on animal farming right now. We think that narrower focus could change in the next few years, but we expect SI as an organization to be focused on moral circle expansion for the long haul.

Comment by thebestwecan on Introducing Sentience Institute · 2017-06-02T23:32:46.964Z · score: 9 (9 votes) · EA · GW

We're planning to make predictions about movement progress (e.g. the rate of corporate welfare reforms) as well as our own goals (e.g. the amount of evidence our research generates, ideally evaluated by an external party, and the number of influential advocates who change their minds based on our research). This is similar to what ACE and OPP do, and I think other EA orgs as well. With the self-predictions, we've been thinking we'll have some lower bounds where, if we're consistently underperforming, we'll note that on our Mistakes page and consider big-picture reprioritization such as doing more outreach work.

We're currently in touch with several influential animal advocates because of our survey for the Foundational Summaries page, which asks how EAA researchers currently view the evidence on these questions. We've had positive feedback on it so far, above our expectations, and it'll give us a good first feedback loop to see if our work is useful.

I'd also note that we wanted to get a 'minimum viable product' out there ASAP, so lots of our specific plans are still up in the air. We're still very interested in feedback, and now that we've got the website published, we'll be able to spend more time on the nitty-gritty. Full disclosure: we're still going to need to spend a lot of time fundraising in the coming weeks, and then of course we need to do the actual research, so I'm not sure how much we should prioritize progress-tracking relative to that work.

Comment by thebestwecan on Dedicated Donors May Not Want to Sign the Giving What We Can Pledge · 2016-10-30T13:33:28.180Z · score: 1 (1 votes) · EA · GW

.

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-09-04T17:37:00.661Z · score: 0 (0 votes) · EA · GW

The last account is presumably a dummy one created by mapping comments from other sites to the EA Forum, but yeah, the first two are mine.

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-08-25T06:03:42.703Z · score: 2 (2 votes) · EA · GW

I basically agree with this. Some feedback on this post before it was published suggested that I add even more content justifying animal sentience. I pushed back on that for reasons you mention, but still wanted to include the quoted section because (i) even if most people agree with animal sentience when asked, it's a different matter to "appreciate" it and recognize the implications for cause prioritization and other moral decisions, (ii) some people in the EA community have noted skepticism about animal sentience as the main reason for not prioritizing animal advocacy (although this happens less as time goes on), so I wanted to directly confront that.

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-08-25T05:30:08.081Z · score: -3 (3 votes) · EA · GW

Evidence is (i) the downvoting is on certain users/topics, rather than certain arguments/rhetoric, (ii) lots of downvotes relative to a small number of negative comments, (iii) strange timing, e.g. I quickly got two downvotes on the OP before anyone had time to read it (<2 minutes).

I think it happens to me some, but I think it happens a lot to animal-focused content generally.

Edit: Just to be clear, I mean "systematically downvoting content that contributes to the discussion because you disagree with it, you don't like the author, or other 'improper' reasons." Maybe "brigades" was the wrong word if it suggests coordination; I'm updating towards that view after searching online for more uses of the term. Though there might be coordination, I'm not really sure.

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-08-23T22:47:47.687Z · score: 4 (6 votes) · EA · GW

Just to be clear, I do "believe in" the near-term reasons outlined in this article, even though far future arguments also matter a lot to me. I also think there's a lot of overlap, e.g. if something is neglected now, that can be good reason to think it will continue being neglected. I don't think this post deviated from evidence-based thinking, the use of reason, or intellectual honesty.

I think personal posts are important, but introductory content and topic summaries are also useful. Several people have asked for a post on "why animals matter" like this one, and I don't think they'd have been nearly as interested in a post where >75% of the content was about the far future considerations.

Also, in case anyone missed it, I did mention this in the post: "Consideration of the far future is the strongest factor in favor of prioritizing animal advocacy for many long-time EAs, including myself."

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-08-23T17:12:36.348Z · score: 2 (2 votes) · EA · GW

Yep. I've used the "Tyrael" username on here for posts that I might have wanted to keep anonymous (largely due to the downvoting brigades), but ended up being okay with it being nonanonymous after the fact.

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-08-23T17:09:24.376Z · score: 2 (2 votes) · EA · GW

I worry that simplifying it to one level of neglectedness is (i) not as good a way of driving home just how neglected it is, because people have trouble appreciating very large/very small numbers, and (ii) potentially misleading, because the fact that it's two levels (3%, 1%) might make the total neglectedness more or less than if it were just one bigger level (3% * 1%). E.g. other animal advocates could notice the neglectedness within their own first-level cause of "helping animals," and farmed animal protection could receive more resources than if it were a first-level cause area all out on its own, so to speak.
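To spell out the arithmetic, here's a minimal sketch using the illustrative percentages from this thread (the 3% and 1% figures are the examples above, not measured values):

```python
# Two nested levels of neglectedness multiply together:
share_to_animals = 0.03   # e.g. 3% of altruistic resources go to helping animals
share_to_farmed = 0.01    # e.g. 1% of those resources go to farmed animals

combined = share_to_animals * share_to_farmed
print(f"{combined:.2%} of all resources")  # 0.03% of all resources
```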

Granted, (ii) seems like a pretty minor point in the scheme of things, and I do appreciate the sentiment of not wanting to draw lines between humans and other animals, especially due to the excessive use of those lines throughout history to justify animal abuse (and the use of similar lines between humans to justify the abuse of some humans).

Comment by thebestwecan on Why Animals Matter for Effective Altruism · 2016-08-22T20:28:14.085Z · score: 4 (6 votes) · EA · GW

Great question! Yeah, I personally favor animal advocacy over reducing extinction risk. (I use existential risk to include both risks of extinction and risks of well-populated, but morally bad, e.g. dystopian, futures.) Here's another blog post that talks about some things to consider when deciding which of these long-term risks to prioritize: http://effective-altruism.com/ea/t3/some_considerations_for_different_ways_to_reduce/

Also note that some work might decrease both extinction and quality risks, such as general EA movement-building and research. Also, "animal advocacy" is kind of a vague term, which could refer just to "values spreading" (i.e. trying to inspire people, now and/or in the future, to have better values and/or act more morally), or more generally to "helping animals." If it's used in the latter sense, then it could include extinction risk reduction, if you think that will help future animals or animal-like beings (e.g. sentient machines).

Comment by thebestwecan on EA Interview Series, January 2016: Perumal Gandhi, Cofounder of Muufri · 2016-01-18T16:53:32.189Z · score: 1 (1 votes) · EA · GW

This kind of dynamic applies to pretty much everything we do - it would very often be achieved later anyway.

I don't think it applies nearly as strongly to most forms of social change, which is a significant benefit of that strategy. You might argue that moral progress is inevitable, but I'm quite skeptical of that hypothesis.

But I would agree that speeding things up can still be really valuable, especially given major uncertainty about affecting the far future.

Comment by thebestwecan on EA Interview Series: Michelle Hutchinson, December 2015 · 2015-12-23T05:46:38.149Z · score: 0 (0 votes) · EA · GW

I think once a month is ideal. More frequently and I think it might be less interesting/notable. But I don't feel strongly and could see myself changing my mind pretty easily here.

Comment by thebestwecan on EA Interview Series: Michelle Hutchinson, December 2015 · 2015-12-23T05:45:25.469Z · score: 0 (0 votes) · EA · GW

I'm not sure which is better generally. I think an interview takes less effort from the 'recipient' and from the community, which is a pretty important advantage. People also might see it as higher quality when they see it in their news feed, which would lead to more engagement. (With AMAs on smaller forums like this one, people probably expect the questions to not be great, or to have to sift through a lot of uninteresting stuff.)

Comment by thebestwecan on Empathic communication and strategy for Effective Altruism, Part 1 · 2015-09-27T04:10:28.730Z · score: 3 (3 votes) · EA · GW

Welcome, Alan! :)

Comment by thebestwecan on EA Blogging Carnival: My Cause Selection · 2015-08-17T03:50:42.583Z · score: 4 (4 votes) · EA · GW

I think someone was also planning to organize links to the posts in a single article for easy reference, probably towards the end of the event!

Comment by thebestwecan on The career questions thread · 2015-06-20T06:08:36.350Z · score: 3 (3 votes) · EA · GW

Which careers give you the best public platform to spread important ideas like effective altruism, cosmopolitanism, antispeciesism, or accounting for the interests of future generations?

To put it another way, which individuals have the most influence over the ideas of society (accounting for difficulty in getting those positions)?

Comment by thebestwecan on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-17T19:25:12.432Z · score: 2 (2 votes) · EA · GW

Thanks for the feedback. I agree that particular test/conclusion was unnecessary/misleading. I think we'll be more careful to avoid tests like that in future survey analyses :)