Posts

Announcing PriorityWiki: A Cause Prioritization Wiki 2018-06-18T22:33:58.656Z · score: 38 (45 votes)
Lessons for estimating cost-effectiveness (of vaccines) more effectively 2018-06-06T03:18:22.906Z · score: 17 (14 votes)
How beneficial have vaccines been? 2018-05-05T00:08:14.727Z · score: 10 (10 votes)
Announcing Rethink Priorities 2018-03-02T21:03:36.196Z · score: 43 (40 votes)
Charity Science: Health - A New Direct Poverty Charity Founded on EA Principles 2016-08-04T01:08:25.804Z · score: 20 (20 votes)
Reintroducing .impact 2015-04-07T19:19:59.877Z · score: 8 (8 votes)

Comments

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-15T23:27:19.904Z · score: 10 (4 votes) · EA · GW

We in fact do (1) then (2). However, to continue your example, donations to animal work still end up going to animals. If it were the case, say, that we hit the animal total needed for 2020 before the overall total, additional animal donations would go to animal work for 2021.*

It is true in this scenario that in 2020 we'd end up spending less unrestricted funding on animals, but the total spent on animals that year wouldn't change and the animal donations for 2020 would not then be spent on non-animal work.

*To be clear, we would publicly state when we have no more room for further donations, both in general and by cause area.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T23:19:26.025Z · score: 12 (9 votes) · EA · GW

Internally, as part of Rethink Charity, we have fairly standard formal anti-harassment, anti-discrimination, and reasonable accommodation policies. That is, we comply with all relevant anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). We explicitly prohibit offensive behavior (e.g., derogatory comments towards colleagues of a specific gender or ethnicity).

We also provide a way for any of our staff to offer anonymous feedback and information to senior management (which can help assist someone in reporting a claim of harassment or discrimination).

Finally, I’d note that during our hiring round last year we pretty actively sought out and promoted our job to a diverse pool of candidates, and we tracked the performance of our hiring on these metrics. We plan to continue this going forward.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:54:59.258Z · score: 13 (6 votes) · EA · GW

Thanks for the question. We have forthcoming work on ballot initiatives, which will hopefully be published in January, and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.

In addition, we have some plans to investigate potentially high value policies for animal welfare.

On CE's work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering their research outputs.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:48:26.198Z · score: 8 (5 votes) · EA · GW

I honestly don’t know. I’d probably be doing research at another EA charity, or potentially leading (or trying to lead) a slightly different EA charity that doesn’t currently exist. I have previously seriously considered working at other EA organizations, but it's been some time since I last thought hard about this topic.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:45:37.130Z · score: 12 (6 votes) · EA · GW

Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:

Invertebrate sentience was the second most commonly cited (13) piece of work that changed beliefs. It also produced the second-largest number of changed actions of all our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.

Informally, I could add that many people (probably >10) in the animal welfare space have personally told me they think our work on invertebrates changed their opinion about invertebrate sentience (though there is, of course, a chance these people were overemphasizing the work to me). A couple of academics have also privately told us they thought our work was worthwhile and useful to them. These people largely aren't donors, though, and I doubt many of them have started to give to invertebrate charities.

That said, I think the impact of this project in particular is difficult to judge. The diffuse impact of possibly introducing or normalizing discussion of this topic is difficult to capture in surveys, particularly when the answers are largely anonymous, and even if people have been convinced to take invertebrates seriously, the payoffs may not occur until there is an actionable intervention to support.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:28:49.826Z · score: 12 (5 votes) · EA · GW

We have raised half his salary for 2020 and 2021 on a grant explicitly for this purpose. If you’d like to talk more about this, I’d be happy for you to shoot me an email: marcus [at] rtcharity.org

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:11:49.603Z · score: 6 (4 votes) · EA · GW

Thanks for the question! We do research, informed by input from funders, organizations, and researchers, that we think will help funders make better grants and help direct the work organizations do toward higher-impact projects.

So our plans for distribution vary by the audience in question. For funders and particular researchers, we make direct efforts to share our work with them. Additionally, we try to regularly have discussions about our work and priorities with the relevant EA research communities (researchers themselves and org leaders). However, as we've said recently in our impact and strategy update, we think we can do a better job of this type of communication going forward.

For the wider EA community, we haven't undertaken significant efforts to drive more discussion on posts, but this is potentially worth considering. One driver of whether we'd actually decide to do this would be whether we came to believe more work here would increase the chances we hit the goals I mentioned above.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T16:40:03.777Z · score: 6 (4 votes) · EA · GW

Thanks for the question! We do not view our work as necessarily focused on the West. To the extent our work so far has focused on such countries, it's because that's where we think our comparative advantage has centered so far, but as our team learns, and possibly grows, this won't necessarily hold over time.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T16:38:33.442Z · score: 10 (7 votes) · EA · GW

Thanks for the question! To echo Ozzie, I don't think it's fair to directly compare the quality of our work to the quality of GPI's work given we work in overlapping but quite distinct domains with different aims and target audiences.

Additionally, we haven't prioritized publishing in academic journals, though we have considered it for many projects. We don't believe publishing in academic journals is necessarily the best path towards impact in the areas we've published in given our goals and don't view it as our comparative advantage.

All this said, we don't deliberately err towards quantity over quality, but we do consider the time tradeoff of further research on a given topic during the planning and execution phases of a project (though I don't think this is in any way unique to us within EA). We do try to publish more frequently because of our desire for (relatively) shorter feedback loops. I'd also say we think our work is high quality, but I'll let the work speak for itself.

Finally, I take no position on whether EA organizations in general ought to err more or less towards academic publications as I think it depends on a huge number of factors specific to the aims and staffs of each organization.

Comment by marcus_a_davis on Opinion: Estimating Invertebrate Sentience · 2019-11-17T17:37:43.107Z · score: 5 (3 votes) · EA · GW

My ranges represent what I think is a reasonable position on the probability of each creature's sentience given all current input and expected future input. Still, as I said:

> ...the range is still more of a guideline for my subjective impression than a declaration of what all agents would estimate given their engagement with the literature

I could have given a 90% subjective confidence interval, but I wasn't confident enough that making that an explicit goal in creating or communicating my estimates would be helpful.

Comment by marcus_a_davis on Opinion: Estimating Invertebrate Sentience · 2019-11-10T17:15:14.296Z · score: 7 (4 votes) · EA · GW

I meant to highlight a case where I downgraded my belief, in a scenario in which there are multiple ways to update on a piece of evidence.

To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgrade your belief that behavior X is possibly indicative of sentience.* In the instance I outlined, I took the latter version of the fork here.

As to the details of what I learned, the vast bulk of it is in the table itself, in the notes for various learning attributes across taxa. The specific examples I mentioned, along with similar learning behaviors being possible in certain plants and protists, are what made me update negatively on the importance of these learning behaviors as indicative of sentience. For example, it seems classical conditioning, sensitization, and habituation are possible in protists and/or plants.

*Of course, these are not strictly the only options in this type of scenario. It could be, for example, that behavior X is a necessary precondition of behavior Y which you strongly (perhaps independently but perhaps not) think is indicative of sentience. So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.
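To sketch that fork in explicit probability terms (a minimal illustration of my own, not anything from the original exchange): let H be the hypothesis "behavior X is indicative of sentience" and E the observation that rocks, which we give a near-zero prior probability of sentience, exhibit X. In odds form,

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}.
\]

If H were true, we would rarely expect X to show up in presumed non-sentient matter, so P(E | H) < P(E | ¬H), the likelihood ratio is below 1, and the posterior odds on H fall. Taking the other branch of the fork, raising P(rocks are sentient) instead, is only attractive to the extent that our prior against rock sentience is weak.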

Comment by marcus_a_davis on EA Forum 2.0 Initial Announcement · 2018-07-20T01:46:54.433Z · score: 6 (10 votes) · EA · GW

I think the proposed karma system, particularly when combined with highly rated posts being listed higher, is quite a bad idea. In general, if you are trying to ensure the quality of posts and comments while spreading the forum out more broadly, there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation, but even then the weights proposed here don't seem justifiable.

What problem is solved by giving high-karma users up to 16 times the vote weight that would not be solved by giving them "merely" a maximum of 2 times the weight? Or 4 times?

> However, we obviously don’t want this to become a tyranny of a few users. There are several users, holding very different viewpoints, who currently have high karma on the Forum, and we hope that this will help maintain a varied discussion, while still ensuring that the Forum has strong discussion standards.

While it may be true now that there are multiple high-karma users with very different viewpoints, any imbalance among competing viewpoints at the start of a weighted system could feed back on itself. That is to say, if viewpoint X has 50% of the top posters (by weight in the new system), Y has 30%, and Z 20%, viewpoint Z could easily see its share shrink relative to the others, because the differential voting will compound itself over time.
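As a minimal sketch of that feedback loop (every modeling choice below is my own illustrative assumption, not the Forum's actual karma algorithm):

```python
# Toy model: members only upvote their own viewpoint, a faction's per-vote
# weight scales linearly from 1x up to a maximum with its share of total
# karma, and new karma accrues in proportion to (member share x vote weight).

def simulate(member_share, rounds, max_weight=16.0):
    karma = dict(member_share)  # seed karma proportional to member share
    for _ in range(rounds):
        total = sum(karma.values())
        weight = {v: 1.0 + (max_weight - 1.0) * k / total
                  for v, k in karma.items()}
        for v in karma:
            karma[v] += member_share[v] * weight[v]
    total = sum(karma.values())
    return {v: round(k / total, 3) for v, k in karma.items()}

start = {"X": 0.50, "Y": 0.30, "Z": 0.20}
print(simulate(start, rounds=200))                # 16x cap: X dominates
print(simulate(start, rounds=200, max_weight=2))  # 2x cap: much milder drift
```

Under these assumptions the largest starting faction's karma share grows every round, which is exactly the compounding worry; a lower cap weakens the effect considerably but doesn't eliminate it.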

Comment by marcus_a_davis on Announcing Rethink Priorities · 2018-04-18T00:12:12.487Z · score: 1 (1 votes) · EA · GW

Sorry for the extremely slow reply, but yes. That topic is on our radar.

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T05:44:55.138Z · score: 0 (2 votes) · EA · GW

> It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?

I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have priors, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.

An even broader selection tool I think worth considering alongside this is simply "people who know about AI risk" but that's basically the same as Rob's original point of "have some association with the general rationality or AI community."

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T04:50:26.127Z · score: 0 (2 votes) · EA · GW

Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know no one is "truly" neutral, but you have to weigh the potential positive personal incentives of someone invested against potential motivated thinking (or, more accurately in this case, "motivated selection").

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T04:37:50.213Z · score: 0 (2 votes) · EA · GW

I don't disagree that someone who thinks there is a "negligible probability" of AI causing extinction would be unsuited for the task. That's why I said to aim for neutrality.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T04:13:56.917Z · score: 0 (2 votes) · EA · GW

This survey makes sense. However, I have a few caveats:

> Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.

Why should the person overseeing the survey think AI risk is an important cause? Doesn't that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with AI risk (and, obviously, competent to judge who to include in the survey)? The ideal person to me would be neutral, and while finding someone who is truly neutral would likely prove impractical, selecting someone overtly positive would be a bad idea for the same reasons it would be to select someone overtly negative. The point is the aim should be towards neutrality.

> They should also have a chance to comment on the survey itself before it goes out. Ideally it would be checked by someone who understands good survey design, as subtle aspects of wording can be important.

There should be a set time frame to draft a response to the survey before it goes public. A "chance" is too vague.

> It should be impressed on participants the value of being open and thoughtful in their answers for maximising the chances of solving the problem of AI risk in the long run.

Telling people to be open and thoughtful is great, but explicitly tying it to solving long-run AI risk primes them to give certain kinds of answers.

Comment by marcus_a_davis on A response to Matthews on AI Risk · 2015-08-12T23:54:21.468Z · score: 1 (1 votes) · EA · GW

> It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.

I did mean over outcomes. I was referring to this:

> If we're uncertain about Matthews' propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.

That seems mistaken to me, but it could be because I'm misinterpreting it. I was reading it as saying we should split the difference between the two probabilities of success Matthews proposed. However, I thought he was suggesting, and I believe it is correct, that we shouldn't just pick the median between the two, because the smaller number was just an example. His real point was that any tiny probability of success seems equally reasonable from the vantage point of now. If true, we'd then have to spread our prior evenly over that range instead of picking the median between 10^-15 and 10^-50. And given that it's very difficult to put a lower bound on the reasonable range, while a $1000 donation being a good investment depends on a specific lower bound higher than he believes can be justified with evidence, some people came across as unduly confident.

> But if it's even annoying folks at EA Global, then probably people ought to stop using them.

Let me be very clear: I was not annoyed by them, even if I disagree, but people definitely used this reasoning. However, as I often point out, extrapolating from me to other humans is not a good idea, even within the EA community.

Comment by marcus_a_davis on A response to Matthews on AI Risk · 2015-08-11T16:46:47.189Z · score: 10 (10 votes) · EA · GW

I think you are selling Matthews short on Pascal's Mugging. I don't think his point was that you must throw up your hands because of the uncertainty, but that he believes friendly AI researchers have approximately the same amount of evidence that AI research done today has a 10^-15 chance of saving the existence of future humanity as they have for any other infinitesimal but positive chance.

Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn't mean just splitting the difference between 10^-15 and 10^-50, but spreading your belief over all positive outcomes below some reasonable barrier and (potentially) above another* (and this isn't taking into account the non-zero, even if unlikely, probability that despite caution AI research is indeed speeding up our doom). What those numbers are is very difficult to tell, but if the estimation of those boundaries is off (and given the track record of predictions about future technology, that's not implausible), then all current donations could end up doing basically nothing. In other words, his critique is not that we must give up in the face of uncertainty but that the justification for AI risk reduction being valuable right now depends on a number of assumptions with rather large error bars.

Despite what appeared to him to be this large uncertainty, he seemed to encounter many people who brushed aside, or seemingly belittled, all other possible cause areas, and this rubbed him the wrong way. I believe that was his point about Pascal's Mugging. And while you criticized him for not acknowledging that MIRI does not rely on Pascal's Mugging reasoning to support AI research, he never said in the article that they did. He said many people at the conference replied to him with that type of reasoning (and as a fellow attendee, I can attest to a similar experience).

*Normally, I believe, it would be all logically possible outcomes, but obviously it's unreasonable to believe a $1000 donation, which was his example, has, say, a 25% chance of success, given everything we know about how much such work costs, etc. However, where the lower bound on this estimation lies is far less clear.
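To make the arithmetic concrete (my own illustrative calculation using the bounds from the example, not anything Matthews or the post computed): if the probability of success p is taken to be log-uniform between 10^-50 and 10^-15, then

\[
\mathbb{E}[p] = \frac{1}{35}\int_{-50}^{-15} 10^{x}\,dx = \frac{10^{-15} - 10^{-50}}{35\,\ln 10} \approx 1.2 \times 10^{-17},
\]

which is dominated almost entirely by the upper bound, while the log-median of the range, 10^-32.5, is over fifteen orders of magnitude smaller. Which summary you use, and where you set the bounds, swings any cost-effectiveness estimate accordingly.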

Comment by marcus_a_davis on How We Run Discussions at Stanford EA · 2015-04-14T17:12:15.379Z · score: 6 (6 votes) · EA · GW

This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.

Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.

Comment by marcus_a_davis on April Open Thread · 2015-04-12T16:34:17.832Z · score: 0 (0 votes) · EA · GW

That counts. And, as I said above to Ben, I should have been more broad anyway. I just think we can use more first-person narratives about earning to give to present the idea as less of an abstraction.

Of course, I could be wrong, and those who would consider earning to give at all (or would be moved to donate more by hearing such a story) might be equally swayed by a third-person analysis of why it is a good idea for some people.

Comment by marcus_a_davis on April Open Thread · 2015-04-12T16:27:04.925Z · score: 0 (0 votes) · EA · GW

That would count, but I should have been more broad in my statement anyway. People like the "here's what I did and why I did it" narrative, and earning to give could use more of these stories in general. I think a variety of them, showing different perspectives for people in different positions and with different abilities, would be a boon.

Btw, I was quite wrong about there being no first-person accounts; for one, Chris Hallquist has written about this extensively.

Comment by marcus_a_davis on Reintroducing .impact · 2015-04-07T19:43:56.694Z · score: 9 (9 votes) · EA · GW

As for my personal experience with .impact here's a brief summary:

I'm still relatively new to .impact, but I actually don’t recall with clarity how I found it. I believe, with barely over 50% confidence, that Peter Hurford told me about it. So far I've found it very welcoming and bursting with ideas and people willing to help. And if you review the meeting notes over any significant period of time, it is clear many things are getting accomplished. However, even with the ability to search all of Hackpad for projects, finding things by project type can be difficult if you don’t know where to look (an index page sorting projects by type might help). As it stands right now, the easiest way for outsiders and newcomers to find something is often just to ask someone.

Also, for a newcomer, particularly one like myself who doesn't currently offer any particularly in-demand skills like web design or programming, it can be difficult to know what exactly to do if you arrive just looking to help. However, I found the answer to this, as with many things in life, is to just dive in. If you think you can do it and have the time, volunteer. That’s how I ended up writing this post and moderating this forum. It really is the case that if you have the time, there’s probably something you could be working on.

Comment by marcus_a_davis on Earning to Give: Programming Language Choice · 2015-04-07T02:16:19.293Z · score: 4 (4 votes) · EA · GW

As someone currently in the process of learning programming, here are a few thoughts on my attempt at learning two of the bolded languages, Java and Ruby:

I'm currently working through The Odin Project, which has a backend focus on Ruby, and I'd highly recommend it. I'd also recommend Peter's guide to TOP, which I've found very useful; it includes some time estimates, some additional resources, and some things to learn after you complete TOP. Perhaps the biggest plus of TOP for me is that it gives projects of the correct difficulty at the correct time, so that they are challenging but doable. Another of its biggest benefits is the sheer scope of the resources already collected for you. Also, Ruby is far more intuitive than Java.

Before starting TOP I began learning programming by attempting to learn Java on my own without much structure. Going on my own, I'd often spend time attempting to track down a good explanation for topics. There was also the issue of not knowing what a logical learning path looked like, and I think I took some major false steps. The resources I found most beneficial during that time were probably the free courses at Cave of Programming, which covered a wide range of topics but had the huge downside of being somewhat dated video tutorials. Other than that, I didn't find many free resources for learning Java, but there is some pretty cheap material on Udemy, and a subscription to Lynda could be a good investment as well.

Of course, a huge caveat, I am a sample size of one who had no experience at all with programming before starting with Java. People with different backgrounds may have very different experiences.

Comment by marcus_a_davis on April Open Thread · 2015-04-03T02:52:56.952Z · score: 3 (3 votes) · EA · GW

> There is also a contingent of utilitarians within effective altruism who primarily care about reducing and ending suffering. They may be willing to compromise in favor of animal welfare, and not full rights, but I'm not sure. They definitely don't seem a majority of those concerned with animal suffering within effective altruism.

Of course, only actual data on EAs could demonstrate the proportion of utilitarians willing to compromise, but this seems weird. To me, utilitarianism all but commits you to accepting "compromises" on animal welfare, at least in the short term, given historical facts about how groups have gained ethical consideration. As far as I know (anyone feel free to provide examples to the contrary), no oppressed group has ever seen respect for their interests go from essentially "no consideration" (where animals are today) to "equal consideration" without many compromising steps in the middle.

In other words, a utilitarian may want the total elimination of meat eating (though this is also somewhat contentious), but in practice they will take any welfare gains they can get. Similarly, utilitarians may want all wealthy people to donate to effective charities until global poverty is completely solved, but will temporarily "compromise" by accepting only 5% of wealthy people donating 10% of their income to such charities, while pushing people to do better.

So, in practice, utilitarianism would mean setting the bar at perfection (and publicly signaling the highest standard that advances you towards perfection) but taking the best improvement actually on offer. I see no reason this shouldn't apply to the treatment of animals. Of course, other utilitarians may disagree that this is the best long-term strategy (hopefully evidence will settle this question), but that is an argument about game theory, not about whether some improvement is better than none or whether settling for less than perfection is allowable.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T03:55:42.414Z · score: 0 (0 votes) · EA · GW

Ah, I should have guessed that from the "this is being actively pursued" label or I could have just asked there.

Naturally, if you'd like the help, I suspect there may be at least a few people here who, given their familiarity with a given religion, may have a decent idea of how to pitch the focus on effectiveness to a specific group.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T03:10:00.568Z · score: 1 (1 votes) · EA · GW

Are there any first-person pieces by someone who successfully changed careers in order to earn to give? There have been several stories discussing the topic over the past few years, but these all seem to be descriptive third-person accounts or normative analyses.

Even if not, if you've actually made such a change, could you please publicly share your story? I'd like to hear it, and I'd bet many others would too.

Comment by marcus_a_davis on EA Advocates announcement · 2015-04-02T01:10:23.328Z · score: 0 (0 votes) · EA · GW

To answer myself: turns out at least for iBooks the problem was my impatience. It's now in the library and it's still a week before it is officially released. Perhaps Kindle will be the same way.

Still, I so rarely anticipate book releases that I'm not sure if this is common.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T01:00:33.519Z · score: 1 (1 votes) · EA · GW

In navel-gazing curiosity: Has there been a poll done on what EAs think about moral realism?

I searched the Facebook group and Googled a bit but didn't come up with anything.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T00:50:11.342Z · score: 1 (1 votes) · EA · GW

Has anyone else tried pushing EA specifically to religious audiences? There's this on .impact, but it's been a while since that was touched, and I'd guess it could use some follow-up. Doing this could really prove beneficial for reaching favorable audiences, especially if you or someone you're close to is heavily involved in a church.

Comment by marcus_a_davis on EARadio: A New Podcast for Effective Altruists · 2015-04-02T00:37:47.401Z · score: 0 (0 votes) · EA · GW

If you'd like, I can have a go at cleaning up the audio of Ord's talk.

And by "have a go" I mean run it through a few filters to see if it can go from "very bad" to "passable".

Comment by marcus_a_davis on We might be getting a lot of new EAs. What are we going to do when they arrive? · 2015-03-30T19:50:18.383Z · score: 3 (3 votes) · EA · GW

I'm up for helping with both of those. Of course, how much I can help with the former will depend on what exactly needs to be done.

Comment by marcus_a_davis on EA Advocates announcement · 2015-03-27T17:33:01.580Z · score: 2 (2 votes) · EA · GW

A bit OT but this reminded me: Does anyone know if The Most Good You Can Do is coming out for Kindle?

I strongly prefer digital books, and buying it for Kindle is the only way I could leave a verified purchase review on Amazon. However, the book doesn't seem to be available digitally anywhere in the U.S.; iBooks is seemingly selling it for Australia only.

I'm pretty sure I'm grasping at proverbial straws here though.

Comment by marcus_a_davis on Marcus Davis will help with moderation until early May · 2015-03-26T00:18:51.179Z · score: 0 (0 votes) · EA · GW

Ryan and I were discussing doing that for the different subreddits that a given post here might be of interest to. So if it's a post about medical interventions, we'd post it in /r/medicine, for example.

Of course, the Internet is a lot bigger than Reddit, so there are probably many venues related to philanthropy, productivity, philosophy, animal rights, medical interventions, etc. that posts here could be relevant to. I'm going to try to do what I can, but I would appreciate guidance toward relevant venues, and potentially help actually doing the work if it proves to be a huge task.

Comment by marcus_a_davis on Marcus Davis will help with moderation until early May · 2015-03-25T20:26:14.971Z · score: 4 (4 votes) · EA · GW

I'm pretty excited to help out. Of course, as pointed out by Ryan, if anyone has any pointers about spreading our reach more effectively on social media, I'm open to hearing them.

Comment by marcus_a_davis on Open Thread 4 · 2014-11-04T05:59:20.667Z · score: 1 (1 votes) · EA · GW

I'm working on acquiring a more useful skill, but for now, if anyone ever needs some audio editing, perhaps for a potential EA podcast, I can do it.

Also, this job board seems relevant, as skills people have that they might not think would be of use are in demand.

Comment by marcus_a_davis on Open Thread 4 · 2014-11-04T05:28:47.642Z · score: 6 (6 votes) · EA · GW

I'm interested in whether anyone has data on or experience with attempting to introduce people to EA through conventional charitable activities like blood drives, volunteering at a food bank, etc. The idea I've been kicking around is basically to start or co-opt a blood drive or similar event.

While people are engaged in the activity, or before or after it, you introduce them to the idea of EA, possibly even using this conventional charitable event as the prelude to a giving game. On the plus side, the people you are speaking with are self-selected for doing charitable acts, so they might be more receptive to EA than a typical audience. On the downside, this group might be self-selected for people who care a lot about personally getting hands-on with charitable works, which typically aren't the most effective things you can do.

Comment by marcus_a_davis on Open Thread 4 · 2014-11-04T05:17:38.292Z · score: 2 (2 votes) · EA · GW

Can anyone recommend to me some work on existential threats as a whole? I don't just mean AI or technology-related threats, but also nuclear war, climate change, etc.

Btw, Nick Bostrom's Superintelligence is already at the top of my reading list, and I know Less Wrong is currently running a reading group on that book.

Comment by marcus_a_davis on Peter's Personal Review for July-Sep 2014 · 2014-10-07T01:14:30.283Z · score: 1 (1 votes) · EA · GW

This is very useful. As someone still very new to this who wants to contribute more, I find it helpful to see in detail what other EAs are doing. I still struggle with not knowing exactly what I can do now and what realistic goals for behavioral and social change look like, particularly in the short term.

More generally, as someone trying to be more productive and efficient, I find Toggl promising and I'm going to try it out myself.

Comment by marcus_a_davis on Effective altruism as the most exciting cause in the world · 2014-09-26T17:10:17.144Z · score: 5 (5 votes) · EA · GW

Having grown up as one of those people who figured "can't succeed, don't try" with regard to large problems, I think this is a fantastic point that I hadn't considered expressing this way. I think lots of people who currently think like I did could be swayed if the message could get through to them that they can indeed change the world for the better.

Comment by marcus_a_davis on The Economist on "extreme altruism" · 2014-09-19T00:40:22.049Z · score: 2 (2 votes) · EA · GW

Interesting piece. However, the article conflates psychopathy meaning "people with smaller amygdalae" with psychopathy meaning "people with smaller amygdalae who display anti-social behavior". The former group does not necessarily fall within the latter. For example, you may have a smaller than average amygdala and genuinely respond less to the fear and distress of others, but not become a social predator who manipulates people.

And as you point out, it's not clear how this study relates to EAs. It could be that EAs have relatively normal amygdala size but are disproportionately interested in rationality and ethics and hence recognize the good they can and should be doing in the world.

Comment by marcus_a_davis on Introduce Yourself · 2014-09-18T05:30:31.648Z · score: 11 (11 votes) · EA · GW

Hola everyone. I'm Marcus. I'm an audio engineer but I really got into philosophy during college. Eventually that led me to ethics and effective altruism.

I'm currently learning a more financially beneficial skill so I can earn to give. In the meantime, I intend to do everything I can outside of that to contribute and help spread the word of EA.