Posts

Ask Rethink Priorities Anything (AMA) 2020-12-13T17:23:50.994Z
Rethink Priorities 2020 Impact and 2021 Strategy 2020-11-25T13:28:27.710Z
Announcing PriorityWiki: A Cause Prioritization Wiki 2018-06-18T22:33:58.656Z
Lessons for estimating cost-effectiveness (of vaccines) more effectively 2018-06-06T03:18:22.906Z
How beneficial have vaccines been? 2018-05-05T00:08:14.727Z
Announcing Rethink Priorities 2018-03-02T21:03:36.196Z
Charity Science: Health - A New Direct Poverty Charity Founded on EA Principles 2016-08-04T01:08:25.804Z
Reintroducing .impact 2015-04-07T19:19:59.877Z

Comments

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-16T19:52:14.407Z · EA · GW

What new charities do you want to be created by EAs?

I don't have any strong opinions about this, and it would likely take months of work to develop them. In general, I don't know enough to suggest whether it is more desirable for new charities to work in areas I think could use more work than for existing organizations to scale up their work in those domains.

What are the biggest mistakes Rethink Priorities made?

Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-16T19:49:32.362Z · EA · GW

Thanks for the questions!

If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?

I think this depends on many factual beliefs you hold, including what groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than a 0.1% chance), I think farmed and wild animals are plausible candidates for enduring some of the worst suffering.

Specifically, I'd say some of the worst persistent current suffering is plausibly endured by farmed chickens and fish, and thus work to reduce the worst aspects of those conditions is a decent bet to prevent extreme suffering. Similarly, wild animals likely experience the largest share of extreme suffering currently because of their sheer numbers and the nature of lives lived largely without interventions to prevent, say, the suffering of starvation or extreme physical pain. For these reasons, work to improve conditions for wild animals could plausibly be a good investment.

Still restricted to the present, and outside the typical EA space altogether, I think it's plausible much of the worst suffering in the world is inflicted through war crimes or torture under various authoritarian states. I do not know if there's anything remotely tractable in this space or what good donation opportunities would be.

If you broaden consideration to include the future, a much wider set of creatures could plausibly experience extreme suffering, including digital minds running at higher speeds and/or with an intensity of valenced experience beyond what's currently possible in biological creatures. Here, what you think is the best bet would again depend on many empirical beliefs. I would say only that I'm excited about our longtermism work and think we'll meaningfully contribute to creating the kind of future that decreases the risks of these types of outcomes.

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-15T19:57:36.335Z · EA · GW

Thanks for the question, Edo!

We keep a large list of project ideas, and regularly add to it by asking staff, funders, advisors, and organizations in the spaces we work in for project ideas.

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-15T19:46:31.153Z · EA · GW

Hey Edo, thanks for the question!

We've had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers. We think we're better placed to handle that now, so we may explore it again in the coming years, though we are generally hesitant about depending on free labor.

We've not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I'm not sure this is what you meant, but we've also partnered with Metaculus on some forecasting questions.

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-15T19:32:07.046Z · EA · GW

Hey Josh, thanks for the question!

From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered.

At the operational level, we set targets for the percentage of time we want to spend on each cause area based on these factors, and we re-evaluate those targets as our existing commitments, the data, and our opinions about these matters change.

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-15T19:15:03.886Z · EA · GW

I think it's going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, I think bouncing ideas off each other improves and sharpens our ideas. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization.

Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-15T19:01:11.082Z · EA · GW

Thanks for the question!

We hire for fairly specific roles, and the difference between those we do and don't hire isn't necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).

That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic; to produce quantitative models and do data analysis when appropriate (obviously this is more relevant in certain roles than others); and to compile this information into understandable writing that highlights the important features and addresses topics with clarity. However, which combination of these skills is most desired at a given time depends on current team fit and the role each hire would be stepping into.

For these reasons, it's difficult to say with precision which skills I'd hope for more of among EA researchers. With those caveats, I'd still say a demonstration of these skills through producing high-quality work, be it academic or in blog posts, is in fact a useful proxy for the kinds of work we do at RP.

Comment by marcus_a_davis on Ask Rethink Priorities Anything (AMA) · 2020-12-15T18:40:23.988Z · EA · GW

Thanks for the questions!

On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausibly tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how "weird" the public finds WAW interventions).

We generally defer to WAI on matters of direct outreach (both academic and general public) and do not prioritize that area as much as WAI and Animal Ethics do. It's hard to say more on how our vision differs from WAI without them commenting, but we collaborate with them a lot and we are next scheduled to sync on plans and vision in early January.

On (2), it's hard to predict exactly what additional restricted donations do, but in general we expect them, in the long run, to increase how much we spend in a cause by an amount similar to the amount donated. Reasons for this include: we budget on a fairly long-term basis, so we generally try to predict what we will spend in a space and then raise that much funding. If we don't raise as much as we'd like, we would likely consider allocating our expenses differently; and if we raise more than we expected, we'd scale up our work in a cause area. Because our ability to work in spaces is influenced by how much we raise, raising more restricted funding in a space generally ought to lead to us doing more work in that space.

Comment by marcus_a_davis on Rethink Priorities 2020 Impact and 2021 Strategy · 2020-11-25T21:32:04.727Z · EA · GW

Thanks for the question!

I think the short answer is this: what we think of doing projects in the "improving the collective understanding" space depends on a number of factors, including the nature of the project, the probability of that general change in perspective leading to changed actions in the future, and how important it would be if that change occurred.

One very simplistic model you can use to think about possible research projects in this area is:

  1. Big considerations (classically "crucial considerations", e.g. moral weight, invertebrate sentience)
  2. New charities/interventions (presenting new ideas or possibilities that can be taken up)
  3. Immediate influence (analysis to shift ongoing or pending projects, donations, or interventions)

It's far easier to tie work in categories (2) or (3) to changed behavior. By contrast, projects or possible research that fall into category (1) can be very difficult to map to specific plausible changes ahead of time and, sometimes, even after the completion of the work. These projects are also more likely to be boom or bust, in that the results of investigating them could have huge effects if we or others shift our beliefs, but it can be fairly unlikely that beliefs change at all. That said, I think these types of projects can be very valuable and we try to dedicate some of our time to doing them.

I think it's fair to say these types of "improving some collective understanding of prioritization" projects have been a minority of the projects we've done and of those listed for the coming year. However, there are many caveats here, including but not limited to:

  • The nature of the project, our fit, and what others are working on have a big impact on which projects we take on. So even if, in theory, we thought a particular research idea was really worth pursuing, many factors go into whether we actually take it on.
  • These types of projects have historically taken longer to complete, so they may be smaller in number but represent a larger share of our overall work hours than counting projects would suggest at first glance.

Comment by marcus_a_davis on Shrimp Welfare - 2020 Recommended idea · 2020-11-10T16:41:59.879Z · EA · GW

Hey, I'm happy to see this on the forum! I think farmed shrimp interventions are a promising area and this report highlights some important considerations. I should note that Rethink Priorities has also been researching this topic for a while. I won't go into detail, as I'm not leading this work and the person who is currently is on leave, but I think we've tentatively come to some different conclusions about the most promising next steps in this domain.

In the future, if anyone reading this is inclined to work on farmed shrimp, in addition to reviewing this report I'd hope you'd read over our forthcoming work and/or reach out to us about this area.

Comment by marcus_a_davis on Differences in the Intensity of Valenced Experience across Species · 2020-11-01T21:39:46.800Z · EA · GW

> I think 1 and 2 should result in the exact same experiences (and hence same intensity) since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference to me seems unphysical, because they didn't do anything in 1 where they were present.

I'm unclear why you think proportion couldn't matter in this scenario.

I've written a pseudo-program in Python below in which proportion does matter, removing neurons that don't fire alters the experience, and the raw number of neurons involved is incidental to the outputs (10 out of 100 gets the same result as 100 out of 1000) [assuming there is a set of neurons to be checked at all]. I don't believe consciousness works this way in humans or other animals, but I don't think anything about this is obviously incorrect given the constraints of your thought experiment.

One place where this might be incorrect is in checking whether a neuron is firing; this might be seen as violating the constraint that the inactive neurons are actually inactive. But this could be conceived of as a third group of neurons checking for input from this set. Even if this particular program is slightly astray, it seems plausible an altered version of it would meet the criteria for proportion to matter.

def pain_intensity(level):
	# placeholder so the example runs; stands in for whatever downstream
	# process the intensity level would feed into
	return level

def experience_pain(nociceptive_neurons_list):
	# nociceptive_neurons_list is a list of neurons represented by 0's and 1's,
	# where 1 means an individual neuron is firing and 0 means it is not
	proportion_firing = proportion_of_neurons_firing(nociceptive_neurons_list)

	if 0 < proportion_firing < 0.3:
		return pain_intensity(1)
	elif 0.3 <= proportion_firing < 0.6:
		return pain_intensity(2)
	elif 0.6 <= proportion_firing < 1:
		return pain_intensity(5)
	elif proportion_firing == 1:
		return pain_intensity(10)
	else:
		return pain_intensity(0)

def proportion_of_neurons_firing(nociceptive_neurons_list):
	num_neurons_firing = 0
	for neuron in nociceptive_neurons_list:
		if neuron == 1:
			num_neurons_firing += 1 # add 1 for every neuron that is firing

	return num_neurons_firing / get_number_of_pain_neurons(nociceptive_neurons_list) # return the proportion firing

def get_number_of_pain_neurons(nociceptive_neurons_list):
	return len(nociceptive_neurons_list) # get length of list

pain_list_all_neurons = [0, 0, 0, 1, 1]
pain_list_only_firing = [1, 1]

experience_pain(pain_list_all_neurons) # returns pain_intensity(2)
experience_pain(pain_list_only_firing) # returns pain_intensity(10)

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-15T23:27:19.904Z · EA · GW

We in fact do (1) then (2). However, to continue your example, donations to animal work still end up going to animals. If it were the case, say, that we hit the animal total needed for 2020 before the overall total, additional animal donations would go to animal work for 2021.*

It is true in this scenario that in 2020 we'd end up spending less unrestricted funding on animals, but the total spent on animals that year wouldn't change and the animal donations for 2020 would not then be spent on non-animal work.

*We would very much state publicly when we have no more room for further donations in general, and by cause area.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T23:19:26.025Z · EA · GW

Internally, as part of Rethink Charity, we have fairly standard formal anti-harassment, discrimination, and reasonable accommodation policies. That is, we comply with all relevant anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). We explicitly prohibit offensive behavior (e.g., derogatory comments towards colleagues of a specific gender or ethnicity).

We also provide a way for any of our staff to offer anonymous feedback and information to senior management (which can help assist someone in reporting a claim of harassment or discrimination).

Finally, I’d note that during our hiring round last year we pretty actively sought out and promoted our job openings to a diverse pool of candidates, and we tracked the performance of our hiring on these metrics. We plan to continue this going forward.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:54:59.258Z · EA · GW

Thanks for the question. We have forthcoming work on ballot initiatives, which will hopefully be published in January, and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.

In addition, we have some plans to investigate potentially high value policies for animal welfare.

On CE's work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering the outputs of their work.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:48:26.198Z · EA · GW

I honestly don’t know. I’d probably be doing research at another EA charity, or potentially leading (or trying to lead) a slightly different EA charity that doesn’t currently exist. I have previously seriously considered working at other EA organizations, but it's been some time since I last thought hard about it.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:45:37.130Z · EA · GW

Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:

Invertebrate sentience was the second most common (13) piece of work that changed beliefs. It also prompted the second-largest number of changed actions of all our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.

Informally, I could add that many people (probably >10) in the animal welfare space have personally told me they think our work on invertebrates changed their opinion about invertebrate sentience (though there is, of course, a chance these people were overemphasizing the work to me). A couple of academics have also privately told us they thought our work was worthwhile and useful to them. These people largely aren't donors, though, and I doubt many of them have started to give to invertebrate charities.

That said, I think the impact of this project in particular is difficult to judge. The diffuse impact of possibly introducing or normalizing discussion of this topic is difficult to capture in surveys, particularly when the answers are largely anonymous, and even if people have been convinced to take the issue seriously, the payoffs may not occur until there is an actionable intervention to support.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:28:49.826Z · EA · GW

We have raised half his salary for 2020 and 2021 on a grant explicitly for this purpose. If you’d like to talk more about this, I’d be happy for you to shoot me an email: marcus [at] rtcharity.org

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T22:11:49.603Z · EA · GW

Thanks for the question! We do research, informed by input from funders, organizations, and researchers, that we think will help funders make better grants and help direct the work organizations do towards higher-impact work.

So our plans for distribution vary by the audience in question. For funders and particular researchers, we make direct efforts to share our work with them. Additionally, we try to regularly have discussions about our work and priorities with the relevant existing EA research communities (researchers themselves and org leaders). However, as we've said recently in our impact and strategy update, we think we can do a better job of this type of communication going forward.

For the wider EA community, we haven't undertaken significant efforts to drive more discussion on posts, but this is something potentially worth considering. I'd say one driver of whether we'd actually decide to do this would be coming to believe that more work here would increase the chances we hit the goals I mentioned above.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T16:40:03.777Z · EA · GW

Thanks for the question! We do not view our work as necessarily focused on the West. To the extent our work so far has focused on such countries, it's because that's where we think our comparative advantage currently lies, but as our team learns, and possibly grows, this won't necessarily hold over time.

Comment by marcus_a_davis on We're Rethink Priorities. AMA. · 2019-12-13T16:38:33.442Z · EA · GW

Thanks for the question! To echo Ozzie, I don't think it's fair to directly compare the quality of our work to the quality of GPI's work given we work in overlapping but quite distinct domains with different aims and target audiences.

Additionally, we haven't prioritized publishing in academic journals, though we have considered it for many projects. We don't believe publishing in academic journals is necessarily the best path towards impact in the areas we've published in, given our goals, and we don't view it as our comparative advantage.

All this said, we don't deliberately err more towards quantity over quality, but we do consider the time tradeoff of further research on a given topic during the planning and execution phases of a project (though I don't think this is in any way unique to us within EA). We do try to publish more frequently because of our desire for (relatively) shorter feedback loops. I'd also say we think our work is high quality but I'll let the work speak for itself.

Finally, I take no position on whether EA organizations in general ought to err more or less towards academic publications as I think it depends on a huge number of factors specific to the aims and staffs of each organization.

Comment by marcus_a_davis on Opinion: Estimating Invertebrate Sentience · 2019-11-17T17:37:43.107Z · EA · GW

My ranges represent what I think is a reasonable position on the probability of each creature's sentience given all current input and expected future input. Still, as I said:

...the range is still more of a guideline for my subjective impression than a declaration of what all agents would estimate given their engagement with the literature

I could have made a 90% subjective confidence interval, but I wasn't confident enough that such an explicit goal in creating or distributing my understanding would be helpful.

Comment by marcus_a_davis on Opinion: Estimating Invertebrate Sentience · 2019-11-10T17:15:14.296Z · EA · GW

I meant to highlight a case where I downgraded my belief in a scenario in which there were multiple ways to update on a piece of evidence.

To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgrade your belief that behavior X is possibly indicative of sentience.* In the instance I outlined, I took the latter version of the fork here.

As to the details of what I learned, the vast bulk of it is in the table itself, in the notes for various learning attributes across taxa. The specific examples I mentioned, along with similar learning behaviors being possible in certain plants and protists, are what made me update downwards on the importance of these learning behaviors as indicators of sentience. For example, it seems classical conditioning, sensitization, and habituation are possible in protists and/or plants.

*Of course, these are not strictly the only options in this type of scenario. It could be, for example, that behavior X is a necessary precondition of behavior Y which you strongly (perhaps independently but perhaps not) think is indicative of sentience. So, you might think, the absence of behavior X would really be evidence against sentience, while its presence alone in a creature might not be relevant to determining sentience.

Comment by marcus_a_davis on EA Forum 2.0 Initial Announcement · 2018-07-20T01:46:54.433Z · EA · GW

I think the proposed karma system, particularly when combined with highly rated posts being listed higher, is quite a bad idea. In general, if you are trying to ensure the quality of posts and comments while spreading the forum out more broadly, there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation, but even then the weights proposed here don't seem justifiable.

What problem is being solved by giving high-karma users up to 16 times the maximum vote weight that would not be solved by giving them "merely" a maximum of 2 times the possible weight? Or 4 times?

> However, we obviously don’t want this to become a tyranny of a few users. There are several users, holding very different viewpoints, who currently have high karma on the Forum, and we hope that this will help maintain a varied discussion, while still ensuring that the Forum has strong discussion standards.

While it may be true now that there are multiple users with high karma holding very different viewpoints, any imbalance among competing viewpoints at the start of a weighted system could feed back on itself. That is to say, if viewpoint X has 50% of the top posters (by weight in the new system), Y has 30%, and Z 20%, viewpoint Z could easily see its share shrink relative to the others, because the differential voting will compound over time.
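To make that worry concrete, here is a minimal sketch (mine, not part of the original discussion) of the feedback loop. It assumes, purely for illustration, that posts are authored in proportion to each viewpoint's current share of vote weight, that voters only upvote posts from their own camp, and that next period's vote weight tracks the karma each camp received; real voting would be far less tribal, so this is an upper bound on the effect rather than a prediction.

import random

def simulate_weight_shares(shares, periods=20, posts_per_period=1000):
	# shares: initial fraction of total vote weight held by each viewpoint
	shares = list(shares)
	for _ in range(periods):
		karma = [0.0 for _ in shares]
		for _ in range(posts_per_period):
			# each post's author is drawn in proportion to current weight share...
			author = random.choices(range(len(shares)), weights=shares)[0]
			# ...and the post's karma equals the vote weight of the author's own camp
			karma[author] += shares[author]
		total = sum(karma)
		shares = [k / total for k in karma]
	return shares

print(simulate_weight_shares([0.5, 0.3, 0.2]))
# typically prints something close to [1.0, 0.0, 0.0]: the largest starting
# viewpoint compounds its weight while the smallest shrinks toward zero

Under these deliberately extreme assumptions the initial 50/30/20 split collapses toward a single viewpoint within a couple of dozen periods; less tribal voting would slow the drift, but the compounding mechanism is the same one described above.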

Comment by marcus_a_davis on Announcing Rethink Priorities · 2018-04-18T00:12:12.487Z · EA · GW

Sorry for the extremely slow reply, but yes. That topic is on our radar.

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T05:44:55.138Z · EA · GW

> It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?

I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.

An even broader selection tool I think is worth considering alongside this is simply "people who know about AI risk", but that's basically the same as Rob's original point of "have some association with the general rationality or AI community."

Edit: Should say "Naturally, we all have priors..."

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T04:50:26.127Z · EA · GW

Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know no one is "truly" neutral, but you have to weigh the potential positive personal incentives of someone invested against the potential for motivated thinking (or, more accurately in this case, "motivated selection").

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T04:37:50.213Z · EA · GW

I don't disagree that someone who thinks there is a "negligible probability" of AI causing extinction would not be suited for the task. That's why I said to aim for neutrality.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

Comment by marcus_a_davis on Let's conduct a survey on the quality of MIRI's implementation · 2016-02-20T04:13:56.917Z · EA · GW

This survey makes sense. However, I have a few caveats:

> Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.

Why should the person overseeing the survey think AI risk is an important cause? Doesn't that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with AI risk (and, obviously, competent to judge who to include in the survey)? The ideal person to me would be neutral, and while finding someone who is truly neutral would likely prove impractical, selecting someone overtly positive would be a bad idea for the same reasons it would be to select someone overtly negative. The point is the aim should be towards neutrality.

> They should also have a chance to comment on the survey itself before it goes out. Ideally it would be checked by someone who understand good survey design, as subtle aspects of wording can be important.

There should be a set time frame to draft a response to the survey before it goes public. A "chance" is too vague.

> It should be impressed on participants the value of being open and thoughtful in their answers for maximising the chances of solving the problem of AI risk in the long run.

Telling people to be open and thoughtful is great, but explicitly tying it to solving long run AI risk primes them to give certain kinds of answers.

Comment by marcus_a_davis on A response to Matthews on AI Risk · 2015-08-12T23:54:21.468Z · EA · GW

> It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.

I did mean over outcomes. I was referring to this:

> If we're uncertain about Matthews propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.

That seems mistaken to me, but it could be because I'm misinterpreting it. I was reading it as saying we should split the difference between the two probabilities of success Matthews proposed. However, I thought he was suggesting, and believe it is correct, that we shouldn't just pick the median between the two, because the smaller number was just an example. His real point was that any tiny probability of success seems equally reasonable from the vantage point of now. If true, we'd then have to spread our prior evenly over that range instead of picking the median between 10^-15 and 10^-50. And given that it's very difficult to put a lower bound on the reasonable range, but that a $1000 donation being a good investment depends on a specific lower bound higher than he believes can be justified with evidence, some people came across as unduly confident.

> But if it's even annoying folks at EA Global, then probably people ought to stop using them.

Let me be very clear: I was not annoyed by them, even if I disagree, but people definitely used this reasoning. However, as I often point out, extrapolating from me to other humans is not a good idea even within the EA community.

Comment by marcus_a_davis on A response to Matthews on AI Risk · 2015-08-11T16:46:47.189Z · EA · GW

I think you are underselling Matthews on Pascal's Mugging. I don't think his point was that you must throw up your hands because of the uncertainty, but that he believes friendly AI researchers have approximately the same amount of evidence that AI research done today will have a 10^-15 chance of saving the existence of future humanity as they have for any other infinitesimal but positive chance.

Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn't mean arbitrarily just splitting the difference between 10^-15 and 10^-50, but spreading your belief over all positive outcomes below some reasonable barrier and (potentially) above another* (and this isn't taking into account the non-zero, even if unlikely, probability that despite caution AI research is indeed speeding up our doom). What those numbers are is very difficult to tell, but if the estimation of those boundaries is off, and given the track record of predictions about future technology that's not implausible, then all current donations could end up doing basically nothing. In other words, his critique is not that we must give up in the face of uncertainty but that the justification of AI risk reduction being valuable right now depends on a number of assumptions with rather large error bars.
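As a rough illustration of how much those boundary estimates matter (the particular numbers and the log-uniform assumption here are mine, not Matthews' or the post author's), here is a small calculation assuming the chance of success is spread uniformly over the orders of magnitude between a lower and an upper bound:

import math

def expected_chance(log10_lower, log10_upper):
	# expected probability of success under a prior that is uniform in log10(p)
	# between 10**log10_lower and 10**log10_upper
	width = log10_upper - log10_lower
	return (10 ** log10_upper - 10 ** log10_lower) / (width * math.log(10))

for lower, upper in [(-50, -15), (-50, -10), (-30, -15), (-50, -25)]:
	print("bounds 1e%d to 1e%d: expected chance ~ %.1e" % (lower, upper, expected_chance(lower, upper)))
# the result is dominated by the upper bound and the assumed width of the range,
# so shifting either boundary by a few orders of magnitude changes the implied
# expected value of a donation by a similarly large factor

The point is not the particular numbers but the sensitivity: under this kind of prior the implied expected chance of success swings by orders of magnitude as the assumed boundaries move, which is why the error bars on those boundary assumptions carry so much of the argument.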

Despite what appeared to him to be this large uncertainty, he seemed to encounter many people who brushed aside, or seemingly belittled, all other possible cause areas, and this rubbed him the wrong way. I believe that was his point about Pascal's Mugging. And while you criticized him for not acknowledging that MIRI does not endorse Pascal's Mugging reasoning in support of AI research, he never said they did in the article. He said many people at the conference replied to him with that type of reasoning (and as a fellow attendee, I can attest to a similar experience).

*Normally, I believe, it would be all logically possible outcomes but obviously it's unreasonable to believe a $1000 donation, which was his example, has, say, a 25% chance of success given everything we know about how much such work costs, etc. However, where the lower bound is on this estimation is far less clear.

Comment by marcus_a_davis on How We Run Discussions at Stanford EA · 2015-04-14T17:12:15.379Z · EA · GW

This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.

Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.

Comment by marcus_a_davis on April Open Thread · 2015-04-12T16:34:17.832Z · EA · GW

That counts. And, as I said above to Ben, I should have been more broad anyway. I just think we can use more first-person narratives about earning to give to present the idea as less of an abstraction.

Of course, I could be wrong, and those who would consider earning to give at all (or would be moved to donate more because of hearing such a story) might be equally swayed by a third-person analysis of why it is a good idea for some people.

Comment by marcus_a_davis on April Open Thread · 2015-04-12T16:27:04.925Z · EA · GW

That would count, but I should have been more broad in my statement anyway. People like the "here's what I did and why I did it" narrative, and earning to give could use more of these stories in general. I think a variety of them, showing different perspectives for people in different positions and with different abilities, would be a boon.

Btw, I was quite wrong about there being no first-person accounts as, for one, Chris Hallquist has written about this extensively.

Comment by marcus_a_davis on Reintroducing .impact · 2015-04-07T19:43:56.694Z · EA · GW

As for my personal experience with .impact, here's a brief summary:

I'm still relatively new to .impact, but I actually don’t recall with clarity how I found it. I believe, with barely over 50% confidence, that Peter Hurford told me about it. So far I've found it very welcoming and bursting with ideas and people willing to help. And if you review the meeting notes over any significant period of time, it is clear many things are getting accomplished. However, even with the ability to search all of Hackpad for projects, finding things by project type can be difficult if you don’t know where to look (an index page sorting projects by type might help). As it stands right now, the easiest way for outsiders and newcomers to find something is often just to ask someone.

Also, for a newcomer, particularly one like myself who doesn't currently offer any particularly in-demand skills like web design or programming, it can be difficult to know what exactly to do if you arrive just looking to help. However, I found the answer to this, as with many things in life, is to just dive in. If you think you can do it and have the time, volunteer. That’s how I ended up writing this post and moderating this forum. It really is the case that if you have the time, there’s probably something you could be working on.

Comment by marcus_a_davis on Earning to Give: Programming Language Choice · 2015-04-07T02:16:19.293Z · EA · GW

As someone currently in the process of learning programming, here are a few thoughts on my attempt at learning two of the bolded languages, Java and Ruby:

I'm currently working through The Odin Project, which has a backend focus on Ruby, and I'd highly recommend it. I'd also recommend Peter's guide to TOP, which I've found very useful; it includes some time estimates, some additional resources, and some things to learn after you complete TOP. Perhaps the biggest plus of TOP for me is that it gives projects of the correct difficulty at the correct time, so that they are challenging but doable. Another of the biggest benefits of TOP is the sheer scope of the resources already collected for you. Also, Ruby is far more intuitive than Java.

Before starting TOP I started learning programming by attempting to learn Java on my own without much structure. However, going it alone I'd often spend time attempting to track down a good explanation for topics. There was also the issue of not knowing what a logical learning path would be, and I think I took some major false steps. The resources I found most beneficial during that time were probably the free courses at Cave of Programming, which covered a wide range of topics but had the huge downside of being somewhat dated video tutorials. Other than that, I didn't find many free resources to help with learning Java, but there is some pretty cheap material on Udemy, and a subscription to Lynda could be a good investment as well.

Of course, a huge caveat, I am a sample size of one who had no experience at all with programming before starting with Java. People with different backgrounds may have very different experiences.

Comment by marcus_a_davis on April Open Thread · 2015-04-03T02:52:56.952Z · EA · GW

> There is also a contingent of utilitarians within effective altruism who primarily care about reducing and ending suffering. They may be willing to compromise in favor of animal welfare, and not full rights, but I'm not sure. They definitely don't seem a majority of those concerned with animal suffering within effective altruism.

Of course, only actual data on EAs could demonstrate the proportion of utilitarians willing to compromise, but this seems weird. To me it would seem utilitarianism all but commits you to accepting "compromises" on animal welfare, at least in the short term, given historical facts about how groups gained ethical consideration. As far as I know (anyone feel free to provide examples to the contrary), no oppressed group has ever seen respect for their interests go from essentially "no consideration" (where animals are today) to "equal consideration" without many compromising steps in the middle.

In other words, a utilitarian may want the total elimination of meat eating (though this is also somewhat contentious), but in practice they will take any welfare gains they can get. Similarly, utilitarians may want all wealthy people to donate to effective charities until global poverty is completely solved, but will temporarily "compromise" by accepting only 5% of wealthy people donating 10% of their income to such charities while pushing people to do better.

So, in practice, utilitarianism would mean setting the bar at perfection (and publicly signaling the highest standard that advances you towards perfection) but taking the best improvement actually on offer. I see no reason this shouldn't apply to the treatment of animals. Of course, other utilitarians may disagree that this is the best long term strategy (hopefully evidence will settle this question) but that is an argument about game theory and not whether some improvement is better than none or if settling for less than perfection is allowable.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T03:55:42.414Z · EA · GW

Ah, I should have guessed that from the "this is being actively pursued" label or I could have just asked there.

Naturally, if you'd like the help, I suspect there may be at least a few people here who, given their familiarity with a given religion, may have a decent idea of how to pitch the focus on effectiveness to a specific group.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T03:10:00.568Z · EA · GW

Are there any first-person pieces by someone about successfully changing careers in order to earn to give? There have been several stories discussing the topic over the past few years, but these all seem to be descriptive third-person accounts or normative analysis.

Even if not, if you've actually made such a change, could you please publicly share your story? I'd like to hear it, and I'd bet many others would too.

Comment by marcus_a_davis on EA Advocates announcement · 2015-04-02T01:10:23.328Z · EA · GW

To answer myself: turns out at least for iBooks the problem was my impatience. It's now in the library and it's still a week before it is officially released. Perhaps Kindle will be the same way.

Still, I so rarely anticipate books being released that I'm not sure if this is common.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T01:00:33.519Z · EA · GW

Out of navel-gazing curiosity: has there been a poll done on what EAs think about moral realism?

I searched the Facebook group and Googled a bit but didn't come up with anything.

Comment by marcus_a_davis on April Open Thread · 2015-04-02T00:50:11.342Z · EA · GW

Has anyone else tried pushing EA specifically at religious audiences? There's this on .impact, but it's been a while since that was touched, and I'd guess it could use some follow-up. Doing this could really prove beneficial for reaching favorable audiences, especially if you or someone you're close to is heavily involved in a church.

Comment by marcus_a_davis on EARadio: A New Podcast for Effective Altruists · 2015-04-02T00:37:47.401Z · EA · GW

If you'd like, I can have a go at cleaning up the audio of Ord's talk.

And by "have a go" I mean run it through a few filters to see if it can go from "very bad" to "passable".

Comment by marcus_a_davis on We might be getting a lot of new EAs. What are we going to do when they arrive? · 2015-03-30T19:50:18.383Z · EA · GW

I'm up for helping with both of those. Of course, how much I can help with the former will depend on what exactly needs to be done.

Comment by marcus_a_davis on EA Advocates announcement · 2015-03-27T17:33:01.580Z · EA · GW

A bit OT but this reminded me: Does anyone know if The Most Good You Can Do is coming out for Kindle?

I strongly prefer digital books, so buying it for Kindle would be the way I could leave a verified purchase review on Amazon. However, the book doesn't seem to be available digitally anywhere in the U.S. iBooks is seemingly selling it for Australia only.

I'm pretty sure I'm grasping at proverbial straws here though.

Comment by marcus_a_davis on Marcus Davis will help with moderation until early May · 2015-03-26T00:18:51.179Z · EA · GW

Ryan and I were discussing doing that for different subreddits that a given post here might be of interest to. So if it's a post about medical interventions, posting it in /r/medicine, for example.

Of course, the Internet is a lot bigger than Reddit, so there are probably many venues related to philanthropy, productivity, philosophy, animal rights, medical interventions, etc. that posts here could be relevant to. I'm going to try to do what I can, but I would appreciate guidance toward relevant venues and potentially help actually doing the work if it proves to be a huge task.

Comment by marcus_a_davis on Marcus Davis will help with moderation until early May · 2015-03-25T20:26:14.971Z · EA · GW

I'm pretty excited to help out. Of course, as pointed out by Ryan, if anyone has any pointers about spreading our reach more effectively on social media, I'm open to hearing them.

Comment by marcus_a_davis on Open Thread 4 · 2014-11-04T05:59:20.667Z · EA · GW

I'm working on getting a more useful skill, but for now, if anyone ever needs some audio editing, perhaps for a potential EA podcast, I can do it.

Also, this job board seems relevant, as skills people have that they might not think would be of use are in demand.

Comment by marcus_a_davis on Open Thread 4 · 2014-11-04T05:28:47.642Z · EA · GW

I'm interested in whether anyone has any data or experience attempting to introduce people to EA through conventional charitable activities like blood drives, volunteering at a food bank, etc. The idea I've been kicking around is basically to start or co-opt a blood drive or similar event.

While people are engaged in the activity, or before or after, you introduce them to the idea of EA, possibly even using this conventional charitable event as the prelude to a giving game. On the plus side, the people you are speaking with are self-selected for doing charitable acts, so they might be more receptive to EA than a typical audience. On the downside, this group might be self-selected for people who care a lot about personally getting hands-on with charitable works, which typically aren't the most effective things you can do.

Comment by marcus_a_davis on Open Thread 4 · 2014-11-04T05:17:38.292Z · EA · GW

Can anyone recommend to me some work on existential threats as a whole? I don't just mean AI or technology-related threats, but also nuclear war, climate change, etc.

Btw Nick Bostrom's Superintelligence is already at the top of my reading list, and I know Less Wrong is currently engaged in a reading group on that book.

Comment by marcus_a_davis on Peter's Personal Review for July-Sep 2014 · 2014-10-07T01:14:30.283Z · EA · GW

This is very useful. As someone still very new to this who wants to contribute more, it can be helpful to see in detail what other EAs are doing. I still struggle with not knowing exactly what I can do now and what realistic goals for behavioral and social changes are, particularly in the short term.

More generally, as someone trying to be more productive and efficient, I think Toggl looks promising and I'm going to try it out myself.