Posts

An Effective Altruist Message Test 2017-04-01T14:45:16.674Z · score: 17 (19 votes)

Comments

Comment by michael_s on CSER and FHI advice to UN High-level Panel on Digital Cooperation · 2019-03-12T23:46:02.418Z · score: 2 (2 votes) · EA · GW

I'd be curious which initiatives CSER staff think would have the largest impact in expectation. The UNAIRO proposal in particular looks useful to me for making AI research less of an arms race and spreading values between countries, while being potentially tractable in the near term.

Comment by michael_s on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T15:54:22.072Z · score: 1 (1 votes) · EA · GW

There are also other counterfactual matching opportunities that tend to arise around the same time, though.

Comment by michael_s on Beyond Astronomical Waste · 2018-12-27T14:48:47.322Z · score: 2 (2 votes) · EA · GW

Yeah, I don't think filling the finite universe we know about is where the highest expected value is. It's likely in some form of possible infinite value, since it's not implausible that such value could exist. But ultimately, I agree that the implications of this are minor and our response should basically be the same as if we lived in a finite universe (keep humanity alive, move values towards total hedonic utilitarianism, and build safe AI).

Comment by michael_s on The case for taking AI seriously as a threat to humanity · 2018-12-24T19:58:51.310Z · score: 9 (7 votes) · EA · GW

I'm not arguing for making false arguments; I'm just saying that if you have a point you can make around racial bias, you should make that argument, even if it's not an important point for EAs, because it is an important one for the audience.

Comment by michael_s on If You’re Young, Don’t Give To Charity · 2018-12-24T15:10:05.594Z · score: 14 (13 votes) · EA · GW

I think this is rather weak and mostly arguing against a straw man. I don't see effective altruists arguing that you should refrain from investing in your human capital. It makes sense to cut down on consumption (e.g. eating out less), but I don't know of any EAs arguing that you should refrain from, say, buying books.

Comment by michael_s on The case for taking AI seriously as a threat to humanity · 2018-12-24T14:44:03.095Z · score: 0 (10 votes) · EA · GW

In general, I'm glad that it was included because it adds legitimacy to the overall argument with Vox's center-left audience.

Comment by michael_s on [Link] Vox Article on Engineered Pathogens/ Global Catastrophic Biorisks · 2018-12-06T22:53:55.951Z · score: 4 (3 votes) · EA · GW

I found this really helpful, and it gave me what I expect to be actionable information I can use in my own work (I work in Democratic politics). Much appreciated!

Comment by michael_s on Do Prof Eva Vivalt's results show 'evidence-based' development isn't all it's cut out to be? · 2018-05-21T16:43:24.630Z · score: 3 (3 votes) · EA · GW

I agree that limitations on RCTs are a reason to devalue them relative to other methodologies. They still add value over our priors, but I think the best use cases for RCTs are when they're cheap and can be done at scale (e.g. in the context of online surveys) or when you are randomizing an expensive intervention that would be provided anyway, such that the relative cost of the RCT is small.

When the costs of RCTs are large, I think there's reason to favor other methodologies, such as regression discontinuity designs, which have fared quite well compared to RCTs (https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.22051).

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-22T02:32:20.342Z · score: 0 (0 votes) · EA · GW

FYI, I'm pretty busy over the next few days, but I'd like to get back to this conversation at some point. If I do, it may take a while, though.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-20T02:07:50.927Z · score: 0 (0 votes) · EA · GW

To your first comment, I disagree. I think it's the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?

I think there is more total pain. I'm not counting the number of headaches; I'm talking about the total amount of pain.

Can you define S1?

We may not, as these discussions tend to go. I'm fine calling it.

I think we have to get closer to defining a subject of experience (S1); I think I would need this to go forward. But here's my position on the issue: I think moral personhood doesn't make sense as a binary concept. The mind produced by a brain is different at different times, sometimes vastly different, as in the case of a major brain injury. The matter in the brain is also different over time (ship of Theseus). I don't see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn't be. The conscious experiences differ across times and across brains; I see this as a matter of degree of similarity.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-17T21:14:48.543Z · score: 0 (0 votes) · EA · GW

"Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject."

That's what I'm interested in a definition of. What makes it a "single subject"? How is this a binary term?

I am making a greater-than/less-than comparison. That comparison is of pain, which results from the neural chemical reactions. There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions.

No problem on the caps.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-17T15:30:17.704Z · score: 0 (0 votes) · EA · GW

1) I'd like to know what your definition of "subject-of-experience" is.

2) For this to be true, I believe you would need to posit something about "conscious experience" that is entirely different from everything else in the universe. If, say, factory A produces 15 widgets, factory B produces 20 widgets, and factory C produces 15 widgets, I believe we'd agree that the number of widgets produced by A+C is greater than the number of widgets produced by B, no matter how independent the factories are. Do you disagree with this?

Similarly, I'd say that if 15 neural impulses occur in brain A, 20 in brain B, and 15 in brain C, the number of neural impulses is greater in A+C than in B. Do you disagree with this?

Conscious experiences are a product of such neural chemical reactions. Do you disagree with this?

Given this, it seems odd to then postulate that even though all the ingredients are the same and are additive between individuals, the conscious product is not. That postulate seems arbitrary, it's unnecessary to explain anything, and there is no reason to believe it is true.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-17T02:21:08.400Z · score: 0 (0 votes) · EA · GW

I'd say I'm making two arguments:

1) There is no distinct personal identity; rather, it's a continuum. The you of today is different from the you of yesterday. The you of today is also different from the me of today. These differences are matters of degree. I don't think there is clearly a "subject of experience" that exists across time. There are too many cases (e.g. brain injuries that change personality) that the single-consciousness theory can't account for.

2) Even if I agreed that there was a distinct difference in kind that represented a consistent person, I don't think it's relevant to the moral accounting of experiences. I.e., I don't see why it matters whether experiences are "independent" or not. They're real experiences of pain.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-15T04:17:22.488Z · score: 2 (2 votes) · EA · GW

It's the same 5 headaches. It doesn't matter whether you imagine one person going through them on five days or five different people going through them on one day. You can still imagine 5 headaches. You can imagine what it would be like to, say, live the lives of 5 different people for one day with and without a minor headache, just as you can imagine living the life of one person for 5 days with and without a headache. The connection to an individual is arbitrary and unnecessary.

Now this goes into the meaninglessness of personhood as a concept, but what would even count as the individual in your view? For simplicity, let's say 2 modest headaches in one person are worse than one major headache. What if, between the two headaches, the person gets a major brain injury and their personality is completely altered (as has happened in real life)? Let's say they also have no memory of their former self. Are they no longer the same person? Under your view, is it no longer possible to say that the two modest headaches are worse than the major headache? If it still is, why is it possible after this radical change in personality with no memory continuity, but impossible between two different people?

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-14T13:31:24.173Z · score: 1 (1 votes) · EA · GW

I think this is confusing means of estimation with actual utils. You can estimate that 5 headaches are worse than one by asking someone to compare five headaches vs. one. You could also produce an estimate by just asking someone who has received one small headache and one large headache whether they would rather receive 5 more small headaches or one more large headache. But there's no reason you can't apply these estimates more broadly. There's real pain behind the estimates that can be added up.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-14T01:55:51.670Z · score: 4 (4 votes) · EA · GW

If a small headache is worth 2 points of disutility and a large headache is worth 5, the total amount of pain across the five small headaches is worse because 2×5 = 10 > 5. It's a pretty straightforward total utilitarian interpretation. I find it irrelevant whether there's one person who's worse off; the total amount of pain is larger.

I'll also note that I find the concept of personhood to be incoherent in itself, so it really shouldn't matter at all whether it's the same "person". But while I think an incoherent personhood concept is sufficient for saying there's no difference if it's spread out over 5 people, I don't think it's necessary. Simple total utilitarianism gets you there.

Comment by michael_s on Is Effective Altruism fundamentally flawed? · 2018-03-13T03:30:33.731Z · score: 7 (7 votes) · EA · GW

"Choice situation 3: We can either save Al, and four others each from a minor headache or Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache."

I think you're making a mistaken assumption here about your readers. Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people. I expect the majority of EAs would as well.

Comment by michael_s on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T01:35:34.423Z · score: 6 (6 votes) · EA · GW

On this topic, I similarly do still believe there’s a higher likelihood of creating hedonium; I just have more skepticism about it than I think is often assumed by EAs.

This is the main reason I think the far future is high EV. I think we should be focusing on p(Hedonium) and p(Dolorium) more than anything else. I'm skeptical that, from a hedonistic utilitarian perspective, byproducts of civilization could come close to matching the expected value from deliberately tiling the universe (potentially multiverse) with consciousness optimized for pleasure or pain. If p(H) > p(D), the future of humanity is very likely positive EV.

Comment by michael_s on Could I have some more systemic change, please, sir? · 2018-01-22T18:00:40.947Z · score: 1 (1 votes) · EA · GW

In most cases, I expect interventions to influence policy to also have diminishing marginal returns. E.g., an experiment on legislative contacts found little additional effect from more calls (https://link.springer.com/article/10.1007/s11109-014-9277-1).

Comment by michael_s on 69 things that might be pretty effective to fund · 2018-01-22T01:22:54.865Z · score: 1 (1 votes) · EA · GW

(Global catastrophic risks: Fund CEPI, the Coalition for Epidemic Preparedness Innovations.) looks interesting. It looks like they have a goal of raising $1 billion (http://www.sabin.org/updates/blog/cepi-new-approach-epidemic-preparedness). My impression is that they are likely to meet this, but I may be mistaken. Would additional funding to CEPI likely be counterfactual?

Comment by michael_s on Should we be spending no less on alternate foods than AI now? · 2017-10-31T18:14:32.848Z · score: 4 (4 votes) · EA · GW

Sure, this material is most important for EAs. However, it could be used to raise funding from EAs that would then be used to secure even more funding from the public sector in a way that's more difficult for AI safety.

Comment by michael_s on Should we be spending no less on alternate foods than AI now? · 2017-10-30T14:02:22.292Z · score: 4 (4 votes) · EA · GW

Really exciting work! This seems like an intervention that could potentially be funded with public resources more easily than AI safety research could, which opens up another avenue to funding.

I see how this could be very useful in the event of a nuclear war, but I do have some skepticism about how useful these alternative foods would be for a less severe shortage. With a 10% reduction in agricultural productivity, why do you think alternative foods that don't need sunlight could be cheaper than simply expanding how much usable land we devote to agriculture, or using land to grow products that are cheaper per calorie?

Comment by michael_s on An Effective Altruist Message Test · 2017-10-08T17:35:47.438Z · score: 1 (1 votes) · EA · GW

As a quick update, I also tried something similar on the EA survey to see whether making certain EA considerations salient would impact people's donation plans. The end result was essentially no effect. Obligation, Opportunity, and emphasizing cost benefit studies on happiness all had slightly negative treatment effects compared to the control group. The dependent variable was how much EA survey takers reported planning to donate in the future.

Comment by michael_s on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-01T21:36:50.983Z · score: 6 (8 votes) · EA · GW

It might make a lot of sense to test the risk vs. accidents framing on the next survey of AI researchers.

Comment by Michael_S on [deleted post] 2017-09-20T13:38:33.417Z

I disagree. I believe good ballot measure polling should more accurately reflect the actual language that would appear on the ballot. There's a known bias towards voters being more likely to support simpler language.

Unless this is an extremely expensive measure (which it probably won't be), I don't think that assumption is correct. Most voters will probably never hear about the initiative before they see it on the ballot, or will have seen only a cursory ad that they barely paid attention to.

Comment by Michael_S on [deleted post] 2017-09-15T02:48:50.687Z

Cool; I had missed that row. Yeah, if it polls at 70%, the chance of passage might be close to 80%. Conditional on that level of support, your estimate seems reasonable to me (assuming the ballot summary language would not be far more complex than the polled language).

Yeah, I agree that being an effective treatment is a necessary precursor to this being a good law to pass by ballot initiative, and part of the EV calculation for spending money on the ballot measure itself.

Comment by Michael_S on [deleted post] 2017-09-14T23:52:15.367Z

That seems similar to Milan_Griffes' approach. However, when we're comparing ballot measures to other opportunities, I think the relevant cost to EA is the cost to launch the campaign. That's what EAs would actually be spending money on and what could otherwise be spent on other interventions.

We don't have to assume away the additional costs of getting the medicine, but that can be factored into the benefit (i.e., the net benefit is the gains people would get from the medicine minus the gains they lose from giving up the funds to purchase the drugs).

Comment by Michael_S on [deleted post] 2017-09-14T22:52:08.303Z

Hey; I made some comments on this on the doc, but I thought it was worth bringing them to the main thread and expanding.

First of all, I'm really happy to see other EAs looking at ballot measures. They're a potentially very high-EV method of passing policy and raising funding. They're particularly high value per dollar when spending on advertising is limited or zero, since the increased probability of passage from getting a relatively popular measure on the ballot is far greater than the increased probability from spending the same amount advertising for it.

Also, am I correct in interpreting that your model assumes a 100% chance of passage conditional on good polling? Polling can help, but ballot measure polling does have a lot of error (in both directions), so even a measure that is popular in polling is hardly a guarantee of passage (http://themonkeycage.org/2011/10/when-can-you-trust-polling-about-ballot-measures/).

Finally, in your EV estimates, you seem to focus on the individual treatment cost of the intervention, which overwhelms the cost of the ballot measure. I don't think this gets at the right question when it comes to running a ballot measure. I believe the gains from the ballot measure should be the estimated sum of the utility gains from people being able to purchase the drugs, multiplied by the probability of passage; the costs should be how much it would cost to run the campaign. On the doc, you made the point that GiveWell doesn't include leverage on other funding in their estimates, but when it comes to ballot measures, leverage is exactly what you're trying to produce, so I think an estimate is important.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-02T00:31:11.076Z · score: 0 (4 votes) · EA · GW

Thanks!

I adapted that framing from Will MacAskill (an example of this starts at 12:45 in the podcast with Sam Harris here: https://www.samharris.org/podcast/item/being-good-and-doing-good). MacAskill refers to the framing as "Excited Altruism". It might come across better when he tells it than in a web survey, but I think it's pretty similar. I grouped this in with "opportunity", which I've also seen called "exciting opportunity" in the EA community (http://lukemuehlhauser.com/effective-altruism-as-opportunity-or-obligation/).

But, regardless of what it's called, I agree with you on the takeaway.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-01T22:37:08.155Z · score: 0 (0 votes) · EA · GW

Sounds interesting. Would love to take a look when you get a chance to provide the links.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-01T22:07:30.184Z · score: 1 (2 votes) · EA · GW

Yeah, the survey was a lot longer. Typically, general public surveys will cost over $10 per complete, so getting 1,200 cases for a survey like this can cost thousands of dollars.

I agree that model specification can be tricky, which is a reason I felt it was well worth using the proprietary software I had access to, which has been thoroughly vetted and code reviewed and is used frequently to run similar analyses, rather than trying to construct my own.

I did not make sure people read the paragraph. I discussed the issue a bit in my discussion section, but one way a web survey might understate the effect is that people might pay closer attention and respond better to a friend delivering the message. On the other hand, surveys do have some potential vulnerability to the Hawthorne effect, though that didn't seem to express itself in the donations question.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-01T21:53:57.217Z · score: 1 (2 votes) · EA · GW

No; I did not fit multiple models. Lasso regression was used to fit a propensity model using the predictors.
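For concreteness, a minimal sketch of this kind of lasso-penalized propensity model, using the open-source glmnet package rather than the proprietary one and with hypothetical variable names (svy, treated, and the control columns are placeholders, not the actual data), might look like:

```r
library(glmnet)

# Predictor matrix built from survey controls (hypothetical column names)
x <- model.matrix(~ age + educ + past_donations, data = svy)[, -1]
y <- svy$treated  # 0/1 indicator being modeled

# alpha = 1 selects the lasso penalty; cv.glmnet picks lambda by cross-validation
cv_fit <- cv.glmnet(x, y, family = "binomial", alpha = 1)

# Fitted propensity scores for each respondent
svy$pscore <- as.numeric(predict(cv_fit, newx = x, s = "lambda.min", type = "response"))
```

This is only an illustration of the general approach, not the actual code or variables used in the analysis.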

Using bachelor's vs. non-bachelor's has advantages in interpretability, so I think this was the right move for my purposes.

I did not spend an exorbitant amount of time investigating diagnostics, for the same reason: I used a proprietary package that has been built for running these tests at a production level and has been thoroughly code reviewed. I don't think it's worth the time to construct an overly customized analysis.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-01T21:39:50.107Z · score: 1 (4 votes) · EA · GW

Sure, in an ideal world, software would all be free for everyone; alas, we do not live in such a world :p. I used the proprietary package because it did exactly what I needed and doesn't require writing Stan code or anything myself. I'd rather not re-invent the wheel. I felt the tradeoff of transparency for efficiency and confidence in its accuracy was worth it, especially since I wouldn't be able to share the data either way (such are the costs of getting these questions on a 1,200-person survey without paying a substantial amount).

But the basic model was just a multilevel binomial model predicting the dependent variable using the treatments and questions asked earlier in the survey as controls.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-01T21:04:13.460Z · score: 0 (6 votes) · EA · GW

Unfortunately, because I used proprietary survey data/a proprietary R package to run this analysis, I don't think I'll be able to share the data and code.

Comment by michael_s on An Effective Altruist Message Test · 2017-04-01T21:02:29.320Z · score: 1 (2 votes) · EA · GW

Yup, binomial.

The respondents in a treatment were each shown a message and asked how compelling they thought it was. The control was shown no message.

Yeah; the plots are the predicted values for those given a particular treatment, and the average treatment effect is the difference from the control.

I did not include every control used in the provided questionnaire. There was a mix of demographic, attitudinal, and behavioral questions asked in the survey that I also used. These controls, particularly previous donations, were important for decreasing variance.

I used a multilevel model to estimate the effects among those with and without a bachelor's degree. So the bachelor's estimate borrows power from those without a degree, reducing problems with overfitting.

These models were fit in Stan, which handles such multilevel models well. Convergence was assessed with Gelman-Rubin statistics.
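As a sketch (not the actual proprietary code), a Stan-based multilevel binomial model of this kind could be fit with the open-source rstanarm package; the variable names here (svy, donated, treatment, past_donation, bachelors) are hypothetical:

```r
library(rstanarm)

# Multilevel logistic model: treatment effects partially pooled across the
# bachelor's / non-bachelor's groups, with prior donations as a control
fit <- stan_glmer(
  donated ~ treatment + past_donation + (1 + treatment | bachelors),
  data   = svy,
  family = binomial(link = "logit")
)

# Rhat (the Gelman-Rubin statistic) near 1 for every parameter indicates convergence
summary(fit)
```

Partial pooling is what lets the bachelor's-degree estimate borrow strength from the non-bachelor's group, as described above.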

Comment by michael_s on A Third Take on Trump · 2017-04-01T13:21:38.857Z · score: 5 (5 votes) · EA · GW

I agree that the modal outcome of a Trump presidency is that he changes little and the Democrats come out stronger at the end of his presidency than they entered it. However, I still think it would have been better if Clinton had won (even if we assume the same Congress).

The most important reason is tail risk. As others have commented, the risk of nuclear war may be greater under Trump than it would have been under Clinton. So far, he seems to be pursuing a more conventional foreign policy than I feared, but I still believe the risk is higher than with Clinton. Additionally, I'm worried that the Trump presidency is increasing the salience of Russian hostility among Democrats and could increase the chance of conflict in the future even when a Democrat takes office.

Another area of concern is pandemics. Trump has expressed anti-vaccine sentiments and submitted budgets that cut pandemic preparedness. Furthermore, the overall level of incompetence in his administration and among many of his appointees leaves me worried that the US response to a major pandemic could be diminished.

None of the above is likely to happen, but I'd much rather play it safe with a Clinton presidency. Additionally, even the modal outcome of his presidency isn't all good for liberals. Most notably, he'll almost certainly be able to move at least one conservative onto the Supreme Court and has a high chance of moving at least one more. If Trump replaces a liberal with a conservative on the court, the court will move to the right, and it will likely be quite a while until Democrats retake it. With a Clinton presidency, liberals would have been able to achieve a majority on the court that would likely have lasted a long time itself.

Comment by michael_s on Vote Pairing is a Cost-Effective Political Intervention · 2017-02-26T15:20:06.267Z · score: 6 (6 votes) · EA · GW

Thanks for the write-up. I think you make a compelling case that this is more effective than canvassing, which can cost over $1,000 per marginal vote in a competitive election like 2016. I do think there are a few ways your estimate may be an overestimate, though.

Of those who claimed they would follow through with vote trading, some may not have. You mention that there wouldn't have been much value to defecting. However, much of the value of a vote for an individual comes from tribal loyalties rather than affecting the outcome. That's why turnout in safe states is higher in a presidential election than in midterm elections, even when the midterm election is competitive. Some individuals may still have defected because of this.

Secondly, many of the 3rd-party folks who made the trade could have voted for Clinton anyway. People who sign up for these sites are necessarily strategic thinkers. If they wanted more total votes for Stein/Johnson but recognized that a vote for Clinton was more important in a swing state, they might have signed up for the site to gain the Stein/Johnson voter but planned to vote for Clinton even if they didn't get a match. Additionally, even if they were acting in good faith when they signed up, they may have changed their minds as the election approached. Third parties are historically overestimated in polling compared to the election results, and 2016 was no exception: http://www.realclearpolitics.com/epolls/2016/president/us/general_election_trump_vs_clinton_vs_johnson_vs_stein-5952.html.

I don't think these problems are enough to reduce the value by an order of magnitude, but they are worth keeping in mind.

Additionally, while vote trading may be high EV now, I am skeptical that it is easy to scale. It's even more difficult to apply outside of presidential elections, so, unlike other potential political interventions, it will mostly be confined to one race every 4 years. Furthermore, the individuals who have signed up so far may be lower cost to acquire than additional potential third-party traders would be. They are likely substantially more strategic than the full population of third-party voters, and in many years the full population isn't that large to begin with. The cost per additional vote may be larger than your current estimates suggest.

Nevertheless, I agree that right now it's probably more valuable than traditional canvassing and I'm glad people are putting resources into it.

Comment by michael_s on Proposal for an Pre-registered Experiment in EA Outreach · 2017-01-08T16:54:57.029Z · score: 4 (4 votes) · EA · GW

This sounds really great to me. I love the idea of having more RCTs in the EA sphere. I would definitely record how much they are giving 1 year later.

I also think it's worth having a holdout set. People can pre-register the list of friends, then a random number generator can be used to randomly select some friends not to make an explicit GWWC pitch to. It's possible many of the friends/contacts who join GWWC and start donating are those who have already been exposed to EA ideas over a long period of time, and the effect size of the direct GWWC pitch isn't as large as it would appear. Having a holdout set would account for this. With a holdout set, CEA wouldn't have to worry about whom they contact; the holdout set would take care of this and make the estimate of the treatment effect unbiased.
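A minimal sketch of that holdout randomization, with hypothetical names, could be as simple as:

```r
set.seed(2017)  # fixed seed so the pre-registered assignment is reproducible

# Pre-registered list of friends/contacts (hypothetical)
friends <- data.frame(name = c("Ana", "Ben", "Cam", "Dev", "Eli", "Fay"))

# Randomly hold out half of them: no explicit GWWC pitch for the holdout group
holdout_idx <- sample(nrow(friends), size = floor(nrow(friends) / 2))
friends$group <- "pitch"
friends$group[holdout_idx] <- "holdout"

friends
```

The unbiased estimate of the treatment effect is then just the difference in later giving between the pitch and holdout groups.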

Comment by michael_s on What does Trump mean for EA? · 2016-11-12T00:04:46.955Z · score: 1 (1 votes) · EA · GW

You can't look at aggregate turnout numbers being different and assume the composition of turnout was different. You're making the assumption that there was zero movement from Obama to Trump or from Romney to Clinton, both of which are definitely incorrect, as evidenced by polling.

Secondly, turnout is much higher than it currently appears; many more votes will come in from California, Washington, Oregon, and Colorado. It always takes these states forever to report, so the turnout numbers now are misleading.

Comment by michael_s on What does Trump mean for EA? · 2016-11-11T14:54:27.174Z · score: 0 (0 votes) · EA · GW

At most, campaign funds would have moved this a point or two. Campaign funding has little impact on presidential elections; Clinton far outspent Trump, and Trump was far outspent in the primary election. If we assume an effect size of 5 percentage points for all of Trump's money and assume no diminishing marginal returns (both very generous assumptions), that 0.15% of the money amounts to 0.0075 percentage points of movement (5 × 0.0015). The outcome was decided by about 1 point, so that's over two orders of magnitude less than what was needed even under generous assumptions. It was probably more orders of magnitude lower.

That's not true at all. Trump gained substantially in rural, mostly white areas where Obama had won or performed substantially better. I mean, if turnout had magically been higher among Democrats but not Republicans, we would have won, but you don't get to assume that. The composition of the electorate was roughly the same as 2012 (minus some Black voters, plus some Hispanic voters). It's conceivably possible that without the drop in Black turnout we would have won, but that drop was inevitable without the first Black president running. There is overwhelming evidence that attitudes among the white working class moved against us; hence our drop in the Midwest.

I agree on the point that phone banking does not make much of a difference.

There were several instances that fall under the same pattern: the email story, the Access Hollywood tape, the debates, probably the Apprentice tapes had they appeared, and potentially the WikiLeaks emails, though it's much harder to gauge their effect size.

Comment by michael_s on What does Trump mean for EA? · 2016-11-11T13:46:18.559Z · score: 0 (0 votes) · EA · GW

Thiel had essentially nothing to do with the outcome of this election.

This was not primarily a turnout issue. Black turnout was down, but Hispanic turnout was up. White turnout appears relatively flat (both Democratic and Republican white turnout), but we'll know more when actual person-level vote history is released. Regardless, EA messaging is not the right way to appeal to Berners.

The easiest way to shift the outcome of the election would have been to change public opinion by a point or two by shifting the narrative of the race in the final week. Comey was successful at doing this.

Comment by michael_s on What does Trump mean for EA? · 2016-11-11T06:29:32.824Z · score: 1 (3 votes) · EA · GW

I don't think a rationalist message/meme would have been successful at convincing hundreds of thousands of working-class whites not to vote for Trump. Rationalism has its place in deciding what to do about an election, but I don't think EA messaging is at all useful for influencing a mass audience.

Comment by michael_s on Dedicated Donors May Not Want to Sign the Giving What We Can Pledge · 2016-10-30T16:55:33.067Z · score: 2 (2 votes) · EA · GW

I agree that the pledge is most useful for those who don't have as strong a dedication. But I expect it is useful for the vast majority of EAs. According to the EA survey, less than half of EAs donate even 10% of their income. So I think we have a far greater incidence of people not taking the pledge when they should than the other way around, even in the EA community. If EAs start putting greater weight on potential problems from the pledge, I expect that would be a net negative across the community.

Comment by michael_s on Dedicated Donors May Not Want to Sign the Giving What We Can Pledge · 2016-10-30T15:00:14.794Z · score: 5 (5 votes) · EA · GW

I don't reject the argument that the GWWC pledge may not make sense for every single person. There are always exceptions. But I think that group is quite small, and it's much more beneficial for us as a community to try to get as many people to pledge as possible.

In addition to what it might do for yourself, signing the pledge allows you to influence others. The more people sign the pledge and the more public they are, the more we spread giving a large portion of your income to effective charities as a cultural institution. I think that's very valuable in itself.

Additionally, some of the items you listed as conflicting with donations, e.g. wanting a comfortable retirement, seem like items for which donation should take the higher priority from a utilitarian standpoint. I understand that's very difficult for people, and many EAs will not be able to do this. That's reality. However, if the pledge gets you to cut back on these luxuries in favor of utilitarian actions, even if only because you feel obligated to keep the pledge, I think that's a good thing. If you face a conundrum like the overjustification effect, it may be more productive to try to rethink 5) than to rethink 2).

Comment by michael_s on How to Measure and Optimize EA Marketing · 2016-09-02T19:00:12.853Z · score: 2 (2 votes) · EA · GW

You might also want to try using Google Consumer Surveys. If you restrict it to a single question (you can put the message in the question), they're incredibly cheap.

Comment by michael_s on How to Measure and Optimize EA Marketing · 2016-09-02T15:08:14.162Z · score: 2 (4 votes) · EA · GW

Is Intentional Insights doing anything to promote the use of RCTs in EA messaging? There may be a lot of value to be gained in conducting message-testing experiments to determine which messages are most effective at getting potential EAs to perform certain actions. Heterogeneous effects models might also be useful in identifying whom to target.

Comment by michael_s on Should you switch away from earning to give? Some considerations. · 2016-08-26T17:45:02.089Z · score: 1 (1 votes) · EA · GW

Sure, but your absolute advantage may provide some evidence of a comparative advantage. If you can give, say, ~10x what the 90th percentile of self-identified EAs gives, you might also find some direct work that allows you to contribute much more effectively than most EAs do directly, but there's a higher bar to clear.

Comment by michael_s on Should you switch away from earning to give? Some considerations. · 2016-08-26T17:22:58.097Z · score: 2 (2 votes) · EA · GW

Most people who do direct work are probably also working suboptimal jobs that require less sacrifice. But whether the average EtG EA or the average direct-work EA is making a greater sacrifice is irrelevant in deciding whether you should pursue MacAskill's suggested EtG path. There's no reason why you would want to overcorrect for that. If your EtG plan itself involves minimal sacrifice, then you might want to correct for that. Same with direct work that requires minimal sacrifice.

Comment by michael_s on Should you switch away from earning to give? Some considerations. · 2016-08-26T12:38:52.905Z · score: 4 (4 votes) · EA · GW

I don't think it's fair to say that EtG is less of a sacrifice than direct work. It depends on a number of factors. If someone earns to give by staying in the same job, working the same number of hours they would have otherwise, and still living on a substantial proportion of their salary, it may not be that much of a sacrifice.

However, EtG could also mean working at a job that may not have been one's first choice otherwise (e.g. finance), working many more hours than one would otherwise, and/or living on just as much as or less than one would while doing direct work. The EtG path MacAskill suggests involves taking high-paying jobs like finance rather than staying in whatever job one happens to be doing, so I don't think your criticism stands in that case.

Comment by michael_s on Introducing Envision: A new EA-Aligned Organization · 2016-08-11T00:16:32.063Z · score: 3 (3 votes) · EA · GW

This is really cool! It's exactly the kind of X-risk intervention I'm excited about (capacity building among elites). I think this investment in the future is even more important than tackling technical problems today.

I noticed that you didn't mention any need for funding. Does that mean your current funding needs are adequately met?