Posts

Retrospective on Catalyst, a 100-person biosecurity summit 2021-05-26T13:10:22.942Z

Comments

Comment by brianwang712 on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-27T15:21:31.328Z · EA · GW
Estimates of the mortality rate vary, but one media source says, "While the single figures of deaths in early January seemed reassuring, the death toll has now climbed to above 3 percent." This would put it roughly on par with the mortality rate of the 1918 flu pandemic.

It should be noted that the oft-cited case-fatality ratio of 2.5% for the 1918 flu might be inaccurate, and the true CFR could be closer to 10%: https://rybicki.blog/2018/04/11/1918-influenza-pandemic-case-fatality-rate/

EDIT: Also see this Twitter thread: https://twitter.com/ferrisjabr/status/1232052631826100224

Comment by brianwang712 on Finding it hard to retain my belief in altruism · 2019-01-02T14:14:59.535Z · EA · GW

It seems that there are two factors here leading to a loss in altruistic belief:

1. Your realization that others are more selfish than you thought, leading you to feel a loss of support as you realize that your beliefs are more uncommon than you thought.

2. Your uncertainty about the logical soundness of altruistic beliefs.

Regarding the first, realize that you're not alone: there are thousands of us around the world also engaged in the project of effective altruism – including, potentially, in your city. I would look into whether there are local effective altruism meetups in your area, or a university group if you are already at university. You could even start one if there isn't one already. Getting to know other effective altruists on a personal level is a great way to maintain your desire to help others.

Regarding the second, what are the actual reasons people answer "100 strangers" to your question? I suspect the rationale isn't on strong ground – that it is mostly born of a survival instinct cultivated in us by evolution. Of course, for evolutionary reasons, we care more about ourselves than about others, because those who cared too much about others at their own expense died out. But evolution is blind to morality; all it cares about is reproductive fitness, whereas we care about so, so much more. Everything that gives our lives value – the laughter, love, joy, etc. – is not something evolution optimized for, so why trust the answer "100 strangers" if it is just evolution talking?

I believe that others' lives have an intrinsic value on par with my own, since others are just as capable of all the experiences that give our lives value. If I experience a moment of joy, vs. if Alice-on-the-other-side-of-the-world-whom-I've-never-met experiences a moment of joy, what's the difference from "the point of view of the universe"? A moment of joy is a moment of joy, and it's valuable in and of itself, regardless of who experiences it.

Finally, if I may make a comment on your career plan – I might apply for career coaching from 80,000 Hours. Spending 10 years doing something you don't enjoy sounds like a great recipe for burnout. If you truly don't think that you'll be happy getting a machine learning PhD, there might be better options for you that will still allow you to have a huge impact on the world.

Comment by brianwang712 on “The Vulnerable World Hypothesis” (Nick Bostrom’s new paper) · 2018-11-09T15:57:48.079Z · EA · GW

I think the central "drawing balls from an urn" metaphor implies a more deterministic situation than that which we are actually in – that is, it implies that if technological progress continues, if we keep drawing balls from the urn, then at some point we will draw a black ball, and so civilizational devastation is basically inevitable. (Note that Nick Bostrom isn't actually saying this, but it's an easy conclusion to draw from the simplified metaphor). I'm worried that taking this metaphor at face value will turn people towards broadly restricting scientific development more than is necessarily warranted.

I offer a modification of the metaphor that relates to differential technological development. (In the middle of the paper, Bostrom already proposes a few modifications of the metaphor based on differential technological development, but not the following one.) Whenever we draw a ball out of the urn, it affects the color of the other balls remaining in the urn. Importantly, some of the white balls we draw out of the urn (e.g., defensive technologies) lighten the color of any grey/black balls left in the urn. A concrete example of this would be the summation of the advances in medicine over the past century, which have lowered the risk of a human-caused global pandemic. Therefore, continuing to draw balls out of the urn doesn't inevitably lead to civilizational disaster – as long as we are sufficiently discriminating in favoring those white balls that have a risk-lowering effect.

Comment by brianwang712 on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-27T13:39:56.297Z · EA · GW

Interesting idea. This may be worth trying to develop more fully?

Yeah. I'll have to think about it more.

I'm still coming at this from a lens of "actionable advice for people not in EA". It might be that the person doesn't know many other trusted individuals; what should the advice be then?

Yeah, for people outside EA I think structures could be set up such that reaching consensus (or at least a majority vote) becomes a standard policy or an established norm. E.g., if a journal is considering a manuscript with potential info hazards, then perhaps it should be standard policy for this manuscript to be referred to some sort of special group consisting of journal editors from a number of different journals to deliberate. I don't think people need to be taught the mathematical modeling behind the unilateralist's curse for these kinds of policies to be set up, as I think people have an intuitive notion of "it only takes one person/group with bad judgment to fuck up the world; decisions this important really need to be discussed in a larger group."

One important distinction is that people who are facing info hazards will be in very different situations when they are within EA vs. when they are out of EA. For people within EA, I think it is much more likely to be the case that a random individual has an idea that they'd like to share in a blog post or something, which may have info hazard-y content. In these situations the advice "talk to a few trusted individuals first" seems to be appropriate.

For people outside of EA, I think those who are in possession of info hazard-y content are much more likely to be embedded in some sort of larger institution (e.g., a research scientist or a journal editor looking to publish something), where perhaps the best leverage is setting up certain policies, rather than trying to teach everyone the unilateralist's curse.

As I understand it, you shouldn't wait for consensus, else you have the unilateralist's curse in reverse. Someone pessimistic about an intervention can block the deployment of an intervention needed to avoid disaster.

You're right, strict consensus is the wrong prescription. A vote is probably better. I wonder if there's mathematical modeling you could do to determine what fraction of votes is optimal, in order to minimize the harms of both the standard unilateralist's curse and the curse in reverse. Is it a majority vote? A two-thirds vote? I suspect this will depend on what the "true sign" of releasing the potentially dangerous info is likely to be; the more likely it is to be negative, the higher the bar you should be expected to clear before releasing.
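
As a very rough illustration of the kind of modeling I have in mind – a toy sketch in which the noise model, the seven voters, and the negatively skewed prior are all invented for the example rather than taken from anywhere – one could simulate noisy voters and sweep the release threshold:

```python
# Toy sketch: each of n voters observes the true value of releasing the info
# plus independent Gaussian noise, and the group releases only if at least
# `threshold_frac` of them judge the value to be positive. All parameters here
# are placeholders chosen for illustration.
import random

def expected_value(threshold_frac, n_voters=7, noise_sd=1.0, trials=20000):
    total = 0.0
    for _ in range(trials):
        # Assume the true value of release is more likely negative than positive.
        true_value = random.gauss(-0.5, 1.0)
        votes = sum(
            1 for _ in range(n_voters)
            if true_value + random.gauss(0, noise_sd) > 0
        )
        if votes / n_voters >= threshold_frac:
            total += true_value  # group releases; the true value is realized
    return total / trials

for frac in [1 / 7, 0.5, 2 / 3, 1.0]:
    print(f"release threshold {frac:.2f}: expected value {expected_value(frac):+.3f}")
```

The threshold that comes out best shifts with the assumed prior over the "true sign," which is exactly the dependence I'd expect.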

Comment by brianwang712 on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T03:38:49.024Z · EA · GW

If there is a single person with the knowledge of how to create safe efficient nuclear fusion they cannot expect other people to release it on their behalf.

Ah right. I suppose the unilateralist's curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor, then the curse doesn't really apply. One wrinkle might be considering the unilateralist's curse with regard to different actors through time (i.e., erring on the side of caution with the expectation that other actors in the future will gain access to the information and might release it), though coordination in this case might be more challenging.

What the researcher can do is try and build consensus/lobby for a collective decision making body on the internal climate heating (ICH) problem. Planning to release the information when they are satisfied that there is going to be a solution in time for fixing the problem when it occurs.

Thanks, this concrete example definitely helps.

I think I am also objecting to the expected payoff being thought of as a fixed quantity. You can either learn more about the world to alter your knowledge of the payoff or try and introduce things/institutions into the world to alter the expected payoff. Building useful institutions may rely on releasing some knowledge; that is where things become more hairy.

This makes sense. "Release because the expected benefit is above the expected risk" or "don't release because the reverse is true" is a bit of a false dichotomy, and you're right that we should be thinking more about options that could maximize the benefit while minimizing the risk when faced with info hazards.

Also, as the unilateralist's curse suggests, discussing with other people such that they can undertake the information release sometimes increases the expectation of a bad outcome. How should consensus be reached in those situations?

This can certainly be a problem, and is a reason not to go too public when discussing it. Probably it's best to discuss privately with a number of other trusted individuals first, who also understand the unilateralist's curse, and ideally who don't have the means/authority to release the information themselves (e.g., if you have a written-up blog post you're thinking of posting that might contain info hazards, then maybe you could discuss it in vague terms with other individuals first, without sharing the entire post with them?).

Comment by brianwang712 on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-24T02:18:10.408Z · EA · GW

The unilateralist's curse only applies if you expect other people to have the same information as you, right?

My understanding is that it applies regardless of whether or not you expect others to have the same information. All it requires is a number of actors making independent decisions, with randomly distributed error, with a unilaterally made decision having potentially negative consequences for all.
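
Here is a minimal sketch of the dynamic as I understand it – my own toy model, not something from Bostrom's paper, with all the numbers made up – showing that even when each actor is individually careful, the chance that someone releases grows with the number of actors:

```python
# Toy model: each actor independently estimates the value of releasing some
# information (true value plus Gaussian error) and releases if their estimate
# is positive. Even with a negative true value, the probability that at least
# one actor releases climbs as the number of actors grows.
import random

def p_release(n_actors, true_value=-1.0, noise_sd=1.0, trials=50000):
    hits = 0
    for _ in range(trials):
        if any(true_value + random.gauss(0, noise_sd) > 0 for _ in range(n_actors)):
            hits += 1
    return hits / trials

for n in [1, 3, 10, 30]:
    print(f"{n:>2} actors -> P(at least one releases) ~ {p_release(n):.2f}")
```

Nothing in this depends on the actors sharing the same information – only on each of them making an independent, error-prone judgment about the same release decision.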

You can figure out whether they have the same information as you, and whether they are concerned about the same things you are, by looking at the mitigations people are attempting. Altruists should be attempting mitigations in a unilateralist's curse position, because they should expect someone less cautious than them to unleash the information. Or they want to unleash the information themselves and are mitigating the downsides until they think it is safe.

I agree that having dangerous information released by those who are in a position to mitigate the risks is better than having a careless actor releasing that same information –– but I disagree that this is sufficient reason to preemptively release dangerous information. I think a world where everyone follows the logic of "other people are going to release this information anyway but less carefully, so I might as well release it first" is suboptimal compared to a world where everyone follows a norm of reaching consensus before releasing potentially dangerous information. And there are reasons to believe that this latter world isn't a pipe dream; after all, when we're thinking about info hazards, those who have access to the potentially dangerous information generally aren't malicious actors, but rather a finite number of, e.g., biology researchers (for biorisks) who could be receptive to establishing norms of consensus.

I'm also not sure how the strategy of "preemptively release, but mitigate" would work in practice. Does this mean release potentially dangerous information, but with the most dangerous parts redacted? Release with lots of safety caveats inserted? How does this preclude the further release of the unmitigated info?

I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by group think. So you might want to think of solutions that take that into consideration.

I'm not sure I'm fully understanding you here. If you're saying that the majority of potentially dangerous ideas will originate in those who don't know what the unilateralist's curse is, then I agree –– but I think this is just all the more reason to try to spread norms of consensus.

Comment by brianwang712 on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-23T16:34:14.740Z · EA · GW

The relevance of unilateralist's curse dynamics to info hazards is important and worth mentioning here. Even if you independently do a thorough analysis and decide that the info-benefits outweigh the info-hazards of publishing a particular piece of information, that shouldn't be considered sufficient to justify publication. At the very least, you should privately discuss with several others and see if you can reach a consensus.

Comment by brianwang712 on When to focus and when to re-evaluate · 2018-03-24T22:24:57.368Z · EA · GW

I wonder how much the "spend 1 year choosing and 4 years relentlessly pursuing a project" rule of thumb applies to having a high-impact career. Certain career paths might rely on building a lot of career capital before you can have a high impact, and career capital may not be easily transferable between domains. For example, if you first decide to relentlessly pursue a career in advancing clean meat technology for four years, and then re-evaluate and decide that influencing policymakers with regard to AI safety is the highest-value thing for you to do, it's probably going to be difficult to pivot. There's a sense in which you might be "locked in" to a career after you spend enough time in it. My sense is that, for career-building in the face of uncertainty, it might be best to prioritize keeping options open (e.g., by building transferable career capital) and/or spending more time on the choosing phase.

Comment by brianwang712 on Is Effective Altruism fundamentally flawed? · 2018-03-23T14:39:21.250Z · EA · GW

Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It's interesting that you and I have such intuitions about such a case – I see that as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here.

Part of what I'm confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what's the case for giving everyone an equal chance? What's gained from that? Why prioritize "chances"? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I'm guessing is not the main driver behind your reasoning.

One way of viewing "giving everyone an equal chance" is to give equal priority to different possible worlds. I'll use the original "Bob vs. a million people" example to illustrate. In this example, there are two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or forget flipping the coin entirely and take donor choice out of it: wouldn't you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than a world where that same hurricane petered out unexpectedly and only destroyed the home of one unlucky person?

If so, then why give tragic world A any priority at all, when we can just create world B instead? I mean, if you were asked to choose between getting a delicious chocolate milkshake vs. a bee sting, you wouldn't say "I'll take a 50% chance of each, please!" You would just choose the better option. Giving any chance, no matter how small, to the bee sting would be too high. Similarly, giving any priority to tragic world A, even 1 in 10 million, would be too high.

Comment by brianwang712 on Is Effective Altruism fundamentally flawed? · 2018-03-17T07:00:58.273Z · EA · GW

Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob's argument that you should give him a 1/3 chance of being helped even though he wouldn't have signed on to such a decision behind the veil of ignorance, simply because of the actual position he finds himself in, is a sign of defection. This is not to slight Bob here -- of course it's very understandable for him to be afraid and to want a chance of being helped given his position. Rather, it's simply a statement that if everybody argued as Bob did (not just regarding charity donations, but in general), we'd be living in a much unhappier society.

If you're unmoved by this framing, consider this slightly different framing, illustrated by a thought experiment: Let's say that Bob successfully argues his case to the donor, who gives Bob a 1/2 chance of being helped. For the purpose of this experiment, it's best to not specify who in fact gets helped, but rather to just move forward with expected utilities. Assuming that his suffering was worth -1 utility point, consider that he netted 1/2 of an expected utility point from the donor's decision to give everyone an equal chance. (Also assume that all realized painful incidents hereon are worth -1 utility point, and realized positive incidents are worth +1 utility point.)

The next day, Bob gets into a car accident, putting both him and a separate individual (say, Carl) in the hospital. Unfortunately, the hospital is short on staff that day, so the doctors + nurses have to make a decision. They can either spend their time to help Bob and Carl with their car accident injuries, or they can spend their time helping one other individual with a separate yet equally painful affliction, but they cannot do both. They also cannot split their time between the two choices. They have read your blog post on the EA forum and decide to flip a coin. Bob once again gets a 1/2 expected utility point from this decision.

Unfortunately, Bob's hospital stay cost him all his savings. He and his brother Dan (who has also fallen on hard times) go to their mother Karen to ask for a loan to get them back on their feet. Karen, however, notes that her daughter (Bob and Dan's sister) Emily has also just asked for a loan for similar reasons. She cannot give a loan to Bob and Dan and still have enough left over for Emily, and vice versa. Bob and Dan note that if they were to get the loan, they could both split that loan and convert it into +1 utility point each, whereas Emily would require the whole loan to get +1 utility point (Emily was used to a more lavish lifestyle and requires more expensive consumption to become happier). Nevertheless, Karen has read your blog post on the EA forum and decides to flip a coin. Bob nets a 1/2 expected utility point from this decision.

What is the conclusion from this thought experiment? Well, if decisions were made according to your decision rule, providing each individual an equal chance of being helped in each situation, then Bob nets 1/2 + 1/2 + 1/2 = 3/2 expected utility points. Following a more conventional decision rule to always help more people vs. fewer people if everyone is suffering similarly (a decision rule that would've been agreed upon behind a veil of ignorance), Bob would get 0 (no help from the original donor) + 1 (definite help from the doctors + nurses) + 1 (definite help from Karen) = 2 expected utility points. Under this particular set of circumstances, Bob would've benefited more from the veil of ignorance approach.

You may reasonably ask whether this set of seemingly fantastical scenarios has been precisely constructed to make my point rather than yours. After all, couldn't Bob have found himself in more situations like the donor case rather than the hospital or loan cases, which would shift the math towards favoring your decision rule? Yes, this is certainly possible, but unlikely. Why? For the simple reason that any given individual is more likely to find themselves in a situation that affects more people than a situation that affects few. In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.
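
To make the "over a whole lifetime of decisions" claim concrete, here is a toy calculation – the number of situations and the fraction in which Bob is in the minority are invented purely for illustration:

```python
# Toy lifetime comparison of the two decision rules. Assume Bob faces
# n_situations like the ones above; in a fraction p_minority of them he is on
# the smaller side (like the donor case), otherwise on the larger side (like
# the hospital and loan cases). Being helped is worth +1 to him.
def lifetime_expected_utility(n_situations, p_minority, rule):
    minority = n_situations * p_minority        # situations like the donor case
    majority = n_situations * (1 - p_minority)  # situations like the hospital/loan cases
    if rule == "equal_chance":   # coin flip every time: helped with probability 1/2
        return 0.5 * (minority + majority)
    if rule == "help_the_many":  # veil-of-ignorance rule: helped iff he's in the majority
        return 1.0 * majority
    raise ValueError(rule)

for p in [0.1, 0.25, 0.5]:
    print(f"p(minority) = {p}: "
          f"equal chance -> {lifetime_expected_utility(100, p, 'equal_chance'):.0f}, "
          f"help the many -> {lifetime_expected_utility(100, p, 'help_the_many'):.0f}")
```

As long as Bob is in the minority position less than half the time – which is exactly what the point above about larger groups being more common suggests – the veil-of-ignorance rule leaves him better off in expectation.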

Based on your post, it seems you are hesitant to aggregate utility over multiple individuals; for the sake of argument here, that's fine. But the thought scenario above doesn't require that at all; just aggregating utility over Bob's own life, you can see how the veil-of-ignorance approach is expected to benefit him more. So if we rewind the tape of Bob's life all the way back to the original donor scenario, where the donor is mulling over whether they want to donate to help Bob or to help Amy + Susie, the donor should consider that in all likelihood Bob's future will be one in which the veil-of-ignorance approach will work out in his favor more so than the everyone-gets-an-equal-chance approach. If this donor and other donors in similar situations are to commit to one of these two decision rules, they should commit to the veil of ignorance approach; it would help Bob (and Amy, and Susie, and all other beneficiaries of donations) the most in terms of expected well-being.

Another way to put this is that, even if you don't buy that Bob should put himself behind a veil of ignorance because he knows he doesn't have an equal chance of being in Amy's and Susie's situation, and so shouldn't decide to sign a cooperative agreement with Amy and Susie, you should buy that Bob is in effect behind a veil of ignorance regarding his own future, and therefore should sign the contract with Amy and Susie because this would be cooperative with respect to his future selves. And the donor should act in accord with this hypothetical contract.

I would respond to the second point, but this post is already long enough, and I think what I just laid out is more central.

I will also be bowing out of the discussion at this point – not because of anything you said or did, but simply because it took me much more time to write up my thoughts than I would have liked. I did enjoy the discussion and found it useful to lay out my beliefs in a thorough and hopefully clear manner, as well as to read your thoughtful replies. I do hope you decide that EA is not fatally flawed and stick around the community :)

Comment by brianwang712 on Is Effective Altruism fundamentally flawed? · 2018-03-14T05:22:03.475Z · EA · GW

I do think Bob has an equal chance to be in Amy's or Susie's position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don't know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don't know anything about their own propensities to get disease X or disease Y. Given this state of knowledge, Bob, Susie, and Amy all have the same chance as each other of getting disease X vs. disease Y, and so signing the agreement is rational. Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.

Regarding your second point, I don't think EAs are necessarily committed to saving a billion people each from a fairly painful disease vs. a single person being burned alive. That would of course depend on how painful the disease is, vs. how painful being burned alive is. To take the extreme cases, if the painful disease were like being burned alive, except just with 1% less suffering, then I think everybody would sign the contract to save the billion people suffering from the painful disease; if the disease were rather just like getting a dust speck in your eye once in your life, then probably everyone would sign the contract to save the one person being burned alive. People's intuitions would start to differ with more middle-of-the-road painful diseases, but I think EA is a big enough tent to accommodate all those intuitions. You don't have to think interpersonal welfare aggregation is exactly the same as intrapersonal welfare aggregation to be an EA, as long as you think there is some reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain.

Comment by brianwang712 on Is Effective Altruism fundamentally flawed? · 2018-03-13T14:27:53.567Z · EA · GW

One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate the suffering of either two of them or one of them, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people's suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls' veil of ignorance argument.

And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate one person's suffering or a million people's suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, "Please donate such that you would alleviate a million people's suffering, and please oh please don't just flip a coin."

More broadly speaking, given that we live in a world where people have competing interests, we have to find a way to effectively cooperate such that we don't constantly end up in the defect-defect corner of the Prisoner's Dilemma. In the real world, such cooperation is hard; but in an ideal world, such cooperation would essentially look like people coming together to sign agreements behind a veil of ignorance (not necessarily literally, but at least people acting as if they had done so). And the upshot of such signed agreements is generally to make the interpersonal-welfare-aggregative judgments of the type "alleviating two people's suffering is better than one", even if everyone agrees with the theoretical arguments that the suffering of two people on opposite sides doesn't literally cancel out, and that who's suffering matters.

Bob, Susie, Amy, and the rest of us all want to live in a world where we cooperate, and therefore we'd all want to live in a world where we make these kinds of interpersonal welfare aggregations, at the very least during the kinds of donation decisions in your thought experiments.

(For a much longer explanation of this line of reasoning, see this Scott Alexander post: http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/)

Comment by brianwang712 on Opportunities for individual donors in AI safety · 2018-03-12T04:34:30.237Z · EA · GW

To add onto the "platforms matter" point, you could tell a story similar to Bostrom's (build up credibility first, then have impact later) with Max Tegmark's career. He explicitly advocates this strategy to EAs in 25:48 to 29:00 of this video: https://www.youtube.com/watch?v=2f1lmNqbgrk&feature=youtu.be&t=1548.

Comment by brianwang712 on [Paper] Surviving global risks through the preservation of humanity's data on the Moon · 2018-03-04T16:58:46.336Z · EA · GW

I'd like to hear more about your estimate that another non-human civilization may appear on Earth on the order of 100 million years from now; is this mostly based on the fact that our civilization took ~100 million years to spring up from the first primates?

If there is a high probability of another non-human species with moral value reaching our level of technological capacity on Earth in ~100 million years conditional on our own extinction, then this could lessen the expected "badness" of x-risks in general, and could also have implications for the prioritization of the reduction of some x-risks over others (e.g., risks from superintelligent AI vs. risks from pandemics). The magnitudes of these implications remain unclear to me, though.

Comment by brianwang712 on An Argument for Why the Future May Be Good · 2017-07-20T01:56:28.700Z · EA · GW

I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to extract food/labor, and those more efficient ways will happen to involve less suffering; therefore, suffering will decrease in the future. In my head it's the other way around: people are first motivated by their moral concerns, which may then spur them to find efficient technological solutions to these problems. For example, I don't think the cultured meat movement has its roots in trying to find a more cost-effective way to make meat; I think it started off with people genuinely concerned about the suffering of factory-farmed animals. Same with the movement to abolish slavery in the US; I don't think industrialization had as much to do with it as people's changing views on ethics.

We reach the same conclusion – that the future is likely to be good – but I think for slightly different reasons.

Comment by brianwang712 on Dedicated Donors May Not Want to Sign the Giving What We Can Pledge · 2016-10-30T18:42:36.851Z · EA · GW

This is a good point; however, I would also like to point out that it could be the case that a majority of "dedicated donors" don't end up taking the pledge, without this becoming a norm. The norm instead could be "each individual should think through for themselves, given their own unique situation, whether or not taking the pledge is likely to be valuable," which could lead to a situation where "dedicated donors" tend not to take the pledge, but not necessarily to a situation where, if you are a "dedicated donor," you are expected not to take the pledge.

(I am highly uncertain as to whether or not this is how norms work; that is to say, whether a norm connecting a group of people and a certain action could fail to develop even though a majority of that group of people take that action.)

Comment by brianwang712 on Two Strange Things About AI Safety Policy · 2016-10-06T06:52:12.483Z · EA · GW

I guess the argument is that, if it takes (say) the same amount of effort/resources to speed up AI safety research by 1000% and to slow down general AI research by 1% via spreading norms of safety/caution, then plausibly the latter is more valuable due to the sheer volume of general AI research being done (with the assumption that slowing down general AI research is a good thing, which as you pointed out in your original point (1) may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research, vs. going from $10 billion to $9.9 billion in general research.

This does seem to assume that the absolute difference matters more than the proportions. I'm not sure how to think about whether or not this is the case; a quick way to see how the two framings come apart is sketched below.
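
Using the figures above (purely illustrative, as before), the two framings point in opposite directions:

```python
# Toy comparison of the two framings, using the illustrative figures from the
# comment above: baseline of $1M safety research vs. $10B general AI research.
safety, general = 1e6, 10e9

options = {
    "boost safety 10x":   (10e6, 10e9),
    "slow general by 1%": (1e6, 9.9e9),
}

for name, (s, g) in options.items():
    ratio_gain = (s / g) / (safety / general)     # how much the safety:general ratio improves
    gap_reduction = (general - safety) - (g - s)  # how much the absolute gap shrinks, in dollars
    print(f"{name:>20}: ratio improves {ratio_gain:.1f}x, gap shrinks by ${gap_reduction:,.0f}")
```

The safety boost wins decisively on the proportional view (roughly a 10x better ratio vs. 1.01x), while the slowdown wins on the absolute-gap view (shrinking the gap by about $100 million vs. $9 million) – so the answer seems to hinge on which of the two quantities actually drives risk.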

Comment by brianwang712 on Two Strange Things About AI Safety Policy · 2016-10-06T02:54:08.650Z · EA · GW

Regarding your point (2), couldn't this count as an argument for trying to slow down AI research? I.e., given that the amount of general AI research done is so enormous, even changing community norms around safety a little bit could result in dramatically narrowing the gap between the rates of general AI research and AI safety research?

Comment by brianwang712 on Ideas for Future Effective Altruism Conferences: Open Thread · 2016-08-13T16:56:24.018Z · EA · GW

Quick feedback forms for workshops/discussion groups would be nice; I think most of the workshops I attended didn't allow any opportunity for feedback, and I would have had comments for them.

Comment by brianwang712 on Ideas for Future Effective Altruism Conferences: Open Thread · 2016-08-13T16:53:37.071Z · EA · GW

A guarantee that all the talks/panels will be recorded.

The booklet this year stated that "almost" all the talks would be recorded, which left me worried that, if I missed a talk, I wouldn't be able to watch it in the future (this might just be me). I probably would have skipped more talks and talked to more people if I had a guarantee that all the talks would be recorded.

Also, it would be nice to have a set schedule that didn't change so much during the conference. The online schedule was pretty convenient and was (for the most part) up to date, but people using the physical booklet may have been confused.

Comment by brianwang712 on GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics · 2016-05-18T19:33:09.723Z · EA · GW

I think that adopting your first resolution, in addition to the assumption by commenters that being a child with malaria is a net negative experience, can rescue some of the value of AMF. Say in situation 1, a family has a child, Afiya, who eventually gets malaria and dies, and thus has a net negative experience. Because of this, the family decides to have a second child, Brian, who does not get malaria and lives a full and healthy life. In situation 2, where AMF is taken to have a contribution, a family has just one child, Afiya, who is prevented from getting malaria and lives a full and healthy life. The family does not decide to have a second child. Taking into account only the utility of the people directly affected by malaria, and not the family, it seems to me that situation 1 is worse than situation 2 by an amount equivalent to Afiya's net negative experience of getting malaria; the reverse of this could be said to be AMF's contribution. So while this is not the same as 35 QALYs, it still seems like a net positive.

EDIT: Note of clarification: The above is in particular a response to the statement, "Because AMF hardly changes humans’ lifespans, it does not have a clear beneficial effect for humans," which was stated as a problem for GiveWell with adopting the first resolution.

Comment by brianwang712 on EA Open Thread: October · 2015-10-11T03:08:17.745Z · EA · GW

I have a question for those who donate to meta-charities like Charity Science or REG to take advantage of their multiplier effect (these charities typically raise ~$5-10 per dollar of expenditure). Do you donate directly towards the operating expenses of these meta-charities? For example, REG's donations page has the default split of your donations as 80% towards object-level charities (and other meta-charities), while 20% is towards REG's operating expenses, which include the fundraising efforts the multiplier presumably comes from. It seems to me that in order to get the best multiplier for your donation, you would donate 100% towards operating expenses, since any dollar not spent on operating expenses wouldn't have any multiplier. Is this right?
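
For what it's worth, here's the simple arithmetic behind my intuition – with a 7.5x multiplier assumed purely for illustration, somewhere in the middle of the stated ~$5-10 range:

```python
# Toy arithmetic behind the question. The 7.5x multiplier is an assumed figure
# within the stated ~$5-10-raised-per-dollar range, not an actual estimate.
donation = 100.0
multiplier = 7.5  # dollars raised per dollar spent on operating expenses

# Default 80/20 split: $80 goes straight to object-level charities, $20 funds
# operations, which then raise 20 * 7.5 in further donations.
split_total = 0.8 * donation + 0.2 * donation * multiplier

# 100% to operating expenses: every dollar gets the multiplier.
ops_total = donation * multiplier

print(f"80/20 split moves ~${split_total:.0f}; 100% to operations moves ~${ops_total:.0f}")
```

Of course, this assumes the multiplier holds constant at the margin (i.e., that the operations side still has room for more funding), which is part of what I'm unsure about.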