Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-12T00:50:08.585Z · score: 2 (2 votes) · EA · GW

Great points. I agree re Double Up Drive that it was worthy of much deeper investigation to figure out how counterfactual it really was. Perhaps there was even a way donors could have *made it* more counterfactual by using their donations as an opportunity to signal and influence others' behavior. I briefly considered donating to it just so that I could write a message to the donors who were offering the match, encouraging them to donate the full amount regardless of whether the match was reached, in the hope that my message might be heard.

More generally, one thing I have updated a lot on after this past giving season is that I now believe that for small donors the signaling value of their donations matters a lot more. For example, a GWWC member earning $50k/year and donating $5k/year has a certain amount of credibility that conceivably could be used to help influence much larger donors to donate more, and more effectively. How one's donation and one's communications around it are perceived by much larger donors may in fact count for more than the value of the donation itself.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T01:15:43.181Z · score: 2 (2 votes) · EA · GW

I am currently more than 30% confident that I will get at least one donation matched by Facebook on Giving Tuesday 2019. This is not conditioned on Facebook doing another match this year or on anything else.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T00:57:50.556Z · score: 4 (3 votes) · EA · GW

Minor point: I don't think that regression to the mean is a reason to expect the probability of an EA's donation being matched in 2019 to be less than the probability of it getting matched in 2018.

Comment by williamkiely on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-11T00:55:08.014Z · score: 6 (5 votes) · EA · GW

Thanks for writing this up. This is what I did in 2018 and I have no regrets. It clearly seemed to me to be the right thing to do after I saw my Giving Tuesday 2017 donations get matched. I considered writing a similar post to this one after Giving Tuesday 2017 to encourage others to hold off on donating until Giving Tuesday 2018, but I don't think I ever did due to (if I recall correctly):

(1) a worry that I would dissuade people from donating, and that some of those people would suffer from value drift and no longer want to donate at all come Giving Tuesday, and

(2) I wasn't sure if I had sufficient evidence to give others confidence that come Giving Tuesday 2018 they'd be able to get their donations matched.

This year, for a US donor who is not concerned that value drift is a significant risk, I agree with Cullen that this is probably a great bet again.

Comment by williamkiely on EA Giving Tuesday Donation Matching Initiative 2018 Retrospective · 2019-01-06T23:55:04.502Z · score: 8 (4 votes) · EA · GW

Thanks for mentioning that the EA Meta Fund might be interested in funding things like this ("A project experimenting with novel fundraising strategies or target groups").

The question of whether having external funding would have helped seems complicated.

I think that there was a lot more valuable work we could have done to make this initiative an even greater success, but I don't think that having external funding would have caused me or Avi to do more of it. First, because we were capable of funding ourselves, but primarily (at least for me personally) because the challenge I was facing was finding the time/energy to do extra work on this project while maintaining my standing at my day job, which seemed important so I could continue working there in the future.

Due to a lack of foresight I had taken a 19-day vacation (Oct 19 - Nov 6) to Europe to attend EA Global London and then do some more traveling, which meant that I couldn't take any more time off from my job later in November upon my return without potentially significantly affecting my standing at my company. This meant that I just had nights and weekends to do work for Giving Tuesday.

I considered that I ought to have taken the extra time off from my job anyway despite this cost, but at the time this seemed too extreme a measure given that I didn't have any plans for what I would do come December/January had things taken a turn for the worse at my job due to my absence.

As I'm writing this now, it actually seems like I definitely should have taken more time off so I could do more work for Giving Tuesday despite the consequences for my day job. But my bias is to be risk-averse and not do things that seem too crazy or extreme, so it's not surprising to me that I didn't do this.

Perhaps external funding would have been a benefit in that it would have provided social approval for me taking more time off from my day job to work on the project. My employer probably also would have been more understanding of me doing this if I had external funding from a reputable organization to signal that what I was taking time off to do was valuable.

Actually, now that I think about it, I don't even know for sure how my employer would have reacted to me e.g. taking the entire month of November off since I didn't ask. I think it's likely that I would have been let go, but I could easily be wrong, especially if I took the time to explain why I wanted to take the time off and why I wanted to come back afterward.

Comment by williamkiely on New edition of "Rationality: From AI to Zombies" · 2018-12-17T05:49:52.374Z · score: 1 (1 votes) · EA · GW

I know someone who I think would enjoy HPMOR but refuses to read any book-length text in anything but paper book format. Other than going through the effort of printing out a PDF myself, does anyone know of any way I can get a hard copy? I'd be willing to make a counterfactual donation to MIRI or wherever for the trouble if that would help.

Comment by williamkiely on EA Survey 2018 Series: Donation Data · 2018-12-09T19:44:44.240Z · score: 2 (2 votes) · EA · GW

The two tables showing percentiles are missing column labels:

https://i.ibb.co/Vjbv1rc/image.png

https://i.ibb.co/jL0g7t7/table-4.png

Comment by williamkiely on Doing vs Talking at EA Events · 2018-09-01T03:37:03.252Z · score: 9 (11 votes) · EA · GW

My main question for you: How would you discuss the point from a friend new to EA: “Nothing was practically accomplished at the meeting. Ideas were discussed.”

I would encourage the friend to take a longer-term view to see the value in discussing ideas.

I would also encourage the friend to try out the direct-work meetup concept. I suspect there are good reasons why that style of meetup is not popular, but I would be delighted to see them pioneer it successfully.

The following EA Forum post may be relevant to you. I'm not sure how Madison vs Harvard differ, though I suspect they are both more similar to each other than they are to the contract-work-and-donate model of an EA meetup you are conceptualizing: http://effective-altruism.com/ea/1nh/heuristics_from_running_harvard_and_oxford_ea/

Some value I see in the social discussion style EA meetups I am familiar with (I co-organize EA Austin and have been to Chicago a couple times):

  • Prevent value drift through facilitating the development of friendships and closer ties between community members

  • Inform EA community members about new ideas which they can then go home and read further about online (high fidelity learning)

  • Have more personalized discussions about career planning than is easy/typical in online EA discussions

Comment by williamkiely on EA Hotel with free accommodation and board for two years · 2018-06-06T03:59:19.828Z · score: 0 (0 votes) · EA · GW

Good point. Do you think EAs with more money ought to consider living in group houses for the sake of reducing the cost of living to enable them to donate more?

Comment by williamkiely on EA Hotel with free accommodation and board for two years · 2018-06-06T03:35:32.559Z · score: 1 (1 votes) · EA · GW

Alex K Chen says: "You should like talk to people who do summer camp housing too, like SPARC" https://sparc-camp.org/

Comment by williamkiely on EA Hotel with free accommodation and board for two years · 2018-06-06T03:16:01.394Z · score: 2 (2 votes) · EA · GW

Is the main value of this coordination to cause EAs to live together in a group? Or is it causing poor EAs to be able to do direct work without having to build up savings first?

If the former, it's unclear to me why there would only be value in grouping together EAs who don't have much money/income (would getting other EAs with money to live together not be equally as valuable?).

And if it's the latter, it's unclear to me why this idea would be better than just funding poor EAs directly and letting them decide where to live -- e.g. Alex K. Chen has proposed that paying for talented young people with high potential to live in the Harvard/MIT area so they could unschool themselves there is potentially very high value.

Comment by williamkiely on EA #GivingTuesday Fundraiser Matching Retrospective · 2018-03-04T16:18:40.988Z · score: 0 (0 votes) · EA · GW

Thanks Avi.

Comment by williamkiely on #GivingTuesday: Counter-Factual Donation Matching is the Lowest-Hanging Fruit in Effective Giving · 2017-11-27T00:37:31.303Z · score: 4 (4 votes) · EA · GW

Facebook saw over 100,000 people donate to thousands of fundraisers that raised $6.79 million on Giving Tuesday across the United States. (Source)

This year I expect it to be more, though I'm not well-informed on how much more. Perhaps $10-$20MM is a reasonable expectation. https://en.wikipedia.org/wiki/Giving_Tuesday

Also, the match last year was for $500K instead of $2MM. From the same source:

After the initial match of $500,000 was reached within hours, The Bill & Melinda Gates Foundation increased their pledge to $900,000 total to match more Giving Tuesday Fundraisers on Facebook.

Note that last year's matching campaign was also announced in advance.

So I think 3 minutes is overkill. While a priori I would expect people to take advantage of this such that $2MM in donations are made in the first ~3 minutes, I think last year shows that this is unlikely to happen. I would be surprised if the $2MM match were reached in less than 30 minutes. I'll assign a 20% probability to that happening, somewhat arbitrarily, and maybe a 5% chance to it being reached in less than 10 minutes. My median estimate would be around 9:30 EST (1.5 hours), and maybe a 20% chance that it takes more than 3 hours. Although I don't really know, so my suggestion is to donate ASAP. If you're donating more than just a small amount it's worth it even if it's inconvenient.

I intend to make all of my donations ASAP after 8:00 AM EST. (I am going to try to make 10 separate $1,000 donations before 8:10 AM EST).

Comment by williamkiely on An Argument for Why the Future May Be Good · 2017-07-22T19:57:08.037Z · score: 2 (2 votes) · EA · GW

I wasn't proposing that (I in fact think the present is already good), but rather was just trying to better understand what you meant.

Your comment clarified my understanding.

Comment by williamkiely on An Argument for Why the Future May Be Good · 2017-07-21T00:58:27.517Z · score: 2 (2 votes) · EA · GW

7 - Therefore, the future will contain less net suffering

8 - Therefore, the future will be good

Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?

If it would change the meaning, then what do you mean by "good"? (Note: If you're confused about why I'm confused about this, then note that it seems to me that 8 does not follow from 7 for the meaning of "good" I usually hear from EAs (something like "net positive utility").)

Comment by williamkiely on Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering · 2016-08-27T04:35:37.968Z · score: 0 (2 votes) · EA · GW

Hmm. I do believe I discount vertebrates much less than I discount insects; however, I also think there's a huge difference between, say, chickens and chimpanzees, or chimpanzees and humans. Even among humans (whose brains are quite similar to one another compared to cross-species comparisons), I think that the top 10% of Americans probably live lives that I value inherently (by which I mean ignoring the effects they have on other things and only counting the quality of their conscious life experience) at least one order of magnitude (if not several) more than the bottom 10% of Americans. I believe this is an unpopular view also, but one consideration I can give in support of it is this: if you reflect on how much you value your own conscious experience during some parts of your life compared to others, you may find, as I do, that some moments or short periods seem to be of much greater value than others of equal duration.

An exercise I tried recently was making a plot of "value realized / time" vs "time" for my own conscious life experience (so again: not including the effects of my actions, which is the vast majority of what I value). I found that there were some years I valued multiple times more than other years, and some moments I valued many times more than entire years on net. The graph was also all positive value and trending upwards, with sleeping valued much less than being awake. (I don't think I have very vivid dreams relative to others, but even if I did, I would probably still tend to value waking moments much more than sleeping ones.) Also, remembering or reflecting on great moments fondly can be of high value too in my evaluation. There's also the problem of not knowing now what certain experiences in the past were actually like to experience, since I'm relying on my memory of them, which for all I know could be faulty. I think in general I choose to value experiences based on how I remember them rather than how I think they were when I lived them (if there is a discrepancy between the two).

Also note that I'm a moral anti-realist, so I don't think there are correct answers; to a certain extent, how much I value some periods of my conscious life experience relative to others is a choice, since I don't believe there are fully defined, definite values of mine that I could discover either.

A general thing I'd be really interested in seeing is people's estimates of how much they value (whether positively or negatively) the total life experiences of, say, mosquitoes, X, Y, Z, chickens, cows, humans (and what that distribution looks like), oneself over time, a typical human over time, etc. And also "What would a graph of (value realized per unit time) vs (time) look like for Earth's history?", which would answer the question "How much value has been realized since life began on Earth?" (note: I'd ignore estimates of value realized elsewhere in the universe, which may actually be quite significant, for the sake of the question). If you'd like to indulge me with your own views on any of this I would be very interested, but of course no need if you don't want to. I'll estimate and write up my own answers sometime.

Comment by williamkiely on Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering · 2016-08-27T03:35:48.432Z · score: 0 (4 votes) · EA · GW

How many painful mosquito deaths would you have to be offered to prevent to choose that over causing one new human life (of quality equal to that of a typical person today) to be lived (all instrumental effects / consequences aside)?[1][2][3] (For my answer see [2].)

What would the distribution of EAs' answers look like? College graduates' answers? Everyone's answers?

What range of answers does the OP assume?

Or more broadly, for what range of moral theories can a case be made that WAS should be prioritized?

I ask these questions because, while I find the OP argument intriguing, my current values (or my current beliefs about my values, depending on how you want to think about it) are such that preventing mosquito suffering is very insignificant relative to many other things (e.g. there being more humans that live good lives, or humans living better lives) and is therefore far from being a high priority for me.

While I haven't dived deeply into arguments for negative utilitarianism or other arguments that could conceivably change my view significantly, I think it's unlikely (~10%, reported in [2]) that doing so would lead me to change my view significantly.[4]

It seems to me that the most probable way that my view could be changed to believe that (e.g.) OPP ought to prioritize WAS would be to persuade me that I should adopt a certain view on how to deal with moral uncertainty that would, if adopted, imply that OPP ought to prioritize WAS even given my current beliefs about how much I value the suffering of mosquitoes relative to other things (e.g. the lives of humans).

Is there a case to be made for prioritizing WAS if one assigns even a small probability (e.g. 1%) to a negative utilitarian-like view being correct given that they also subscribe to certain plausible views on moral uncertainty?

My views on how to deal with moral uncertainty are very underdeveloped. I think I currently have a tendency to evaluate situations or decide on actions on the basis of the moral view I deem most probable; however, as the linked LessWrong wiki article points out, this has potential problems. (I'm also not aware of a less problematic view, so I will probably continue to do this until I encounter something else that appeals to me more. Bostrom's parliamentary model seems like a reasonable candidate, although I'm unsure how this negotiation process works exactly or would play out. I would have to think about it more.)

Lastly, let me just note that I don't challenge the non-normative factual claims of the OP. Rather, I'm simply stating that my hesitation to take the view that OPP should prioritize WAS comes from my belief that I value things significantly differently than I would have to in order for WAS to be something that OPP should prioritize.


[1] A similar question was asked in the Effective Altruism Facebook group. My version gets at how much one values the life of a typical person today relative to the life of a typical mosquito rather than how much one values extreme pleasure relative to extreme suffering.

[2] Since I'm asking for others' answers, I should estimate my own answer. Hmm. If I had to make the decision right now I would choose to create the new human life, even if the number of painful mosquito deaths I was offered to prevent was infinite. Although note that I am not completely confident in this view, perhaps only ~60%. Then maybe ~30% to 10^10-infinity and ~10% to <10^10 mosquitoes, where practically all of that 10% uncertainty comes from the possibility that a more enlightened version of myself would undergo a paradigm shift or significant change in my fundamental values / moral views. In other words, I'm pretty uncertain (~40/60) about whether mosquitoes are net negative or not, but I'm pretty certain (~75%=30%/40%) that if I do value them negatively that the magnitude of their negative value is quite small (e.g. relative to the positive value I place on (the conscious experience of) human life).

[3] Knowing that my view is controversial among EAs (see the link at [1]), perhaps I should meta-update significantly towards the consensus view that not only is the existence of suffering inherently bad, but it's also a much greater magnitude of bad than I think in the ~30% scenario that it is. I'll refrain from doing this for now, or from figuring out how much I should update if I only think there's an X% chance that it's proper to update. (I'm also not sure how much my intuitions / current reported estimates already take others' estimates into account.)

[4] The basis of my view that the goodness of a human life is much greater than the possible (~40% in my view) badness of a mosquito's suffering or painful death (and the basis of more general versions of this view) is my intuition. Thinking about the question from different angles I have been unable to shift my view significantly towards placing substantially more value on mosquitoes' significance or preventing mosquito suffering.
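The uncertainty estimates in footnote [2] combine via simple conditional-probability arithmetic. As a minimal sketch (the variable names are my own labels for the footnote's rounded numbers):

```python
# Rounded estimates from footnote [2]:
p_net_negative = 0.40    # P(mosquito suffering is net negative at all)
p_neg_and_small = 0.30   # P(net negative AND magnitude small, i.e. the 10^10+ case)

# Conditional probability that the magnitude is small, given net negative:
p_small_given_neg = p_neg_and_small / p_net_negative
print(f"{p_small_given_neg:.0%}")  # 75%, matching the ~75% = 30%/40% in [2]
```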

Comment by williamkiely on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-04T23:23:11.872Z · score: -1 (1 votes) · EA · GW

Noting that I didn't find this essay useful (although I'm not giving it a thumbs-down vote).

In fact I found it counterproductive because it led me to spend (waste, IMO) more time thinking about this topic. Of course that's not your fault, but I just wanted to mention it. (I have an imperfect brain. Even though my better judgment says that I shouldn't spend more time thinking about this topic, I am often lured in when I come across it and am unable to resist.) Moral realism has always seemed obviously wrong to me, and my lack of success historically at understanding why many smart people apparently do think that there is a One True Moral Standard that magically compels people to do things independent of their desires/values/preferences has caused me much frustration. I find it frustrating not only because I have been unable to make progress understanding why moral realists believe moral realism is true, but also because if I am right that moral realism is simply wrong, then time spent thinking and writing about the topic (including the time you spent writing the OP essay) is largely wasted, AND if moral realism is true then it still seems to me that the time spent discussing the topic is largely wasted, since I'm still going to go on caring about what I care about and acting to effectively achieve what I value rather than adjust my actions to adhere to the particular Moral Standard that is apparently somehow "correct."

Comment by williamkiely on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-04T22:04:17.608Z · score: 0 (0 votes) · EA · GW

Comment 2:

My one criticism to offer after reading this is in regard to the way you choose to answer "Yes" to the question of whether people "have obligations" (which I put in quotes to communicate the fact that the phrase could be interpreted in different ways such that the correct answer to the question could be either yes or no depending on the interpretation):

"So am I obligated to do anything?

"Yes. You have legal obligations to follow the laws, have epistemic obligations to believe the truth, have deontological obligations not to lie under any circumstance, have utilitarian obligations to donate as much of your income as you can manage, etc… You’re under millions of potential obligations – one for each possible standard that can evaluate actions. Some of these may be nonsensical, like an anti-utilitarian obligation to maximize suffering or an obligation to cook spaghetti for each meal. But all of these obligations are there, even if they’re contradictory. Chances are you just don’t care about most of them."

While I can see how this way of defining what it means to have an obligation can definitely be useful when discussing moral philosophy and bring clarity to said discussions, I think it's worth pointing out that it could potentially be quite confusing when talking with people who aren't familiar with your specific definition / the specific meaning you use.

For example, if you ask most people, "Am I obligated to not commit murder?" they would say, "Yes, of course." And if you ask them, "Am I obligated to commit murder?" they would say, "No, of course not."

You would answer yes to both, saying that you are obligated to not commit murder by (or according to) some moral standards/theories and are obligated to commit murder by some others.

To most people (who are not familiar with how you are using the language), this would appear contradictory (again: to say that you are obligated both to do and not to do X).

And the second note is that when laypeople say, "No, I am not obligated to commit murder," you wouldn't be inclined to say that they are wrong (because you don't interpret what they are trying to say so uncharitably), but rather would see that clearly they meant something else than the meaning that you explained in the article above that you would assign to these words.

My interpretation of their statement that they are not obligated to commit murder would be (said one way) that they do not care about any of the moral standards that obligate them to commit murder. Said differently, they are saying that in order to fulfill or achieve their values, people shouldn't murder others (at least in general), because murdering people would actually be a counterproductive way to bring about what they desire.

Comment by williamkiely on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-04T22:03:49.791Z · score: 0 (0 votes) · EA · GW

I hold the same view as yours described here (assuming of course that I understand you correctly, which I believe I do).

FWIW I would label this view "moral anti-realist" rather than "moral realist," although of course whether it actually qualifies as "anti-realism" or "realism" depends on what one means by those phrases, as you pointed out.

Here are two revealing statements of yours that would have led me to strongly update my view towards you being a moral anti-realist without having to read your whole article (emphasis added):

(1) "that firm conviction is the “expressive assertivism” we talked about earlier, not a magic force of morality."

(2) "I disagree that there is One True Moral Standard." "I disagree that these obligations have some sort of compelling force independent of desire."

Comment by williamkiely on Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply) · 2016-06-10T23:27:17.818Z · score: 0 (0 votes) · EA · GW

Related: http://effective-altruism.com/ea/ss/the_importantneglectedtractable_framework_needs/ "The Important/Neglected/Tractable framework needs to be applied with care"

Comment by WilliamKiely on [deleted post] 2016-05-31T00:39:48.539Z

Thank you, I think you're right. Thanks for the feedback.

Comment by williamkiely on GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics · 2016-05-19T23:02:30.050Z · score: 4 (4 votes) · EA · GW

But more importantly, donors should be aware of how questions of population ethics affect the expected value of different interventions.

Thank you for emphasizing this--I think it's very important.

I've realized lately that my views on questions of population ethics are very underdeveloped, which is problematic because it leaves me very uncertain about the relative importance of different causes and the expected value of different interventions. This leads me to postpone donating more until I have better information (and also possibly leads me to not engage in direct-impact work that I perhaps should be engaging in, due to not knowing what that work is).

Note that because questions of population ethics can change the expected value of possible interventions from positive to negative (or vice versa) and by orders of magnitude rather than just a few percentage points, my lack of confident answers to questions of population ethics seems to be a good reason to postpone making any further donations until I have better information on my views on those questions.

If donors understood these assumptions, I expect that many of them would prioritize their donations differently.

I wonder: to what extent is it true that donor ignorance about their own views on questions of population ethics (and related questions about their values) leads donors to confidently choose one charity or intervention over another, when in fact, if they understood their views on population ethics correctly, they would have chosen the other charity or intervention?

I used to think that I knew what I valued well enough to choose where to donate, but now I realize that I have to think more on certain questions of population ethics to at least figure out what approximate probability I would assign to each possible way of valuing things before I can know which cause and intervention I believe has the highest expected value and is worth donating to.

Comment by williamkiely on Lesswrong Diaspora survey · 2016-04-03T23:25:20.829Z · score: 0 (2 votes) · EA · GW

What is the purpose of the survey? It doesn't seem to be a very worthwhile EA activity to take it. I stopped at question 40.

Comment by williamkiely on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2016-03-26T02:07:46.584Z · score: 0 (0 votes) · EA · GW

(Note: I found this old thread after Eliezer recently shared this Wait But Why post on his Facebook: Why Cryonics Makes Sense)

"I'm so afraid of dying and believe in cyronics so much that signing up for cryonics would end many of my worries and let me be far more productive"

I don't find this argument humorous, but I do see it as perhaps the most plausible argument defending cryonics from an EA perspective.

That said, I don't think the argument succeeds for myself or (I would presume) a large majority of other people.

(It seems to me that any exceptions would tend to be people who are very high producers, such that even a very small percentage increase in their production of good would outweigh the cost of signing up for cryonics. They would not tend to be people who are so exceptionally afraid of death, and who so love the idea of possibly waking up in the distant future and living longer, that not signing up for cryonics would be debilitating to them and a sufficiently large hindrance on their productivity (e.g. due to feeling depressed and being unable to concentrate on EA work, knowing that this cryonics option exists that would give them hope) to outweigh the cost of signing up.)

So I don't see cryonics as being very defensible from an EA perspective.

Comment by williamkiely on Should you start your own project now rather than later? · 2016-02-26T05:08:31.529Z · score: 0 (0 votes) · EA · GW

Would asking people on the street if they'd be willing to donate money to effective charities such as AMF (or similar marketing efforts to try to raise money quickly rather than focus on high quality movement building) have this negative counterfactual impact?

What is a good way to evaluate this risk of a negative counterfactual impact for candidate projects one is considering launching in general?

Comment by williamkiely on Opportunity to increase your giving impact through AMF · 2016-02-24T22:06:45.727Z · score: 1 (1 votes) · EA · GW

A full analysis of the effectiveness of this event will be undertaken, with an analysis to be made publicly available.

Will you ask donors what they would have done otherwise had they not heard of this event and decided to participate (e.g. donate to AMF anyways, or to another charity, or not donate at all)?

Comment by williamkiely on Accomplishments Open Thread - February 2016 · 2016-02-07T04:17:14.949Z · score: 5 (7 votes) · EA · GW

In January the only EA activities I engaged in were Skyping with Gleb about possible volunteer work I could do for InIn and offering him some feedback on an article or two.

I also wrote down two EA-related ideas that have been on my mind in two blog posts, although it's not clear to me that doing this actually did any good.

Tonight I completed about half of my Centre for Effective Altruism Pareto Fellowship application.

Comment by williamkiely on Celebrating All Who Are in Effective Altruism · 2016-01-20T03:26:15.136Z · score: 2 (2 votes) · EA · GW

I would be interested in hearing from someone who disagrees with this about why they disagree. Just curious.

Comment by williamkiely on Accomplishments Open Thread · 2016-01-10T03:34:21.140Z · score: 2 (2 votes) · EA · GW

Thank you for embracing the awkwardness; this is inspiring!

Comment by williamkiely on Accomplishments Open Thread · 2016-01-10T03:33:08.107Z · score: 10 (10 votes) · EA · GW

In 2015:

1) I met up with several EAs in person--once in Chicago and multiple times in NH.

2) I read Doing Good Better and The Life You Can Save and talked about them with friends and non-EAs online.

3) I donated 10% of my 2015 income to the Against Malaria Foundation.

Comment by williamkiely on Why You Should Be Public About Your Good Deeds · 2016-01-06T03:11:59.900Z · score: 1 (1 votes) · EA · GW

What is the most effective manner in which to be public about your good deeds?

Comment by williamkiely on Why You Should Be Public About Your Good Deeds · 2016-01-06T03:00:53.153Z · score: 1 (1 votes) · EA · GW

I notice that even after reading this I still have not notified anyone about my giving this past year.

I don't feel like being public about it, although I suppose I'm capable of overcoming that feeling in pursuit of my higher wants and really should make a conscious effort to do that sometime.

Okay, fine, I'll do it now. Okay, yes, I just announced the fact on social media and linked to this post as my justification:

Last year I donated to the Against Malaria Foundation, GiveWell's top recommended charity. Note that while I don't feel like saying this, I apparently ought to. [Link to here]

Comment by williamkiely on Why You Should Be Public About Your Good Deeds · 2016-01-06T02:55:50.150Z · score: 1 (1 votes) · EA · GW

Thanks for posting this here. I hadn't heard of your organization Intentional Insights and am glad to learn of it since I believe intentionality is critical to effective altruism and the mission of doing the most good possible.

Comment by williamkiely on What is the expected effect of poverty alleviation efforts on existential risk? · 2015-10-04T17:37:39.848Z · score: 1 (1 votes) · EA · GW
1. I don't have a specific charity in mind yet.

2. I'm not very confident in my answer.

I should also mention that I probably won't be donating much more for at least a couple years, so it probably shouldn't be my highest priority to try to answer all of these questions. They are good questions though, so thanks.

Comment by williamkiely on What is the expected effect of poverty alleviation efforts on existential risk? · 2015-10-03T05:14:38.519Z · score: 0 (0 votes) · EA · GW

I wasn't thinking that the money would go towards hiring experts. Rather, something like: "I'll donate $X to GiveDirectly if someone changes my view on this important question that will decide whether I want to donate my money to Org 1 or Org 2."

Comment by williamkiely on What is the expected effect of poverty alleviation efforts on existential risk? · 2015-10-03T04:43:17.777Z · score: 0 (0 votes) · EA · GW

That is a great question you posted on Reddit!

There are so many important unanswered questions relevant to EA charitable giving. Maybe an effective meta-EA charity idea would be a place where EAs could pose research questions they want answered, and they offer money based on how much they would be willing to give to have their question answered with a certain quality.

Comment by williamkiely on What is the expected effect of poverty alleviation efforts on existential risk? · 2015-10-03T04:33:44.640Z · score: 4 (4 votes) · EA · GW

Specifically, I intend to give (100% or nearly 100%) to existential risk reduction rather than (mostly) to poverty alleviation, because of how much I value future lives (a lot) relative to the quality of currently existing lives.

Upon trying to think of counter-arguments to change my view back in favor of donating to poverty alleviation charities, the best I can come up with right now:

Maybe the best "poverty alleviation" charities are also the best "existential risk" charities. That is, maybe they are more effective at reducing existential risk than are the charities typically thought of as (the best) existential risk charities. How likely is this to be true? Less than 1%?

Comment by williamkiely on What is the expected effect of poverty alleviation efforts on existential risk? · 2015-10-03T03:41:43.228Z · score: 3 (3 votes) · EA · GW

Thank you! This just changed where I intend to donate, tremendously.

Comment by williamkiely on EA Assembly & Call for Speakers · 2015-08-28T02:09:17.822Z · score: 0 (0 votes) · EA · GW

Seconded. I just recommended the same thing to Kyle before reading your comment.

Comment by williamkiely on EA Assembly & Call for Speakers · 2015-08-28T02:08:28.588Z · score: 0 (0 votes) · EA · GW

Where has the list of speakers been released? Thanks.

Comment by williamkiely on EA risks falling into a "meta trap". But we can avoid it. · 2015-08-26T00:18:02.243Z · score: 2 (2 votes) · EA · GW

Great post, Peter.

You helped change my perspective from my post yesterday.

I hadn't considered your point:

Meta Trap #5: Sometimes, Well Executed Object-Level Action is What Best Grows the Movement

It makes sense. For example, set an example as someone who thoughtfully and altruistically donates a significant portion of their income to charity, and many others may follow.

Also:

The more steps away from impact an EA plan is, the more additional scrutiny it should get.

This is a very important point. While I think it's quite possible to identify effective meta-level work that avoids your Meta Traps #1-#4, I think it's probably harder than most people (including myself) would initially think, due to many initial ideas falling into one or more of the meta traps.

Comment by williamkiely on Charity Redirect - A proposal for a new kind of Effective Altruist organization · 2015-08-24T18:22:46.463Z · score: 3 (3 votes) · EA · GW

Thanks, this is helpful.

I read your organization breakdown page (as well as several of the linked documents) and will be submitting an application for an internship in the next week. Hopefully I can do something to help out.

Comment by williamkiely on Charity Redirect - A proposal for a new kind of Effective Altruist organization · 2015-08-24T16:09:24.233Z · score: 2 (2 votes) · EA · GW

Okay, point taken. I don't mean to criticize GiveWell. Rather, I meant to point out that it seemed to me that they were more focused on the function of identifying top giving opportunities than the function of directing as much money as possible to said charities. Is this not true? Is GiveWell the best at both, or just the former?

Comment by williamkiely on Charity Redirect - A proposal for a new kind of Effective Altruist organization · 2015-08-24T15:59:05.883Z · score: 0 (0 votes) · EA · GW

Thanks.

Comment by williamkiely on Charity Redirect - A proposal for a new kind of Effective Altruist organization · 2015-08-24T15:42:30.239Z · score: 2 (2 votes) · EA · GW

Awesome, I'm glad to know someone is working on this. I'm definitely going to check it out and see if it makes sense for me to get involved.

Do you have any idea why it isn't widely believed in the EA movement that donating to Charity Science is better than donating to GiveWell's recommendations directly? (Or maybe most EAs do know this, and I just for some reason never heard people emphasizing this.)

Comment by williamkiely on How to get more EAs to connect in person and share expertise? · 2015-08-19T15:50:58.245Z · score: 4 (4 votes) · EA · GW

Cool, thanks. Some people just brought me up to 10 Karma, so I'm going to write a post on one idea tonight and publish it here.

Comment by williamkiely on How to get more EAs to connect in person and share expertise? · 2015-08-19T02:58:21.139Z · score: 9 (9 votes) · EA · GW

The 10 karma requirement to make your own posts on this forum makes it difficult for new people to share their own ideas here. Perhaps reducing it to 2 would be better.

I've been aware of this forum for a few months and have checked back a couple dozen times, but I still don't have 10 karma, because searching for posts where you can make insightful comments to earn 10 upvotes isn't actually that easy or quick.

Comment by williamkiely on How to get more EAs to connect in person and share expertise? · 2015-08-19T02:51:58.950Z · score: 8 (8 votes) · EA · GW

I just met several EAs in person for the first time last night after following the community online for over a year. Here's the process:

I was travelling and knew I'd be in the Chicago area for two weeks. Upon arrival I searched for Chicago EA groups on Facebook, found one and asked if there were any meetups happening. There weren't, but people were interested and we set one up (Facebook event). 8-10 people showed and we had some good discussions. Pretty simple.

Takeaways:

(1) Create EA Facebook group for your area if there isn't one already

(2) Join the EA Facebook group for your area

(3) Attend meetups when travelers or new people express interest in meeting up

Comment by williamkiely on Efective Altruism Quotes · 2015-08-03T20:04:02.924Z · score: 1 (1 votes) · EA · GW

When it comes to doing good, fat-tailed distributions seem to be everywhere. It’s not always true that exactly 80 percent of the value comes from the top 20 percent of activities—sometimes things are even more extreme than that, and sometimes less. But the general rule that most of the value generated comes from the very best activities is very common.

-- William MacAskill, Doing Good Better