Posts

Ways Frugality Increases Productivity 2019-06-25T21:06:19.014Z · score: 67 (41 votes)
What is the Impact of Beyond Meat? 2019-05-03T23:31:40.123Z · score: 25 (10 votes)
Identifying Talent without Credentialing In EA 2019-03-11T22:33:28.070Z · score: 31 (17 votes)
Deliberate Performance in People Management 2017-11-25T14:41:00.477Z · score: 30 (26 votes)
An Argument for Why the Future May Be Good 2017-07-19T22:03:17.393Z · score: 26 (26 votes)
Vote Pairing is a Cost-Effective Political Intervention 2017-02-26T13:54:21.430Z · score: 12 (14 votes)
Living on minimum wage to maximize donations: Ben's expenses in 2016 2017-01-29T16:07:28.405Z · score: 21 (21 votes)
Voter Registration As an EA Group Meetup Activity 2016-09-16T15:28:46.898Z · score: 4 (6 votes)
You are a Lottery Ticket 2015-05-10T22:41:51.353Z · score: 10 (10 votes)
Earning to Give: Programming Language Choice 2015-04-05T15:45:49.192Z · score: 3 (3 votes)
Problems and Solutions in Infinite Ethics 2015-01-01T20:47:41.918Z · score: 9 (8 votes)
Meetup : Madison, Wisconsin 2014-10-29T18:03:47.983Z · score: 0 (0 votes)

Comments

Comment by ben_west on How do you, personally, experience "EA motivation"? · 2019-08-17T17:32:39.579Z · score: 16 (9 votes) · EA · GW

I like this quote from the beginning of Strangers Drowning:

There is one circumstance in which the extremity of do-gooders looks normal, and that is war. In wartime — or in a crisis so devastating that it resembles war, such as an earthquake or a hurricane — duty expands far beyond its peacetime boundaries… In wartime, the line between family and strangers grows faint, as the duty to one’s own enlarges to encompass all the people who are on the same side. It’s usually assumed that the reason do-gooders are so rare is that it’s human nature to care only for your own. There’s some truth to this, of course. But it’s also true that many people care only for their own because they believe it’s human nature to do so. When expectations change, as they do in wartime, behavior changes, too.

In war, what in ordinary times would be thought weirdly zealous becomes expected… People respond to this new moral regime in different ways: some suffer under the tension of moral extremity and long for the forgiving looseness of ordinary life; others feel it was the time when they were most vividly alive, in comparison with which the rest of life seems dull and lacking purpose.

In peacetime, selflessness can seem soft — a matter of too much empathy and too little self-respect. In war, selflessness looks like valor. In peacetime, a person who ignores all obligations, who isn’t civilized, who does exactly as he pleases — an artist who abandons duty for his art; even a criminal — can seem glamorous because he’s amoral and free. But in wartime, duty takes on the glamour of freedom, because duty becomes more exciting than ordinary liberty…

This is the difference between do-gooders and ordinary people: for do-gooders, it is always wartime. They always feel themselves responsible for strangers — they always feel that strangers, like compatriots in war, are their own people. They know that there are always those as urgently in need as the victims of battle, and they consider themselves conscripted by duty.
Comment by ben_west on My recommendations for RSI treatment · 2019-07-10T22:39:34.589Z · score: 2 (1 votes) · EA · GW

Is there a non-paywalled version or a summary you could share? I'm guessing this is the tool you are talking about? https://www.amazon.com/TheraBand-Tendonitis-Strength-Resistance-Tendinitis/dp/B07NX7JXXH

Comment by ben_west on Ways Frugality Increases Productivity · 2019-07-08T14:58:46.980Z · score: 2 (1 votes) · EA · GW

That sounds right to me.

Comment by ben_west on Ways Frugality Increases Productivity · 2019-07-08T14:55:45.598Z · score: 2 (1 votes) · EA · GW

Thanks for the clarification! I agree that there are lots of ways that spending money on yourself can make you more productive, and a gym membership seems plausibly like one of those for you. I'm just pointing out that not all ways of spending money on yourself improve your productivity (which is a claim you might not endorse, but seems to have gotten some traction in EA).

Comment by ben_west on Ways Frugality Increases Productivity · 2019-06-26T17:37:54.709Z · score: 6 (4 votes) · EA · GW

This is great. Much more eloquent than my post.

Comment by ben_west on Ways Frugality Increases Productivity · 2019-06-26T17:36:58.526Z · score: 33 (13 votes) · EA · GW
Arguments 2 and 3 mostly seem like arguments against having a life outside of work - am I reading that right?


Yes, if you want to maintain flexibility to jump on new projects if exciting opportunities arise, you probably shouldn't have much of a life outside of work.

(Note: I personally do have a fairly involved life outside of work, and am fine with that trade-off. I'm just pushing back against the claim that no trade-off exists.)

Comment by ben_west on Information security careers for GCR reduction · 2019-06-24T23:32:46.838Z · score: 19 (10 votes) · EA · GW

Thanks Claire and Luke for writing this!

I have hired security consultants a couple of times, and found that it was challenging, but within the normal limits of how challenging hiring always is. If you want someone to tell you the best practices for encrypting AWS servers, or even how to protect some unusual configuration of AWS services, my guess is that you can probably find someone (although maybe you will be paying them $200+/hour).

My assumption is that the challenge you are pointing to is more about finding people who can e.g. come up with novel cryptographic methods or translate game theoretic international relations results into security protocols, which seems different from (and substantially harder than) the work that most "information security" people do.

Is that accurate? The way you described this as a "seller's market" etc. makes me unsure if you think it's challenging to find even "normal"/junior info sec staff.

Comment by ben_west on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2019-06-21T21:39:43.939Z · score: 8 (3 votes) · EA · GW

Thanks for writing this up! I'm really excited to see such a detailed model of nuclear winter risk. As Kit mentioned, the Guesstimate model is easy to understand and play with.

Comment by ben_west on Drowning children are rare · 2019-05-30T19:50:03.686Z · score: 14 (8 votes) · EA · GW
I'd be very interested to know if there are posts that both criticize something EA in a cogent way as this post does and don't receive large numbers of downvotes.

Halstead's criticism of ACE seems like one example.

Comment by ben_west on Drowning children are rare · 2019-05-30T15:52:29.458Z · score: 9 (5 votes) · EA · GW

Most of the comments on the EA Forum point out serious factual errors in the post (or link to such explanations). The LW comments are more positive. The simpler explanation to me is that the issues with his posts were hard to find, and unsurprisingly people on the EA Forum are better at finding them because they have thought more about EA.

Comment by ben_west on EA Meta Fund Grants - March 2019 · 2019-05-30T15:35:43.552Z · score: 29 (10 votes) · EA · GW

Thanks for writing this up!

Question about One for the World: the average American donates about 4% of their income to charity. Given this, asking people to pledge 1% seems a bit odd – almost like you are asking them to decrease the amount they donate.

One benefit of OFTW is that they are pushing GiveWell-recommended charities, but this seems directly competitive with TLYCS, which generally suggests people pledge 2-5% (the scale adjusts based on your income).

It's also somewhat competitive with the Giving What We Can pledge, which is a cause-neutral 10%.

I'm curious: what do you see as the benefits of OFTW over these alternatives?

Comment by ben_west on Ingredients for creating disruptive research teams · 2019-05-22T22:18:12.199Z · score: 13 (6 votes) · EA · GW

Thanks so much for writing this! It looks like a really thorough investigation, and you found more concrete suggestions than I would've expected.

Regarding psychological safety: Google also found that psychological safety was the strongest predictor of success on their teams, and has created some resources to help foster it that you might be interested in.

Comment by ben_west on A Framework for Thinking about the EA Labor Market · 2019-05-15T16:07:55.308Z · score: 6 (3 votes) · EA · GW

Cool. I think that is a helpful segmentation.

But are you saying a) there are lots of people in cohort 2 who are great candidates so we should go after them, or b) there are not very many people in cohort 2, but because they are from some underrepresented demographic we should still go after them?

Comment by ben_west on A Framework for Thinking about the EA Labor Market · 2019-05-14T15:34:21.810Z · score: 4 (2 votes) · EA · GW
I think the EA community has gotten caught up in the observation that “there are a lot of smart people willing to work for way below market” and lost track of the question of “what’s the right way to structure EA compensation to maximize impact?”

Not sure I fully understand this. You're saying something like: "it might be true that increasing wages will produce only a small increase in the number of candidates, but those new candidates are unusually impactful (because they are from underrepresented groups), so it's still worth doing"?

Comment by ben_west on A Framework for Thinking about the EA Labor Market · 2019-05-13T23:30:46.449Z · score: 8 (4 votes) · EA · GW

Thanks for writing this, Jon!

"But I believe many EAs are resistant to raising salaries as a way to close talent gaps because they conflate willingness to work for low pay with fit for a job"

The claim I have heard most frequently is not this but rather that labor supply is inelastic below market rates. E.g. there are people who really want to work for you (because of your mission or the prestige or whatever), and to them being paid 60% market rate is basically the same as being paid 90% market rate. So raising your compensation from 60% to 90% market rate won't actually attract more candidates.

(I modified your picture here to show this – you can see that the quantity supplied Qs is very close to the equilibrium quantity supplied Qs*.)

I don't know if this is actually true (it seems simplistic, at the least), but it seems consistent with everything you've written above yet still doesn't imply EA organizations should focus on raising salaries.
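
To make the shape of that claim concrete, here is a toy formalization (the kinked functional form and the symbols $Q_0$, $w^*$, and $\varepsilon$ are stand-ins I made up for illustration, not anything from your model):

$$
S(w) =
\begin{cases}
Q_0 & \text{if } w < w^* \\
Q_0 + \varepsilon\,(w - w^*) & \text{if } w \ge w^*
\end{cases}
$$

On this picture, mission-driven candidates supply a roughly fixed quantity $Q_0$ at any below-market wage, so moving pay from $0.6\,w^*$ to $0.9\,w^*$ travels along the flat segment and attracts no one new; only offers at or above the market wage $w^*$ reach the upward-sloping segment.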

Comment by ben_west on Non-Profit Insurance Agency · 2019-05-13T22:57:01.193Z · score: 5 (3 votes) · EA · GW

Thanks for considering this! I believe you are considering what I would call "earning to give" as an insurance agent. 80,000 Hours has more info on earning to give, if you have not already seen it: https://80000hours.org/articles/earning-to-give/

Building on Gordon's answer: LLCs are pass-through entities, which means the business's income is taxed as your personal income, and you can deduct donations up to half of that income. You could consider something more elaborate (e.g. a corporation owned by a nonprofit foundation, like Newman's Own), but that's probably unnecessarily complex.
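
As a toy illustration of that cap (the 50% limit from above, with round numbers I invented for the example):

$$
\text{deductible} = \min(\text{donations},\ 0.5 \times \text{income}),
\qquad
\min(\$60{,}000,\ 0.5 \times \$100{,}000) = \$50{,}000
$$

So on $100,000 of pass-through income, $60,000 of donations would only yield a $50,000 deduction that year; my understanding is that the excess can generally be carried forward, but check with an accountant.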

Comment by ben_west on What is the Impact of Beyond Meat? · 2019-05-04T17:36:26.246Z · score: 9 (3 votes) · EA · GW

I didn't realize they had discontinued the chicken products. That's too bad.

Comment by ben_west on Legal psychedelic retreats launching in Jamaica · 2019-04-18T18:25:05.946Z · score: 10 (6 votes) · EA · GW
Weaker evidence shows that psychedelic experiences positively predict liberal and anti-authoritarian political views, trait openness

Increasing openness does not seem uniformly good; e.g. SSC wrote a speculative blog post suggesting that psychedelic use may make one "open" to pseudoscience, conspiracy theories, etc. I'm curious whether you have thoughts on this.

Comment by ben_west on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-19T18:12:22.485Z · score: 7 (4 votes) · EA · GW

Thanks for thinking of this! My experience is that, in both for-profit and nonprofit spaces, the limiting constraint is not knowledge that fundable projects exist. Rather, it's the lack of due diligence on the projects (and people who can do that sort of DD).

In for-profit angel investing, usually one investor will take the "lead", meaning that they do a full examination of the startup: speak with customers, audit the financials, do background checks on the founders, etc. Other investors will invest conditional on the lead signing off. Groups usually have a standing preference to lead or not; some invest in lawyers, accountants, etc. to help them do this due diligence, whereas others prefer to simply defer to the lead investors.

I'm not aware of any entity similar to a lead investor in the EA community. People sometimes suggest just following on with OpenPhil (i.e. only donating to organizations which OpenPhil grants to) – this doesn't seem unreasonable, but it does mean that many organizations will be left unfunded.

Comment by ben_west on Identifying Talent without Credentialing In EA · 2019-03-14T00:09:13.852Z · score: 2 (1 votes) · EA · GW
Individuals at companies are bad at hiring for expected performance

Fair – an implicit assumption of my post is that markets are efficient. If you don't think so, then what I had to say is probably not very relevant.

Comment by ben_west on Identifying Talent without Credentialing In EA · 2019-03-13T17:46:27.685Z · score: 4 (5 votes) · EA · GW

At basically every company in the world, someone who comes in with a 10-year track record of success is going to be far more likely to be hired than someone fresh out of college, even if they are equally skilled. It would be pretty surprising to me if you were able to outperform the hiring practices of all these companies – do you have a sense of why you think you can? Many orgs use work-sample tests, so that seems like too simple an explanation.

Comment by ben_west on Identifying Talent without Credentialing In EA · 2019-03-12T00:33:15.389Z · score: 5 (2 votes) · EA · GW

Thanks for the feedback! I meant "credentials" to include things like work experience, and perhaps should have used more examples like that to be clear.

Comment by ben_west on Animal-Welfare Economic Research Questions · 2018-12-26T21:16:50.068Z · score: 3 (2 votes) · EA · GW

You might be interested in The Role of Economics in Achieving Welfare Gains for Animals. Here is the abstract:

The demand for animal products and services is a powerful economic force in society, and multibillion-dollar industries are organized around this demand. These industries often face increased costs by improving animal welfare and are quick to use economic arguments against proposed welfare reforms (see sidebar on page 169). These arguments, while often specious, can influence consumers, voters, and policy makers. Citizens are less likely to support animal welfare reforms they’ve been told will double their shopping bill or impoverish family farmers.
Animal welfare advocates cannot respond to these economic arguments with moral rhetoric alone. Instead, non-governmental organizations (NGOs) must challenge the economic assumptions, calculations, and conclusions of animal industries and produce reliable economic arguments of their own. To do so they should understand some basic economic principles, which we review below, and, when possible, enlist the help of economists.

Fearing, J., & Matheny, G. (2007). The role of economics in achieving welfare gains for animals. In D.J. Salem & A.N. Rowan (Eds.), The state of the animals 2007 (pp. 159-173). Washington, DC: Humane Society Press.

Comment by ben_west on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-20T01:21:53.317Z · score: 34 (16 votes) · EA · GW

Thanks Ben! Great as always. One quibble:

As such, in general I do not give full credence to charities saying they need more funding because they want more than a year of runway in the bank. A year’s worth of reserves should provide plenty of time to raise more funding.

  1. 12 months of runway means that an organization with an annual fundraising drive will be near bankruptcy once per year, every year. That seems bad.
  2. I agree that most organizations should be able to raise money in less than 12 months if they are 100% focused on raising money, but not having to worry about fundraising seems pretty valuable to me. An 18-month runway means, for example, that if your annual fundraising drive goes poorly you still have 6+ months to find some solution (see the arithmetic below).
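
To spell out that arithmetic (assuming a single annual drive is the only source of funding, per the example above):

$$
\text{buffer after a failed drive} = \text{runway} - 12\ \text{months},
\qquad 12 - 12 = 0, \qquad 18 - 12 = 6.
$$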
Comment by ben_west on Why I'm focusing on invertebrate sentience · 2018-12-11T22:48:44.377Z · score: 5 (4 votes) · EA · GW

When I bring this up with EAs who are focused on AI safety, many of them suggest that we only need to get AI safety right and then the AI can solve the question of what consciousness is. This seems like a plausible response to me. However, there are some possible future scenarios where this might not be true. If we have to directly specify our values to a superintelligent AI, rather than it learning the value more indirectly, we might have to specify a definition of consciousness for it. It might also be good to have a failsafe mechanism that would prevent an AI from switching off before implementing any scenario that involved a lot of suffering, and to do this we might have to roughly understand in advance which beings are and are not conscious.

It seems like there is some asymmetry here, as is common with extinction risk arguments: if we think that we will eventually figure out what consciousness is, then, as long as we don't go extinct, we will eventually create positive AGI. Whereas, if we focus on consciousness and then AGI kills everyone, we never get to a positive outcome.

I think the original argument works if our values get "locked in" once we create AGI, which is not an unreasonable thing to assume, but also doesn't seem guaranteed. Am I thinking through this correctly?

Comment by ben_west on From humans in Canada to battery caged chickens in the United States, which animals have the hardest lives: results · 2018-11-29T22:34:49.196Z · score: 5 (4 votes) · EA · GW

This is really cool! One thing which stuck out to me: you list the same number of bugs as factory-farmed fish. Is that really correct? I would have thought that there would be many more bugs than fish.

Comment by ben_west on Effective Altruism Making Waves · 2018-11-21T19:05:37.816Z · score: 2 (1 votes) · EA · GW

The emphasis for me has been a race to make short term gains whilst medium to longer term projects have been marginalised or just not considered

ACE recently did an analysis of how resources are allocated in the farmed animal movement. You can see from figure 7 that ACE funding goes more toward building alliances and capacity (the "long-term" parts of their ontology) than funding in the movement more generally does.

(ACE argues that the amount is still too small. But it seems weird to criticize EAA for that, since ACE is doing better than the rest of the movement, and seems to be planning to do even more.)

Comment by ben_west on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-11-07T22:36:02.308Z · score: 0 (0 votes) · EA · GW

I personally feel much more funding constrained / management capacity constrained / team culture “don’t grow too quickly” constrained than I feel “I need more talented applicants” constrained.

I feel like part of the definition of "talented applicant" is that they don't stretch your management capacity, don't mess up your culture, etc. For example, if there were someone who had volunteered at Rethink for a while, whom you trusted a lot, and who knew your projects intimately and could hit the ground running, my guess is that you would value that person much more highly than someone with "general" competency.

And the next level up would be candidates who not only don't stretch your management capacity or culture but actually add to it.

My experience is that there are lots of people who are good at research or programming or whatever but fewer who have those skills and can add value to the organization without subtracting from other limited resources.

Comment by ben_west on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T18:49:53.603Z · score: 2 (2 votes) · EA · GW

AI designers, even if speciesist themselves, might nonetheless provide the right apparatus for value learning such that resulting AI will not propagate the moral mistakes of its creators

This is something I also struggle with in understanding the post. It seems like we need:

  1. AI creators can be convinced to expand their moral circle
  2. Despite (1), they do not wish to be convinced to expand their moral circle
  3. The AI follows this second desire to not be convinced to expand their moral circle

I imagine this happening with certain religious things; e.g. I could imagine someone saying "I wish to think the Bible is true even if I could be convinced that the Bible is false".

But it seems relatively implausible with regard to MCE?

Particularly given that the AI safety community talks a lot about things like CEV, it is unclear to me whether there is really a strong trade-off between MCE and AIA.

(Note: Jacy and I discussed this via email and didn't really come to a consensus, so there's a good chance I am just misunderstanding his argument.)

Comment by ben_west on EA Survey 2017 Series: Cause Area Preferences · 2018-01-10T15:10:54.826Z · score: 0 (0 votes) · EA · GW

Thanks. I was hoping that there would be aggregate results so I don't have to repeat the analysis. It looks like maybe that information exists elsewhere in that folder though? https://github.com/peterhurford/ea-data/tree/master/data/2017

Comment by ben_west on EA Survey 2017 Series: Cause Area Preferences · 2018-01-09T18:08:49.006Z · score: 0 (0 votes) · EA · GW

Is it possible to get the data behind these graphs from somewhere? (i.e. I want the numerical counts instead of trying to eyeball it from the graph.)

Comment by ben_west on Four Organizations EAs Should Fully Fund for 2018 · 2017-12-21T23:33:40.152Z · score: 2 (2 votes) · EA · GW

I still think both LEAN and SHIC have a substantial risk of not being cost-effective, but I’m far more confident that there is sufficient analytical work going on now that failure would be detected and learned from. Given the amount of information they’re generating, I’m confident we’ll all learn something important even if either (or both) projects fail

Could you say more about this? When I look at their metrics, it's a little unclear to me what failure (or success) would look like. In extremis, every group rating LEAN as ineffective (or very effective) would be an update, but it's unclear to me how we would notice smaller changes in feedback and translate them into counterfactual impact on "hit" group members.

Similarly, for SHIC: if they somehow found a high school student who becomes a top-rated AI safety researcher, or something similar, that would be a huge update on the benefit of that kind of outreach. But the chances of that seem small, so it's unclear to me what we should expect to learn if they find that students make some moderate changes in their donations but nothing super-high-impact.

Comment by ben_west on How you can save expected lives for $0.20-$400 each and reduce X risk · 2017-12-21T22:54:17.469Z · score: 2 (2 votes) · EA · GW

Thanks for writing this! This is a very interesting idea.

Do you have thoughts on "learning" goals for the next year? E.g. is it possible you could discover that finding a certain valuable food source takes significantly more or less effort than expected? Or could you learn of a non-EA funding source (e.g. government grants) that would make you significantly more impactful? I'm mostly interested in your $10,000 order of magnitude, if that's relevant.

Also: do you think your research could negatively impact animal welfare in the event that a global catastrophe does not occur? E.g. could you recommend a change to fishing practices which is implemented prior to a catastrophe and which increases the number of farmed fish or changes their quality of life?

Comment by ben_west on CFAR's end-of-year Impact Report and Fundraiser · 2017-12-21T22:53:05.267Z · score: 5 (5 votes) · EA · GW

Thanks Anna! A couple of questions:

  1. If I'm understanding your impact report correctly, you identified 159 IEI alumni, and ~22 very high impact alumni whose path was determined to have been "affected" by CFAR.
     1.1. Can you give me an idea of what that implies for the upcoming year? E.g. does that mean that you expect to have another 22 very high impact alumni affected in the next year?
     1.2. Can you say more about what the threshold was for determining whether or not CFAR "affected" an alumnus? Was it just that they said there was some sort of counterfactual impact, or was there a stricter criterion?
  2. You mention reducing the AI talent bottleneck: is this because you think that the number of people you moved into AI careers is a useful proxy for your ability to teach attendees rationality techniques, or because you think this is/should be the terminal goal of CFAR? (I assume the answer is that you think both are valuable, but I'm trying to get a sense for the relative weighting.)
  3. Do you have "targets" for 2018 impact metrics? Specifically: you mentioned that you think your good done is linear in donations – could you tell us what the formula is?
     3.1. Or more generally: could you give us some insight into the value of information we could expect to see from a donation? E.g. "WAISS workshops will either fail or succeed spectacularly, so it will be useful to run some and see."
Comment by ben_west on Effective Altruism London - Strategic Plan & Funding Proposal 2018 · 2017-10-29T20:36:11.743Z · score: 5 (5 votes) · EA · GW

Thanks for sharing this! You've given me some ideas for the Madison group, and I look forward to hearing about your progress.

Comment by ben_west on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T15:49:25.001Z · score: 26 (28 votes) · EA · GW

I prefer to play the long game with my own investments in community building, and would rather for instance invest in someone reasonably sharp who has a track record of altruism and expresses interest in helping others most effectively than in someone even sharper who reasoned their way into EA and consumed all the jargon but has never really given anything up for other people

I believe that Toby Ord has talked about how, in the early days of EA, he had thought that it would be really easy to take people who are already altruistic and encourage them to be more concerned about effectiveness, but hard to take effectiveness-minded people and convince them to do significant altruistic things. However, once he actually started talking to people, he found the opposite to be the case.

You mention "playing the long game" – are you suggesting that the "E first, A second" people are easier to get on board in the short run, but less dedicated and therefore in the long run "A first, E second" folks are more valuable? Or are you saying that my (possibly misremembered) quote from Toby is wrong entirely?

Comment by ben_west on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T15:39:09.977Z · score: 7 (9 votes) · EA · GW

Thank you for the interesting post, Kelly. I was interested in your comment:

people tend to think that women are more intuitively-driven and less analytical than men, which does not seem to be borne out and in fact the opposite may be more likely

And followed the link through to Forbes. I think the part you are citing is this:

But research shows that women are just as data-driven and analytical as men, if not more so. In a sample of 32 studies that looked at how men and women thought about a problem or made a decision, 12 of the studies found that women adopted an analytical approach more often than men, meaning that women systematically turned to the data, while men were more inclined to go with their gut, hunches, or intuitive reactions. The other 20 studies? They found no difference between men and women’s thinking styles.

Unfortunately, the link there is broken. Do you know what the original source is?

Comment by Ben_West on [deleted post] 2017-10-19T22:39:02.273Z

Thanks Milan! Do you know more about how they defined "relationships" ("altruism")? Given that they think "relationships" and "altruism" are synonymous, it seems possible that the definition they use may not correspond to what people on this forum would call "altruism".

Comment by Ben_West on [deleted post] 2017-10-18T13:59:12.503Z

Do you know how they measured altruism? It seems like maybe they are using "altruism" as a synonym for the "relationships" questionnaire?

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-08-20T17:25:42.628Z · score: 1 (1 votes) · EA · GW

Thanks Brian!

I think you are describing two scenarios:

  1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that these post-humans won't have positive states. Therefore, we might expect some (perhaps very strong) regression towards neutral moral value.
  2. Post-humans will have some sort of abilities which are influenced by current humans’ values. In this case, it seems like these post-humans will have good lives (at least as measured by our current values).

This still seems to me to be asymmetric – as long as you have some positive probability on scenario (2), isn't the expected value greater than zero?

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-08-16T21:27:41.603Z · score: 0 (0 votes) · EA · GW

I'm curious if you think that the "reflective equilibrium" position of the average person is net negative?

E.g. many people who would describe themselves as "conservationists" probably also think that suffering is bad. If they moved into reflective equilibrium, would they give up the conservation or the anti-suffering principles (where these conflict)?

Comment by ben_west on EAGx Relaunch · 2017-08-13T17:55:46.404Z · score: 1 (1 votes) · EA · GW

Thanks!

For future readers, here are a couple of links I think Roxanne was referring to:

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-07-30T19:38:53.443Z · score: 1 (1 votes) · EA · GW

Yeah, I think the point I'm trying to make is that it would require effort for things to go badly. This is, of course, importantly different from saying that things can't go badly.

Comment by ben_west on EAGx Relaunch · 2017-07-26T21:38:20.491Z · score: 0 (0 votes) · EA · GW

Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa.

Could you say more about what kind of (smaller, local, non-EAGx) events CEA would like to see/would be interested in providing support for?

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-07-23T13:54:18.143Z · score: 4 (4 votes) · EA · GW

Thanks for the response! But is that true? The examples I can think of seem better explained by a desire for power etc. than by suffering as an end goal in itself.

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-07-21T23:08:32.218Z · score: 3 (3 votes) · EA · GW

Yeah, it would change the meaning.

My assumption was that, if things monotonically improve, then in the long run (perhaps the very, very long run) we will get to net positive. You are proposing that we might instead asymptote at some negative value, even though we are still always improving?

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-07-20T23:06:43.758Z · score: 6 (6 votes) · EA · GW

Yes, I agree. More generally: the more things consciousness (and particularly suffering) are useful for, the less reasonable point (3) above is.

Comment by ben_west on An Argument for Why the Future May Be Good · 2017-07-20T14:49:49.290Z · score: 5 (5 votes) · EA · GW

Thanks for the response!

  1. It would be surprising to me if learning required suffering, but I agree that if it does then point (3) is less clear.
  2. Good point! I rewrote it to clarify that there is less net suffering.
  3. Where I disagree with you the most is your statement "there's not much to do if the future will 'automatically' be good." Most obviously, we have the difficult (and perhaps impossible) task of ensuring the future exists at all (maxipok).
Comment by ben_west on Applications are open for EA Global Boston · 2017-04-24T21:59:15.092Z · score: 0 (0 votes) · EA · GW
  1. Are there blocks of rooms reserved at some hotel?
  2. Are there "informal" events planned around the official event? (I.e. should everyone plan to land Thursday night and leave Sunday night, or would it make sense to arrive earlier/stay later?)

Thanks!

Comment by ben_west on EA Funds Beta Launch · 2017-03-13T15:48:09.921Z · score: 4 (4 votes) · EA · GW

Is it possible to donate by transferring money already contributed to a different donor-advised fund?

I generally put money into my own DAF at a time that is convenient for tax purposes, and only consider grants later. Mine is through Fidelity, if that's relevant.