Comment by dpiepgrass on Examples of loss of jobs due to Covid in EA · 2020-08-29T16:06:16.101Z · score: 10 (4 votes) · EA · GW

I lost my job at an oil & gas software company, so now I have less money to donate to clean energy and other causes. (It's not one of the worst effects of COVID, I'll grant you.)

Googling around, I'm surprised how hard it is to find articles about job losses that were not published in March or April. Oilfield services were hardest-hit after April, whereas restaurants, leisure & hospitality were hit hardest in March and April. The best info I could find by sector was this BLS report ("nonprofit" was not among the categories), and this chart shows that unemployment in the U.S. jumped from 3.5% in February to 14.7% in April, then fell steadily to 10.2% in July.

Comment by dpiepgrass on Geographic diversity in EA · 2020-08-21T17:40:56.806Z · score: 1 (1 votes) · EA · GW

I think the calculator you mentioned is kinda... broken. I notice that the local cost of living is ignored and no recommendation is given for incomes under $40,000 USD (or rather the recommendation is "we recommend giving whatever you feel you can afford without undue hardship"). A "well-paying" job in a LMIC is usually below $40,000/year. My highest gross income ever was about $100,000 CAD, and for this they recommend a 1% donation. Nah, I'll stick with 10%+ thanks. You have to make over $83,000 USD for the recommendation to inch past 1%.

Comment by dpiepgrass on Geographic diversity in EA · 2020-08-21T17:26:40.359Z · score: 2 (2 votes) · EA · GW

I'm thinking that EAs in low-income countries are relatively better off working at local nonprofits, while EAs in high-income countries are relatively more effective earning-to-give. Does that sound right to you?

However this does require that a suitable nonprofit job be available in your country! I just checked the 80000 Hours job board and found that the total number of jobs in the "biggest impact" category in Low-Mid Income Countries was 15, versus 336 jobs in the (less populated!) first world. It could well be that there are fewer EAs in LMICs, but probably not 22 times fewer.

Comment by dpiepgrass on Effective Altruism Quest · 2020-08-21T16:16:01.468Z · score: 1 (1 votes) · EA · GW

Yeah, it starts out looking too much like an ad for Gitcoin, when it would be better to describe who created the whatever-it-is, for whom it was created, and what people will get out of it.

  • My first thought is that Gitcoin would have been better introduced later, and as a clause: instead of a "Gitcoin" section saying "Gitcoin is on a mission to grow open source...", write a paragraph like "we built the quiz on Gitcoin, which is on a mission to grow open source...", because readers can't see why they should read about Gitcoin in isolation.
  • It is odd that there is a section explaining what EA is: we're on the EA Forum, where readers normally already know what EA is.
  • Why should people who are not Gitcoin members care about kudos?
  • Why do we need a Github account, is this meant for programmers?
  • Why do we need a browser extension?
  • The post needs to be structured so that readers quickly get a sense of what they would gain from reading the whole thing, and every instruction needs to come with a motivation the reader can understand.

Upvoted because I think a negative score is too harsh... yet I don't understand this well enough to be motivated to install a browser extension. Based on the existence of a section to explain what EA is, I assume this is designed for people who are new to EA, which I am not, so I deduce that I would get nothing out of it.

Comment by dpiepgrass on Improving local governance in fragile states - practical lessons from the field · 2020-08-21T15:51:54.306Z · score: -1 (2 votes) · EA · GW

I assume the downvoters thought the jargon above that line was easy to understand. Congratulations on your IQ, but I promise not everyone is like you.

Comment by dpiepgrass on Improving local governance in fragile states - practical lessons from the field · 2020-08-14T02:47:59.738Z · score: -3 (4 votes) · EA · GW
I am now switching to academic jargon, so this language can integrate directly in my paper

Are you kidding? The jargon was barely comprehensible before this point.

Comment by dpiepgrass on EA Focusmate Group Announcement · 2020-08-01T02:54:19.613Z · score: 1 (1 votes) · EA · GW

Okay, this is weird... how is it that I can pick a person on the Calendar and book with them, but I can't create my own "unmatched" session? If I create a session in an empty slot, I am silently and automatically matched with "Gonzalo P".

Comment by dpiepgrass on EA Focusmate Group Announcement · 2020-08-01T02:18:49.161Z · score: 2 (2 votes) · EA · GW

I'm working on a software project that is quite esoteric - it is only meant to be understood by other software developers. I have signed up to Focusmate but you can't pick a partner in a narrowly defined way like "must be a programmer" (especially as Focusmate has no such information about users). But Michael A is working on "optical computers", I guess I'll give him a try.

Comment by dpiepgrass on Food Crisis - Cascading Events from COVID-19 & Locusts · 2020-06-01T01:13:18.010Z · score: 1 (1 votes) · EA · GW

Um... look, I'm not happy about it either, but is that really a reason to downvote?

Comment by dpiepgrass on Food Crisis - Cascading Events from COVID-19 & Locusts · 2020-05-12T16:00:23.338Z · score: 1 (3 votes) · EA · GW

Saw a WFP donation ad on Facebook, which reminded me to check the FAO, but it still does not appear to have put out a call for funding.

Comment by dpiepgrass on Thoughts on electoral reform · 2020-03-18T05:12:08.029Z · score: 3 (2 votes) · EA · GW

I've said for years that if there is a referendum / ballot initiative to switch from first-past-the-post to anything else, always say yes. Anything is better than first-past-the-post. Having said that... I have opinions.

First of all, before choosing a voting system you have to know whether there will be a single winner or many. Single-winner voting systems should not be used to run multi-winner elections, because single-winner district-based methods cannot produce a fair outcome; gerrymandering is always possible. These methods also pretty reliably magnify the power of large parties over small parties, which has always frustrated me (I consistently dislike large parties as well as most smaller parties, and feel like there is no one to represent me). Multi-winner elections should use multi-winner voting systems!

Naturally, my favorite systems are the one I designed, Simple Direct Representation, and the one that inspired it, Direct Representation, but since these systems will never happen, I'd recommend good old-fashioned Proportional Representation, Mixed-Member Proportional, STV, or any other proportional system that seems politically viable. Sometimes at night I dream of a meta-voting system where a country splits its legislature between two voting systems, and during every election there's a vote for which one people like better, which adjusts the relative influence of each system, and then... but never mind, no one would vote for it: it's too democratic.

As for single-winner systems, I rank them clearly in this order:

1. Score voting (a.k.a. range voting, cardinal voting), where each candidate is rated on a scale, e.g. 0 to 5 stars like the old Netflix. (I was puzzled that Netflix killed off star ratings; they seemed to produce more accurate and meaningful recommendations than the new up/down system. The reason given for the change was not that the system didn't work well, for it worked quite well, but that people didn't understand it. If it were up to me I'd focus on helping people understand it rather than scrapping it.)

2. Condorcet methods, e.g. Ranked Pairs, which are based on preferential ballots (candidates listed in order of preference). A Condorcet method looks at each pair of candidates in isolation, with respect to all the ballots, and elects the candidate that wins a majority in every pairing against every other candidate. The problem is that there is not always a "Condorcet winner": with three candidates A, B, and C, it can happen that A beats B, B beats C, and C beats A. So a Condorcet system must also specify how to resolve such cycles. Ignorant people often promote "the preferential ballot" as a voting system, but a preferential ballot is just a ballot, not a voting system. IRV is far more popular than Condorcet, but it seems strictly worse: it has no underlying mathematical basis, has somewhat unstable behavior, and fails the monotonicity criterion. Also, I believe voters should be allowed to rank two candidates as equal (no preference between them) or "no opinion"; Condorcet can support such features, while IRV cannot.
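For concreteness, here's a minimal sketch of the pairwise comparison just described (the ballot format and function name are my own illustration, not any standard implementation):

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other candidate head-to-head
    by a strict majority, or None when a cycle (or tie) prevents it.
    Each ballot is a list of candidates, most preferred first."""
    def prefers(ballot, a, b):
        # Lower index = higher preference; unranked candidates count as last,
        # so leaving two candidates off a ballot effectively ranks them equal.
        ia = ballot.index(a) if a in ballot else len(ballot)
        ib = ballot.index(b) if b in ballot else len(ballot)
        return ia < ib

    for c in candidates:
        # c wins only if a strict majority prefers c in every pairing
        if all(2 * sum(prefers(b, c, other) for b in ballots) > len(ballots)
               for other in candidates if other != c):
            return c
    return None  # no Condorcet winner, e.g. A beats B, B beats C, C beats A
```

With ballots [A,B,C]×4, [B,C,A]×3, [C,B,A]×2 this returns B; with the classic three-ballot cycle it returns None, which is exactly the conflict a full Condorcet method like Ranked Pairs must then resolve.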

I used to prefer Condorcet, probably because I learned about it first and liked the intuitive idea that "if a candidate is preferred by a majority of voters over all others, that candidate should win." I changed my mind for the following reasons:

1. Range Voting, as well as its simpler cousin Approval Voting, allows the outcome of an election to be measured numerically, which lets voters understand the popularity of candidates, which is relevant in future elections. You can say things like "minor-party candidate M had an average rating just one point behind the winner" or "the winner of the election had a lower average score than any other in history". Condorcet does not allow this. If somebody came up with a way to turn Condorcet or IRV results into simple numbers, somebody else could come up with a different way to do it, allowing confusing, competing numerical narratives about the results.

2. Score Voting allows more nuance. I can say "I like X a little more than Y, but I like Y a lot more than Z".

3. Score Voting works far better in case the number of candidates is large. If there are 30 candidates, putting them in a single order is fairly impractical and burdensome for the voter unless ties are allowed.

4. Tallying results is easier with Score voting than Condorcet (though not as easy as Approval). Note that with computers we can calculate outcomes with an arbitrarily complex method, but computers can be hacked, so manual counting remains relevant.

5. If you wanted to know how happy people were with the outcome of an election, how might you ask them? "On a scale of 1 to 10, how happy are you with the outcome of the election?" That's asking for a score! Score voting simply turns this question into ballot form, so that if people answer honestly, it will maximize the average answer to the happiness question. A criticism of Score voting might go: "if you raise the rating of a less-preferred candidate L on your ballot, you could cause L to beat your preferred candidate P whom you rated higher". But if this happens because the overall satisfaction of all voters is collectively higher, that's a fine outcome. I'm open to hearing about ways the Score voting system could be gamed, but such gaming is only interesting if other systems are not similarly vulnerable. (Edit: aha, here's the site where I learned about this idea.) The one "game" we can count on is something I'll call "spreading", where we spread out our true opinion across the ballot's full range: if there are 3 candidates and my happiness would be 4/10 if A wins, 5/10 if B wins, and 6/10 if C wins, I will spread this out to 0/10 for A, 5/10 for B, and 10/10 for C. But every proposed voting system has something analogous to this.
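A minimal sketch of both the tallying and the "spreading" normalization (the 0-to-10 scale and dict-based ballots are illustrative assumptions, not any official spec):

```python
def score_winner(ballots):
    """ballots: list of {candidate: score} dicts, scores on a 0-10 scale.
    The winner is the candidate with the highest average score."""
    totals, counts = {}, {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
            counts[cand] = counts.get(cand, 0) + 1
    return max(totals, key=lambda c: totals[c] / counts[c])

def spread(true_utilities):
    """'Spreading': rescale honest utilities so the least-liked candidate
    gets 0 and the most-liked gets 10, using the ballot's full range."""
    lo, hi = min(true_utilities.values()), max(true_utilities.values())
    if hi == lo:
        return {c: 5 for c in true_utilities}
    return {c: round(10 * (u - lo) / (hi - lo))
            for c, u in true_utilities.items()}
```

For instance, spread({'A': 4, 'B': 5, 'C': 6}) gives {'A': 0, 'B': 5, 'C': 10}, the exact transformation from the example above.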

Approval voting is technically a version of Score voting that gathers relatively little information from voters. Its virtues are that it is extremely simple and easy to implement, and I'm persuaded of its value on that basis. I suspect that, statistically, as a result of a large number of voters, Approval won't perform much worse than Score voting in practice. My intuition is this: consider one hundred voters who partially approve of a candidate C; they would like to rank this person as 5/10, but on an Approval ballot they can't. I suspect that roughly half of the people will "approve" of this person, so that overall the results are similar to what Score voting would produce.
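That intuition can be checked with a tiny Monte Carlo sketch. The "approve with probability proportional to your honest score" behavior is my assumption here, not established voter psychology:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def approval_rate(honest_score, n_voters=10_000):
    """Each voter would honestly rate the candidate honest_score out of 10,
    and approves with probability honest_score/10. Under this assumption,
    the expected approval fraction equals the mean Score-voting result."""
    approvals = sum(random.random() < honest_score / 10
                    for _ in range(n_voters))
    return approvals / n_voters
```

Under this assumption, a candidate whom everyone rates 5/10 collects roughly 50% approval, mirroring the Score result in expectation.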

Comment by dpiepgrass on Are we living at the most influential time in history? · 2019-10-31T13:50:49.685Z · score: 1 (1 votes) · EA · GW
P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)

Disagree: as a software engineer, my prior on the simulation hypothesis is extraordinarily low, because common sense and the laws of physics indicate convincingly that we don't live in a simulation. (The only plausible exception is if I am the only person in the simulation.)

I like Toby's point—seems like the prior about "one person's influence over the future" should decrease over time, and the point about how a significant fraction of all cognitively modern humans ever are alive today is well taken.

Meanwhile, on the topic of "having the prerequisite knowledge necessary to positively impact the long-term future": that quantity has been increasing over time, particularly in the last century, given developments in science, philosophy, rationality, etc., and it will certainly keep increasing in the coming centuries provided that civilization survives that long. Therefore, given how society has neglected X-risks and civilization-destroying risks, this point in time seems very hingey: we can probably already take actions that predictably and non-negligibly affect cataclysmic risk levels, and these actions may determine whether society survives long enough to reach a future time when our cluelessness is reduced and our knowledge and values are improved.

Something I didn't see mentioned in the above discussion is the idea that hingeyness may be unclear even in hindsight. Certainly before the 19th century there is an argument to be made that one could have little impact on the future unless one was, say, Isaac Newton, and even then one's impact was perhaps just to bring science to people a little earlier than would have happened otherwise. But what's more hingey, the 19th or 20th century? Well, when it comes to X-risks, there was no atomic bomb until after modern physics was discovered in the early 20th century, and therefore no MAD cold war... no risk of superbugs until modern medicine, etc. When it comes to risk against civilization, the 20th century seems more hingey than the 19th, but on other topics (like when the best time to be a scientist or engineer is) it is less obvious.

Certain early choices had a lot of impact. A classic example is the Qwerty keyboard; on the other hand this layout was the choice of just one or two people, a choice that no one else could have influenced—this reminds me of a general problem with the 19th century: opportunities to have an impact were rare, because there was e.g. no government funding for science. Note that a successor keyboard like Dvorak could have been designed by vastly more people, so I wonder if things could have gone differently, e.g. what if someone had gone with the flow like I did with my own keyboard design, would it have sold better? What if it was sold in the 1920s instead of the 1930s? Or consider Esperanto—almost anyone could design a language. I heard that Esperanto was largely forgotten when WWI happened, but what if a commander in the allies knew about it, and observed that troops could communicate better if they had a common language? If we had a common language today, surely the world would be different—it's hard to be sure that it would be better, but today many people have to spend vast amounts of time learning English before they can meaningfully affect the course of history.

So I'd say overall that the 20th century was much more hingey, though it's hard to see how to assign credit—do we credit scientists for what they discovered, politicians for the policies they instituted that created funding for science, public servants for how they moderated new institutions, lawyers for the important cases they argued, activists for helping influence elections that led to policy, engineers for what they created, or companies that funded engineers? And what if communist China ultimately has the greatest impact, either by precipitating another world war, or by overturning democracy and free speech in favor of an authoritarian global regime in which the definition of truth can be chosen by the leadership?

So generally I think the knowledge we gather in the future will be crucial for our long-term future, but the things we do today will lay the foundation for that future, and perhaps this is the best thing to focus on: laying down a good foundation.

Each of us can contribute in our own way. As a software engineering veteran, I hope to contribute by designing foundational software, which could potentially act as an accelerator that brings benefits of the future more quickly to the present (my impact is no doubt eclipsed, however, by Steve Krause of Future of Coding, who succeeded where I failed in building a community, or by Bret Victor, who inspired countless people). If you work in medicine you might work on containing the risk of superbugs; if in politics, there are any number of causes that might help build a stable and prosperous world... we may be clueless now, but there are things we know, like: stability and prosperity good, war and catastrophe bad. And while rationalism is in its infancy, I think we have enough epistemological tools to point us in the right directions (my life might have gone quite differently if I had discovered rationalism and EA, and left my religion, fifteen years earlier!)

In any case, I'm not sure why we should be concerned with how hingey this century is—at least it's probably more hingey than the last century, and in any case we have to play the hand we're dealt. We are clueless about a great many things, but not about everything, suggesting a two-pronged course of action: first, to work on reducing cluelessness (and figuring out how to act in the face of cluelessness), and second, to help the future in ways we can understand, such as by reducing catastrophic risks.

Comment by dpiepgrass on [updated] Global development interventions are generally more effective than climate change interventions · 2019-10-14T20:55:43.307Z · score: 7 (2 votes) · EA · GW

When a post like this on cause prioritization appears on EA forums, I expect it to go something like this: "the marginal costs of the best interventions are lower in category A than category B, under the following assumptions (though if we tweak the assumptions in a certain way, B looks better)... so it's likely that we should fund those best interventions in category A rather than B."

This post, however, doesn't appear to be based on the current EA thinking about the marginally-best interventions for climate change or global development. On climate, the last thing I heard was that clean energy R&D is most sorely needed... which agreed with my prior that molten salt reactors are awesome, scalable and sorely needed, plus the potential of things like enhanced geothermal energy to provide a transition path for the oil & gas industry, and then there's alternative fusion technologies like DPF and EMC2's Polywell (whose value to society is either zero or astronomical). Yet I don't see these on the chart of interventions. I don't recall what the best available interventions for global development are thought to be, but I thought I heard someone say that cash transfers likely weren't it.

And although we should be skeptical of "free lunch" interventions with negative costs, that doesn't mean they aren't real. Some of them could be great investments, and if so, should we not identify which ones are realistically negative-cost and invest in them?

(I only read the first half; sorry if I missed something important.)

Comment by dpiepgrass on Crowdfunding for Effective Climate Policy · 2019-06-09T19:39:29.479Z · score: 2 (2 votes) · EA · GW

I'm strongly inclined to support this, but the abstract doesn't say what the money would be spent on, or explain how this can lead to more spending on previously neglected R&D. Care to comment before I read the entire document?

Also, the very first graph says "CO2 emissions by region in the NPS", but what's the NPS?

Also, what is your relationship to the stated authors Hart & Cunliff [edit: I see they are not the authors, rather they are evaluated by the document], and how does Bill Gates fit in?

Comment by dpiepgrass on The Case for the EA Hotel · 2019-05-15T19:25:57.490Z · score: 7 (2 votes) · EA · GW

Well, I've worked on "non-EA projects" and I've "accrued career capital" (in the software industry) but I don't think I could just flip a switch and start working on EA projects with other EA people. At present it's easier to get into the EA hotel than to get a grant from an EA org, which in turn is probably easier than getting a job at an EA org. And note that if I got a grant I would still be isolated from other EAs as I don't live near an EA hub; EA hotel solves the "loneliness" problem.

Comment by dpiepgrass on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-05-15T19:03:13.113Z · score: 1 (1 votes) · EA · GW

Of note: this newer post argues persuasively for the hotel in a different way than OP or me.

Comment by dpiepgrass on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-15T18:46:37.933Z · score: 3 (2 votes) · EA · GW
Even addressed problems can be addressed inefficiently

This is a good generalization to make from the climate change post last month. I argued in a comment that while climate action is well-funded as a category, I knew of a specific intervention that seems important, easily tractable & neglected. We can probably find similar niches in other well-funded areas.

I was increasingly seeing a movement in Foundation World towards better frameworks around understanding and reporting on net impact. While EA takes this idea to an extreme I didn't understand why this community needed to be so removed from the conversations (and access to capital) that were simultaneously happening in other parts of the social sector.

I suppose EA grew from a different group of people with a different mindset than traditional charities, so I wouldn't expect connections to exist between EA and other nonprofits (assuming this is a synonym for "the social sector") until people step forward to create them. Might we need liaisons focused full-time on bridging this gap?

At the beginning of the curve, down and to the left, we see that there is a smaller amount of capital circulating through approaches that aren't that effective.

On the far left, interventions can have negative value.

Comment by dpiepgrass on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-15T17:48:45.499Z · score: 8 (3 votes) · EA · GW

Thanks. Today I saw somebody point to Peter Singer and Toby Ord as the origin of EA, so I Googled around. I found that the term itself was chosen by 80000 hours and GWWC in 2012.

In turn, GWWC was founded by Toby Ord and William MacAskill (both at Oxford), and 80,000 hours was founded by William MacAskill and Benjamin Todd.

(Incidentally, though, Eliezer Yudkowsky had used "effective" as an adjective on "altruist" back in 2007, and someone called Anand had made an "EffectiveAltruism" page on the SL4 wiki in 2003; note that Yudkowsky started SL4 and LessWrong, and with Robin Hanson et al. started OvercomingBias.)

I thought surely there was some further connection between EA and LessWrong/rationalism (otherwise where did my belief come from?) so I looked further. This history of LessWrong page lists EA as a "prominent idea" to have grown out of LessWrong but offers no explanation or evidence. LessWrong doesn't seem to publish the join-date of its members but it seems to report that the earliest posts of "wdmacaskill" and "Benjamin_Todd" are "7y" ago (the "Load More" command has no effect beyond that date), while "Toby_Ord" goes back "10y" (so roughly 2009). From his messages I can see that Toby was also a member of Overcoming Bias. So Toby's thinking would have been influenced by LessWrong/Yudkowskian rationalism, while for the others the connection isn't clear.

Comment by dpiepgrass on Thoughts on the welfare of farmed insects · 2019-05-10T23:42:06.069Z · score: 1 (1 votes) · EA · GW
that they should be a less painful way


as the least method of killing


It's a risk, to be sure, that the aggregate suffering of insects would exceed that of the same weight in cattle; however, it's probably uncommon to expect that, so I, like nshepperd, am curious where your expectation comes from. (Which reminds me, I'm sure glad somebody has ideas about how to do consciousness research; I couldn't possibly!)

Comment by dpiepgrass on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-10T19:59:18.824Z · score: 4 (3 votes) · EA · GW
I was reading somewhere on this forum recently a post that was about how EA is a set of beliefs and approaches, and shouldn't aspire to be a group or movement.

I think of EA as a culture. IIUC there was a community called Overcoming Bias which became LessWrong, a community based on a roughly-agreed-upon set of axioms and approaches that led to a set of beliefs and a subculture; EA branched off from this [edit: no, not really, see replies] to form a closely related subculture, which "EA organizations" represent and foster.

It seems to be a "movement" because it is spreading messages and taking action, but I think its "movement-ness" is a natural consequence of its founding principles rather than a defining characteristic. Interestingly, I discovered EA/rationalism somewhat recently, but my beliefs fit into EA and rationalism like a hand in a glove. Personally, I am more attracted to being in an "EA culture" than an "EA movement" because I previously felt sort of like the only one of my kind - a lonely situation!

[Addendum:] I think this post is making a great point, that there is good to be done by, for example,

  • EAs learning practical lessons from other organizations
  • EAs promoting straightforward techniques for figuring out how to do good effectively
  • EAs making specific suggestions about ways to be more effective

But I also think that, if you want to do more than simply donate to effective charities - if you want to participate in EA culture and/or do "EA projects" - there is a lot to learn before you can do so effectively, especially if you aren't already oriented toward a scientific way of thinking. This learning takes some time and dedication. So it seems that we should expect a cultural divide between EAs (or at least the "core" EAs who use EA approaches/beliefs on a day-to-day basis) and other people (who might still be EAs in the sense of choosing to give to effective charities, but never immerse themselves in the culture.)

[P.S.] Since you mentioned optics, I wonder if this divide might be better framed not as a "cultural" divide, but an "educational" divide. We don't think of "people with science degrees" as being in a "different culture" than everyone else, and I'm basically saying that the difference between core EAs and altruistic non-EAs is a matter of education.

[On the other hand, in my mind, EA feels tied to rationalism because I learned both together - and rationalism is more than an ordinary education. (The rationalist in me points out that the two could be separated, though.) There are scientists who only act like scientists when they are in the lab, and follow a different culture and a different way of thinking elsewhere; more generally, people can compartmentalize their education so that it doesn't affect them outside a workplace. Rationalism as promoted by the likes of Yudkowsky explicitly frowns on this and encourages us to follow virtues of rationality throughout our lives. In this way rationalism is a lifestyle, not just an education, and thinking of EA the same way appeals to me.]

Comment by dpiepgrass on Why is the EA Hotel having trouble fundraising? · 2019-04-29T17:33:45.657Z · score: 2 (2 votes) · EA · GW

I live in a place with no established EA community and I don't see myself working on a project part-time with no oversight because I've been there, done that, and largely failed, because in the long term, isolation was hard for me. Charity Entrepreneurship? This is the first I've heard of it. For me, moving to a place like EA hotel would be a great way to "break into" the EA community; paying my way at an EA hotel (£10/day) is more attractive to me than attempting to get a job in an expensive EA hub like SF or NY, since I already tried to do that and failed. [That said I'm not going to this EA hotel since I'm not British.]

Remember that we're not all Ivy league here. I graduated merely with honors from a local university.

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-29T16:12:08.225Z · score: 3 (3 votes) · EA · GW

Intuitively I'm inclined to agree that the probability of very high or low climate sensitivities is overestimated due to the existence of a few separate lines of evidence that give us similar estimates, and because some studies have used inappropriate priors.

But I've heard climate science experts say it's harder to "nail down" the upper end of the ECS range, IIUC because of the multiplicative nature of positive feedbacks. A simple blackbody model of Earth with no feedbacks says that doubling CO2 would give us about 1.1°C of warming (IIRC) but there are several feedbacks in which a temperature increase causes a larger temperature increase: water vapour, ice albedo, permafrost melt (not technically included in ECS, but worth considering along with the effect of destabilizing shallow clathrates, if any), cloud feedback (thought to be small), a potential increase in drought leading to higher albedo, changes to the oceanic depth-temperature gradient / changes to ocean currents (which reminds me, global warming could ultimately cause cooling in Europe which implies that if the ECS is X, the typical warming would be above X outside Europe, and as I have noted elsewhere in this discussion, the land will warm more than the ocean surface at equilibrium).

When you stack the PDF of all the feedbacks together, the tail of the distribution gets uncomfortably long. (I didn't read much of Annan & Hargreaves so if their analysis specifically addresses the "stacking" of feedbacks, let me know.) [Overall, there have been new papers suggesting we can constrain ECS below 4° and others saying we can't, so I think we need to give the dust some time to settle - while still looking for tractable things we can do in this area.]
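To illustrate why stacked feedbacks lengthen the upper tail, here is a toy model in the spirit of the well-known 1/(1-f) feedback analysis (e.g. Roe & Baker); the specific numbers are my own illustrative assumptions, not values from the studies discussed here:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

DT0 = 1.2  # assumed no-feedback warming for doubled CO2 (deg C)

samples = []
for _ in range(100_000):
    # Total feedback fraction f with symmetric (Gaussian) uncertainty;
    # the 0.6 +/- 0.13 figures are made up for illustration.
    f = random.gauss(0.6, 0.13)
    if f < 0.95:  # discard near-runaway draws the linear model can't handle
        samples.append(DT0 / (1 - f))  # equilibrium warming with feedbacks

mean, median = statistics.mean(samples), statistics.median(samples)
# Because 1/(1-f) is convex, symmetric uncertainty in f produces a
# right-skewed warming distribution: the mean exceeds the median, and
# a small but non-negligible fraction of draws land far above it.
```

The point is structural, not numerical: even perfectly symmetric uncertainty about the combined feedback strength yields an asymmetric, fat-tailed distribution of warming, which is one reason the upper end of the ECS range is so hard to nail down.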

Note that the historical data on the ECS doesn't help much to constrain the upper end of the temperature range because ECS is likely not independent from initial temperature; we'll be reaching temperature zones Earth hasn't had for many millions of years, and we don't have very solid data going back beyond 800,000 years.

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-29T15:49:38.420Z · score: 3 (3 votes) · EA · GW

I'd agree that "urgency" is subsumed by "importance", but it's also worth pointing out explicitly, as something that might be overlooked if it is not mentioned.

Comment by dpiepgrass on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-04-28T20:12:04.734Z · score: 9 (4 votes) · EA · GW

I have a thought about EA hotel which this analysis likely doesn't capture: the general intuition that EAs should be taken care of - that we should "take care of our own".

Today I read that research by Holt-Lunstad "shows that being disconnected [lonely] poses comparable danger to smoking 15 cigarettes a day, and is more predictive of early death than the effects of air pollution or physical inactivity." While I'm not exactly lonely*, I have no EA friends (my city is not an EA hub), and my productivity is extremely low as I (1) am currently unemployed and (2) have virtually stopped working on altruistic projects due to a lack of emotional support and a loss of faith that I can succeed**. I may soon get a job and will then earn-at-least-partly-to-give (probable donations: $15,000/yr, perhaps more later), but this is not as fulfilling as a project would be, or as fulfilling as EA friendships would be. I've tried job hunting in the Bay Area where I might have been able to be near EAs, but was turned down by a few companies and gave up; besides, the idea of spending roughly 100% of the additional income I would earn in the Bay Area on rent... it's repugnant.

By extension, I believe that for some EAs, EA hotels could offer improvements to mental health and future good-doing potential that aren't otherwise available. Intuitively, it seems like the EA community ought to be able to take care of its own adherents. One justification is simply that poorer mental health limits the amount of good done by each EA; another is that if the EA movement can't take care of its non-central members, it will be more difficult to grow and spread the movement; e.g. a reputation for loneliness among EAs would suggest to others that they shouldn't become an EA, and EAs who are lonely are less likely to encourage others to become EAs.

Since creating more EAs presumably creates more good in the world - especially as we can anticipate exponential growth - the question of how to create more EAs is valuable to ponder. While EA hotels (and similar projects) are not a solution by themselves, they may be an important component of such growth. So the EA hotel is one of my favorite ideas and if I were in the UK I might be living there now.

* I live with my best friend, who doesn't think at all like an EA/rationalist. I'm also married to a non-EA but the Canadian government keeps us separated (thanks IRCC). But this touches on a related issue - I plan to have a child, and as long as we don't have a rationalist/EA Sunday School system to teach our values, I'm curious whether growing up inside or close to an EA hotel would work as a substitute. Seems worth a try!

** as I'm writing software, the value of the project is a highly nonlinear function of the input effort: it requires much more manpower before it becomes valuable, i.e. before reaching a minimum viable product. Working on it has become harder in terms of willpower requirement over the years.

Comment by dpiepgrass on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-04-28T17:35:53.947Z · score: 1 (1 votes) · EA · GW

I don't understand the formula that appears after "For each of our variables, we define a relative version:", could you clarify? Then it says "Remember that rVresident..." but I can't "remember", since there's no definition of it earlier (only of rV). A definition of rVresident appears later, but it incorporates new concepts R and W that aren't defined very clearly (what's a "resource", and what exactly does "value after controlling for resources" mean, for those of us who are not statisticians? Well, you start to elaborate on W, but by this point I'm confused enough to find the discussion harder to follow).

Comment by dpiepgrass on Psychedelics Normalization · 2019-04-27T21:56:50.422Z · score: 5 (4 votes) · EA · GW
I'm curious why psychedelics aren't talked about more...

For me, because I didn't know. Insofar as evidence exists, I'm interested. [Edit: to be an EA focus area, it would have to score well in the importance/tractability/neglectedness framework - might tractability be a problem here?]

I imagine drugs are a more political realm than EA usually goes for, but there are various reasons to build up expertise in political areas - communication, persuasion, lobbying - beyond this particular cause area.

Comment by dpiepgrass on Reasons to eat meat · 2019-04-27T21:23:51.733Z · score: 1 (1 votes) · EA · GW
This post is satire. Some of these reasons are good and some of them are bad...

Ahh, you might want to lead with that.

Since I don't have EA friends or an EA spouse, avoiding meat can be difficult and I've settled for "reducetarian".

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-23T15:51:05.553Z · score: 2 (2 votes) · EA · GW

Someone pointed me to this video by Jesse Jenkins at MIT who models the cost of electricity systems in the context of a goal to reach zero carbon emissions. The video shows how nuclear would play an important role even if a nuclear plant costs 6 times as much to build as a natural gas plant. When I saw this video I thought "wait, if new renewables eventually lose so much value that expensive nuclear plants start looking attractive, just how the heck could we convince every country in the world to replace all their fossil fuels?" Since we know how to make nuclear cheaper, the obvious answer is, let's do that.

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-20T17:10:56.862Z · score: 5 (3 votes) · EA · GW

I think the zero-goal matters because (1) if you plan for, say, 50% reduction, or even 66%, you might end up with a very different course of action than if you plan for 100% reduction. Specifically, I'm concerned that a renewable-heavy plan may be able to reduce emissions 50% straightforwardly but that the final 25-45% will be very difficult, and that a course correction later may be harder than it is now; (2) most people and groups are focused on marginal emissions reductions rather than reaching zero, so they are planning incorrectly. I trust the EA/rationalist ethos more than any other, to help this community analyze this issue holistically, mindful of the zero-goal, and to properly consider S-risks and X-risks.

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-19T16:15:29.932Z · score: 5 (3 votes) · EA · GW
It also doesn't make a great deal of sense to combine intermittent renewables with nuclear

Although you're right, it appears the renewables juggernaut is unstoppable, and mass production for affordable reactors will require about 15 years to spin up, during which time renewables will be the only game in town. For that reason, MSR vendors want to use huge silos of solar salt to store energy when renewables are going strong, which they can discharge when the renewables start losing power. In this way the nuclear reactor can usually go at full power, albeit at the cost of extra turbines and solar salt (so named because it was pioneered by concentrated solar power technology).

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-19T15:04:44.637Z · score: 4 (4 votes) · EA · GW
Educating away people's political convictions has seen very limited success when it comes to convincing them that radical action on climate is needed.

I would point out that this has been largely liberals trying to convince conservatives about climate science; cross-tribe communication is pretty difficult. Indeed, I wonder if support for nuclear among conservatives stems as much from opposing the "liberal media"'s scaremongering as from anything else. There's been some success, at least on the left, from efforts to get the word out about "the" 97% consensus among climate scientists. Educating people on the left seems like an easier problem - there are die-hard anti-nukes who can't be convinced, but they're a small minority.

AFAIK no one has seriously attempted the educational resource I propose, so before saying it can't work I think it's worth trying. We do have some stuff like Gordon McDowell's videos that basically targets maven personalities like myself, but I found that it still doesn't provide all the information I need to get a complete mental model for nuclear power. An educational site is not enough by itself to change public opinion, but it could at least be valuable to maven-type people who want to change minds about nuclear power but don't have good sources of information that they can link to and learn from.

Public opinion is a very hard nut to crack, but what about the media? I would guess that influencers like John Oliver probably got some of their information from SkepticalScience, so I think public education may be able to percolate to the people by first percolating up to the media.

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-19T14:44:17.524Z · score: 1 (1 votes) · EA · GW
Nuclear energy is unpopular.

I am very much aware. That's what I think we should take steps to address. Providing educational resources isn't enough by itself, but it's a necessary step.

Comment by dpiepgrass on Does climate change deserve more attention within EA? · 2019-04-18T00:17:28.366Z · score: 29 (16 votes) · EA · GW

I'd like to make a few points based on my knowledge as someone who studies climate science issues as a hobby. I'm a member of the volunteer team at SkepticalScience (an anti-climate-misinformation site).

[edits/additions in square brackets; original version contained mistakes]

First, humanity needs to reduce carbon emissions all the way to zero or below, because natural removal of CO2 from the climate system is extraordinarily slow. 50% is too much; 25% is too much. Zero. Popularly-considered strategies for mitigating global warming won't achieve that. Optimistic IPCC scenarios like RCP 2.6 assume technology will also be widely deployed to remove CO2 from the air. Things like tree planting that increase biomass can slow down the increase of CO2 but can't stop it even briefly; other ideas for carbon sequestration are not economical [AFAIK] and it's irresponsible to simply assume an economical technology for this will be invented. Therefore, we need to switch to 100% clean energy, and do so as soon as possible.

In my opinion the best thing the EA community can do (under the importance-tractability-neglectedness framework) is to study and support nuclear energy in some affordable way. In the past, the push for climate change mitigation has come from traditional environmentalists, who have fought against nuclear power since at least the early 1980s and mostly haven't reconsidered. This is evident from the many campaigns for "renewable energy" rather than "clean" or even "sustainable" energy. EA can fill a gap here. My favorite thing is to ask people to support new, inexpensive Molten Salt Reactor (MSR) designs. But probably the cheapest thing we can do is a web site. I think there is a real need for a web site about nuclear facts (or clean energy facts generally), something that debunks myths like SkepticalScience does for climate science, and also provides information that isn't adequately available on the web right now, about such topics as the risks of radiation, the types and quantities of nuclear waste, and the ways nuclear regulations have improved safety (albeit increasing costs). And, of course, it would go into some detail about MSRs and other affordable next-generation reactor designs. As EAs are not funded by the nuclear industry, they could be a credible independent voice.

Solar power makes great sense in a lot of tropical places, but in northern climates like Canada it doesn't, as peak energy use happens in the wintertime when the sun is very weak. AFAICT this makes solar in Canada into a nuisance, a potential roadblock as we get close to 100% clean energy (why would we deploy more clean energy if existing solar power makes it redundant for half of each year?). Without nuclear power, our main source of energy [in such climates] would probably have to be wind, and I'm very concerned that the cost of relying mostly on wind power would be prohibitive, especially in a free-market system. I don't know the exact numbers, but once we exceed something like 25%-30% average wind power, instantaneous wind power will often exceed 100% of demand, after which wind turbines are likely to become less and less economical. (Epistemic status: educated guess [but after I posted this someone pointed me to a presentation by an expert, which says solar value starts to decline well before 25-30% penetration])
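
The wind-penetration worry above can be put in back-of-envelope terms. This is a sketch under assumed numbers (the 0.35 fleet-average capacity factor is my guess, not a sourced figure):

```python
avg_demand = 1.0        # normalize average demand to 1
wind_share = 0.30       # wind supplies 30% of annual energy
capacity_factor = 0.35  # assumed fleet-average capacity factor

# Nameplate capacity needed to deliver that energy share:
nameplate = wind_share * avg_demand / capacity_factor
print(f"installed wind capacity = {nameplate:.2f} x average demand")
```

During strong-wind hours, output approaches nameplate - already close to average demand at a 30% energy share, and above demand during overnight lows. Each further wind addition therefore spends more of its output in hours of surplus, which is why its economics degrade past that point.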

[Meanwhile, right now nuclear advocates often rely on bare assertions, some of which are wrong. Without credible-but-readable sources - plain language explanations that cite textbooks, scientific literature and government reports - it's hard to convince intellectuals and reasonable skeptics to change their minds. Anti-nukes can simply assert that claims making nuclear power look not-scary are nuclear-industry propaganda. Note that nuclear power relies on public opinion much more than renewables currently do - companies are free to design and build new wind turbines, but nuclear power is, of necessity, highly regulated, and its continued existence relies on political will, which in turn flows from popular opinion. Witness the blanket shutdown of all nuclear power in Germany. Hence the motivation for an educational site.]

By contrast, it seems clear to me that mass-produced MSRs can [theoretically] be cheaper than today's CCGTs (natural gas plants). I've been following MSRs with great interest and I've published an article about them on Medium, although it remains unlisted because I'm still uncertain about a couple of points in the article and I'd love to get a nuclear expert to review it.

Second, it is a common misconception that we could have 4°C of global warming by 2100; climate scientists generally don't think so [except in the RCP 8.5 (business as usual) scenario which by now is more of a "look at the train wreck we're avoiding!" than a plausible outcome]. Often this misconception arises because there are two measurements of the warming effect of CO2, and the most commonly reported measure is ECS (equilibrium climate sensitivity) which predicts the amount of warming caused by doubling CO2 levels and then waiting for the climate system to adjust. The best estimate of ECS is 3°C (2.0-4.5°C, 90% confidence interval according to the IPCC) and it will take at least 200 years after CO2 doubles to even approach that amount of warming. If the ECS is higher than 3°C I would expect it to take even longer to approach equilibrium, but I'm rather uncertain about that.

To estimate the warming we expect by 2100, look at the TCR (Transient Climate Response) instead. The TCR is highly likely to be in the range 1.0-2.5°C. Keep in mind, however, that only 2/3 of greenhouse warming comes from CO2 according to the AGGI; 1/6 comes from methane and the final 1/6 from all other human-added greenhouse gases combined. The most common estimate of TCR is 1.7°C or 1.8°C and a first-order estimate based on observed warming so far is about 1.5°C. So if CO2 doubles (to 560 ppm), I'd expect about 2.5[±1.1]°C of global warming based on a TCR of 1.75, assuming CO2's fraction of all GHGs increases slightly by then. [side note: I would be surprised if CO2 more than doubles - I think we'll get almost 100% clean energy by 2100; OTOH predicting the future isn't really my forte.]
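
That arithmetic can be written out explicitly. The 0.70 CO2 forcing share below is my assumption that CO2's ~2/3 share of greenhouse forcing (per the AGGI) rises slightly by 2100; the function is a first-order sketch, not a climate model:

```python
import math

def transient_warming(co2_ppm, tcr=1.75, co2_forcing_share=0.70,
                      co2_preindustrial=280.0):
    """First-order estimate of transient warming (°C) at a given CO2 level.

    TCR scales with log2 of the CO2 ratio; dividing by CO2's share
    of total greenhouse forcing folds in methane and the other gases.
    """
    dT_co2 = tcr * math.log2(co2_ppm / co2_preindustrial)
    return dT_co2 / co2_forcing_share

print(f"{transient_warming(560):.1f} °C for doubled CO2")
```

With TCR = 1.75 and doubled CO2 this reproduces the ~2.5°C central estimate above; varying TCR over 1.0-2.5 gives roughly the ±1.1°C spread.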

Third, having said that, the land will warm a lot faster than the oceans. Climate models on average predict 55% more warming on land than at sea [related paper]. [Observations so far suggest that the transient difference could be greater.] Therefore, although 4°C of "global" warming by 2100 is highly unlikely, 4°C of land warming by 2100 is a distinct possibility (though I estimate a probability below 50%).

I guessed on Metaculus that global warming by 2100 would be [1.7 to 2.6°C] (despite the Paris agreement), but on land [it's likely to reach 3°C. And as climate change is non-uniform, some populated locations could exceed 3°C even if the land average is less than 3. I should add that the land-sea ratio is thought to be lower in the tropics, albeit higher in the subtropics. And my prediction was somewhat optimistic - I assumed that eventually society would build nuclear plants at scale, or that at least some cheap CCS tech would be discovered.]

Fourth, having lived in the northern Philippines, I think the impact of the warming itself is underappreciated. I lived in a very humid town (more humid and hotter than Hawaii) where the temperature exceeded 30°C in the shade most days. The hottest day of the year was about 37°C in the shade at noon, coldest would have been around 18°C at 6AM.

Maybe it's just that I lived in Canada too long, but humans are humans - we are naturally uncomfortable if our core temperature exceeds 37°C and I became uncomfortable whenever I went outside or left the sanctuary of the Air Conditioner. So for the sake of Filipinos and the other 3+ billion people living in tropical latitudes, I think we should be very concerned about the effect of just the warming itself on humanity's quality of life.

If we get 4°C of [land] warming vs preindustrial, that implies average daily highs of about 33-34°C in my town, which I would describe as virtually unbearable at 75% humidity. Consider also that if the Philippines becomes more prosperous, they will respond to the high temperatures by extensive use of air conditioning, which is energy intensive. If we don't stop using fossil fuels soon, air conditioning itself can become a significant contributor to further global warming.

Comment by dpiepgrass on The case for delaying solar geoengineering research · 2019-04-04T22:00:42.301Z · score: 1 (1 votes) · EA · GW

2. My assumptions were that geoengineering might reduce society's drive for mitigation (the switch to clean energy), and that it would be used to halt the temperature increase.

In the linked paper (Keith & Macmartin 2015) their proposal [actually they use the word "scenario" - I don't think they are going so far as to endorse it as a plan] is a bit different. They propose to use *half* as much aerosols as would be required to halt global warming (this is a bit tricky to get right, e.g. the radiative forcing of aerosols has much greater uncertainty than the forcing of CO2, so their proposal includes feedback to modify the injections as decades pass and observations are gathered about the effect of the aerosols). The paper says "We do not claim that this scenario is optimal. Rather we claim that good-quality policy-motivated scientific analysis requires an explicit scenario, and that this scenario is less obviously suboptimal than some scenarios employed in the literature." They point out that the harms of global warming increase superlinearly with temperature change, so I think they are saying that avoiding half the warming, or at least slowing global warming by half, is a reasonable compromise that avoids the worst harms without turning global warming into a total non-issue.

"Temporary deployment does not reduce long-term climate change. Warming in 2300, for example, is almost completely determined by cumulative carbon emissions and is unaffected by SRM that ends in 2200. Some commentators conclude that such temporary SRM offers no benefits, suggesting that it must be maintained forever." The paper counters that many climate change impacts depend on the rate of change - that if warming is slowed down, it is less harmful even if the total warming over 200 years is left the same. So I think the proposal here is to taper off the aerosol injections in such a manner that, in the worst case, we get the same warming over 200 years rather than 100.

They note that "It is clear that this scenario does not directly address thresholds that are a function only of the magnitude of the change rather than the rate, although it does delay reaching these thresholds, giving more time both to learn about the system and develop alternate strategies." The total amount of warming in their scenario *would* be decreased if we invent and deploy a technology that can remove CO2 from the air permanently (such technologies are very far from economical today). However, we can't guarantee we will invent an economical technology to do this. If we don't, Greenland may still melt under their scenario, but later than it would have otherwise ("in Fig. 1, the time to reach a temperature rise of 2 °C above pre-industrial increases from 2055 to 2068, while the time to reach a 2.5 °C rise increases by 32 years.").

Regarding stoppage of geoengineering due to catastrophe, they say, "While not discounting the possibility of social collapse, we note that humanity has operated technologies such as trans-oceanic communication links and electric power grids for more than a century in spite of horrific wars. Moreover, in considering the implications of a possible social collapse on the public policy of SRM [Solar Radiation Management], one must set the risks of termination against the (likely) greater human suffering that would arise directly from the collapse itself." So, if there's a global catastrophe, a sudden increase in global warming seems like a minor footnote in comparison.

I remain concerned that geoengineering is a distraction that could reduce the pressure to reduce CO2 emissions, but if geoengineering were to become a popular political position, I agree that Keith & Macmartin's proposal seems better than the "default" geoengineering proposal that people (including me) naively think of, i.e. to simply stop global warming regardless of CO2 emissions.

3. Yes.

Comment by dpiepgrass on The case for delaying solar geoengineering research · 2019-03-31T04:28:01.759Z · score: 6 (6 votes) · EA · GW

Additional objections to stratospheric aerosol injections:

1. Stratospheric aerosol injections will stop global warming but not ocean acidification, which is caused directly by CO2 dissolving in seawater. This is a notable consequence if this "plan B" disrupts the "plan A" to reduce our carbon emissions to zero or below.

2. Once (conventional) aerosol injections start, they must not be stopped. Explanation:

I think it's fair to say that the main danger of global warming is its speed: ecosystems and human civilization would perhaps barely notice if the global mean land temperature were to rise 3°C gradually over the course of 10,000 years, but the same change in 100 years is difficult to bear (trivia: climate models and recent temperature records all agree that land temperatures will rise faster than sea temperatures; they disagree as to the extent of this phenomenon but, long story short, if global temperatures rise 2.2°C this corresponds to about 3°C warming on land, or more in the short term. I think that the goal to "keep global warming under 2°C" rather than "under 5°F on land" was a bit of a marketing mistake. Details in this paper.)
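
The 2.2°C-global to ~3°C-land conversion in that parenthetical follows from the 55% land-sea warming ratio and Earth's ~29% land fraction. A rough decomposition (ignoring the regional variation just mentioned):

```python
def land_warming(global_dT, land_sea_ratio=1.55, land_fraction=0.29):
    """Land component of a global-mean temperature change (°C).

    global_dT = f*dT_land + (1-f)*dT_sea with dT_land = r*dT_sea,
    so dT_land = global_dT * r / (f*r + 1 - f). The 1.55 ratio is
    the model-mean figure cited above; 0.29 is Earth's land fraction.
    """
    r, f = land_sea_ratio, land_fraction
    return global_dT * r / (f * r + (1 - f))

print(f"{land_warming(2.2):.1f} °C on land for 2.2 °C global")
```

A larger transient ratio, as observations hint at, would push the land figure higher still.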

Stratospheric aerosols fall out of the stratosphere after a year or two, whereas much of our CO2 emissions will stay in the atmosphere for hundreds of years. Once stratospheric aerosol injections begin (assuming net CO2 emissions remain above zero), the quantity of aerosols must be continually increased to maintain a roughly constant temperature.

If the injections are ever suddenly stopped, most of the warming that would have occurred, over the decades or centuries that injections have been done, will occur immediately. This extremely rapid change is potentially very disruptive to humanity and global ecosystems, so an injection program should not begin without a very high confidence that we can ensure the injections will continue in perpetuity.
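
The termination problem can be shown with a toy simulation (the warming rate, aerosol lifetime, and program length are round illustrative numbers, not projections):

```python
import math

rate = 0.03      # °C/yr of warming masked by injections (assumption)
lifetime = 1.5   # yr, stratospheric aerosol residence time
stop = 60        # injections stop after 60 years

T = 0.0          # realized warming
masked = 0.0     # warming currently hidden by the aerosol layer
trace = []
for yr in range(80):
    if yr < stop:
        masked += rate  # inject more aerosol to offset this year's forcing
    else:
        # aerosols fall out; the hidden warming emerges in a few years
        released = masked * (1 - math.exp(-1 / lifetime))
        masked -= released
        T += released
    trace.append(T)

print(f"warming in the 5 years after stopping: "
      f"{trace[stop + 4] - trace[stop - 1]:.1f} °C")
```

Decades of masked warming are released on the aerosols' one-to-two-year fallout timescale, which is the "extremely rapid change" the paragraph above warns about.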

3. It may already be too late to avoid this problem, but in case of a global catastrophe, where modern society and most of its technology disappears for some reason, we'll want to rebuild society afterward. To this end, it may be significantly easier to rebuild if there is some oil left in the ground that is accessible to the reconstruction effort.

Comment by dpiepgrass on Apology · 2019-03-31T04:01:26.601Z · score: 8 (13 votes) · EA · GW

I sure am curious why you repeated the same comment four times (in addition to a couple of closely related comments) and why 100% of your comments on this site are on this page. It seems obviously inappropriate.

Comment by dpiepgrass on Request for comments: EA Projects evaluation platform · 2019-03-24T01:13:42.585Z · score: 3 (2 votes) · EA · GW

Based on Habryka's point, what if "stage 1b" allowed the two reviewers to come to their own conclusions according to their own biases, and then, at the end, each reviewer were asked to give an initial impression as to whether it's fund-worthy (I suppose this means its EV is equal to or greater than that of a typical GiveWell charity) or not (EV may be positive, but not high enough)?

This impression doesn't need to be published to anyone if, as you say, the point of this stage is not to produce an EV estimate. But whenever both reviewers come to the same conclusion (whether positive or not), a third reviewer is asked to review it too, to potentially point out details the first reviewers missed.

Now, if all three reviewers give a thumbs down, I'm inclined to think ... the applicant should be notified and suggested to go back to the drawing board? If it's just two, well, maybe that's okay, maybe EV will be decidedly good upon closer analysis.

I think reviewers need to be able (and encouraged) to ask questions of the applicant, as applications are likely to have some points that are fuzzy or hard to understand. It isn't just that some proposals are written by people with poor communication skills; I think this will be a particular problem with ambitious projects whose vision is hard to articulate. Perhaps the Q&As can be appended to the application when it becomes public? But personally, as an applicant, I would be very interested in editing the original proposal to clarify points at the location where they are first made.

And perhaps proposals will need to be rate-limited to discourage certain individuals from wasting too much reviewer time?