Comment by denkenberger on David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop) · 2019-02-19T18:27:03.024Z · score: 6 (3 votes) · EA · GW

Thanks very much to CEA for doing the recording, editing, and transcription! Also thanks to CEA for the EA grant that has supported some of this work. Mitigating the impact of catastrophes that would disrupt electricity/industry, such as a solar storm, a high-altitude electromagnetic pulse, or a narrow AI virus, is a parallel effort within the Alliance to Feed the Earth in Disasters (ALLFED) that I did not get a chance to talk about in my 80,000 Hours interview. The Guesstimate model I referred to in the workshop can be found here (blank to avoid anchoring). Also, the three papers on losing electricity/industry are feeding everyone with the loss of industry, providing non-food needs with the loss of industry, and feeding everyone with the loss of industry and half of the sun. We are still working on the paper on the cost-effectiveness of preparing for these catastrophes from the long-term future perspective, so input is welcome.

David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop)

2019-02-19T15:58:01.214Z · score: 7 (5 votes)
Comment by denkenberger on The Need for and Viability of an Effective Altruism Academy · 2019-02-18T03:19:54.871Z · score: 2 (1 votes) · EA · GW

There is also the EA MOOC. There does not appear to be a completion counter - does anyone know how many people have completed this course?

Comment by denkenberger on Reflections on doing good with lump sums - the retired person's dilemma · 2019-02-12T07:38:14.483Z · score: 2 (1 votes) · EA · GW

You can always do more than the Giving What We Can minimum of 10%, but it is true that it is aimed at pre-retirement income. Bolder Giving encourages committing 50% of lump sums or income, so this might be more appropriate for you. It does not require giving effectively, though there are a number of EAs on the site.

Comment by denkenberger on Near-term focus, robustness, and flow-through effects · 2019-02-12T04:48:52.656Z · score: 2 (1 votes) · EA · GW

I should have said develop safe AI or colonize the galaxy, because I think either one would dramatically reduce the base rate of existential risk. The way I think about the value of nuclear war mitigation being affected by AI timelines is that if AI comes soon, there are fewer years that we are actually threatened by nuclear war. This is one reason I only looked out about 20 years for my cost-effectiveness analysis for alternate foods versus AI. I think these risks could be correlated, because one mechanism of far future impact of nuclear war is worse values ending up in AI (if nuclear war does not collapse civilization).

Comment by denkenberger on Near-term focus, robustness, and flow-through effects · 2019-02-08T04:36:50.661Z · score: 4 (5 votes) · EA · GW

I think the argument was written up formally on the forum, but I'm not finding it. I think it goes like this: if the chance of existential risk is 0.1% per year, the expected duration of humanity is 1,000 years. If you decrease the risk to 0.05% per year, the expected duration is 2,000 years, so you have only added a millennium. However, if you get safe AI and colonize the galaxy, you might get billions of years. But I would argue that if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value to mitigating nuclear war.
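
A minimal sketch of the arithmetic behind the expected-duration claim (assuming a constant annual risk; the numbers are the illustrative ones from the comment):

```python
# Under a constant annual existential risk p, the expected survival time is 1/p years
# (the mean of a geometric distribution). Figures are the ones from the comment above.
def expected_duration_years(annual_risk):
    return 1.0 / annual_risk

print(expected_duration_years(0.001))   # 0.1%/year  -> 1000 years
print(expected_duration_years(0.0005))  # 0.05%/year -> 2000 years
```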

Comment by denkenberger on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-30T06:31:28.724Z · score: 4 (3 votes) · EA · GW

The issue is that there are many sources of uncertainty about nuclear winter. When I developed a probabilistic model taking all these sources into account, I did get a median impact of a 2-3°C reduction (though I was also giving significant probability weight to industrial and counterforce strikes). However, I still got a ~20% probability of the collapse of agriculture.

Comment by denkenberger on Vocational Career Guide for Effective Altruists · 2019-01-29T06:34:12.523Z · score: 2 (3 votes) · EA · GW

Accountants typically require a four-year degree; vocational training is generally a two-year degree or less.

Comment by denkenberger on Introducing Sparrow: a user-friendly app to simplify effective giving · 2019-01-18T07:21:35.752Z · score: 3 (2 votes) · EA · GW

Can it help enable Giving Tuesday matching despite many small donations throughout the year?

Comment by denkenberger on Climate Change Is, In General, Not An Existential Risk · 2019-01-17T01:34:11.367Z · score: 6 (4 votes) · EA · GW

I generally agree. The question is whether we should call something an X-risk based on its impact alone if it happens, or based on impact times probability. If the latter, and if comets are an X-risk, then we should call extreme climate change (and definitely nuclear war) an X-risk.

Comment by denkenberger on Climate Change Is, In General, Not An Existential Risk · 2019-01-16T04:32:17.176Z · score: 4 (3 votes) · EA · GW

I think it is useful to discuss what qualifies as an X-risk. Asteroid/comet impact is widely regarded as an X-risk, but an impact big enough to cause human extinction might only have a one-in-a-million probability in the next 100 years. That is a 0.0001% expected reduction in humanity's long-term value. However, if you believe 80,000 Hours that nuclear war has a ~3% chance of happening in the next 100 years and could reduce the long-term potential of humanity by ~30%, that is a ~1% expected reduction in the future of humanity this century. So practically speaking, it is much more of an X-risk than asteroids are. Similarly, if you believe 80,000 Hours that extreme climate change has a ~3% chance in the next 100 years and would reduce the long-run potential by ~20%, that is a 0.6% expected reduction in the long-term future of humanity - again much larger than for asteroids. I personally think the nuclear risk is higher and the climate risk is lower than these numbers. It is true that some of the long-term impact could be classified as trajectory changes rather than traditional X-risk, but I think most people are interested in trajectory changes as well.
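
As a sketch, the expected-loss arithmetic used here is just probability times impact; the figures below are the illustrative ones from the comment, not settled estimates:

```python
# Expected reduction in humanity's long-term value over the next century
# = (probability of the catastrophe this century) * (fraction of potential lost).
def expected_loss(prob_this_century, fraction_lost):
    return prob_this_century * fraction_lost

print(expected_loss(1e-6, 1.0))   # extinction-level asteroid/comet: 0.0001%
print(expected_loss(0.03, 0.3))   # nuclear war: ~0.9%, i.e. roughly 1%
print(expected_loss(0.03, 0.2))   # extreme climate change: 0.6%
```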

Comment by denkenberger on What are ways to get more biologists into EA? · 2019-01-16T02:47:08.687Z · score: 4 (3 votes) · EA · GW

I'm not sure if this is answering the intent of the question, but one could refer undergrad/grad biologists to the biology part of Effective Thesis.

Comment by denkenberger on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T07:14:52.207Z · score: 7 (3 votes) · EA · GW

But remember, X-risk is not just extinction - there are many routes to long-term future impacts from nuclear war; some are mentioned here.

Comment by denkenberger on The Global Priorities of the Copenhagen Consensus · 2019-01-12T07:46:04.841Z · score: 6 (5 votes) · EA · GW

I think EAs should look more into reducing trade barriers, both because of the global poverty benefits and because I think countries are less likely to go to (nuclear) war if they are economically dependent on each other.

Comment by denkenberger on An integrated model to evaluate the impact of animal products · 2019-01-12T07:40:21.465Z · score: 2 (1 votes) · EA · GW

The climate impact of cattle feed could be reduced if cattle ate agricultural residues (as they used to, and still often do in less developed countries). I don't think that grass-fed beef is really better, because conventional cattle are grass-fed for part of their lives, so having some cattle completely grass-fed means that the remainder would be grass-fed for a smaller percentage of their lives. It also looks like a little bit of seaweed reduces the methane from cattle.

Comment by denkenberger on Why I'm focusing on invertebrate sentience · 2019-01-06T21:21:33.747Z · score: 2 (1 votes) · EA · GW

Thanks! However, neurons in smaller organisms tend to be smaller, so I think the total brain mass of humans would be similar to that of the land arthropods and the nematodes. Fish are larger organisms, so it does look like the brain mass of fish would be significantly larger than that of humans. There is the question of whether a larger neuron could provide more value or disvalue than a smaller neuron. If it is the same, then neuron count would be the relevant number.

Comment by denkenberger on Keeping Absolutes in Mind · 2019-01-06T04:44:16.784Z · score: 2 (1 votes) · EA · GW

Another way of guarding against being demoralized is comparing one's absolute impact to that of people outside of EA. For instance, you could take your metric of impact, be it saving lives, improving human welfare, reducing animal suffering, or improving the long-term future, and compare the effectiveness of your donation to that of the average donation. With the median EA donation of $740, if you thought it was 100 times more effective than the average donation, that would correspond roughly to the typical donation of someone at the 99.9th percentile of income in the US. And if you thought it was 10,000 times more effective, you could compete with billionaires!

Comment by denkenberger on EA Hotel Fundraiser 1: the story · 2018-12-29T00:39:35.249Z · score: 9 (4 votes) · EA · GW

Perhaps for some, but I think most people working on X-risk are primarily altruistically motivated. And for them, it is more important to stay alive in a catastrophe so they can help more. A less extreme version of this is living outside of metros to reduce the chance of being killed in a nuclear war.

Comment by denkenberger on EA Hotel Fundraiser 1: the story · 2018-12-28T08:00:08.476Z · score: 6 (7 votes) · EA · GW

What about an EA hotel in Australia/New Zealand? Safer from nuclear war and pandemics...

Comment by denkenberger on EA Survey 2018 Series: Donation Data · 2018-12-28T00:26:20.978Z · score: 5 (4 votes) · EA · GW

Nice work! One more way of teasing out the origin of relatively low donations is asking about net worth. A person may be a full-time non-student with a good salary, but still have college debt and therefore be hesitant to donate a lot.

Comment by denkenberger on Detecting Morally Significant Pain in Nonhumans: Some Philosophical Difficulties · 2018-12-26T09:19:36.801Z · score: 2 (1 votes) · EA · GW

Very nice piece! So how many plants are there in the world? And why did you choose plants rather than motile bacteria? I would guess bacteria would be a better candidate, given that they are more numerous and given the questionable assertion that the sea squirt eats its own brain when it stops swimming around.

Comment by denkenberger on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-20T17:55:21.270Z · score: 7 (6 votes) · EA · GW

I think that if EA can expand more among conservatives, it could dramatically increase its giving capability and influence. I'm curious about your assertion that no religious EA is prioritizing converting people. When I talked to a former evangelical, he said that if you truly believe that people are going to hell if they do not believe in Christianity, then you have a burning desire to convert people; it becomes your top priority. Please note that there are multiple repeat posts.

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-20T17:13:03.125Z · score: 2 (1 votes) · EA · GW

I'm not a biologist, but the point is that you can start with a tiny amount of material and still scale up to large quantities extremely quickly with short doubling times. As for competition, there are many ways in which human-designed technology can exceed (and has exceeded) the capabilities of natural biological organisms: better materials, not being constrained by evolution, not being constrained by having the organism function while it is being built, etc. As for the large end, good point about the availability of uranium. But the superintelligence could design many highly transmissible and lethal viruses and hold the world hostage that way, or think of much more effective ways than we can. The point is that we cannot dismiss the possibility that the superintelligence could take over the world very quickly.

Comment by denkenberger on Discussion: What are good legal entity structures for new EA groups? · 2018-12-20T06:33:30.312Z · score: 2 (1 votes) · EA · GW

There are charities that specialize in sponsoring projects to give them tax deductibility, one being the Social and Environmental Entrepreneurs.

Comment by denkenberger on College and Earning to Give · 2018-12-20T05:16:56.758Z · score: 6 (3 votes) · EA · GW

I would love to see a paper that quantifies how much each of the reasons college has gotten so much more expensive actually contributes, but I have not seen one yet. In general, we expect something that is labor-dominated to increase in cost faster than inflation, because salaries increase faster than inflation (roughly at the rate of per capita economic growth). But college has increased faster than per capita income. You mention more administrators. There are also more services, like career services. You point out that tuition has increased faster than the amount actually paid, because there is more price discrimination. There is also reduced state funding for state colleges (in absolute terms, I believe, and definitely as a percentage). There may be a reduction in mean class sizes - there has been for primary and secondary education. There is more technology use inside and outside the classroom - overall I think this has increased efficiency, but it would still probably show up as an increase in tuition. On the cost-control side, a much greater proportion of classes are taught by low-paid adjuncts. Also, terms have gotten shorter. Another factor that can control costs is endowments: Princeton has actually reduced its tuition recently because its endowment is so big.

Comment by denkenberger on Why I'm focusing on invertebrate sentience · 2018-12-19T22:41:42.567Z · score: 3 (2 votes) · EA · GW

My prior here is brain-size weighting for suffering, which means insects are of similar importance to humans currently. But I would guess they would be less tractable than humans (though obviously far more neglected). So I think that if there were compelling evidence that we should be weighting insects at 5% as much as humans, that would be an enormous update and would make invertebrates the dominant consideration in the near future.

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-19T08:04:35.558Z · score: 2 (1 votes) · EA · GW

Let's say they only mail you as much protein as one full human genome. Then the self-replicating nanotech it builds could consume the biomass around it and concentrate uranium (there is a lot in the ocean, for example). Since I believe the ideal doubling time is around 100 seconds, it would take about 2 hours to get to 1 million intercontinental ballistic missiles. That is probably optimistic, but I think days is reasonable - no lawyers required.
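
A back-of-the-envelope version of the doubling arithmetic (the seed mass and per-missile mass below are my own illustrative assumptions, not figures from the comment):

```python
import math

# Doublings needed to grow from a tiny seed mass to a large target mass,
# assuming exponential self-replication with a fixed doubling time.
def growth_time_seconds(seed_kg, target_kg, doubling_time_s=100):
    doublings = math.log2(target_kg / seed_kg)
    return doublings * doubling_time_s

# Assumed (not from the comment): ~1 nanogram of seed material, and
# ~35,000 kg per intercontinental ballistic missile.
seconds = growth_time_seconds(1e-12, 1e6 * 3.5e4)
print(seconds / 3600)  # roughly 2 hours of doublings at 100 s each
```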

Comment by denkenberger on Why You Should Invest In Upgrading Democracy And Give To The Center For Election Science · 2018-12-19T07:22:52.223Z · score: 6 (4 votes) · EA · GW

I think it would be very helpful to do an explicit cost-effectiveness model for approval voting. Of course there would be a lot of uncertainty, but not necessarily more than for AI or alternate foods. It could be for the present generation (like this or this for alternate foods) or for the long-term future, like this for alternate foods and AI. Then we would have at least some quantitative way of comparing.

Comment by denkenberger on College and Earning to Give · 2018-12-18T18:10:38.366Z · score: 2 (1 votes) · EA · GW

I agree it is good to think about this early! The 529 can still be better than general taxable savings because of a tax deduction and then tax-free growth. The problem is limited investment options. However, the Coverdell has more freedom of investment, so I'm doing that. Depending on the college, faculty or even staff can have their kids attend at reduced or free tuition, which could be an option for some people. That is a creative idea to do direct work while kids are in college. If one does have significant savings by then (including retirement savings, because as you point out the penalty for withdrawal is not that large), one could even take time off from paid work. This is facilitated by being able to take out an interest-free loan on the part one does not have to pay right away while the kids are in college. Or if one is doing direct work while kids are in college, one could just continue the direct work afterwards. Since parents are expected to pay 5% of taxable assets per year while kids are in college, I believe that means one would need to pay about 30% over six years. As you point out, there is also uncertainty about whether it would even be required (e.g., if MOOCs take over). But you can still think of this (and the interest-free loan) as an additional reason, other than avoiding taxes, to donate more money now instead of accumulating it in a taxable account. (Disclaimer: I am not a financial advisor nor a tax professional.)

Comment by denkenberger on Lessons Learned from a Prospective Alternative Meat Startup Team · 2018-12-18T08:16:13.238Z · score: 4 (3 votes) · EA · GW

Nice writeup. One big advantage of fungus-based meat substitutes is that they can grow on waste products (lowering cost and environmental impact). This is typically done for mushrooms, but not Quorn. Does anyone know why?

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-17T07:26:11.728Z · score: 2 (1 votes) · EA · GW

Some possibilities for rapid gain in thinking speed/intelligence are here.

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-15T08:23:25.214Z · score: 7 (3 votes) · EA · GW

Regarding intelligence quickly turning into world domination, Yudkowsky paints this scenario, and points out that superhuman intelligence should be able to think of much better and faster ways:

"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA, synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology."

Comment by denkenberger on Existential risk as common cause · 2018-12-13T06:55:22.042Z · score: 2 (1 votes) · EA · GW

I haven't read much deep ecology either. Seth Baum has written that some people think there is intrinsic value in functioning ecosystems - presumably these people would want the ecosystems to continue as a garden world. Other people value biodiversity (number of species). But you're right that some just want whatever would have happened naturally.

Comment by denkenberger on Why I'm focusing on invertebrate sentience · 2018-12-11T07:34:38.298Z · score: 2 (1 votes) · EA · GW

Reflecting on the mirror test - nice pun!

Comment by denkenberger on Existential risk as common cause · 2018-12-07T07:05:37.873Z · score: 6 (5 votes) · EA · GW

For AI, one estimate is $3-$1000 per life saved in the present generation (near bottom of model). For alternate foods for global agricultural catastrophes, one estimate is $0.20-$400 globally and $1-$20,000 in the US, both in the present generation.

Comment by denkenberger on Existential risk as common cause · 2018-12-07T06:45:18.878Z · score: 4 (4 votes) · EA · GW

For deep ecologists, I use the argument that without people, animals and plants will generally go extinct in 500-1,000 million years, because the increasing brightness of the sun will cause runaway global warming. Humans could delay this by putting reflective material between the Earth and the sun, or by other means. Or humans could perpetuate species on other planets. So in the long run, it is not good for other species for humans to go extinct.

Comment by denkenberger on Why we have over-rated Cool Earth · 2018-11-28T18:11:07.836Z · score: 4 (3 votes) · EA · GW

There are two ways for climate change reduction to be considered effective by EA frameworks: the long-term future, and saving lives/improving utility in the present-ish generation. There is some discussion here about the long-term future. For saving lives, I agree it is tricky. When I attempted this in 2005, I tried to do it based on increased utility. Even though it is true that climate change will likely fall disproportionately on less-developed countries, when you look at the actual economic impacts, they accrue mostly to richer people, because they make up the majority of the economy. This is especially true in the longer term, when even currently less-developed countries will likely be significantly richer than today. For typical-cost climate interventions, I found they were about 2.5 orders of magnitude less cost-effective than direct global poverty interventions. Another attempt is here (though you may not agree with his discounting). If Cool Earth really is significantly lower cost, that would of course improve the comparison, but I still think it is very unlikely to be better than direct global poverty interventions.

Comment by denkenberger on Towards Better EA Career Advice · 2018-11-27T00:57:32.795Z · score: 8 (3 votes) · EA · GW

I use 80,000 Hours as a low-pressure way of introducing people to EA, because it provides practical advice rather than talking about giving lots of money away. So I think it is important for it to be inclusive. But maybe there is a way to direct these sorts of newcomers to articles like yours on having a high impact in any career? This would also be great for older people, who might become defensive if the first thing they see is that they chose a low-impact career. I agree that it would be hard to do a really good job in both focus areas, but I think you have already produced useful content for a more general audience, so it is a question of making it accessible to the right people.

Comment by denkenberger on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-25T19:09:08.180Z · score: 2 (1 votes) · EA · GW

I think that neglectedness is useful for initial scoping. But I think then it makes sense to move to explicit cost-effectiveness modeling like this to address your concerns.

Comment by denkenberger on Literature Review: Why Do People Give Money To Charity? · 2018-11-22T07:43:46.173Z · score: 3 (2 votes) · EA · GW

Very useful! But be cautious with matching if it is not counterfactual; see here.

Comment by denkenberger on Alliance to Feed the Earth in Disasters (ALLFED) Progress Report & Giving Tuesday Appeal · 2018-11-22T02:16:45.299Z · score: 3 (2 votes) · EA · GW

Thank you for the feedback and valuable points.

As for how we knew the fundraiser wasn't a good fit, two factors weighed in our mutual decision to part ways. Firstly, approaching mainstream funders with concerns about existential risk was proving somewhat challenging (this is also the experience that other EA-aligned organizations were telling us they had). Secondly, the fundraiser found remote work difficult and discovered he preferred a job with more face-to-face contact. We have a small but intercontinental team, based in the US, Europe, and intermittently Asia, so our mode of operations at this time is based on networked individuals or teams of two. We have thus discovered that we need to highlight this more in our recruitment to ensure a good organizational fit for any future hires.

  1. The DARPA meeting was about ~10% global agricultural shortfalls. So I started with that, but I also talked about agricultural collapse. We have found that, outside of EA, it is generally hard to get people to take seriously shortfalls greater than 10% of global agricultural production. There seemed to be good engagement at the meeting, but there was little follow-up.
  2. Very good point - I reworded to indicate that feeding humans and other species is not a trade-off, at least technically, because it is quite feasible to do both.
Comment by denkenberger on EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) · 2018-11-22T00:35:27.594Z · score: 4 (3 votes) · EA · GW

Nice idea! I see that it is due 5 pm Pacific Standard Time on Saturday. Here it says "We are planning to do a small round of grants ($50k-$100k) next week to help donors get a sense of what kinds of projects we are likely to fund before the end of the giving season in December." Is this a different solicitation from the <~$40k request? If so, when would the latter deadline be? Would that be more focused on existing organizations?

Alliance to Feed the Earth in Disasters (ALLFED) Progress Report & Giving Tuesday Appeal

2018-11-21T05:20:37.922Z · score: 13 (9 votes)
Comment by denkenberger on Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before? · 2018-11-20T19:38:09.982Z · score: 4 (3 votes) · EA · GW

Thanks for your good questions.

1. Indeed, Bayesian updating of the lognormal distribution from Barrett based on 72 years with no nuclear war would cut off the high-probability tail. Barrett notes that 7% per year is unlikely given the data. This is why Anders uses a beta distribution in his model. If you assume a uniform prior and 72 years of no nuclear war with a beta distribution, you get an annual probability of approximately 1.4% (see the short sketch after this list). He adjusted this downward to 0.7%, but as I noted, this does not take into account the possibility of conflicts involving China. Hellman's model indicated roughly 1% per year. So I think this is the right order of magnitude, but feel free to put your own number in; the overall conclusions are not likely to change very much. To give you some idea, the cost-effectiveness of the marginal dollar spent on alternate foods now is about two orders of magnitude higher than that of the marginal 100 millionth dollar. So even if one were two orders of magnitude less optimistic about alternate foods, one would still have ~60% confidence that funding alternate foods now is better than funding AI.

2. I agree that slow climate changes are much less problematic. One way of quantifying this is the agricultural loss velocity: the percent loss in productivity per year. For nuclear winter, this is roughly 100% per year. For the sudden 10% agricultural losses (like a regional nuclear war, or a volcanic eruption like the one that caused the year without a summer), this is about 10% per year. For abrupt regional climate change, this is about 10% over 10 years, or 1% per year. But if 5°C of warming over 100 years causes a 20% agricultural loss, that is only 0.2% per year. And yet there seems to be much more concern in the EA community (e.g. CSER) about extreme climate change than about the abrupt 10% shortfalls. And as I noted, the 80,000 Hours estimate of the long-term impact of extreme climate change was 20%. One possible mechanism by which slow extreme climate change could be bad is mass migration causing political tensions and potentially nuclear war. But in general, these risks seem to be significantly less serious than nuclear war directly.

3. My time horizon is only about 20 years. For the blocking of the sun, improvements in agricultural productivity are not really relevant; they would be relevant for the 10% shortfalls. Another thing that would be relevant is general economic development, so that the poorest of the world could handle price shocks better. Anders' time horizon is significantly longer, but he is less concerned about the 10% shortfalls. So overall I do not think it would be too large of an adjustment.

4. For the probability of nuclear winter given full-scale nuclear war, there appear to be two camps: people who think it is near 100% and people who think it is near 0%. I did a Monte Carlo analysis and found that if you define nuclear winter as near-complete agricultural collapse of crops where they are currently planted, the probability was around 20%. This was so low because I considered significant probabilities of counter-industrial and counterforce (trying to destroy the other side's nuclear weapons) attacks. The usual assumption is maximum casualties, and in that case I would say the probability is more like one half. I would also note that it is possible to have far-future impacts of nuclear war even without nuclear winter (e.g. worse values ending up in AGI).
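
On point 1, here is a minimal sketch of the Bayesian update described above (a uniform Beta(1, 1) prior on the annual probability of full-scale nuclear war, updated on 72 war-free years); the numbers are the ones from the comment, and the code is illustrative rather than Anders' actual model:

```python
# Beta(1, 1) is a uniform prior on the annual probability of full-scale nuclear war.
# Observing 72 years with no such war updates it to Beta(1, 73).
prior_alpha, prior_beta = 1, 1
years_without_war = 72

posterior_alpha = prior_alpha                      # no wars observed
posterior_beta = prior_beta + years_without_war    # 72 war-free years
posterior_mean = posterior_alpha / (posterior_alpha + posterior_beta)
print(posterior_mean)  # ~0.0135, i.e. roughly 1.4% per year
```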

Comment by denkenberger on Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before? · 2018-11-20T06:20:05.632Z · score: 3 (2 votes) · EA · GW

Thanks for your question, aarongertler (I can't seem to switch this to a reply). I think those mean numbers are a little high, but I think it is plausible that the work we have done so far could result in the saving of many lives (or even civilization) if the catastrophe happens soon. One possibility is that governments search the web and find our materials; the governments might then realize that, if they cooperated, we could feed everyone, and hopefully they would not resort to military action in a "lifeboat" situation. Another possibility is that the mass media contacts we have so far call us and run hopeful stories, which get picked up by other media and influence leaders. A further possibility is that the people we have talked to who have some influence in the US, UK, and Indian governments could pass the message up. A fourth possibility is that our message could go viral on social media and eventually influence leaders. In many of these scenarios, even if governments don't change their actions, some lives could be saved because some of the food sources would work at the household scale. Also, governments might not choose to cooperate with other governments, but could still learn how to feed more of their own people. It is possible that our work has already prompted governments to make response plans, but they haven't told us (such plans could even be classified).

Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?

2018-11-19T21:57:05.518Z · score: 13 (16 votes)
Comment by denkenberger on EA needs a cause prioritization journal · 2018-09-15T18:29:42.603Z · score: 8 (8 votes) · EA · GW

Interesting idea, but I think GCR/X-risk is further along: CSER has identified thousands of relevant papers, and there have been three special issues in the last four years. So I think GCR is ready for a journal (perhaps 2-4 issues per year). For cause prioritization, I would recommend doing a few special issues and seeing how they turn out. Even that is a significant time commitment. One way to do it would be to have a special issue associated with an EA Global.

Comment by denkenberger on Awesome Effective Altruism - a curated list of EA resources · 2018-08-25T05:54:15.983Z · score: 1 (1 votes) · EA · GW

Other long-term-focused EA-aligned organizations include the Future of Life Institute, the Global Catastrophic Risk Institute, the Alliance to Feed the Earth in Disasters, AI Impacts, the Berkeley Existential Risk Initiative, etc.

Comment by denkenberger on Problems with EA representativeness and how to solve it · 2018-08-08T05:17:15.715Z · score: 1 (1 votes) · EA · GW

I agree that a lot of the value of work on X-risk / the far future is value of information. But I argued here that the distribution of present-generation cost-effectiveness of alternate foods for agricultural catastrophes did not overlap with that of AMF. There very well could be flow-through effects from AMF to the far future, but I think it is hard to argue that they would be greater than those of actually addressing X-risk. So if you do value the far future, I think it would be even harder to argue that the distributions of alternate foods and AMF overlap. There would be a similar result for AI vs. AMF if you believe the model referred to here.

Comment by denkenberger on Open Thread #40 · 2018-07-11T13:46:23.180Z · score: 4 (6 votes) · EA · GW

I like that the forum is not sorted so one can keep abreast of the major developments and debates in all of EA. I don't think there is so much content as to be overwhelming.

Comment by denkenberger on EA Hotel with free accommodation and board for two years · 2018-06-24T16:13:29.591Z · score: 0 (0 votes) · EA · GW

Good backup plan. That's great that it has not been a dealbreaker for anyone.

Comment by denkenberger on EA Hotel with free accommodation and board for two years · 2018-06-18T04:15:13.363Z · score: 0 (8 votes) · EA · GW

This says 20% of EAs are vegan or vegetarian, so I would guess less than 10% are vegan. Granted, the hardcore EAs you are attracting may be more likely to be vegan, and you are lowering the barrier if someone else is reading labels and is hopefully a good cook. But I still think you are really limiting your pool by making all meals vegan. I understand you want to be frugal, and vegan from scratch is cheaper, but animal product substitutes are generally more expensive than animal products.

Comment by denkenberger on EA Hotel with free accommodation and board for two years · 2018-06-18T03:58:00.063Z · score: 0 (0 votes) · EA · GW

Nice idea! The free health care in the UK helps make it low cost, though is there a probationary period for immigrants?

[Paper] Interventions that May Prevent or Mollify Supervolcanic Eruptions

2018-01-15T21:46:27.407Z · score: 20 (20 votes)

How you can save expected lives for $0.20-$400 each and reduce X risk

2017-11-27T02:23:44.742Z · score: 23 (27 votes)

Should we be spending no less on alternate foods than AI now?

2017-10-29T23:28:39.440Z · score: 31 (33 votes)

Futures of altruism special issue?

2016-12-18T19:16:02.114Z · score: 1 (3 votes)

Saving expected lives at $10 apiece?

2016-12-14T15:38:38.561Z · score: 15 (15 votes)

Advice Wanted on Expanding an EA Project

2016-04-23T23:20:02.455Z · score: 4 (4 votes)

Essay Competition on Preparation for Global Food Catastrophes

2016-03-17T01:49:12.131Z · score: 6 (6 votes)

Investment opportunity for the risk neutral

2016-01-25T15:29:48.579Z · score: -1 (11 votes)

My Cause Selection: Dave Denkenberger

2015-08-16T15:06:25.456Z · score: 6 (6 votes)