Comment by denkenberger on Does climate change deserve more attention within EA? · 2019-04-19T16:29:51.309Z · score: 3 (2 votes) · EA · GW

I think the UK government study I referred to was saying that even the current ~1% chance per year of a 10% global food production shortfall is increased by the climate change we have had so far. So the delta due to slow climate change could be larger. In general, I think it is a good idea for EA to have something to say to people who are interested in climate change. But conventional emissions reductions, which would cost ~$10 trillion in present value, are generally not very effective. However, I think there could be neglected opportunities that are effective. I talked about one such opportunity, targeting neglected energy efficiency policies that reduce CO2 emissions and save money, in that same 80,000 Hours interview (near the end). Carl Shulman notes that OPP has found some leveraged opportunities for climate change. Also, FLI and CSER do some work on climate change. The food system resilience work ALLFED does can also be valuable for abrupt regional climate change, in which a region can cool by 10°C in a decade - something that has happened several times in the ice core record.

Comment by denkenberger on Do we have any recommendations for financial advisers for earning to give? · 2019-04-14T17:18:10.485Z · score: 2 (1 votes) · EA · GW

Thomas Denkenberger (disclosure: my brother) is a financial advisor and an EA. He has been working with me over the last 10 years to use rationality to exploit irrationalities in the market and enable greater donations. As many EAs have pointed out, one should be risk neutral and maximize returns when it comes to investing to give (though of course he can customize if you are risk averse).

Comment by denkenberger on Who is working on finding "Cause X"? · 2019-04-13T06:48:20.282Z · score: 8 (12 votes) · EA · GW

I think alternate foods for catastrophes like nuclear winter are a cause X (disclosure: I am a co-founder of ALLFED).

Comment by denkenberger on On AI and Compute · 2019-04-13T02:47:24.447Z · score: 2 (1 votes) · EA · GW

Nice post! I don't think we should assume that AI Moore's law would be capped by the regular Moore's law of total compute. If a new application of processors makes it worthwhile to pay a lot of money for a huge number of them, I think we would build more chip fabs to keep up with demand. This does not necessarily accelerate the original Moore's law (transistors per chip), but it would accelerate total compute. This would be consistent with Robin Hanson's vision of value (and, I believe, approximately compute) doubling roughly every month. It's not even clear to me that the chips would be more expensive in such a scenario, assuming we actually planned well for it, because of learning effects: cost per unit generally decreases with cumulative production.
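A minimal sketch of that experience-curve (Wright's law) point, assuming an illustrative 20% cost reduction per doubling of cumulative production (the learning rate and costs are placeholders, not figures from the comment):

```python
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.20):
    """Wright's law: each doubling of cumulative production cuts unit cost by learning_rate."""
    b = math.log2(1 - learning_rate)  # experience-curve exponent (negative)
    return first_unit_cost * cumulative_units ** b

# If a compute boom multiplies cumulative chip production by 10x,
# the cost per chip falls even though total spending rises:
print(unit_cost(1e9, 100.0))   # cost per unit after 1 billion cumulative units
print(unit_cost(1e10, 100.0))  # after 10 billion cumulative units, roughly half as much
```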

Comment by denkenberger on Announcing EA Hub 2.0 · 2019-04-11T06:36:04.524Z · score: 12 (6 votes) · EA · GW

I'm glad it's back up - I find it useful for connecting with EAs when I travel. Do you have a record of number of people registered over time? Could you please add the Alliance to Feed the Earth in Disasters (ALLFED) to the list of Organisational Affiliations?

Comment by denkenberger on Salary Negotiation for Earning to Give · 2019-04-10T22:38:13.392Z · score: 2 (1 votes) · EA · GW

Agreed.

Comment by denkenberger on Salary Negotiation for Earning to Give · 2019-04-10T22:37:49.509Z · score: 2 (1 votes) · EA · GW

Good point!

Comment by denkenberger on Salary Negotiation for Earning to Give · 2019-04-09T23:36:00.068Z · score: 2 (3 votes) · EA · GW

Wow - that is a lot of risk. One would need at least an 11% increase in compensation for negotiation to make sense, even if one were risk neutral. And most people are risk averse, meaning they would need an even bigger payoff, which might not be realistic.

Edit: this is not correct - thanks for people pointing that out!

Comment by denkenberger on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T06:42:43.142Z · score: 7 (5 votes) · EA · GW

And why so much focus on math rather than science/engineering?

Comment by denkenberger on Salary Negotiation for Earning to Give · 2019-04-07T21:31:05.917Z · score: 4 (2 votes) · EA · GW

Right - I'm saying the 2% rescission rate for professor positions is high because of competition (employer has more power), so the rescission rate outside of academia would likely be lower (employee has more power, at least now).

Comment by denkenberger on Salary Negotiation for Earning to Give · 2019-04-06T04:41:47.305Z · score: 1 (2 votes) · EA · GW

For professor negotiations, this says that about 2% of the time you try to bargain, the offer is rescinded. But because university positions are so competitive, I would not be surprised if the general rescission rate were significantly lower than this.

Comment by denkenberger on I Am The 1% · 2019-03-16T05:13:25.381Z · score: 3 (2 votes) · EA · GW

If you would like to talk with more EAs, there is (was?) a program where you would get the contact info of a new EA every month and then you could set up a phone/VOIP call with them. I didn't find the program easily online when I just searched, but I'm pretty sure someone else on the forum would have the info.

Comment by denkenberger on Why I made a career switch · 2019-03-15T02:48:30.189Z · score: 4 (2 votes) · EA · GW

Wow – then you will have 3 PhDs? Did you consider trying to get into academia with one of your existing PhDs and pivoting towards high impact research? My impression is that a lot of EAs are switching careers, but typically at a much earlier stage. When I tried to find data on how often people switch careers, I found that the US government doesn't have a definition of a career switch and therefore has no data. I think it would be very interesting to map out a spectrum of career changes. Some people consider it a career change to stay in the same field but switch, e.g., from academia to industry. Some see it as a career change to go from mechanical engineering to civil engineering. Most people do not end up in the same field as their college major (or even close to it), so by that standard you could say most college-educated people switch careers.

Comment by denkenberger on -0.16 ROI for Weight Loss Interventions (Optimistic) · 2019-03-14T05:23:31.296Z · score: 3 (2 votes) · EA · GW

Interesting analysis. You mean a Benefit to Cost Ratio (BCR) of 0.84. Then the Return On Investment (ROI) would be negative.

Comment by denkenberger on Radicalism, Pragmatism, and Rationality · 2019-03-03T16:58:13.171Z · score: 4 (2 votes) · EA · GW

Perhaps another way of saying this is that EA should have BHAGs (Big Hairy Audacious Goals). I think Feeding Everyone No Matter What qualifies, though I might be biased.

Comment by denkenberger on On the (In)Applicability of Corporate Rights Cases to Digital Minds · 2019-03-03T07:24:30.581Z · score: 3 (2 votes) · EA · GW

"digital minds as such are not mere extensions or associations of entities already bearing rights"

Unless they were ems?

Comment by denkenberger on Rodents farmed for pet snake food · 2019-02-22T07:40:37.690Z · score: 3 (2 votes) · EA · GW

I would think the protein substitutes would be a lot cheaper than farmed mice - is that correct? If so, it seems like they could substitute for much of the market outside the "spectacle of eating live animals" niche.

Comment by denkenberger on David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop) · 2019-02-19T18:27:03.024Z · score: 9 (4 votes) · EA · GW

Thanks very much to CEA for doing the recording, editing, and transcription! Also thanks to CEA for the EA grant that has supported some of this work. Mitigating the impact of catastrophes that would disrupt electricity/industry, such as a solar storm, a high-altitude electromagnetic pulse, or a narrow-AI computer virus, is a parallel effort within the Alliance to Feed the Earth in Disasters (ALLFED) that I did not get a chance to talk about in my 80,000 Hours interview. The Guesstimate model I referred to in the workshop can be found here (blank to avoid anchoring). Also, the three papers involving the loss of electricity/industry are feeding everyone with the loss of industry, providing nonfood needs with the loss of industry, and feeding everyone losing industry and half of sun. We are still working on the paper on the cost-effectiveness, from the long-term future perspective, of preparing for these catastrophes, so input is welcome.

David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop)

2019-02-19T15:58:01.214Z · score: 17 (7 votes)
Comment by denkenberger on The Need for and Viability of an Effective Altruism Academy · 2019-02-18T03:19:54.871Z · score: 2 (1 votes) · EA · GW

There is also the EA MOOC. There does not appear to be a counter - does anyone know how many completions of this course there have been?

Comment by denkenberger on Reflections on doing good with lump sums - the retired person's dilemma · 2019-02-12T07:38:14.483Z · score: 2 (1 votes) · EA · GW

You can always do more than the Giving What We Can minimum of 10%, but it is true it is aimed at pre-retirement income. Bolder Giving encourages committing 50% of lump sums or income, so this might be more appropriate for you. It does not require effectiveness, though there are a number of EAs on the site.

Comment by denkenberger on Near-term focus, robustness, and flow-through effects · 2019-02-12T04:48:52.656Z · score: 2 (1 votes) · EA · GW

I should have said develop safe AI or colonize the galaxy, because I think either one would dramatically reduce the base rate of existential risk. The way I think about the value of nuclear war mitigation being affected by AI timelines is that if AI comes soon, there are fewer years that we are actually threatened by nuclear war. This is one reason I only looked out about 20 years for my cost-effectiveness analysis for alternate foods versus AI. I think these risks could be correlated, because one mechanism of far future impact of nuclear war is worse values ending up in AI (if nuclear war does not collapse civilization).

Comment by denkenberger on Near-term focus, robustness, and flow-through effects · 2019-02-08T04:36:50.661Z · score: 4 (5 votes) · EA · GW

I think the argument was written up formally on the forum, but I'm not finding it. I think it goes like this: if the chance of existential risk is 0.1%/year, the expected duration of humanity is 1,000 years. If you decrease the risk to 0.05%/year, the expected duration is 2,000 years, so you have only added a millennium. However, if you get safe AI and colonize the galaxy, you might get billions of years. But I would argue that if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value to mitigating nuclear war.
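A minimal sketch of that expected-duration arithmetic (constant annual risk with a geometric survival model; the risk figures are the ones above):

```python
def expected_duration_years(annual_risk):
    """Expected survival time (years) under a constant annual existential risk.

    With a geometric survival model, E[duration] = 1 / annual_risk.
    """
    return 1.0 / annual_risk

print(expected_duration_years(0.001))   # 0.1%/year  -> 1,000 years
print(expected_duration_years(0.0005))  # 0.05%/year -> 2,000 years
# Halving the annual risk only adds ~1,000 expected years, versus the billions
# of years potentially gained if the risk drops to ~zero after safe AI and
# galactic colonization - which is the structure of the argument above.
```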

Comment by denkenberger on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-30T06:31:28.724Z · score: 4 (3 votes) · EA · GW

The issue is that there are many sources of uncertainty in nuclear winter. When I developed a probabilistic model taking all these sources into account, I did get a median impact of a 2-3°C temperature reduction (though I was also giving significant probability weight to industrial and counterforce strikes). However, I still got a ~20% probability of the collapse of agriculture.
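A minimal sketch of the general shape of such a Monte Carlo model - sampling each uncertain factor and combining them - with placeholder distributions that are not the inputs of the actual model referred to above:

```python
import random

def sample_cooling_celsius():
    # One Monte Carlo draw of global cooling after a nuclear exchange.
    # These factors and distributions are placeholders for illustration only.
    soot_tg = random.lognormvariate(4.0, 1.0)       # soot emitted (Tg), placeholder
    fraction_lofted = random.uniform(0.2, 0.8)      # fraction reaching the stratosphere
    cooling_per_tg = random.uniform(0.05, 0.15)     # deg C of cooling per lofted Tg
    return soot_tg * fraction_lofted * cooling_per_tg

draws = sorted(sample_cooling_celsius() for _ in range(100_000))
median_cooling = draws[len(draws) // 2]
p_severe = sum(d > 8 for d in draws) / len(draws)   # placeholder "collapse of agriculture" threshold
print(median_cooling, p_severe)
```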

Comment by denkenberger on Vocational Career Guide for Effective Altruists · 2019-01-29T06:34:12.523Z · score: 2 (3 votes) · EA · GW

Accountants typically require a 4 year degree; vocational is generally 2 year degree or less.

Comment by denkenberger on Introducing Sparrow: a user-friendly app to simplify effective giving · 2019-01-18T07:21:35.752Z · score: 3 (2 votes) · EA · GW

Can it help enable Giving Tuesday matching despite many small donations throughout the year?

Comment by denkenberger on Climate Change Is, In General, Not An Existential Risk · 2019-01-17T01:34:11.367Z · score: 7 (5 votes) · EA · GW

I generally agree. The question is whether we should call something an X-risk based on its impact alone if it happens, or based on impact times probability. If the latter, and if comets are an X-risk, then we should call extreme climate change (and definitely nuclear war) an X-risk.

Comment by denkenberger on Climate Change Is, In General, Not An Existential Risk · 2019-01-16T04:32:17.176Z · score: 4 (3 votes) · EA · GW

I think it is useful to discuss what qualifies as an X-risk. Asteroid/comet impact is widely regarded as an X-risk, but a big one that could cause human extinction might only have a one in a million probability in the next 100 years. This is a 0.0001% expected reduction in humanity's long-term value. However, if you believe 80,000 Hours that nuclear war might have a ~3% chance in the next 100 years and that it could reduce the long-term potential of humanity by ~30%, that is a ~1% expected reduction in the future of humanity this century. So practically speaking, it is much more of an X-risk than asteroids are. Similarly, if you believe 80k that extreme climate change has a ~3% chance in the next 100 years and that it reduces long-run potential by ~20%, that is a 0.6% expected reduction in the long-term future of humanity - again much larger than for asteroids. I personally think the nuclear risk is higher and the climate risk is lower than these numbers. It is true that some of the long-term impact could be classified as trajectory changes rather than traditional X-risk, but I think most people are interested in trajectory changes as well.
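A minimal sketch of that expected-loss arithmetic, using the numbers quoted above (probability this century times fraction of long-term potential lost):

```python
def expected_loss(prob_this_century, fraction_of_potential_lost):
    # Expected fractional reduction in humanity's long-term potential this century.
    return prob_this_century * fraction_of_potential_lost

print(expected_loss(1e-6, 1.0))   # extinction-level asteroid/comet: 0.000001 = 0.0001%
print(expected_loss(0.03, 0.3))   # nuclear war (80,000 Hours-style numbers): 0.009 ~= 1%
print(expected_loss(0.03, 0.2))   # extreme climate change: 0.006 = 0.6%
```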

Comment by denkenberger on What are ways to get more biologists into EA? · 2019-01-16T02:47:08.687Z · score: 4 (3 votes) · EA · GW

I'm not sure if this is answering the intent of the question, but one could refer undergrad/grad biologists to the biology part of Effective Thesis.

Comment by denkenberger on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T07:14:52.207Z · score: 7 (3 votes) · EA · GW

But remember, X-risk is not just extinction - there are many routes to long term future impacts from nuclear war - some are mentioned here.

Comment by denkenberger on The Global Priorities of the Copenhagen Consensus · 2019-01-12T07:46:04.841Z · score: 6 (5 votes) · EA · GW

I think EAs should look more into reducing trade barriers, both because of the global poverty benefits, but also because I think countries are less likely to go to (nuclear) war if they are economically dependent on each other.

Comment by denkenberger on An integrated model to evaluate the impact of animal products · 2019-01-12T07:40:21.465Z · score: 2 (1 votes) · EA · GW

The climate impact of cattle feed could be reduced if cattle ate agricultural residues (as they used to, and still often do in less developed countries). I don't think that grass-fed beef is really better, because conventional cattle are grass fed for part of their lives, so having some cattle completely grass fed means the remainder would be a smaller percent grass fed. It also looks like a little bit of seaweed in the feed reduces the methane from cattle.

Comment by denkenberger on Why I'm focusing on invertebrate sentience · 2019-01-06T21:21:33.747Z · score: 2 (1 votes) · EA · GW

Thanks! However, neurons in smaller organisms tend to be smaller. So I think the total brain mass of humans would be similar to that of the land arthropods and the nematodes. Fish are larger organisms, so it does look like the total brain mass of fish would be significantly larger than that of humans. There is the question of whether a larger neuron could provide more value or disvalue than a smaller neuron. If it provides the same, then neuron count would be the relevant number.

Comment by denkenberger on Keeping Absolutes in Mind · 2019-01-06T04:44:16.784Z · score: 2 (1 votes) · EA · GW

Another way of guarding against being demoralized is comparing one's absolute impact to that of people outside of EA. For instance, you could take your metric of impact, be it saving lives, improving human welfare, reducing animal suffering, or improving the long-term future, and compare the effectiveness of your donation to the average donation. With the median EA donation of $740, if you thought it were 100 times more effective than the average donation, this would correspond roughly to the typical donation of someone at the 99.9th percentile of income in the US. And if you thought it were 10,000 times more effective, you could compete with billionaires!
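A minimal sketch of that comparison; only the $740 median EA donation and the 100x / 10,000x multipliers come from the text, and the reading as an "equivalent average-effectiveness donation" is spelled out for illustration:

```python
median_ea_donation = 740  # USD

for multiplier in (100, 10_000):
    equivalent = median_ea_donation * multiplier
    print(f"{multiplier:>6}x as effective ~= a ${equivalent:,} average-effectiveness donation")
# 100x    -> ~$74,000    (roughly a typical donation at the 99.9th income percentile)
# 10,000x -> ~$7,400,000 (billionaire-scale giving)
```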

Comment by denkenberger on EA Hotel Fundraiser 1: the story · 2018-12-29T00:39:35.249Z · score: 9 (4 votes) · EA · GW

Perhaps for some, but I think most people working on X-risk are primarily altruistically motivated. And for them, it is more important to stay alive in a catastrophe so they can help more. A less extreme version of this is living outside of metros to reduce the chance of being killed in a nuclear war.

Comment by denkenberger on EA Hotel Fundraiser 1: the story · 2018-12-28T08:00:08.476Z · score: 6 (7 votes) · EA · GW

What about an EA hotel in Australia/New Zealand? Safer from nuclear war and pandemics...

Comment by denkenberger on EA Survey 2018 Series: Donation Data · 2018-12-28T00:26:20.978Z · score: 5 (4 votes) · EA · GW

Nice work! One more way of teasing out the origin of relatively low donations is asking about net worth. A person may be a full-time nonstudent and have a good salary, but still have college debt and therefore be hesitant to donate a lot.

Comment by denkenberger on Detecting Morally Significant Pain in Nonhumans: Some Philosophical Difficulties · 2018-12-26T09:19:36.801Z · score: 2 (1 votes) · EA · GW

Very nice piece! So how many plants are there in the world? And why did you choose plants rather than motile bacteria? I would guess the bacteria would be a better candidate, given that they are more numerous and given the questionable assertion that the sea squirt eats its brain when it stops swimming around.

Comment by denkenberger on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-20T17:55:21.270Z · score: 7 (6 votes) · EA · GW

I think if EA can expand more among conservatives, it could dramatically increase its giving capability and influence. I'm curious about your assertion that no religious EA is prioritizing converting people. When I talked to a former evangelical, he said that if you truly believe people are going to hell if they do not believe in Christianity, then you have a burning desire to convert them - it is your top priority. Please note that there are multiple repeat posts.

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-20T17:13:03.125Z · score: 2 (1 votes) · EA · GW

I'm not a biologist, but the point is that you can start with a tiny amount of material and still scale up to large quantities extremely quickly with short doubling times. As for competition, there are many ways in which human-designed technology can exceed (and has exceeded) the capabilities of natural biological organisms. These include better materials, not being constrained by evolution, not being constrained by having the organism function while it is being built, etc. As for the large end, good point about the availability of uranium. But the superintelligence could design many highly transmissible and lethal viruses and hold the world hostage that way, or think of much more effective ways than we can. The point is that we cannot dismiss the possibility that the superintelligence could take over the world very quickly.

Comment by denkenberger on Discussion: What are good legal entity structures for new EA groups? · 2018-12-20T06:33:30.312Z · score: 2 (1 votes) · EA · GW

There are charities that specialize in sponsoring projects to give them tax deductibility, one being the Social and Environmental Entrepreneurs.

Comment by denkenberger on College and Earning to Give · 2018-12-20T05:16:56.758Z · score: 6 (3 votes) · EA · GW

I would love to see a paper that actually breaks out quantitatively how all the reasons college has gotten so much more expensive contribute, but I have not seen one yet. In general, we expect something that is labor dominated to increase faster than inflation, because salaries increase faster than inflation (roughly tracking per capita economic growth). But college costs have increased faster than per capita income. You mention more administrators. Also, there are more services, like career services. You point out that tuition has increased faster than the amount actually paid, because there is more price discrimination. There is also reduced state funding for state colleges (I believe in absolute terms, and definitely as a percentage). There may be a reduction in mean class sizes - there has been for primary and secondary schools. There is more technology use inside and outside the classroom - overall I think this has increased efficiency, but it would still probably show up as an increase in tuition. On the cost-control side, a much greater proportion of classes are taught by low-paid adjuncts. Also, terms have gotten shorter. Another factor that can control costs is endowments: Princeton has actually reduced its tuition recently because its endowment is so big.

Comment by denkenberger on Why I'm focusing on invertebrate sentience · 2018-12-19T22:41:42.567Z · score: 3 (2 votes) · EA · GW

My prior here is brain size weighting for suffering, which means insects in aggregate have similar importance to humans currently. But I would guess they would be less tractable than humans (though obviously far more neglected). So I think if there were compelling evidence that we should be weighting insects 5% as much as humans, that would be an enormous update and would make invertebrates the dominant consideration in the near future.

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-19T08:04:35.558Z · score: 2 (1 votes) · EA · GW

Let's say they only mail you as much protein as one full human genome. Then the self-replicating nanotech it builds could consume biomass around it and concentrate uranium (there is a lot in the ocean, for example). Then, since I believe the ideal doubling time is around 100 seconds, it would take about 2 hours to get 1 million intercontinental ballistic missiles. That is probably optimistic, but I think days is reasonable - no lawyers required.
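A minimal sketch of that doubling arithmetic; the 100-second doubling time is the figure from the comment, while the starting mass (a few picograms, roughly a human genome's worth of material) and the ~50-tonne per-missile mass are illustrative assumptions:

```python
import math

start_mass_kg = 3e-15              # ~a few picograms of seed material (assumption)
missile_mass_kg = 50_000           # ~50 tonnes per ICBM (assumption)
target_mass_kg = 1_000_000 * missile_mass_kg
doubling_time_s = 100              # ideal doubling time cited above

doublings = math.log2(target_mass_kg / start_mass_kg)
hours = doublings * doubling_time_s / 3600
print(f"{doublings:.0f} doublings, about {hours:.1f} hours")  # ~84 doublings, ~2.3 hours
```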

Comment by denkenberger on Why You Should Invest In Upgrading Democracy And Give To The Center For Election Science · 2018-12-19T07:22:52.223Z · score: 6 (4 votes) · EA · GW

I think it would be very helpful to do an explicit cost effectiveness model for approval voting. Of course there would be a lot of uncertainty, but not necessarily more than AI or alternate foods. It could be for the present generation (like this or this for alternate foods) or for the long term future, like this for alternate foods and AI. Then we would have at least some quantitative way of comparing.

Comment by denkenberger on College and Earning to Give · 2018-12-18T18:10:38.366Z · score: 2 (1 votes) · EA · GW

I agree it is good to think about this early! The 529 can still be better than general taxable savings because of a tax deduction and then tax-free growth. The problem is limited investment options. However, the Coverdell has more freedom of investment, so I'm doing that. Depending on the college, faculty or even staff can have their kids attend at reduced or free tuition, which could be an option for some people. That is a creative idea to do direct work while kids are in college. If one does have significant savings by then (including retirement savings, because as you point out the penalty for withdrawal is not that large), one could even take time off from paid work. This is facilitated by being able to take out an interest-free loan on the part one does not have to pay right away while the kids are in college. Or if one is doing direct work while the kids are in college, one could just continue the direct work afterwards. Since parents are expected to pay about 5% of taxable assets per year while kids are in college, I believe that means one would need to pay about 30% over six years. As you point out, there is also uncertainty about whether it would even be required (for example, if MOOCs take over). But you can still think of this (and the interest-free loan) as an additional reason, beyond avoiding taxes, to donate more money now instead of accumulating it in a taxable account. (Disclaimer: I am not a financial advisor nor a tax professional.)
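A minimal sketch of that 5%-per-year arithmetic; whether each year's 5% is assessed on the remaining balance is an assumption for illustration, not something stated above:

```python
rate = 0.05      # expected parental contribution: ~5% of taxable assets per year
years = 6

remaining = 1.0
for _ in range(years):
    remaining *= 1 - rate          # assume 5% is assessed on what is left each year

print(f"~{1 - remaining:.0%} of taxable assets over {years} years")
# ~26% compounded, or a flat 5% x 6 = 30% if not compounded - roughly the ~30% above
```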

Comment by denkenberger on Lessons Learned from a Prospective Alternative Meat Startup Team · 2018-12-18T08:16:13.238Z · score: 4 (3 votes) · EA · GW

Nice writeup. One big advantage of fungus-based meat substitutes is that they can grow on waste products (lowering cost and environmental impact). This is typically done for mushrooms, but not Quorn. Does anyone know why?

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-17T07:26:11.728Z · score: 2 (1 votes) · EA · GW

Some possibilities for rapid gain in thinking speed/intelligence are here.

Comment by denkenberger on Critique of Superintelligence Part 2 · 2018-12-15T08:23:25.214Z · score: 7 (3 votes) · EA · GW

Regarding intelligence quickly turning into world domination, Yudkowsky paints this scenario, and points out that a superhuman intelligence should be able to think of much better and faster ways:

"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA, synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology."

Comment by denkenberger on Existential risk as common cause · 2018-12-13T06:55:22.042Z · score: 2 (1 votes) · EA · GW

I haven't read much deep ecology either. Seth Baum has written that some people think there is intrinsic value in functioning ecosystems - presumably these people would want the ecosystems to continue as a garden world. Other people value biodiversity (number of species). But you're right that some just want whatever would have happened naturally.

Comment by denkenberger on Why I'm focusing on invertebrate sentience · 2018-12-11T07:34:38.298Z · score: 2 (1 votes) · EA · GW

Reflecting on the mirror test - nice pun!

Alliance to Feed the Earth in Disasters (ALLFED) Progress Report & Giving Tuesday Appeal

2018-11-21T05:20:37.922Z · score: 13 (9 votes)

Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?

2018-11-19T21:57:05.518Z · score: 14 (17 votes)

[Paper] Interventions that May Prevent or Mollify Supervolcanic Eruptions

2018-01-15T21:46:27.407Z · score: 20 (20 votes)

How you can save expected lives for $0.20-$400 each and reduce X risk

2017-11-27T02:23:44.742Z · score: 24 (28 votes)

Should we be spending no less on alternate foods than AI now?

2017-10-29T23:28:39.440Z · score: 31 (33 votes)

Futures of altruism special issue?

2016-12-18T19:16:02.114Z · score: 1 (3 votes)

Saving expected lives at $10 apiece?

2016-12-14T15:38:38.561Z · score: 15 (15 votes)

Advice Wanted on Expanding an EA Project

2016-04-23T23:20:02.455Z · score: 4 (4 votes)

Essay Competition on Preparation for Global Food Catastrophes

2016-03-17T01:49:12.131Z · score: 6 (6 votes)

Investment opportunity for the risk neutral

2016-01-25T15:29:48.579Z · score: -1 (11 votes)

My Cause Selection: Dave Denkenberger

2015-08-16T15:06:25.456Z · score: 6 (6 votes)