Posts

Request for feedback on sample blurbs for the EA fantasy novel I wrote 2022-10-02T09:02:11.707Z
I wrote a fantasy novel promoting Effective Altruism: More Chapters 2022-09-16T09:46:19.374Z
I’ve written a Fantasy Novel to Promote Effective Altruism 2022-09-12T12:03:44.409Z
I'm writing a novel to promote EA 2022-04-02T08:49:04.819Z
Request for review of a donation appeal that will be seen by several thousand normal people 2022-02-05T19:29:07.196Z
The Phil Torres essay in Aeon attacking Longtermism might be good 2021-10-21T00:32:55.949Z
When can Writing Fiction Change the World? 2020-08-24T13:53:24.347Z
timunderwood's Shortform 2020-08-07T13:48:00.149Z

Comments

Comment by timunderwood on The great energy descent (short version) - An important thing EA might have missed · 2022-10-03T16:34:02.977Z · EA · GW

"So let’s imagine an EROI [for solar panels] of 2:1. That would mean that, to simplify, half of our society's resources go toward producing energy. Let's say this means that, roughly, 50% of people are working in the energy sector (directly or indirectly), which is already huge."

I'll probably finish reading/skimming your longer document in a bit, but there is a clear mistake in this sentence, and I think if you consider it for long enough, you will realize it severely, and perhaps fatally, undercuts the entire argument you are making.

If solar panels had an EROI of 2 to 1, and all our energy came from solar panels, you would need to make two solar panels for every one that you are using for net energy generation. This doubles the cost of using solar panels relative to what it would be with an infinite EROI, which doubles the amount of resources (excluding returns and costs of scale effects) required to make enough solar panels to run our civilization. So if with an infinite EROI you needed to make, say, 1 quadrillion kilowatt hours worth of solar panels to run your civilization, now you need to make 2 quadrillion kilowatt hours worth, since half of them will be used up in the process of making the rest.

The point is, this says nothing about whether 50 percent of society's resources are being used to make these solar panels, or 1 percent, because that depends on how hard it is to make solar panels.

If it is very easy to make and deploy solar panels, any EROI above 1 is fine, while if it is extremely expensive and hard to make them, we can't transition even if the EROI is infinite.
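
A minimal sketch of that arithmetic (the function name and demand figure are my own illustrative choices, not the post's): gross generation must exceed net demand by a factor of EROI/(EROI - 1).

```python
# Sketch of the EROI arithmetic above (illustrative numbers, mine not the post's).
def gross_energy_needed(net_demand_kwh, eroi):
    """Gross generation required so that net_demand_kwh remains after
    paying the energy cost of building the generators themselves."""
    return net_demand_kwh * eroi / (eroi - 1)

net_demand = 1e15  # the comment's example: 1 quadrillion kWh of net demand
print(gross_energy_needed(net_demand, eroi=2))   # 2e15: twice the panels
print(gross_energy_needed(net_demand, eroi=10))  # ~1.11e15: only ~11% more
```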

Comment by timunderwood on Why doesn't WWOTF mention the Bronze Age Collapse? · 2022-09-19T14:30:53.817Z · EA · GW

First, a quick note: an event that spans much of the Mediterranean region is still local.

Second, while I of course wasn't the author, I wouldn't have talked about it much if I were, because we simply don't know enough about it for it to make sense to draw any weight-bearing conclusions from the details of the Bronze Age collapse.

Comment by timunderwood on On Elitism in EA · 2022-07-24T21:15:22.823Z · EA · GW

So what, concretely and scalably, can people who don't need help because they have lots of resources (and thus are actually capable of helping) do to figure out what the people who need help actually need, that the EA community is not doing?

Comment by timunderwood on Arguments for Why Preventing Human Extinction is Wrong · 2022-06-09T12:51:05.187Z · EA · GW

Shrug, and Hitler was a vegetarian.

That an attitude is similar to Putin's attitude is not an argument for it being wrong -- though I suppose it is a sort of decent argument for it being dangerous.

I mean, Putin is obviously right (to any consistent consequentialist) that there are things worth killing, destroying, wrecking, and even torturing to protect. My disagreement with him is about both what those things are and whether this violence actually achieves them.

I find the criticism of longtermism that it can potentially motivate horrifying behavior very compelling. I just don't think the critics are offering an alternative way to act in cases where the stakes really are that high. Though I agree with the epistemic criticism that a) when you think the stakes are that high, you are particularly likely to be wrong about something, and b) you are also particularly likely to be focusing on a bad set of methods for achieving your goals.

Comment by timunderwood on Unflattering reasons why I'm attracted to EA · 2022-06-09T12:41:47.216Z · EA · GW

Ummmm, so we say we want to do good, but we actually want to make friends and get laid, so we figure out ways to 'do good' that lead to lots of hanging out with interesting people, and chances to demonstrate how cool we are to them. Often these ways of 'doing good' don't actually benefit anyone who isn't part of the community.

This is at least the worry, which I think is a separate problem from Goodharting. I.e. when the CEA provides money to fly someone from the US to an EAGx conference in Europe, I don't think there is any metric that is trying to be maximized, but rather just a vague sense that this might, something something, lead to the person becoming effective and then lots of impact.

Now it could interact with Goodharting in a case where, for example, community organizers get funds and status primarily based on numbers of people attending events, when what actually matters is finding the right people, and having the right sorts of events.

Comment by timunderwood on Arguments for Why Preventing Human Extinction is Wrong · 2022-06-06T13:01:46.312Z · EA · GW

We just have our own values. There isn't some magic god thingie in the sky telling us what we should value. There also isn't some sort of logical basis that describes the one true morality. Morality only exists as a matter of conscious beings, their thoughts, and their feelings.

So really: I do not care in the slightest if you can come up with a great argument for human extinction.

If that is what the greatest good is, I will try to kill and destroy those who pursue the 'greatest good'. I value my family, I value my own life. I value the existence of beings like me, extended as far and as widely as I possibly can, and I will try to fight anyone who does anything that might endanger that -- and trying to even contemplate this question strikes me as the sort of thing that falls so far outside my Overton window that the only excuse for not censoring it is that censorship doesn't work.

Comment by timunderwood on Unflattering reasons why I'm attracted to EA · 2022-06-06T09:06:27.219Z · EA · GW

It's all good -- what matters is whether we make a (the biggest possible) positive difference in the world, not how the motivational system decided to pick this as a goal.

I do think that it is important for the EA community/system/whatever it is to successfully point the stuff that is done for making friends and feeling high status towards stuff that actually makes the biggest possible difference.

Comment by timunderwood on Fiction Writing Retreat: Ink in the Abbey · 2022-05-28T09:43:57.222Z · EA · GW

Lol, it's consistently readable. If you expect more, you need to widen your reading horizons.

Comment by timunderwood on Fiction Writing Retreat: Ink in the Abbey · 2022-05-27T14:08:18.644Z · EA · GW

Sure, I agree with you that the prose is passable, readable and fairly solid, but definitely not flashy, literary, or anything special (though I think it reaches a somewhat higher level by the middle, but the prose never is what is important or fun about HPMOR).

I personally never had the delusion that pretty prose was particularly important (if anything I go too far in the other direction), but yeah, it is a mistake that people make. 

You definitely do not need to write a poem in prose to have a great deal of impact with your writing. 

Comment by timunderwood on There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe · 2022-05-27T14:01:40.616Z · EA · GW

About sapioseparatism:

I suppose this is naturally what I'll want to push back hardest on, since it is the part that is telling me that there is something wrong with my core identity, assumptions about the world, and way I think. Of course, that implies it is likely to be tied up with your core emotions, ways of thinking about the world, identity, and assumptions -- and hence it is much more difficult for any productive conversation to happen (and less likely for the conversation, even if it becomes productive, to change anyone's mind).


So a core utilitarian (which is not identical to EA) idea is that if something is bad, it has to be bad 'for' someone -- and that except in exceptional cases, that badness for that someone will show up in their stream of subjective experiences. 

Now certainly mosquitoes, fish, elephants, and small rodents living in Malawi are all someones whose subjective wellbeing should have some weight in our moral calculations. But I suspect that I'm wired in a particular way such that I could never care very much about anything that happens to 'nature' without it affecting anybody's subjective experiences. This probably goes back to intuitions that cannot be argued with, though possibly they can be modified through prompting examples, social pressure, or by shifting the salience of other considerations and feelings.

At the very least, to the extent that biodiversity (as opposed to individual animals) and nature (as opposed, again, to individual animals) are viewed as important, I'd like to see a greater amount of argument for why this is important for me, or for the EA community generally, to care about.

Now, I personally would prefer a green earth full of trees but nothing with brains to a completely dead planet, and I'd prefer more weird species of animals to every ecological niche being filled with the same type of animal. But this isn't a very strong preference compared to my preference for a long happy human future -- and it is a preference which is not at all prompted by my core utilitarian value system.

===

A comment on insecticide-treated bed nets:

It seems like impregnating bed nets with insecticide is the exact opposite of indiscriminate use of insecticide (ie spraying just about everywhere with it), and as a result I would be very surprised if the quantity is enough to cause substantial ecosystem effects.

===

On environmental impact assessment:

Obviously the numbers should be run -- at least to the extent that it is not prohibitively expensive to do the study. Research, calculations, checking additional fringe possibilities, etc. are not free, and should only be done if it seems like there is a reasonable chance they will tell us that we were making a mistake. However, for the environmental damages from nets being used for fishing, from burning them, from the insecticide messing with children's hormones, etc., it seems like it would be fairly easy to get a decent guess at how big the effect is, at a cost that is reasonable in the context of a program that has so far distributed 400 million dollars worth of nets.

However, based on my priors, I would be fairly surprised if any of these numbers changed the basic conclusion that this is a cheap way to improve the wellbeing of currently living human beings, and that it has a vanishingly small chance of contributing to a plastics-driven extinction event caused by fertility collapse.

I suppose my question here is: to what extent are you actually thinking about these issues as something where that whole set of concerns might in actual fact be irrelevant, and to what extent would you resist having your view on the importance of environmental concerns be changed by mechanics-level explanations for why a particular bad outcome is unlikely, or by numerical assessments of costs and benefits?

You seem to be saying that environmental concerns have a high chance of convincing us to stop giving out bednets, which will lead to some children dying -- this is the alternative. While changing house designs to discourage mosquitoes sounds like a very good additional idea, I would be shocked if it can be done at a cost of 1 dollar per year per room, like bed nets can be.

Resources are always limited.

So in that context, it is really important that the good thing we win by stopping giving out bednets be just as big and awesome a win as stopping children from dying miserably from malaria. Perhaps that bar can be met -- some of your concerns (extinction risks, widespread neurological damage, etc.), if they are real, might be worth letting children die to avoid. But those are the stakes that we need to pay attention to.

Comment by timunderwood on Fiction Writing Retreat: Ink in the Abbey · 2022-05-25T10:12:18.052Z · EA · GW

It is a common misconception that because a piece of fiction was bad for the particular individual writing, or is low status, or is missing some desired marker of 'goodness', it therefore is not 'good'.

There doesn't seem to be any commonly agreed upon definition of what 'good' means in the context of fiction -- so I think it is better to focus on whether it is good for particular individuals, where you can just ask the people if they find the text good.

So while HPMOR is not good for Arjun, it is extremely good for a lot of other text-individual pairings. 

Also, if by 'not that good' you mean 'easy to duplicate', as someone who would very much like to write something that is as powerful, compelling, interesting, emotionally satisfying, multilayered and inspiring as HPMOR, it is not in the slightest easy to write something like it. 

Comment by timunderwood on Yglesias on EA and politics · 2022-05-24T14:34:14.525Z · EA · GW

It's definitely not just longtermism - and at least before SBF's money started becoming a huge thing, there was still an order of magnitude more money going to children in Africa than to anything else. For that matter, I'm mostly longtermist mentally, and most of what I do (partly because of inertia, partly because it's easier to promote) is saving-children-in-Africa style donations.

Comment by timunderwood on Some unfun lessons I learned as a junior grantmaker · 2022-05-24T10:30:45.262Z · EA · GW

Also 'no, because my intuitions say this is likely to be low impact', and 'other'.

But I agree that those four options would be useful -- maybe even despite the risk that the person immediately decides to try arguing with the grant maker about how his proposal really is in fact likely to be high impact, beneficial rather than harmful, and totally not confusing, and that the proposal definitely shouldn't be othered.

Comment by timunderwood on Some potential lessons from Carrick’s Congressional bid · 2022-05-23T09:12:47.693Z · EA · GW

I don't think we have to accede to that at all - it's not like it's useful for our goals anyway. What probably happened is that SBF's money hired consultants, and they just did their job, without any supervision aimed at pushing better epistemics. A reputation for never going negative in a misleading way might be a political advantage, if you can make it credible.

Comment by timunderwood on Early spending research and Carrick Flynn · 2022-05-20T13:05:57.573Z · EA · GW

"The following is a backhanded, unfair, insult to write in the immediate days after, but to be show one critique[1]: it reads like the associated account manager (the Google ads sales person whose bonus or promotion depends on volume) got carried away, or someone looked at conventional spending levers and “turned up the knob” to very high levels, out of band[2]."


That sounds about right to me as a description of what happened -- I mean, I think it was definitely worth trying (with the main downside being that the particular way SBF tried possibly crowded out other more effective ways of trying, but mistakes are how you learn), but yeah -- it is, if nothing else, well known that you can't use money to brute-force election results.

I do think the approach of trying to get good local branding is a good idea, though OTOH, we also don't want it to turn into donating lots of money to comparatively low-value local projects -- if for no other reason than that it would dilute the brand.

Comment by timunderwood on EAG & EAGx lodging considerations · 2022-05-09T12:57:01.300Z · EA · GW

Yeah, I confirmed directly that the refugees weren't what was driving the Prague problem (though maybe on the margin they helped to make it so bad), since last weekend and the weekend following the conference had normal Eastern European prices.

Comment by timunderwood on 'Beneficentrism', by Richard Yetter Chappell · 2022-05-09T12:47:39.791Z · EA · GW

I like the idea, though I think it's funny that we go from "It'd be helpful to have a snappy name for this view" to another opaque and easily confused made-up philosophical term. Maybe 'Helping-other-peopleism'.

Comment by timunderwood on The Phil Torres essay in Aeon attacking Longtermism might be good · 2022-05-05T10:41:51.186Z · EA · GW

Ummm, I think for me it is believing that for any fixed number of people with really good lives, there is some sufficiently large number of people with lives that are barely worth living that is preferable.

Comment by timunderwood on EAG & EAGx lodging considerations · 2022-05-05T10:39:14.726Z · EA · GW

I'm wondering if this was prompted by all of the hotels and hostels in Prague being bizarrely packed on the exact weekend of EAGx this year. I could not figure out, though, just what is happening in Prague to cause this; fortunately I have a relative who lives in Prague whose apartment I can crash in.

Comment by timunderwood on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:48:17.446Z · EA · GW

Maybe. I mean, I've been thinking about this a lot lately in the context of Phil Torres's argument about messianic tendencies in longtermism, and I think he's basically right that it can push people towards ideas that don't have any guard rails.

A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on earth.

That, after all, is what shutting up and multiplying tells you -- so the idea that longtermism makes luddite solutions to X-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short termist might feel about them, seems right to me.

Of course there is also the other direction: if there was a 1-in-1-trillion chance that activating this AI would kill us all, and a (1 minus 1-in-1-trillion) chance it would be awesome, but if you wait a hundred years you can have an AI that has only a 1-in-1-quadrillion chance of killing us all, a short termist pulls the switch, while the longtermist waits.
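
A toy sketch of the first comparison (the utility magnitudes here are my own invented placeholders, not numbers from the thread):

```python
# Toy expected-value comparison; utility magnitudes are invented placeholders.
LIGHTCONE = 1e30   # assumed utility of a glorious transhuman future
EARTH_5BN = 1e15   # assumed utility of surviving 5 billion years on Earth

gamble = 0.01 * LIGHTCONE  # 1% chance of the glorious future, 99% extinction
safe = 1.00 * EARTH_5BN    # certainty of the long Earth-bound future
print(gamble > safe)       # True: "shutting up and multiplying" takes the gamble
```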


Also, of course, model error: any estimate that actually uses numbers like '1/1 trillion' for the chance that something even slightly interesting will happen in the real world is a nonsense, bad calculation.

Comment by timunderwood on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:42:21.130Z · EA · GW

My feeling is that it went a bit like this: people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of Less Wrongers came over and convinced (a lot of) them that 'hey, going extinct is an even bigger deal', but the name still stuck, because names are sticky things.

Comment by timunderwood on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:37:21.774Z · EA · GW

Hmmmm, that is weird in a way, but also, as someone who has been talking with new EAs semi-frequently over the last year, my intuition is that they often will not think about things the way I expect them to.

Comment by timunderwood on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:36:12.585Z · EA · GW

Based on my memory of how people thought while I was growing up in the church, I don't think increasing the number of saveable souls is something that makes sense for a Christian -- or really within any sort of longtermist utilitarian framework at all.

Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.

Comment by timunderwood on Where is the Social Justice in EA? · 2022-04-05T16:21:50.935Z · EA · GW

Summoning a benevolent AI god to remake the world for good is the real systemic change.

No, but seriously, I think a lot of the people who care about making processes that make the future good in important ways are actually focused on AI.


Comment by timunderwood on Where is the Social Justice in EA? · 2022-04-05T16:18:46.551Z · EA · GW

A very nitpicky comment, but maybe it does point towards something about something: "What if every person in low-income countries were cash-transferred one year's wage?"

There is a lot of money in the EA space, but at most 5 percent of the sort of money that would be required for doing that. A quick Google of 'how many people live in low income countries' tells me there are 700 million people in countries with a per capita income below roughly 1000 USD a year, so your suggestion would come with a 700 billion dollar bill. No individual, including Elon Musk or Jeff Bezos, has more than a quarter of that amount of money, and while very rich, the big EA funders are nowhere near that rich. Also, of course, GiveDirectly is actually giving people in low income countries the equivalent of a year's wage to let them figure out what they want to do with the money -- though they are operating on a small enough scale that it is affordable within the funding constraints of the community.
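
Spelling out that back-of-the-envelope bill (both figures are the rough Google estimates from the paragraph above):

```python
# The paragraph's arithmetic, spelled out (rough Google figures from above).
people = 700_000_000    # people living in countries under ~$1000/year per capita
one_years_wage = 1_000  # USD, using that per-capita income as the annual wage
print(f"${people * one_years_wage:,}")  # $700,000,000,000, i.e. a $700bn bill
```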

I don't know; the on-topic thing that I would maybe say is that it is important to have a variety of people working in the community, people with a range of skills and experiences (i.e. we want to have some people who have an intuitive feel for big economic numbers and how they relate to each other -- but it is not at all important for everyone, or even most people, to have that awareness). But at the same time, not everyone is in a place to be part of the analytic, research-oriented part of the EA community, and I simply don't think that decision making will become better at achieving the values I care about if the decision making process is spread out.

(But of course the counterpoint, which is true, is that decision makers who ignore the voices of the people they are claiming to help often do more harm than good, and usually end up maximizing something that they, rather than the people being helped, care about.)

Also, and I'm not sure how relevant this is, but I think it is likely that part of the reason why X-risk is the area of the community that is closest to being fully funded is that it is the cause area that people can care about for purely selfish reasons -- i.e. spending enough on X-risk reduction is more of a coordination problem than an altruism problem.

Comment by timunderwood on Introductory video on safeguarding the long-term future · 2022-04-02T08:42:15.569Z · EA · GW

The main thing, I think, is to keep trying lots of different things (probably even if something is working really well relative to expectations). The big fact about trying to get traction with a popular audience is that you simply cannot tell ahead of time what is good.

Comment by timunderwood on AI Risk is like Terminator; Stop Saying it's Not · 2022-04-02T08:39:11.498Z · EA · GW

I don't think the technical context is the only, or even the most important, context where AI risk mitigation can happen. My interpretation of Yudkowsky's gloom view is that it is mainly a sociological problem (i.e. someone else will do the cool, super profitable thing if the first company/research group hesitates) rather than a fundamentally technical problem (i.e. that it would be impossible to figure out how to do it safely even if everyone involved moved super slowly).

Comment by timunderwood on Announcing Impact Island: A New EA Reality TV Show · 2022-04-01T12:19:16.042Z · EA · GW

"Being on an island, Impact Island is naturally a safer location in case of a large scale pandemic. In addition, as part of the program, we plan to host talI ks and discussions about the most creative and deadly potential bioweapons and biological information hazards on live TV, helping to raise awareness of this very important cause area."

I am really looking forward to those episodes.

Comment by timunderwood on AI Risk is like Terminator; Stop Saying it's Not · 2022-03-09T14:12:41.671Z · EA · GW

Perhaps this is a bit tangential to the essay, but we ought to make an effort to actually test the assumptions underlying different public relations strategies. Perhaps the EA community ought to either build relations with marketing companies that work on focus-grouping ideas, or develop its own expertise in this area, to test out the relative success of various public-facing strategies (always keeping in mind that having just one public-facing strategy is a really bad idea, because there is more than one type of person in the 'public').

Comment by timunderwood on Introductory video on safeguarding the long-term future · 2022-03-09T14:06:25.748Z · EA · GW

This is nice, but I feel like it is trying to have good production values so that normal people will be impressed, while not justifying caring about the septillions of future humans in a way that will actually appeal to normal people. Perhaps stick that sort of number, and the distant future as an issue, at the back of the video rather than at the front. I really like that this was produced, though, and it seems to me that working on this sort of project is potentially really important and valuable. But the group doing it should be looking for ways to get feedback from people outside of the community (maybe recruiting through some sort of survey website, reddit, facebook groups, whatever), testing metrics, and systematically experimenting with other styles of videos and rhetoric -- while at the same time, of course, keeping in mind that the goal is to make videos that convince people to act for the sake of the long term future, and that making videos that people actually watch and listen to is only useful to the extent that it actually leads them to help the long term future.

But a good job.

Comment by timunderwood on Request for review of a donation appeal that will be seen by several thousand normal people · 2022-02-11T21:01:39.339Z · EA · GW

"I have received many heartwarming emails from my readers who tell me they are also choosing to be part of making this world a better, safer and healthier place for everyone. "

Thanks, I particularly like this line.

Comment by timunderwood on Request for review of a donation appeal that will be seen by several thousand normal people · 2022-02-07T13:22:40.373Z · EA · GW

I think if it leads to a shift in altruistic spending away from local charities, or indeed away from 95% of international charities, towards DWB, I don't see that as a bad outcome -- but the goal is more to increase total altruistic giving.

What were the assumptions that were challenged about DWB for you?

Comment by timunderwood on Request for review of a donation appeal that will be seen by several thousand normal people · 2022-02-07T13:19:54.655Z · EA · GW

I think I'll add a line with a link to both OFTW and GWWC, and I've also removed the $100 and the $5.

"The nice thing here is that you don't need to worry about driving people away with a big pitch (as long as you're nice about it), since they've already bought and finished your book."

I actually got negative reviews on my first two books about the donation appeal, which had more guilt-based / 'let me describe the suffering' arguments, and since then I've systematically tried to make them very positive.

Comment by timunderwood on Modelling Great Power conflict as an existential risk factor · 2022-02-06T12:40:27.214Z · EA · GW

It definitely is possible. And perhaps more than 1 percent, but I don't think I'd put my credence at more than 2-3 percent.

Also, and I think this is a dangerous error, a lot of people confuse the Chinese elites not supporting democracy with them not wanting to create good lives for the average person in their country.

In both the US and China, the average member of the equivalent of congress has a soulless power-seizing machine in their brain, but they also have normal human drives and hopes.

I suppose I just don't think that, with infinite power, Xi would create a world that is less pleasant to live in than Nancy Pelosi would, and he'd probably make a much pleasanter one than Paul Ryan would.

My real point in saying this is that while I'd modestly prefer a well-aligned American democratic AI singleton to a well-aligned communist Chinese one, both are pretty good outcomes in my view relative to an unaligned singleton, and we basically should do nothing that increases the odds of an aligned American singleton relative to a Chinese one at the cost of increasing the odds of an unaligned singleton relative to an aligned one.

Comment by timunderwood on Modelling Great Power conflict as an existential risk factor · 2022-02-05T21:52:58.367Z · EA · GW

Even if the person/group who controls the drone security forces can never be forcibly pushed out of power from below, that doesn't mean that there won't be value drift over long periods of time.

I don't know if a system that stops all relevant value drift amongst its elites forever is actually more than 1 percent likely.

Also, my possibly irrelevant flamethrower comment is that China today really looks pretty good in terms of values and ways of life on a scale that includes extinction and S-risks. I don't think the current Chinese system (as of the end of 2021) being locked in everywhere, forever, would qualify in any sense as an existential risk, or as destroying most value (though that would be worse, from the POV of my values, than the outcomes I actually want).

Comment by timunderwood on Modelling Great Power conflict as an existential risk factor · 2022-02-05T21:47:45.196Z · EA · GW

This isn't really a reply to the article, but where are you making the little causal diagrams with the arrows? I suddenly have a desire to use little tools like that to think about my own problems.

Comment by timunderwood on What's your prior probability that "good things are good" (for the long-term future)? · 2022-02-05T19:49:52.447Z · EA · GW

I agree with you that 'good now' gives us in general no reason to think it increases P(Utopia), and I'm hoping someone who disagrees with you replies.

As a possible example that may or may not have reduced P(Utopia): I have a pet theory, which may be totally wrong, that the Black Death, by making capital far more valuable in Europe for a century and a half, was an important part of triggering the shifts that put Europe clearly ahead of the rest of the world on the tech tree leading to industrialization by 1500 (claiming that Europe was clearly ahead by 1500 is also a disputed claim).

Assuming we think an earlier industrialization is a bigger good thing than the badness of the Black Death, then the Black Death was a good thing under this model.

Which line of thinking is how I learned to be highly skeptical of 'good now' = 'good in the distant future'.

Comment by timunderwood on The Fermi Paradox has not been dissolved · 2022-01-31T09:59:20.960Z · EA · GW

"First, the approach of multiplying many parameter intervals with an upper bound at one, but no corresponding lower bound, predisposes the resulting distribution of the number of alien civilisations to exhibit a very long negative tail, which drives the reported result."

I sort of thought this was the logical structure underlying why the paradox was dissolved -- specifically, that given what we know, it is totally plausible that one of the factors has a really, really low value.

There only is a paradox if we can confidently lower-bound all of the parameters in the equation. But if, given what we know, there is nothing weird (i.e. the odds of it happening are at least 1/1000) about one of the parameters being sufficiently close to zero to make it likely that there is nothing else in the visible universe, then we should not be surprised that we are living in such a world.

Or, alternatively, the description I once saw of the paper: if god throws dice a bunch of times in creating the universe, it isn't surprising that one of the rolls came up one.

What would actually resurrect the paradox is if we could create lower bounds for more of the parameters, rather than simply pointing out that there isn't very good evidence that the probability is really, really low for any given one of them -- which of course there isn't.
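
A minimal Monte Carlo sketch of this point; the parameter ranges below are my own made-up illustrative stand-ins, not the paper's fitted distributions. When each factor is only known to within orders of magnitude, the product picks up a long tail toward zero.

```python
# Monte Carlo sketch: multiply Drake-equation factors sampled log-uniformly
# over made-up, merely illustrative ranges, and see how often fewer than one
# civilization results.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def log_uniform(lo, hi, size):
    """Sample uniformly in log10 space between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

R_star = log_uniform(1, 100, n)  # star formation rate per year
f_p = log_uniform(0.1, 1, n)     # fraction of stars with planets
n_e = log_uniform(0.1, 10, n)    # habitable planets per such star
f_l = log_uniform(1e-30, 1, n)   # chance life arises (hugely uncertain)
f_i = log_uniform(1e-3, 1, n)    # chance life becomes intelligent
f_c = log_uniform(1e-2, 1, n)    # chance of a detectable civilization
L = log_uniform(1e2, 1e8, n)     # detectable lifetime in years

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # civilizations in the galaxy
print(f"P(N < 1) = {np.mean(N < 1):.2f}")     # substantial, despite a huge mean
```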

Comment by timunderwood on FLI launches Worldbuilding Contest with $100,000 in prizes · 2022-01-27T15:08:38.576Z · EA · GW

I think, though, that the purpose of this exercise is understood as being more about characterizing a utopia, and not about trying to explain how to solve alignment in a world where a singularity is in the cards.

Comment by timunderwood on The Bioethicists are (Mostly) Alright · 2022-01-12T17:26:19.827Z · EA · GW

I think creating a system to contradict misunderstandings is the important and difficult question (which I will do nothing to solve at this moment). I read the essay sampling the research papers, so I've known at least since then that actual 'bio-ethicists' are not the group we are talking about. But in my head, angry rants about bioethicists would still sometimes pop up. And certainly the general discourse in the community didn't digest that result.

I'd very much like to see a system that helps us call out these sorts of issues.

An idea I encountered in a different discussion recently that might get at that is encouraging funding groups to fund research into the Devil's advocate case against ideas popular in the community. That would obviously not be sufficient, but it could be a good step in the correct direction.

Comment by timunderwood on What are the bad EA memes? How could we reframe them? · 2021-11-16T15:09:31.644Z · EA · GW

Avoid catastrophic industrial/research accidents?

Comment by timunderwood on Samuel Shadrach's Shortform · 2021-10-21T08:29:11.084Z · EA · GW

Assuming that some people respond to these memetic tools by reducing the amount of children they have more than other people do, the next generation of the population will have an increased proportion of people who ignore these memetic tools. And then amongst that group, those who are most inclined to have larger numbers of children will be the biggest part of the following generation, and so on.

The current pattern of low fertility due to cultural reasons seems to me to be very unlikely to be a stable pattern. Note: There are people who think it can be stable, and even if I'm right that it is intrinsically unstable, there might be ways to plan out the population decline to make it stable without the substantial use of harsh coercive measures.

But really, fewer people being a really, really bad thing is the core of my value structure, and promoting any sort of anti-natalism is something I'd only do if I was convinced there was no other path to the hoped-for good things.

Comment by timunderwood on Samuel Shadrach's Shortform · 2021-10-20T23:39:44.753Z · EA · GW

The really big con is that people are awesome, and 1/70th of the people is way, way less awesome than the current number of people. Far, far fewer people reading fan fiction, falling in love, watching sports, creating weird contests, arguing with each other, etc. is a really, really big loss.

Assuming it could be done, and that it would be an efficient way (in utility loss/gain terms) to improve coordination, I think it probably goes way too slowly to be relevant to the current risks from rapid technological change. It seems semi-tractable, but in the long run I think you'd end up with the population evolving resistance to any memetic tools used to encourage population decline.

Comment by timunderwood on What Makes Outreach to Progressives Hard · 2021-03-23T10:20:32.684Z · EA · GW

I feel like trying to be charitable here is missing the point.

It mostly is Moloch operating inside of the brains of people who are unaware that Moloch is a thing, so in a Hansonian sense they end up adopting lots of positions that pretend to be about helping the world, but are actually about jockeying for status position in their peer groups.

EA people also obviously are doing this, but the community is somewhat consciously trying to create an incentive dynamic where we get good status and belonging feelings from conspicuously burning resources in ways that are designed to do the most good for people distant in either time or space.

Comment by timunderwood on What Makes Outreach to Progressives Hard · 2021-03-23T10:10:59.175Z · EA · GW

Possibly the solution should be to not try to integrate everything you are interested in.

By analogy, both sex and cheesecake are good, but it is not troubling that for most people there isn't much overlap between sex and cheesecake. EA isn't trying to be a political movement; it is trying to be something else, and I don't think this is a problem.

Comment by timunderwood on What Makes Outreach to Progressives Hard · 2021-03-23T09:58:33.435Z · EA · GW

I think the survey is fairly strong evidence that EA has a comparative advantage in terms of recruiting left and center left people, and should lean into that.

The other side, though, is that the numbers show that there are a lot of libertarians (around 8 percent), and more 'center left' people who responded to the survey than 'left' people. There are substantial parts of SJ politics that are extremely disliked amongst most libertarians and lots of 'center left' people. So while it might be okay from a recruiting and community stability POV to not really pay attention to right wing ideas, it is likely essential for avoiding community breakdown to maintain the current situation where this isn't a politicized space vis-à-vis left v. center left arguments.

Probably the ideal approach is some sort of marketing segmentation, where the people in Yale or Harvard EA communities use a different recruiting pitch and message, one that emphasizes the way EA fulfills the broader aim of attacking global oppression, inequity, and systemic issues, while people who are talking to Silicon Valley inspired earn-to-give tech bros keep the current messages that seem to strongly resonate with them.

More succinctly:  Scott Alexander shouldn't change what he's saying, but a guy trying to convince Yale Law students to join up shouldn't sound exactly like Scott.

Epistemologically, this suggests we should spend more time engaging with the ideas of people who identify as being on the right, since this is very likely to be a bigger blindspot than ideas popular with people who are 'left wing'.

Comment by timunderwood on Want to alleviate developing world poverty? Alleviate price risk.​ (2018) · 2021-03-23T09:10:51.153Z · EA · GW

I feel like this would end up like microloans: interesting, inspiring, and useful for some people, but from the POV of solving the systemic issue, a dead end. The obvious question being: why doesn't this already exist? And the answer presumably being that it cannot be done profitably.

Still, it is the sort of thing where, if someone who has the skills and resources to do so directly tries to set up specific systems like this, their efforts have a very high probability of being way more useful than anything else they could do.

Comment by timunderwood on When can Writing Fiction Change the World? · 2020-08-27T11:54:41.146Z · EA · GW

Thanks for the links, which definitely include things I wish I'd managed to find earlier. Also I loved the special containment procedures framing of the story objects.

I wonder if there is any information on whether very many people's minds actually are changed by The Ones Who Walk Away from Omelas. My experience of reading it was very much what I claimed is the standard response of people exposed to fiction they already strongly disagree with: not getting convinced. I did think about it a bunch, and I realized that I have this weird non-utilitarian argument inside my head for why it is legitimate to subject someone to that sort of suffering, whether or not they volunteer, 'for the greater good'. But on the whole I thought the same after reading the story as before.

Comment by timunderwood on The EA Meta Fund is now the EA Infrastructure Fund · 2020-08-21T21:20:28.448Z · EA · GW

Okay, I suppose that's vaguely legit. They are in broadly the same space. And also the new name is definitely better.

Comment by timunderwood on timunderwood's Shortform · 2020-08-07T13:48:14.521Z · EA · GW

Does anyone know about research on the influence of fiction on changing elite/public behaviors and opinions?

The context of the question is that I'm a self published novelist, and I've decided that I want to focus the half of my time that I'm focusing on less commercial projects on writing books that might be directly useful in EA terms, probably by making certain ideas about AI more widely known. I at some point decided it might be a good idea to learn more about examples of literature actually making an important difference beyond the examples that immediately came to my mind -- which were Uncle Tom's Cabin, Atlas Shrugged, Methods of Rationality and the way the LGBTQ movement probably gained a lot of its present acceptance through fictional representation.

I've found some stuff through academia.edu searches (like this journal article describing the results of a survey of readers of climate change fiction), but it seems like there is a good chance that the community might be able to point me in useful directions that I won't quickly find on my own.