Posts

"Intergenerational communication": could something like "writing letters to the future" suscitate interest in long-termism? 2021-11-07T23:35:29.248Z
Content and Reputational Risk: Oxfam Brasil's "Astral Anti-racism" campaign as a cautionary tale 2021-11-05T19:57:59.348Z
On famines, food technologies and global shocks 2021-10-12T14:28:38.049Z
The Cryonics reductio against pure time preference: a rhetorical low-hanging fruit - or "Do we discount the future only because we won't live in it?" 2021-08-03T15:06:59.839Z
The Harsanyi-Rawls debate: political philosophy as decision theory under uncertainty 2021-08-01T14:04:03.949Z
Negative screening and supervising environmental liabilities under IAS 37 2021-03-24T14:25:18.468Z
How does Amazon deforestation actually work? It's not about soy. 2021-01-26T03:06:01.764Z
[Linkpost] The Environment as an Obstacle 2020-08-31T17:15:07.273Z
[Linkpost] The Groundswell 2020-08-31T17:11:07.998Z
What is a pandemic compared to our sewer system? An example of how a society normalizes risks 2020-07-25T14:59:24.093Z
Is there anything like "green bonds" for x-risk mitigation? 2020-06-30T00:33:38.732Z
My amateur method for translations 2020-06-30T00:29:30.043Z
Indifference, racism and violence: what comes after justice for George Floyd? 2020-06-12T01:44:23.358Z
Who should / is going to win the 2020 FLI award? 2020-06-11T19:20:11.364Z
Is rapid diagnostic testing (RDT), such as for coronavirus, a neglected area in Global Health? 2020-03-17T22:24:05.915Z
Ramiro's Shortform 2019-10-17T13:16:14.822Z
Merging with AI would be suicide for the human mind - Susan Schneider 2019-10-03T17:55:07.789Z

Comments

Comment by Ramiro on Mortality, existential risk, and universal basic income · 2021-11-30T18:24:09.466Z · EA · GW

You are right. I rephrased it to avoid this misunderstanding. Thank you very much.

Comment by Ramiro on Mortality, existential risk, and universal basic income · 2021-11-30T16:52:42.307Z · EA · GW

Thanks for this post. I’d like to see more about this in the future. I admit I'm very pro-UBI, so a bit biased.

I don’t quite follow the associations you make with GC risks, though; for me, e.g., it’s unlikely that a global UBI would significantly decrease the risk of unaligned AGI. On the other hand, perhaps with a UBI we wouldn’t have almost 10% of humanity living in extreme poverty (or 37 million people in the US), which might help bring a lot more smart people into those cause areas. Is that consistent with your case?

Also, it’s arguable that inequality and poverty imply a significant decrease in the expected welfare of future generations under uncertain growth (so justifying a lower SDR). Sometimes, I think some EAs underestimate the problem of inequality (how it affects social stability and welfare measures) and the importance of having effective redistributive policies. Do you think this makes sense?

Comment by Ramiro on Liberty in North Korea, quick cost-effectiveness estimate · 2021-11-30T15:45:38.631Z · EA · GW

Thanks for this, and thanks Michael for the post.
This made me think we should perhaps have an overall evaluation of the cause area "helping refugees migrate" from different countries in crisis (e.g., South Sudan, Afghanistan, Haiti, NK, etc.) and of the corresponding projects in receiving countries - such as comparing LINK and GiveDirectly's cash transfers to refugees in Uganda.

I think that, for some countries, a useful proxy would be average life expectancy, and maybe HDI differences (though HDI is not a cardinal measure, I think differences could help assess differences in life chances - especially if one can adjust for inequality). I made a rough personal evaluation about helping a specific Haitian family this way. However, I don't think this would extrapolate well to other countries (where HDI data is unreliable) or to people fleeing persecution (where the counterfactual is not the average life, but death); plus, in some cases (like NK and Afghanistan), it would be interesting to take political factors into account (point 4), but I have no idea how to even begin to quantify this.

Comment by Ramiro on Liberty in North Korea, quick cost-effectiveness estimate · 2021-11-30T15:23:04.488Z · EA · GW

In other words, putting very rough guesses on the utility of each scenario:

  • Middle class in South Korea: 10
  • Muzak and potatoes: 0
  • Political dissident in North Korea: -100

I tend to agree that helping NK refugees prevents suffering, and that we should really have some back-of-the-envelope calculation to measure it. (Usually, when I assess the value of helping a refugee, I consider HDI differences between countries as a proxy for the increase in wellbeing; but we can't do this for NK because we can't rely on what they publish - and even if we could, I don't think it would work as a proxy for welfare in a totalitarian state.)

But I don't know if you considered how this could extrapolate to population ethics. Your conclusion that NK lives are net negative (and that the absolute value of their welfare is 10x greater than that of a SK life) seems to imply that killing (or letting die, if you have deontic objections) NK people is a net good - and that letting 1 NK citizen die produces 10x more welfare than saving a SK life. Or that moving 1 NK citizen to SK produces about .55x the welfare of letting 2 NK citizens die.
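To make the arithmetic behind those ratios explicit, here is a minimal sketch using the rough utilities quoted above (the numbers are the post's guesses; treating death as utility 0 is the implicit assumption doing the work):

    # Rough utilities from the post's guesses
    U_SK = 10              # middle class in South Korea
    U_NK_DISSIDENT = -100  # political dissident in North Korea
    U_DEATH = 0            # implicit assumption: death means no further welfare

    # Letting one NK dissident die: welfare goes from -100 to 0
    gain_nk_death = U_DEATH - U_NK_DISSIDENT   # +100
    # Saving one SK life preserves a welfare of 10
    gain_sk_save = U_SK - U_DEATH              # +10
    print(gain_nk_death / gain_sk_save)        # 10.0 -> the "10x" claim

    # Moving one NK dissident to SK vs. letting two NK dissidents die
    gain_move = U_SK - U_NK_DISSIDENT          # +110
    print(gain_move / (2 * gain_nk_death))     # 0.55 -> the ".55x" claim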

I believe NK people would likely disagree with this conclusion, even if they were not being coerced to do so.

I understand your argument is very speculative, but my overall take is that perhaps we should be extra careful when we apply negative cardinal utility measures to people - and that perhaps our own personal utility functions may not extrapolate very well to moral evaluations of the welfare of others.

Comment by Ramiro on Introducing Shrimp Welfare Project · 2021-11-30T12:58:57.345Z · EA · GW

Thanks for the post.

A confession: the first time I read about shrimp welfare in CE's reports, I rolled my eyes and thought "C'mon, seriously? Weren't bugs enough?" - but I came to change my opinion radically (because of normative uncertainty, and because cutting shrimps' eyes is evil, even if they are basically delicious bug monsters). I still use this example in a hideous "trap-joke" when talking to EAs - I start by saying that I don't always agree with some of the cause areas people come up with, like shrimp welfare; then we laugh; then I explain how we torture these poor animals for a fraction of the protein we can get from beans, and how almost nobody was talking about it before CE.

Comment by Ramiro on Kaleem's Shortform · 2021-11-30T12:47:29.795Z · EA · GW

There's an effective environmentalism group focusing on that. The Founders Pledge Climate Fund is another salient example.

Perhaps they should post more here.

Comment by Ramiro on What Small Weird Thing Do You Fund? · 2021-11-28T00:40:49.509Z · EA · GW
  1. Thanks.
  2. I'm considering writing a post to step into the Todd vs. AppliedDivinityStudies fray on small donors. Maybe I'd be willing to do something similar in the future, but... it'd be interesting to discuss it with more people first, perhaps with someone with more experience in funding weird things.
Comment by Ramiro on What Small Weird Thing Do You Fund? · 2021-11-26T20:41:03.059Z · EA · GW

A not-so-weird thing I’m considering funding – except this is not EA at all.

I’ve recently read this piece (in Portuguese – from a very respected magazine) about this Haitian refugee who has a crowdfunding campaign to bring her children to Brazil. I also checked her bio in other media outlets.

She still needs around US$3,000 – roughly what AMF would need to save an additional life, by some calculations[1]. But life expectancy in Haiti is 64y and its HDI is .51 – against 75.8y and an HDI of .74 in Brazil (and both figures are higher still in Porto Alegre, where she lives). Once I take into account the additional welfare of reuniting a family (kids without a mother probably don’t fare well in Haiti[2]), I think that moving her kids would entail no less than 30 additional expected QALYs, which I consider roughly equivalent to what people mean when they say “AMF saves a life for $3k.” Thus, helping this woman seems, by this back-of-the-envelope calculation, as worthy as donating to AMF in the long run.
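For transparency, here is a minimal sketch of that back-of-the-envelope estimate (the life-expectancy and cost figures are the ones quoted above; the number of children is an illustrative assumption of mine, not a figure from the article):

    # Figures quoted above
    LE_HAITI, LE_BRAZIL = 64.0, 75.8  # life expectancy in years
    COST_USD = 3_000                  # remaining crowdfunding gap, ~AMF's cost per life saved
    N_KIDS = 3                        # hypothetical: illustrative number of children

    # Crude expected life-years gained from the life-expectancy gap alone,
    # ignoring quality-of-life adjustments and family-reunification benefits
    extra_years = N_KIDS * (LE_BRAZIL - LE_HAITI)
    print(f"~{extra_years:.0f} expected life-years for US${COST_USD}")  # ~35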

Except that I found many other similar crowdfunding pages (e.g., here, here, here, here…) with similar projects which stalled before reaching 30% of their budgets. What drew my attention to J., instead of the others, is that the magazine made her case salient and confirmed it’s legit - if not for that, I’d be indifferent between helping her or any other Haitian in a similar situation. But it turns out that none of these immigrants will achieve their goals this way: they are competing for scarce resources, but they would be better off if they could coordinate, pool their donations, and establish a procedure (maybe a lottery) to decide who is going to get their kids back.

Donating to J. is not scalable; I’d prefer to help solve this coordination problem. I am still thinking about how. On the other hand, I estimate I spent about three additional hours thinking about this problem – which I wouldn’t have done if I’d just donated to an EA charity.

[1] I am using a very old and not super high quality source, but I am not pretending this is an accurate CBA.

[2] On the other hand, they've already survived early infancy, so this difference in life expectancy shouldn't be that large. But I am not going to compare mortality tables after all this.

Comment by Ramiro on A Red-Team Against the Impact of Small Donations · 2021-11-26T15:40:11.895Z · EA · GW

By the way, sometimes rep risks signal something is just a bad idea.

PR risk: It's not worth funding a sperm bank for nobel-prize winners that might later get you labeled a racist

Or you could just fund a gamete (why just sperm?) bank for very high-IQ / cognitively skilled / successful people - which would be way cheaper and more effective (you could buy the whole embryo if you wanted). Or just fund genetics research and ethical eugenics advocacy, which is way more scalable. Then people could better tell the difference between things that have a bad rep because they are bad ideas and things that have a bad rep because they are associated with bad ideas.

My point: in the best-case scenario, you should be neutral to PR risks, and maybe even see them as a con, rather than as a complement to neglectedness, in your cost-benefit analysis. But that's hard to do when you're looking for weird things by yourself.

Comment by Ramiro on What Small Weird Thing Do You Fund? · 2021-11-26T15:17:57.592Z · EA · GW

Last year, I took part in crowdfunding an intensive-care ventilator for Covid-19 in Brazil. I believe it was a mistake - I'd have done better donating to GD. Of course, hindsight is 20/20, but I learned from this experience that I had underestimated some relevant points:

a) I wanted to feel important;

b) I gave great weight to the fact that EA and rationalist friends (people I usually trust) were doing it, too, but I neglected that we were probably being affected by the same biases;

c) None of us had previous experience in funding similar risky projects. However, we did analyze the team's credentials, and we had someone who understood ventilators and who said that, though the project wasn't as impactful as we first thought, it was likely still worth funding - because funding for research had totally vanished in Brazil.

d) my direct interaction with the team asking for funds probably made me overestimate their case;

e) Everyone was doing similar projects back then. I took it as a sign that it was a good idea. I was so wrong: I didn't realize the context had changed - the area had become way less neglected, it had attracted people whose projects were in other areas or that usually wouldn't be worth funding, and the low-hanging fruit was already being picked by large donors.

My point is that I failed to update my priors. If someone shows up today talking about how they can save thousands of lives in the next pandemic by lowering the costs of a particular medical procedure, they have probably thought about it deeply (possibly passionately) and put some skin in the game; they might be overestimating the general risk, but not so much their ability to deliver the product (before others do). If they show up after the pandemic has started, they are (if not a total maverick) likely someone who used to do something else that is no longer being funded because everyone is focused on the current catastrophe.
In conclusion: though I still think there are impactful "weird things" out there that only I can fund, they are mixed with lots of bad fruit, and I'm rarely particularly skilled at telling the difference - actually, I realize I might be particularly bad at doing so when emotions get involved. I became an EA, and routinely check this Forum, not because I hope someday to be as impactful as Dustin Moskovitz, but because I can share this epistemic burden with others - or just outsource it to an expert I trust.

Comment by Ramiro on A Red-Team Against the Impact of Small Donations · 2021-11-26T14:27:17.585Z · EA · GW

Thanks for the post. It has turned into one of the most interesting discussions on the Forum right now.
However, I'm not convinced by your argument that donor coordination among EAs is particularly hard (what makes it hard is that we might have conflicting goals, such as near-term vs. long-term, or environmentalism vs. wild animal suffering, etc. - and even so, EAs are the only ones talking about things like moral trade).

Actually, I'm particularly suspicious of the recommendation to "fund weird things" - I mean, yeah, I agree you should fund a project that you think has high expected value and is neglected because only you know about it, but... are you sure you paid all the relevant informational costs before reaching this conclusion? I guess I'd prefer to pay some EA org to select which wild things are worth funding.

I'll probably have to write a whole post to deal with this, but my TL;DR is: the Effective Altruism movement / community exists so we can efficiently deal with the informational costs and coordination necessary to do the most good. It isn't a movement created only to convince people they should do the most good (EAs often don't need to be convinced of this, though convincing others sure helps), or so they could feel less lonely doing it (though again, that helps) - I think we need a movement especially because we are still trying to find out what the most good we can do is. It turns out it is more effective to do that in a community of highly skilled and like-minded (up to a point: diversity is an asset, too) people. So when someone says "fund weird things", I want to reply something like "Sure... but how do I do it effectively, instead of just like another normie?"

Of course, I'm afraid someone might accuse me of misunderstanding the case for "funding weird things", but my point is precisely that this advice needs some caveats to prevent such misunderstandings. Though I agree EAs should look for more low-hanging fruit in the wild, they should also think about how, as a group, they could coordinate to make the most of it.

Comment by Ramiro on December 2021 monthly meme post · 2021-11-25T19:48:24.815Z · EA · GW

Perhaps, with effective moderation. If it doesn't work here, that's a good place to go. But I think people would just see it as another Dank EA Memes - instead of something like a "tougher environment to increase memetic fitness".

Comment by Ramiro on Announcing my retirement · 2021-11-25T19:42:28.243Z · EA · GW

Now that I’m leaving, it’s time to be honest — despite the rumors, our karma isn’t the kind that gets you a better afterlife.

That's precisely what you'd say if it were used as a proxy for deserving a better life, but you didn't want people to Goodhart-game it.


Seriously: congratulations on the job done, thank you so much for it, and I'm eager to see what you'll do at EGQ and beyond.

Comment by Ramiro on December 2021 monthly meme post · 2021-11-24T19:16:31.764Z · EA · GW

Oh, I gave the post a double upvote ;)
(But the Kangaroo meme got a downvote, sorry palz)
(The Drake one is meta and cool, but it only makes sense for people who are already tired of seeing the usual Drake meme)

Comment by Ramiro on December 2021 monthly meme post · 2021-11-24T16:58:49.646Z · EA · GW

Sorry to get into aesthetics, but maybe you could change people’s minds if you could show a meme that is as peculiar or poignant as (pseudo-)Hemingway’s "For sale: baby shoes, never worn." This might have an interesting effect on a new reader. Sometimes, comic strips can be like that.

But most memes end up just using a standard graphic format to display a very straightforward, simplified message that is interesting only (and often trivial) to the in-group… they remind me of badges, flags, or slogans, and they soon become repetitive. This is not bad (I like it, indeed - that's why I check the fb group daily), but it requires you to enjoy this particular practice as part of the in-group and to share the corresponding references. And it won’t make you look at things in another way.

Sometimes, though, I think a meme can express an original joke (not just a simple mockery of the out-group), and use the corresponding format to enhance its effect (usually through contrast – like having a very intellectual debate instantiated in the American chopper meme) – but the technique will soon be copied (like having a very intellectual debate in the American chopper meme). And perhaps a meme can be as creative as a good short story, though I can’t recall anything like that right now; that’s a meme that should endure.

Comment by Ramiro on December 2021 monthly meme post · 2021-11-24T16:27:04.653Z · EA · GW

I really like your point, and perhaps, in some sense, we should see this as something akin to "find a meme that Pablo might appreciate" - something that compresses an EA message or discussion without trivializing or perverting it. I think that's quite hard - analogous to trying to produce a work of art using cheap materials and techniques any kid can master. So my answer is that Facebook is where memes are made and reproduce, but we should have another place where they are selected - and perhaps it might be here, precisely because it's a different environment.

Comment by Ramiro on December 2021 monthly meme post · 2021-11-24T16:01:47.571Z · EA · GW

Thanks. I kinda see memes as a very cheap and compressed form of creative writing - a standardized comic strip for mass consumption. That's why I commend this post, and the idea of getting some feedback from people outside the Facebook group. On the other hand, maybe, instead of a monthly selection, it would be cool to have some sort of ranking, or even a contest for the best memes... Something even people who are not meme-addicted could appreciate - a meme for each tribe. Actually, I think it'd be particularly interesting to have feedback from people who, like Pablo, don't really like memes - because if they eventually appreciate a certain meme, that's a strong signal it could spread further. But, TBH, I didn't enjoy this particular selection.

Comment by Ramiro on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T17:22:05.270Z · EA · GW

Plus, section 2 made me wonder if something like this would be feasible: large donors could disclose a list of potential grantees and projects they are considering for funding (and part of their respective analyses), then let small donors provide part of the necessary amount, and then complement the rest of the funding themselves. I mean, this could arguably leverage their donations and maybe establish some information exchange between small and large donors… A bit like a VC, but maybe more widespread.

Comment by Ramiro on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T15:51:12.665Z · EA · GW

Thanks for the post. This has changed my mind a bit...
I'm particularly attracted to the arguments in section 2 and in "Other Benefits..." above. What got me thinking, though, is that your section 3 is sound… I think I (and likely other small donors) miss a more detailed framework for dealing with informational costs.

First, I feel psychologically attracted to the idea that large donors play the “angel investor” or VC role, while small investors are often drawn to "safe portfolios" with lower variance / risk... On the other hand, the analogy shouldn't apply: I don’t measure my returns in philanthropy the same way I do with personal investments, and I think there's no case for something like an EMH in philanthropy, so I could deal with a risky portfolio – that’s why I’m pretty OK with donating to longtermist causes. The real problem is uncertainty: I won't regularly donate to a cause / project that I can’t quite understand, or where it's impossible to learn from or observe improvements, even though it may score high in a preliminary ITN-like CBA. But if there’s someone I can trust vetting it, I can be OK with that.

Now, the case where I might have something like “private information” on the impact of a project - the "support people you know" advice - is the interesting one. A detour: this reminds me of a friend of mine who, instead of using financial markets like everyone else, would provide loans to acquaintances with stable jobs and high incomes, and he made a lot of money with that – since he could sidestep the information asymmetry plaguing banks. But, eventually, a friend defaulted, and then he had some trouble collecting the money… it was no tragedy, but he realized he’d neglected social costs and biases, and that he wasn’t so great at screening… I imagine that, if I wanted to fund a grant to a skilled independent researcher I know, or to a new EA group, I’d be in an analogous situation. Thus, even if I were pretty confident these projects were great and underfunded, I’d still want some sort of professional external opinion vetting them – maybe I'd even want to totally outsource this decision, so as to avoid the social cost of having to discontinue funding if the evidence ends up requiring it. And, of course, this kind of applies to personal projects, too - even if you know better than anyone else what you could do, you could be particularly bad at deciding when to stop.


I think there could be some way to solve / mitigate this issue - maybe having a group of small donors interested in providing advice, or funding each other's "support people you know" projects, so you could have an external opinion on them, dilute and cap risks, and have an excuse to cut the funding... But that's just what popped into my head right now.

Comment by Ramiro on Slightly advanced decision theory 102: Four reasons not to be a (naive) utility maximizer · 2021-11-23T15:06:51.583Z · EA · GW

Thanks for the post (and the code). I got curious about the subject...

I'm not convinced this is how I'd talk about Decision Theory or EA; I miss something about explore vs. exploit and learning costs (which perhaps could steelman your "math argument" about diversifying); and maybe there's just too much in a very short space... But it's amusing, I loved your references, you make a good case for increasing variance (when your losses are capped - i.e., no absorbing states), I'll probably be thinking about it for a while (at least to get some references), and I think it gives interesting insights into the problem of "what to do now that EA is rich / hipster?"

Comment by Ramiro on How a ventilation revolution could help mitigate the impacts of air pollution and airborne pathogens · 2021-11-17T15:50:51.258Z · EA · GW

Awesome!

It should score high in ITN evaluations. It's the sort of neglected near-term cause area that can be well understood by normies, could attract bipartisan support, and so could be tackled with more research, innovation, and political will. And yet there's not much material on it in EA-like sources (except for this 80,000 Hours interview).

... And I didn't even know WHO had finally updated its air pollution limits!

Also, sorry if this is stupid, but it seems that, unlike CO2, risks from many pollutants (like particulate matter and pathogens) could be significantly mitigated by effective dispersion; so even an ordinary fan could have an observable effect on indoor air quality, right? Thus, I wonder if there are / could be any relevant policy recommendations along this line - like for urban design, e.g., "locate potential sources of air pollution by the sea, or spread them through areas where emissions can be dispersed by winds". Does that make any sense?

Comment by Ramiro on Sleep: effective ways to improve it · 2021-11-17T15:33:58.202Z · EA · GW

Thank you so much for this post - great work.
Your discussion of light therapy made me wonder about the effects of outdoor activities on sleep (and maybe other wellbeing dimensions). Any chance you'll analyse this in the future?

Comment by Ramiro on evelynciara's Shortform · 2021-11-07T23:12:10.290Z · EA · GW

That's a pretty cool idea

Comment by Ramiro on EA Forum engagement doubled in the last year · 2021-11-05T19:33:51.894Z · EA · GW

Thanks and Congrats.
I wonder if part of these effects (+engagement, +headcount, +funding) could be temporary - due to covid etc. - instead of a stable trend. Can we rule this hypothesis out?

Comment by Ramiro on What's the role of donations now that the EA movement is richer than ever? · 2021-11-04T13:09:36.700Z · EA · GW

I guess that, for me, donating is still morally better than things like buying wine. Plus, no headache.

Also, with the Patient Philanthropy Fund, I guess it's unlikely that we can have too much funding.

Comment by Ramiro on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-03T16:14:23.423Z · EA · GW

I mostly agree with you, but I think that "raising the sophistication" here might be harder than most people think, and I strongly believe our current media environment (plus social networks) is not conducive to such sophistication.

Plus, I was intrigued by this sentence:

... Elon who is already fairly EA-aligned in his own unique way

I was wondering what made you say this, then I googled it up a bit, and decided to share some references, if anyone else ever needs to justify such a claim:

Elon Musk To Address 'Nerd Altruists' At Google HQ

Dear Elon Musk: Here’s how you should donate your money

Why I Stan Elon Musk

But notice we now apparently agree that Elon Musk is well aware of EA thinking, so I'm not sure there's any additional value in drawing his attention to EA - which makes me even more suspicious about what we could gain from stepping into this "$6 bn" debate.

Comment by Ramiro on Buck's Shortform · 2021-11-02T21:46:43.828Z · EA · GW

...I don't have time to write the full post right now

I'm eager to read the full post, or any expansion on what makes you think that groups should actively discourage newbies from taking the Pledge.

Comment by Ramiro on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-02T21:28:54.379Z · EA · GW

I tend to agree, and I find the world of UHNW individuals quite intriguing (especially because we don't have many reliable stats on them worldwide). But we do have EAs working for orgs that target rich people, like Founders Pledge and Generation Pledge.

Comment by Ramiro on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-02T21:24:27.819Z · EA · GW

Thanks for the post. First, I think it’s important to clarify that the UN didn’t say anything: it was David Beasley, director of the World Food Programme (WFP, a UN agency), who tweeted this “challenge” and was later interviewed about it; second, his original tweet was about US$6.6 bn to prevent 42 million people from starving – only in the interview was he more emphatic, saying “they are literally going to die”. After the CNN interview spread, they changed their original piece, and the website now states that “An earlier version of this story's headline incorrectly stated that the director of the UN's food scarcity organization believes 2% of Elon Musk's wealth could solve world hunger. He believes it could help solve world hunger.” To me, this implies Beasley backed away from what he said / implied in the interview - but not from his original tweet.

My conjecture: maybe what we are seeing is a politician (Beasley is a member of the Republican Party and was Governor of South Carolina) getting lost in the numbers, mistaking something like the amount necessary to lift people above the poverty line (US$1.90 a day) for what would be necessary to prevent (first) starvation and (second) death. Or maybe someone just took what WFP spends per person – and I’m actually surprised they could feed an additional person with only US$0.43 a day (= 6.6 bn / (365 * 42 mi)) – and scaled it up to 42 mi. Either way, it wouldn't justify the bold (counterfactual) claim that people are "literally going to die" – though it would justify straightforward claims such as “we could feed up to an additional 42 million people, or lift them out of extreme poverty”. Even then, I’m quite surprised by that, and would like to see someone analyze their data.
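Spelling out the division behind that per-person figure (using only the numbers quoted from Beasley's tweet):

    budget_usd = 6.6e9  # US$6.6 bn requested
    people = 42e6       # 42 million people at risk of starvation
    per_person_per_day = budget_usd / (people * 365)
    print(f"US${per_person_per_day:.2f} per person per day")  # US$0.43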

However, I tend to disagree that this would be a peculiar opportunity for (most) EA orgs – unless one could do it without becoming one more of Moloch’s tools. I’m afraid this type of news, driven by Twitter debates and suspicious analyses (like those the Institute for Policy Studies and Americans for Tax Fairness published), is precisely what EA usually tries to avoid.

Comment by Ramiro on Make a $100 donation into $200 (or more) · 2021-11-02T00:35:49.285Z · EA · GW

Thanks for the post. I realized that:

  1. you don't actually need to share the info about your donation; you just have to click on the "share" button
  2. you can double your impact with an additional email
  3. that's sorta dangerous, because they don't require confirmation through your mailbox. So you could just use a friend's e-mail to get bonus donations, which I assume is probably neither ethical nor legal.
Comment by Ramiro on Ramiro's Shortform · 2021-10-28T14:33:22.300Z · EA · GW

So it's on!

The Effective Thesis Exceptional Research Award (that's what the website calls it), or High-Potential Award (that's how it shows up on Google), or maybe just the Award (which is apparently what everyone calls it) is open for submissions until Sep 2022.
(I'm pretty sure there's a top-level post coming, but I thought it'd be cool to mention it in shortform right away. Feels like a scoop.)


This award has been established to encourage and recognize promising research by students that has the potential to significantly improve the world.
[...]

Submissions can consist of theses, dissertations, or capstone papers at the undergraduate or graduate level. Other substantive work forming part of a graduation semester may also be considered. To be eligible, submissions must have been produced in the academic year 2021 - 2022 and relate to one or multiple research directions prioritised by Effective Thesis. See the list of research directions below or see here for more information.

Comment by Ramiro on What book(s) would you want a gifted teenager to come across? · 2021-10-27T18:06:12.529Z · EA · GW

If you're still accepting suggestions: Ada Palmer's Terra Ignota series.

Comment by Ramiro on Low-Hanging (Monetary) Fruit for Wealthy EAs · 2021-10-18T19:15:56.411Z · EA · GW

Thanks for the post. I agree EAs should have more slowly diminishing marginal utility for money, since they can never be satiated by it - as you can always help someone else.
On the other hand, I'm not sure you invoke the best examples. First, LTCM collapsed in 1998 (despite being managed by genius economists) and had a bad effect on financial markets; this shows that trying to earn a lot of money entails risks and externalities.
Second, I'm not sure what your source is for this premise:


Ordinary wealthy people don't care as much about getting more money because they already have a lot of it

A possible source is Kahneman & Deaton, but if that's the case, this paper: a) has been criticized by more recent studies, and b) is not focused on very wealthy people, who are a very special class of individuals. Actually, I'd say that people who become really wealthy (by themselves) already tend to have slowly diminishing marginal utility for money - or they wouldn't work so hard to get it.

Comment by Ramiro on On famines, food technologies and global shocks · 2021-10-13T12:39:16.861Z · EA · GW

That's true. It also occurred to me after I posted it here. The Irish population declined steadily from the 1840s (6.5 mi) well into the 1960s (2.8 mi).

Comment by Ramiro on Major UN report discusses existential risk and future generations (summary) · 2021-10-08T15:53:38.840Z · EA · GW

Thanks for the post.
I still think longtermist cause areas are often a bit more neglected than "presentist" causes, but I guess this points to a need to revise ITN assessments accordingly, doesn't it?

Comment by Ramiro on Noticing the skulls, longtermism edition · 2021-10-08T15:49:38.986Z · EA · GW

I'd like to see how this "skull critique" develops now that the UN has adopted a kind of longtermist stance.

Comment by Ramiro on What are some moral catastrophes events in history? · 2021-10-02T15:29:44.448Z · EA · GW

Thanks for sharing this question with us. This is a very interesting idea, and it’s good that someone pursues it.

Plus, my suggestions:

  1. The Better Angels of Our Nature, by Steven Pinker - particularly Ch. 4, on the “Humanitarian Revolution”. This is the Pinker book I enjoyed most; I thought it'd be a bit long when I bought it, but in the end I was complaining it was too short.
  2. Turchin’s Seshat database – the “Global History Databank”. Btw, I guess Turchin’s mathematical approach to history may interest you, if you’re not acquainted with it yet. Besides, I noticed there’s a correlation between some atrocities in White’s book and societal collapses, so perhaps you’d profit from checking Luke Kemp’s research. Also, if that’s what you’re looking for, studying societal collapses may provide insights for S-risk scholars on what makes unrecoverable dystopias unlikely – in the long run, they’re hard to perpetuate, depend on unstable acceptance, and face stark competition.
  3. I second djbinder's tip on White’s book on atrocities: first, because it’s a good read; second, because it helps draw some distinctions (like Lizka did infra) between, e.g., (i) long-standing moral practices (like the slave trade - which I think is the point of the post you cite), (ii) "one-shot black swan" massacres which are (usually) quickly perceived as exceptional moral catastrophes (though White shows they happen more often than you'd realize), and (iii) the ominous death toll caused by the side effects (such as disease and hunger - the Horsemen often ride together) of conflicts, which are usually preventable and neglected. For instance, almost everyone has heard about the Rwandan genocide (there's a Hollywood movie about it), a case of (ii), but few people have heard about the millions of deaths in the Congo wars that followed it - a case of (iii).
Comment by Ramiro on How would you run the Petrov Day game? · 2021-09-27T20:22:47.257Z · EA · GW

Thanks. So your point is that the "hard part" is selecting who's going to receive the codes. It's not an exercise in building trust, but in selecting who is reliable.

Comment by Ramiro on How would you run the Petrov Day game? · 2021-09-27T12:48:21.870Z · EA · GW

For me, Petrov's (and Arkhipov's) legacy, the most important lesson, is that, in real MAD life, there should be no button at all.

Seeing Neel & Habryka's apparent disagreement (the latter seems to think this is pretty hard, while the former thinks that the absence of incentives to press the button makes it too easy), I realize it'd be interesting to have a long discussion, before the next Petrov Day, about what the goal of the ritual is and what we want to achieve with it.

My point: it's cool to practice "not pressing buttons" and to build trust on this, and I agree with Neel that we could make it more challenging... but the real catch here is that, though we can bet the stability of some web pages on some sort of Assurance Game, it's a tremendous tragedy that human welfare has to depend on the willingness of people like Petrov not to press buttons. I think this game should be a reminder of that.

Comment by Ramiro on How would you run the Petrov Day game? · 2021-09-27T02:30:38.751Z · EA · GW
  1. We could have a vote on some of those to receive the codes.
  2. There could be some sort of noise - e.g., the LW and EA Forum websites could have some random moments of instability, so you couldn't be sure that no one had actually pressed the button.

I came to appreciate the idea of a "ritual" where we just practice the "art of not pressing buttons". And this year's edition got my attention because it can be conceived of as an Assurance Game. Even so, right now, there's no reason for someone to strike - except to show that, even in this low-stakes scenario, this art is harder than we usually think. So there's no trust or virtue actually being tested / expressed here - which makes the ritual less relevant than it could be.

Comment by Ramiro on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T22:18:43.285Z · EA · GW

oh crap! I accidentally pressed the button :O I'm super sorry

Comment by Ramiro on [Link post] Sam Scheffler: Conservatism, Temporal Bias, and Future Generations · 2021-09-22T14:21:22.502Z · EA · GW

3) "Practical issues" with utilitarianism vs. "ontological" concerns with value


I can make sense of the notion of something like "a community of rational agents" or "sentient beings", and I can see why I value principles coming from this notion; but I'm not sure what a POVU - a "point of view of the universe" - can mean. This is not an issue about abstraction per se. (I’m sorry, this is gonna be even more confusing than the previous comments, but I believe this very discussion is entangled in too many things, not just my thoughts.)

First, you have some issues concerning decision theory: I don’t know what sort of agent, preferences and judgments figure in the POVU; also, if the universe is infinite, the POVU may result in nihilistic infinite ethics. There are many proposals to avoid these obstacles, though.

I think the overall issue is that, even if you can make sense of POVU, it’s underspecified – and then you have to choose a more “normal” POV to make sense of it (the “abstract communities” I quoted above).

To see how this is different from “practical concerns”, take Singer’s mom example: I can totally understand that he spends more resources on his mother than on starving kids. On the other hand, I could also understand if he acted as a hardcore utilitarian. I'd find it a bit alien, but still rational and certainly not plain wrong; the same if you told me that someone else, in a different society far away from here, 500 years into the past or the future, had let their elders die to save strangers.

Now let’s do some sci-fi: I'd act very differently if you told me that a society had built a Super AI, the God Emoji, to turn their cosmic endowment into something like the "minimal hedonic unit" - see this SMBC strip. Or, to draw from another SMBC strip, if a society had decided to vanish from the Earth to get into a hedonic simulation. I think this would be a tragedy and a waste. (And that Aaron should declare SMBC comics hors concours for the EA Forum creative prize.) However, I'm not sure the world in My little pony: Friendship is optimal, or the hedonist aliens in Three worlds collide, would be equally a waste - even though I don't want any of that for our descendants.

But I don't think even these examples picture something like "the POV of the universe"; I think they try to capture a conception of what the POV of sentient life, or the POV of all rational beings, could be… But these notions are more “parochial” than philosophers usually admit - they still focus on a community of beings doing the evaluation. If that’s the case, though, you could think about some hard constraints on your population axiology – concerning the “minimal status” of the members of the community I (or any other agent in our decision problem) want to belong to. In some sense, the sci-fi examples above are "wrong" to me: I can be in no "community" with the "pleasure structures" of the God Emoji; and I don't think the "community" I'd form with the hedonist aliens would be optimal.

Maybe I'm being biased… but it's hard for me to avoid something like that when I think about what policies and values I'd want for the long-term future (I guess that’s why we would need some sort of Long Reflection). I want our descendants to be very different from me, even in ways I'd find strange, just as Aristotle would likely find my values strange… and yet I think of myself (and them) as sharing a path with him, and I believe he could see it this way, too. So I believe Scheffler has a point here: it’s still me doing a good deal of the valuing. I think it's way less conservative than he thinks, though.

Comment by Ramiro on [Link post] Sam Scheffler: Conservatism, Temporal Bias, and Future Generations · 2021-09-22T13:41:31.695Z · EA · GW

2) Existence, population, persons and moral philosophy

[...] the principles of impartial concern and temporal neutrality are underrated by most people, but overrated by some moral philosophers.

[...] Often, the dispute seems “merely” practical—something like: 

[…] In these cases, it often seems like people are talking past each other, and are in greater agreement than they realise. C.f. Multi-level utilitarianism.

[…] Williams thinks of philosophy as fundamentally about “making sense of being human”, so the metaphysical moral realists' attempt to represent “the world as it is anyway”—to construct a theory of value abstracted from any human perspective—strikes him as misguided.

I agree with these claims. However, I think (and that's more Scheffler's fault than yours) they neglect one of the cores of the debate between utilitarians and almost everyone else: the argument over personal identity and the separateness of persons.

One of the main accusations that Rawls (and other members of the MIND group that gravitated around him: Nozick, Dworkin, Nagel...) throws against utilitarianism is that it violates the separateness of persons. For instance, Dworkin (Ch. 16 of Justice for Hedgehogs) says that utilitarian impartiality expresses equal respect for a commodity (i.e., mental states like pleasure, pain, or preferences), not for persons. B. Williams, who seems to dislike weird thought experiments, uses a "body switch" example to argue for a strong notion of personal identity.

However, Parfit's discussion of personal identity, backed by a straightforward (ontological, even if not epistemic) scientific reductionism, has convinced me that personal identity is an illusion; there's a long philosophical tradition along this line. A funnier (and maybe more persuasive) argument is expressed in Raymond Smullyan’s Is God a Taoist? – which I believe should be mandatory reading for philosophy students.

That being said, I'd add that I believe a rebuttal to Williams's limited relativism would be that we can actually conceive of ourselves as part of a large community of rational agents across generations; we do that every time we partake in intergenerational projects – even with things as mundane as long-term bonds. It’s way easier to think like that today than it was 2000 years ago, when we needed to believe in some eternal afterlife to adopt this stance to, e.g., build cathedrals. We do that whenever we judge our ancestors' decisions - which can be extrapolated to how we want to be judged by future generations. I believe this results, in practice, in a somewhat middle ground between Scheffler's conservative view and an impartial POVU - "point of view of the universe".

I say this because I'm still writing my third comment on why, even though I think personal identity is an illusion, and I'm all with Parfit on the non-identity problem, it's hard for me to make sense of the notion of the POVU. This goes way beyond the "practical / psychological limitations" of utilitarians.

Comment by Ramiro on [Link post] Sam Scheffler: Conservatism, Temporal Bias, and Future Generations · 2021-09-22T13:10:34.589Z · EA · GW

Thanks. This is a great post. I'd like to read (and write) more posts like this - an engaging summary of a long and complex debate. I read The Afterlife and listened to the book Why Worry... I believe your remarks are accurate, and I can't detect anything worth correcting.

But I do have some remarks; I'm gonna post one comment for each one, for the sake of readability:


1) Scheffler on value & acquaintance

2.4 Valuing involves attachment, attachment requires acquaintance, and non-existence makes the relevant form of acquaintance impossible

I'm not sure I follow this; in my interpretation, it's either wrong or useless.

I know there are a lot of people who value, more than anything else, something I believe doesn't exist (e.g., God(s)).

And I sort of value their beliefs and rituals concerning it, even though I'm not acquainted with this sort of value - because I know it's important for them.

Maybe one could say that, at least from my POV, what they really value is not "God itself" but its "very idea". I'd be OK with that, but… if I accept that someone can mostly value something that's totally absent, and if I can value their valuing it, then why can't I also value the welfare of future generations that may value something totally distinct from my personal values?

Thus, I believe this conjunction is not true: "attachment requires acquaintance, and non-existence makes the relevant form of acquaintance impossible".

Perhaps there's a catch here that I'm kinda surprised no one points out in this discussion: it's knowing that something does not exist that prevents attachment - not non-existence per se. I believe this is important, e.g., for "the afterlife conjecture": some philosophers have replied to Scheffler that, given any positive probability p that we'll go extinct, we could not say that our present values depend on the existence of future generations - because we know (so they say) that at some point there'll be none. Call this Alvy Singer's nihilism. I believe this reply is wrong because our situation is analogous to an iterated prisoner's dilemma: all we need for Scheffler's argument to work (along this line, of course - there are other objections) is that each present generation has a high credence that they'll have successors - so they can't use some sort of backward-induction reasoning to conclude that anything they value (that depends on the future) is worthless.
(I would like to see someone analysing this debate as a possible instance of a paradox of backward induction)
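Since this leans on the iterated prisoner's dilemma, here is a minimal sketch of the standard result behind the reply (the payoff values are illustrative; the continuation probability plays the role of each generation's credence that it will have successors):

    # Standard PD payoffs, illustrative values: T > R > P
    T, R, P = 5, 3, 1  # temptation, mutual cooperation, mutual defection

    def cooperation_sustainable(delta):
        """Grim trigger resists a one-shot defection iff R/(1-d) >= T + d*P/(1-d)."""
        return R / (1 - delta) >= T + delta * P / (1 - delta)

    # With a known last round, delta is effectively 0 there, and backward
    # induction unravels cooperation all the way back. With a high enough
    # credence in "there will be a next generation", it doesn't:
    for delta in (0.1, 0.5, 0.9):
        print(delta, cooperation_sustainable(delta))  # False, True, True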

Comment by Ramiro on Inflation · 2021-09-19T22:42:45.792Z · EA · GW

I think it's awesome, but Harrison should get more credit for pointing out the "patient philanthropy" critique. I'd like to see what you could get if you wrote a short story about it.

Comment by Ramiro on EA Forum Creative Writing Contest: Submission thread for work first published elsewhere · 2021-09-19T21:04:26.674Z · EA · GW

I'm not sure this should qualify, but I usually play hawk in a prisoner's dilemma - so I'm gonna post it before someone else does... especially because it'll be worthwhile if even just one more person has the delightful experience of discovering Raymond Smullyan and his mind-boggling metaphysical / moral dialogue Is God a Taoist?
I don't know if this qualifies as fiction or as creative non-fiction. It's your call, Aaron.

Is God a Taoist?

Raymond M. Smullyan, 1977

Mortal:
   And therefore, O God, I pray thee, if thou hast one ounce of mercy for this thy suffering creature, absolve me of having to have free will!

God:
   You reject the greatest gift I have given thee?

Mortal:
   How can you call that which was forced on me a gift? I have free will, but not of my own choice. I have never freely chosen to have free will. I have to have free will, whether I like it or not!

God:
   Why would you wish not to have free will?

Mortal:
   Because free will means moral responsibility, and moral responsibility is more than I can bear!

God:
   Why do you find moral responsibility so unbearable?

Mortal:
   Why? I honestly can't analyze why; all I know is that I do.

God:
   All right, in that case suppose I absolve you from all moral responsibility but leave you still with free will. Will this be satisfactory?

Mortal (after a pause):
   No, I am afraid not.

God:
   Ah, just as I thought! So moral responsibility is not the only aspect of free will to which you object. What else about free will is bothering you?

Mortal:
   With free will I am capable of sinning, and I don't want to sin!

God:
   If you don't want to sin, then why do you?

Mortal:
   Good God! I don't know why I sin, I just do! Evil temptations come along, and try as I can, I cannot resist them.

God:
   If it is really true that you cannot resist them, then you are not sinning of your own free will and hence (at least according to me) not sinning at all.

Mortal:
   No, no! I keep feeling that if only I tried harder I could avoid sinning. I understand that the will is infinite. If one wholeheartedly wills not to sin, then one won't.

God:
   Well now, you should know. Do you try as hard as you can to avoid sinning or don't you?

Mortal:
   I honestly don't know! At the time, I feel I am trying as hard as I can, but in retrospect, I am worried that maybe I didn't!

God:
   So in other words, you don't really know whether or not you have been sinning. So the possibility is open that you haven't been sinning at all!

Mortal:
   Of course this possibility is open, but maybe I have been sinning, and this thought is what so frightens me!

God:
   Why does the thought of your sinning frighten you?

Mortal:
   I don't know why! For one thing, you do have a reputation for meting out rather gruesome punishments in the afterlife!

God:
   Oh, that's what's bothering you! Why didn't you say so in the first place instead of all this peripheral talk about free will and responsibility? Why didn't you simply request me not to punish you for any of your sins?

Mortal:
   I think I am realistic enough to know that you would hardly grant such a request!

God:
   You don't say! You have a realistic knowledge of what requests I will grant, eh? Well, I'll tell you what I'm going to do! I will grant you a very, very special dispensation to sin as much as you like, and I give you my divine word of honor that I will never punish you for it in the least. Agreed?

Mortal (in great terror):
   No, no, don't do that!

God:
   Why not? Don't you trust my divine word?

Mortal:
   Of course I do! But don't you see, I don't want to sin! I have an utter abhorrence of sinning, quite apart from any punishments it may entail.

God:
   In that case, I'll go you one better. I'll remove your abhorrence of sinning. Here is a magic pill! Just swallow it, and you will lose all abhorrence of sinning. You will joyfully and merrily sin away, you will have no regrets, no abhorrence and I still promise you will never be punished by me, or yourself, or by any source whatever. You will be blissful for all eternity. So here is the pill!

Mortal:
   No, no!

God:
   Are you not being irrational? I am even removing your abhorrence of sin, which is your last obstacle.

Mortal:
   I still won't take it!

God:
   Why not?

Mortal:
   I believe that the pill will indeed remove my future abhorrence for sin, but my present abhorrence is enough to prevent me from being willing to take it.

God:
   I command you to take it!

Mortal:
   I refuse!

God:
   What, you refuse of your own free will?

Mortal:
   Yes!

God:
   So it seems that your free will comes in pretty handy, doesn't it?

Mortal:
   I don't understand!

God:
   Are you not glad now that you have the free will to refuse such a ghastly offer? How would you like it if I forced you to take this pill, whether you wanted it or not?

Mortal:
   No, no! Please don't!

God:
   Of course I won't; I'm just trying to illustrate a point. All right, let me put it this way. Instead of forcing you to take the pill, suppose I grant your original prayer of removing your free will -- but with the understanding that the moment you are no longer free, then you will take the pill.

Mortal:
   Once my will is gone, how could I possibly choose to take the pill?

God:
   I did not say you would choose it; I merely said you would take it. You would act, let us say, according to purely deterministic laws which are such that you would as a matter of fact take it.

Mortal:
   I still refuse.

God:
   So you refuse my offer to remove your free will. This is rather different from your original prayer, isn't it?

Mortal:
   Now I see what you are up to. Your argument is ingenious, but I'm not sure it is really correct. There are some points we will have to go over again.

God:
   Certainly.

Mortal:
   There are two things you said which seem contradictory to me. First you said that one cannot sin unless one does so of one's own free will. But then you said you would give me a pill which would deprive me of my own free will, and then I could sin as much as I liked. But if I no longer had free will, then, according to your first statement, how could I be capable of sinning?

God:
   You are confusing two separate parts of our conversation. I never said the pill would deprive you of your free will, but only that it would remove your abhorrence of sinning.

Mortal:
   I'm afraid I'm a bit confused.

God:
   All right, then let us make a fresh start. Suppose I agree to remove your free will, but with the understanding that you will then commit an enormous number of acts which you now regard as sinful. Technically speaking, you will not then be sinning since you will not be doing these acts of your own free will. And these acts will carry no moral responsibility, nor moral culpability, nor any punishment whatsoever. Nevertheless, these acts will all be of the type which you presently regard as sinful; they will all have this quality which you presently feel as abhorrent, but your abhorrence will disappear; so you will not then feel abhorrence toward the acts.

Mortal:
   No, but I have present abhorrence toward the acts, and this present abhorrence is sufficient to prevent me from accepting your proposal.

God:
   Hm! So let me get this absolutely straight. I take it you no longer wish me to remove your free will.

Mortal (reluctantly):
   No, I guess not.

God:
   All right, I agree not to. But I am still not exactly clear as to why you now no longer wish to be rid of your free will. Please tell me again.

Mortal:
   Because, as you have told me, without free will I would sin even more than I do now.

God:
   But I have already told you that without free will you cannot sin.

Mortal:
   But if I choose now to be rid of free will, then all my subsequent evil actions will be sins, not of the future, but of the present moment in which I choose not to have free will.

God:
   Sounds like you are pretty badly trapped, doesn't it?

Mortal:
   Of course I am trapped! You have placed me in a hideous double bind! Now whatever I do is wrong. If I retain free will, I will continue to sin, and if I abandon free will (with your help, of course) I will now be sinning in so doing.

God:
   But by the same token, you place me in a double bind. I am willing to leave you free will or remove it as you choose, but neither alternative satisfies you. I wish to help you, but it seems I cannot.

Mortal:
   True!

God:
   But since it is not my fault, why are you still angry with me?

Mortal:
   For having placed me in such a horrible predicament in first place!

God:
   But, according to you, there is nothing satisfactory I could have done.

Mortal:
   You mean there is nothing satisfactory you can now do, that does not mean that there is nothing you could have done.

God:
   Why? What could I have done?

Mortal:
   Obviously you should never have given me free will in the first place. Now that you have given it to me, it is too late -- anything I do will be bad. But you should never have given it to me in the first place.

God:
   Oh, that's it! Why would it have been better had I never given it to you?

Mortal:
   Because then I never would have been capable of sinning at all.

God:
   Well, I'm always glad to learn from my mistakes.

Mortal:
   What!

God:
   I know, that sounds sort of self-blasphemous, doesn't it? It almost involves a logical paradox! On the one hand, as you have been taught, it is morally wrong for any sentient being to claim that I am capable of making mistakes. On the other hand, I have the right to do anything. But I am also a sentient being. So the question is, do I or do I not have the right to claim that I am capable of making mistakes?

Mortal:
   That is a bad joke! One of your premises is simply false. I have not been taught that it is wrong for any sentient being to doubt your omniscience, but only for a mortal to doubt it. But since you are not mortal, then you are obviously free from this injunction.

God:
   Good, so you realize this on a rational level. Nevertheless, you did appear shocked when I said, "I am always glad to learn from my mistakes."

Mortal:
   Of course I was shocked. I was shocked not by your self-blasphemy (as you jokingly called it), not by the fact that you had no right to say it, but just by the fact that you did say it, since I have been taught that as a matter of fact you don't make mistakes. So I was amazed that you claimed that it is possible for you to make mistakes.

God:
   I have not claimed that it is possible. All I am saying is that if I make mistakes, I will be happy to learn from them. But this says nothing about whether the "if" has or ever can be realized.

Mortal:
   Let's please stop quibbling about this point. Do you or do you not admit it was a mistake to have given me free will?

God:
   Well now, this is precisely what I propose we should investigate. Let me review your present predicament. You don't want to have free will because with free will you can sin, and you don't want to sin. (Though I still find this puzzling; in a way you must want to sin, or else you wouldn't. But let this pass for now.) On the other hand, if you agreed to give up free will, then you would now be responsible for the acts of the future. Ergo, I should never have given you free will in the first place.

Mortal:
   Exactly!

God:
   I understand exactly how you feel. Many mortals -- even some theologians -- have complained that I have been unfair in that it was I, not they, who decided that they should have free will, and then I hold them responsible for their actions. In other words, they feel that they are expected to live up to a contract with me which they never agreed to in the first place.

Mortal:
   Exactly!

God:
   As I said, I understand the feeling perfectly. And I can appreciate the justice of the complaint. But the complaint arises only from an unrealistic understanding of the true issues involved. I am about to enlighten you as to what these are, and I think the results will surprise you! But instead of telling you outright, I shall continue to use the Socratic method.

To repeat, you regret that I ever gave you free will. I claim that when you see the true ramifications you will no longer have this regret. To prove my point, I'll tell you what I'm going to do. I am about to create a new universe -- a new space-time continuum. In this new universe will be born a mortal just like you -- for all practical purposes, we might say that you will be reborn. Now, I can give this new mortal -- this new you -- free will or not. What would you like me to do?

Mortal (in great relief):
   Oh, please! Spare him from having to have free will!

God:
   All right, I'll do as you say. But you do realize that this new you, without free will, will commit all sorts of horrible acts.

Mortal:
   But they will not be sins since he will have no free will.

God:
   Whether you call them sins or not, the fact remains that they will be horrible acts in the sense that they will cause great pain to many sentient beings.

Mortal (after a pause):
   Good God, you have trapped me again! Always the same game! If I now give you the go-ahead to create this new creature with no free will who will nevertheless commit atrocious acts, then true enough he will not be sinning, but I again will be the sinner in sanctioning it.

God:
   In that case, I'll go you one better! Here, I have already decided whether to create this new you with free will or not. Now, I am writing my decision on this piece of paper and I won't show it to you until later. But my decision is now made and is absolutely irrevocable. There is nothing you can possibly do to alter it; you have no responsibility in the matter. Now, what I wish to know is this: Which way do you hope I have decided? Remember now, the responsibility for the decision falls entirely on my shoulders, not yours. So you can tell me perfectly honestly and without any fear, which way do you hope I have decided?

Mortal (after a very long pause):
   I hope you have decided to give him free will.

God:
   Most interesting! I have removed your last obstacle! If I do not give him free will, then no sin is to be imputed to anybody. So why do you hope I will give him free will?

Mortal:
   Because sin or no sin, the important point is that if you do not give him free will, then (at least according to what you have said) he will go around hurting people, and I don't want to see people hurt.

God (with an infinite sigh of relief):
   At last! At last you see the real point!

Mortal:
   What point is that?

God:
   That sinning is not the real issue! The important thing is that people as well as other sentient beings don't get hurt!

Mortal:
   You sound like a utilitarian!

God:
   I am a utilitarian!

Mortal:
   What!

God:
   Whats or no whats, I am a utilitarian. Not a unitarian, mind you, but a utilitarian.

Mortal:
   I just can't believe it!

God:
   Yes, I know, your religious training has taught you otherwise. You have probably thought of me more like a Kantian than a utilitarian, but your training was simply wrong.

Mortal:
   You leave me speechless!

God:
   I leave you speechless, do I! Well, that is perhaps not too bad a thing -- you have a tendency to speak too much as it is. Seriously, though, why do you think I ever did give you free will in the first place?

Mortal:
   Why did you? I never have thought much about why you did; all I have been arguing for is that you shouldn't have! But why did you? I guess all I can think of is the standard religious explanation: Without free will, one is not capable of meriting either salvation or damnation. So without free will, we could not earn the right to eternal life.

God:
   Most interesting! I have eternal life; do you think I have ever done anything to merit it?

Mortal:
   Of course not! With you it is different. You are already so good and perfect (at least allegedly) that it is not necessary for you to merit eternal life.

God:
   Really now? That puts me in a rather enviable position, doesn't it?

Mortal:
   I don't think I understand you.

God:
   Here I am eternally blissful without ever having to suffer or make sacrifices or struggle against evil temptations or anything like that. Without any of that type of "merit", I enjoy blissful eternal existence. By contrast, you poor mortals have to sweat and suffer and have all sorts of horrible conflicts about morality, and all for what? You don't even know whether I really exist or not, or if there really is any afterlife, or if there is, where you come into the picture. No matter how much you try to placate me by being "good," you never have any real assurance that your "best" is good enough for me, and hence you have no real security in obtaining salvation. Just think of it! I already have the equivalent of "salvation" -- and have never had to go through this infinitely lugubrious process of earning it. Don't you ever envy me for this?

Mortal:
   But it is blasphemous to envy you!

God:
   Oh come off it! You're not now talking to your Sunday school teacher, you are talking to me. Blasphemous or not, the important question is not whether you have the right to be envious of me but whether you are. Are you?

Mortal:
   Of course I am!

God:
   Good! Under your present world view, you sure should be most envious of me. But I think with a more realistic world view, you no longer will be. So you really have swallowed the idea which has been taught you that your life on earth is like an examination period and that the purpose of providing you with free will is to test you, to see if you merit blissful eternal life. But what puzzles me is this: If you really believe I am as good and benevolent as I am cracked up to be, why should I require people to merit things like happiness and eternal life? Why should I not grant such things to everyone regardless of whether or not he deserves them?

Mortal:
   But I have been taught that your sense of morality -- your sense of justice -- demands that goodness be rewarded with happiness and evil be punished with pain.

God:
   Then you have been taught wrong.

Mortal:
   But the religious literature is so full of this idea! Take for example Jonathan Edwards's "Sinners in the Hands of an Angry God." How he describes you as holding your enemies like loathsome scorpions over the flaming pit of hell, preventing them from falling into the fate that they deserve only by dint of your mercy.

God:
   Fortunately, I have not been exposed to the tirades of Mr. Jonathan Edwards. Few sermons have ever been preached which are more misleading. The very title "Sinners in the Hands of an Angry God" tells its own tale. In the first place, I am never angry. In the second place, I do not think at all in terms of "sin." In the third place, I have no enemies.

Mortal:
   By that do you mean that there are no people whom you hate, or that there are no people who hate you?

God:
   I meant the former although the latter also happens to be true.

Mortal:
   Oh come now, I know people who have openly claimed to have hated you. At times I have hated you!

God:
   You mean you have hated your image of me. That is not the same thing as hating me as I really am.

Mortal:
   Are you trying to say that it is not wrong to hate a false conception of you, but that it is wrong to hate you as you really are?

God:
   No, I am not saying that at all; I am saying something far more drastic! What I am saying has absolutely nothing to do with right or wrong. What I am saying is that one who knows me for what I really am would simply find it psychologically impossible to hate me.

Mortal:
   Tell me, since we mortals seem to have such erroneous views about your real nature, why don't you enlighten us? Why don't you guide us the right way?

God:
   What makes you think I'm not?

Mortal:
   I mean, why don't you appear to our very senses and simply tell us that we are wrong?

God:
   Are you really so naive as to believe that I am the sort of being which can appear to your senses? It would be more correct to say that I am your senses.

Mortal (astonished):
   You are my senses?

God:
   Not quite, I am more than that. But it comes closer to the truth than the idea that I am perceivable by the senses. I am not an object; like you, I am a subject, and a subject can perceive, but cannot be perceived. You can no more see me than you can see your own thoughts. You can see an apple, but the event of your seeing an apple is itself not seeable. And I am far more like the seeing of an apple than the apple itself.

Mortal:
   If I can't see you, how do I know you exist?

God:
   Good question! How in fact do you know I exist?

Mortal:
   Well, I am talking to you, am I not?

God:
   How do you know you are talking to me? Suppose you told a psychiatrist, "Yesterday I talked to God." What do you think he would say?

Mortal:
   That might depend on the psychiatrist. Since most of them are atheistic, I guess most would tell me I had simply been talking to myself.

God:
   And they would be right!

Mortal:
   What? You mean you don't exist?

God:
   You have the strangest faculty of drawing false conclusions! Just because you are talking to yourself, it follows that I don't exist?

Mortal:
   Well, if I think I am talking to you, but I am really talking to myself, in what sense do you exist?

God:
   Your question is based on two fallacies plus a confusion. The question of whether or not you are now talking to me and the question of whether or not I exist are totally separate. Even if you were not now talking to me (which obviously you are), it still would not mean that I don't exist.

Mortal:
   Well, all right, of course! So instead of saying "if I am talking to myself, then you don't exist," I should rather have said, "if I am talking to myself, then I obviously am not talking to you."

God:
   A very different statement indeed, but still false.

Mortal:
   Oh, come now, if I am only talking to myself, then how can I be talking to you?

God:
   Your use of the word "only" is quite misleading! I can suggest several logical possibilities under which your talking to yourself does not imply that you are not talking to me.

Mortal:
   Suggest just one!

God:
   Well, obviously one such possibility is that you and I are identical.

Mortal:
   Such a blasphemous thought -- at least had I uttered it!

God:
   According to some religions, yes. According to others, it is the plain, simple, immediately perceived truth.

Mortal:
   So the only way out of my dilemma is to believe that you and I are identical?

God:
   Not at all! This is only one way out. There are several others. For example, it may be that you are part of me, in which case you may be talking to that part of me which is you. Or I may be part of you, in which case you may be talking to that part of you which is me. Or again, you and I might partially overlap, in which case you may be talking to the intersection and hence talking both to you and to me. The only way your talking to yourself might seem to imply that you are not talking to me is if you and I were totally disjoint -- and even then, you could conceivably be talking to both of us.

Mortal:
   So you claim you do exist.

God:
   Not at all. Again you draw false conclusions! The question of my existence has not even come up. All I have said is that from the fact that you are talking to yourself one cannot possibly infer my nonexistence, let alone the weaker fact that you are not talking to me.

Mortal:
   All right, I'll grant your point! But what I really want to know is do you exist?

God:
   What a strange question!

Mortal:
   Why? Men have been asking it for countless millennia.

God:
   I know that! The question itself is not strange; what I mean is that it is a most strange question to ask of me!

Mortal:
   Why?

God:
   Because I am the very one whose existence you doubt! I perfectly well understand your anxiety. You are worried that your present experience with me is a mere hallucination. But how can you possibly expect to obtain reliable information from a being about his very existence when you suspect the nonexistence of the very same being?

Mortal:
   So you won't tell me whether or not you exist?

God:
   I am not being willful! I merely wish to point out that no answer I could give could possibly satisfy you. All right, suppose I said, "No, I don't exist." What would that prove? Absolutely nothing! Or if I said, "Yes, I exist." Would that convince you? Of course not!

Mortal:
   Well, if you can't tell me whether or not you exist, then who possibly can?

God:
   That is something which no one can tell you. It is something which only you can find out for yourself.

Mortal:
   How do I go about finding this out for myself?

God:
   That also no one can tell you. This is another thing you will have to find out for yourself.

Mortal:
   So there is no way you can help me?

God:
   I didn't say that. I said there is no way I can tell you. But that doesn't mean there is no way I can help you.

Mortal:
   In what manner then can you help me?

God:
   I suggest you leave that to me! We have gotten sidetracked as it is, and I would like to return to the question of what you believed my purpose to be in giving you free will. Your first idea of my giving you free will in order to test whether you merit salvation or not may appeal to many moralists, but the idea is quite hideous to me. You cannot think of any nicer reason -- any more humane reason -- why I gave you free will?

Mortal:
   Well now, I once asked this question of an Orthodox rabbi. He told me that the way we are constituted, it is simply not possible for us to enjoy salvation unless we feel we have earned it. And to earn it, we of course need free will.

God:
   That explanation is indeed much nicer than your former one, but still far from correct. According to Orthodox Judaism, I created angels, and they have no free will. They are in actual sight of me and are so completely attracted by goodness that they never have even the slightest temptation toward evil. They really have no choice in the matter. Yet they are eternally happy even though they have never earned it. So if your rabbi's explanation were correct, why wouldn't I have simply created only angels rather than mortals?

Mortal:
   Beats me! Why didn't you?

God:
   Because the explanation is simply not correct. In the first place, I have never created any ready-made angels. All sentient beings ultimately approach the state which might be called "angelhood." But just as the race of human beings is in a certain stage of biologic evolution, so angels are simply the end result of a process of Cosmic Evolution. The only difference between the so-called saint and the so-called sinner is that the former is vastly older than the latter. Unfortunately it takes countless life cycles to learn what is perhaps the most important fact of the universe -- evil is simply painful. All the arguments of the moralists -- all the alleged reasons why people shouldn't commit evil acts -- simply pale into insignificance in light of the one basic truth that evil is suffering.

No, my dear friend, I am not a moralist. I am wholly a utilitarian. That I should have been conceived in the role of a moralist is one of the great tragedies of the human race. My role in the scheme of things (if one can use this misleading expression) is neither to punish nor reward, but to aid the process by which all sentient beings achieve ultimate perfection.

Mortal:
   Why did you say your expression is misleading?

God:
   What I said was misleading in two respects. First of all, it is inaccurate to speak of my role in the scheme of things. I am the scheme of things. Secondly, it is equally misleading to speak of my aiding the process of sentient beings attaining enlightenment. I am the process. The ancient Taoists were quite close when they said of me (whom they called "Tao") that I do not do things, yet through me all things get done. In more modern terms, I am not the cause of Cosmic Process, I am Cosmic Process itself. I think the most accurate and fruitful definition of me which man can frame -- at least in his present state of evolution -- is that I am the very process of enlightenment. Those who wish to think of the devil (although I wish they wouldn't!) might analogously define him as the unfortunate length of time the process takes. In this sense, the devil is necessary; the process simply does take an enormous length of time, and there is absolutely nothing I can do about it. But, I assure you, once the process is more correctly understood, the painful length of time will no longer be regarded as an essential limitation or an evil. It will be seen to be the very essence of the process itself. I know this is not completely consoling to you who are now in the finite sea of suffering, but the amazing thing is that once you grasp this fundamental attitude, your very finite suffering will begin to diminish -- ultimately to the vanishing point.

Mortal:
   I have been told this, and I tend to believe it. But suppose I personally succeed in seeing things through your eternal eyes. Then I will be happier, but don't I have a duty to others?

God (laughing):
   You remind me of the Mahayana Buddhists! Each one says, "I will not enter Nirvana until I first see that all other sentient beings do so." So each one waits for the other fellow to go first. No wonder it takes them so long! The Hinayana Buddhist errs in a different direction. He believes that no one can be of the slightest help to others in obtaining salvation; each one has to do it entirely by himself. And so each tries only for his own salvation. But this very detached attitude makes salvation impossible. The truth of the matter is that salvation is partly an individual and partly a social process. But it is a grave mistake to believe -- as do many Mahayana Buddhists -- that the attaining of enlightenment puts one out of commission, so to speak, for helping others. The best way of helping others is by first seeing the light oneself.

Mortal:
   There is one thing about your self-description which is somewhat disturbing. You describe yourself essentially as a process. This puts you in such an impersonal light, and so many people have a need for a personal God.

God:
   So because they need a personal God, it follows that I am one?

Mortal:
   Of course not. But to be acceptable to a mortal a religion must satisfy his needs.

God:
   I realize that. But the so-called "personality" of a being is really more in the eyes of the beholder than in the being itself. The controversies which have raged about whether I am a personal or an impersonal being are rather silly because neither side is right or wrong. From one point of view, I am personal; from another, I am not. It is the same with a human being. A creature from another planet may look at him purely impersonally, as a mere collection of atomic particles behaving according to strictly prescribed physical laws. He may have no more feeling for the personality of a human than the average human has for an ant. Yet an ant has just as much individual personality as a human to beings like myself who really know the ant. To look at something impersonally is no more correct or incorrect than to look at it personally, but in general, the better you get to know something, the more personal it becomes. To illustrate my point, do you think of me as a personal or impersonal being?

Mortal:
   Well, I'm talking to you, am I not?

God:
   Exactly! From that point of view, your attitude toward me might be described as a personal one. And yet, from another point of view -- no less valid -- I can also be looked at impersonally.

Mortal:
   But if you are really such an abstract thing as a process, I don't see what sense it can make my talking to a mere "process."

God:
   I love the way you say "mere." You might just as well say that you are living in a "mere universe." Also, why must everything one does make sense? Does it make sense to talk to a tree?

Mortal:
   Of course not!

God:
   And yet, many children and primitives do just that.

Mortal:
   But I am neither a child nor a primitive.

God:
   I realize that, unfortunately.

Mortal:
   Why unfortunately?

God:
   Because many children and primitives have a primal intuition which the likes of you have lost. Frankly, I think it would do you a lot of good to talk to a tree once in a while, even more good than talking to me! But we seem always to be getting sidetracked! For the last time, I would like us to try to come to an understanding about why I gave you free will.

Mortal:
   I have been thinking about this all the while.

God:
   You mean you haven't been paying attention to our conversation?

Mortal:
   Of course I have. But all the while, on another level, I have been thinking about it.

God:
   And have you come to any conclusion?

Mortal:
   Well, you say the reason is not to test our worthiness. And you disclaimed the reason that we need to feel that we must merit things in order to enjoy them. And you claim to be a utilitarian. Most significant of all, you appeared so delighted when I came to the sudden realization that it is not sinning in itself which is bad but only the suffering which it causes.

God:
   Well of course! What else could conceivably be bad about sinning?

Mortal:
   All right, you know that, and now I know that. But all my life I unfortunately have been under the influence of those moralists who hold sinning to be bad in itself. Anyway, putting all these pieces together, it occurs to me that the only reason you gave free will is because of your belief that with free will, people will tend to hurt each other -- and themselves -- less than without free will.

God:
   Bravo! That is by far the best reason you have yet given! I can assure you that had I chosen to give free will, that would have been my very reason for so choosing.

Mortal:
   What! You mean to say you did not choose to give us free will?

God:
   My dear fellow, I could no more choose to give you free will than I could choose to make an equilateral triangle equiangular. I could choose to make or not to make an equilateral triangle in the first place, but having chosen to make one, I would then have no choice but to make it equiangular.

Mortal:
   I thought you could do anything!

God:
   Only things which are logically possible. As St. Thomas said, "It is a sin to regard the fact that God cannot do the impossible, as a limitation on His powers." I agree, except that in place of his using the word sin I would use the term error.

Mortal:
   Anyhow, I am still puzzled by your implication that you did not choose to give me free will.

God:
   Well, it is high time I inform you that the entire discussion -- from the very beginning -- has been based on one monstrous fallacy! We have been talking purely on a moral level -- you originally complained that I gave you free will, and raised the whole question as to whether I should have. It never once occurred to you that I had absolutely no choice in the matter.

Mortal:
   I am still in the dark!

God:
   Absolutely! Because you are only able to look at it through the eyes of a moralist. The more fundamental metaphysical aspects of the question you never even considered.

Mortal:
   I still do not see what you are driving at.

God:
   Before you requested me to remove your free will, shouldn't your first question have been whether as a matter of fact you do have free will?

Mortal:
   That I simply took for granted.

God:
   But why should you?

Mortal:
   I don't know. Do I have free will?

God:
   Yes.

Mortal:
   Then why did you say I shouldn't have taken it for granted?

God:
   Because you shouldn't. Just because something happens to be true, it does not follow that it should be taken for granted.

Mortal:
   Anyway, it is reassuring to know that my natural intuition about having free will is correct. Sometimes I have been worried that determinists are correct.

God:
   They are correct.

Mortal:
   Wait a minute now, do I have free will or don't I?

God:
   I already told you you do. But that does not mean that determinism is incorrect.

Mortal:
   Well, are my acts determined by the laws of nature or aren't they?

God:
   The word determined here is subtly but powerfully misleading and has contributed so much to the confusions of the free will versus determinism controversies. Your acts are certainly in accordance with the laws of nature, but to say they are determined by the laws of nature creates a totally misleading psychological image which is that your will could somehow be in conflict with the laws of nature and that the latter is somehow more powerful than you, and could "determine" your acts whether you liked it or not. But it is simply impossible for your will to ever conflict with natural law. You and natural law are really one and the same.

Mortal:
   What do you mean that I cannot conflict with nature? Suppose I were to become very stubborn, and I determined not to obey the laws of nature. What could stop me? If I became sufficiently stubborn even you could not stop me!

God:
   You are absolutely right! I certainly could not stop you. Nothing could stop you. But there is no need to stop you, because you could not even start! As Goethe very beautifully expressed it, "In trying to oppose Nature, we are, in the very process of doing so, acting according to the laws of nature!" Don't you see that the so-called "laws of nature" are nothing more than a description of how in fact you and other beings do act? They are merely a description of how you act, not a prescription of how you should act, not a power or force which compels or determines your acts. To be valid, a law of nature must take into account how in fact you do act, or, if you like, how you choose to act.

Mortal:
   So you really claim that I am incapable of determining to act against natural law?

God:
   It is interesting that you have twice now used the phrase "determined to act" instead of "chosen to act." This identification is quite common. Often one uses the statement "I am determined to do this" synonymously with "I have chosen to do this." This very psychological identification should reveal that determinism and choice are much closer than they might appear. Of course, you might well say that the doctrine of free will says that it is you who are doing the determining, whereas the doctrine of determinism appears to say that your acts are determined by something apparently outside you. But the confusion is largely caused by your bifurcation of reality into the "you" and the "not you." Really now, just where do you leave off and the rest of the universe begin? Or where does the rest of the universe leave off and you begin? Once you can see the so-called "you" and the so-called "nature" as a continuous whole, then you can never again be bothered by such questions as whether it is you who are controlling nature or nature who is controlling you. Thus the muddle of free will versus determinism will vanish. If I may use a crude analogy, imagine two bodies moving toward each other by virtue of gravitational attraction. Each body, if sentient, might wonder whether it is he or the other fellow who is exerting the "force." In a way it is both, in a way it is neither. It is best to say that it is the configuration of the two which is crucial.

Mortal:
   You said a short while ago that our whole discussion was based on a monstrous fallacy. You still have not told me what this fallacy is.

God:
   Why, the idea that I could possibly have created you without free will! You acted as if this were a genuine possibility, and wondered why I did not choose it! It never occurred to you that a sentient being without free will is no more conceivable than a physical object which exerts no gravitational attraction. (There is, incidentally, more analogy than you realize between a physical object exerting gravitational attraction and a sentient being exerting free will!) Can you honestly even imagine a conscious being without free will? What on earth could it be like? I think that one thing in your life that has so misled you is your having been told that I gave man the gift of free will. As if I first created man, and then as an afterthought endowed him with the extra property of free will. Maybe you think I have some sort of "paint brush" with which I daub some creatures with free will and not others. No, free will is not an "extra"; it is part and parcel of the very essence of consciousness. A conscious being without free will is simply a metaphysical absurdity.

Mortal:
   Then why did you play along with me all this while discussing what I thought was a moral problem, when, as you say, my basic confusion was metaphysical?

God:
   Because I thought it would be good therapy for you to get some of this moral poison out of your system. Much of your metaphysical confusion was due to faulty moral notions, and so the latter had to be dealt with first.

And now we must part -- at least until you need me again. I think our present union will do much to sustain you for a long while. But do remember what I told you about trees. Of course, you don't have to literally talk to them if doing so makes you feel silly. But there is so much you can learn from them, as well as from the rocks and streams and other aspects of nature. There is nothing like a naturalistic orientation to dispel all these morbid thoughts of "sin" and "free will" and "moral responsibility." At one stage of history, such notions were actually useful. I refer to the days when tyrants had unlimited power and nothing short of fears of hell could possibly restrain them. But mankind has grown up since then, and this gruesome way of thinking is no longer necessary.

It might be helpful to you to recall what I once said through the writings of the great Zen poet Seng-Ts'an:

If you want to get the plain truth,
Be not concerned with right and wrong.
The conflict between right and wrong
Is the sickness of the mind.

Comment by Ramiro on Cultured meat predictions were overly optimistic · 2021-09-16T14:47:29.415Z · EA · GW

Thanks for the post.
I am one of those who lost a bet that cultured meat would be available in grocery stores by now :(

Comment by Ramiro on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-27T18:14:08.825Z · EA · GW

This question is surprisingly hard... I can barely start thinking about very ordinary stuff like "automated mailbox management..." Your "gold example" made me think of artificial diamonds, which are still regarded as less valuable than natural ones in jewelry - but that's because jewelry is a luxury / status good. It helps a bit to think about technologies that existed in some form for a very long time but were only widely deployed in the last hundred years, like bicycles. We could have had them since at least the 18th century, but they only appeared around the 1840s, and somehow only became a real transport option after the 1890s - when we already had trains and cars.

Comment by Ramiro on rohinmshah's Shortform · 2021-08-25T16:26:14.987Z · EA · GW

I share your feeling towards it... but I also often say that one's "skin in the game" (your latter example) is someone else's "conflict of interest."

I don't think that the listener / reader is usually in a good position to distinguish between your first and your second example; that's enough to justify the practice of disclosing this as a potential "conflict of interest." In addition, by knowing you already work for cause X, I might consider whether your case is affected by some kind of cognitive bias.
 

Comment by Ramiro on Ramiro's Shortform · 2021-08-25T02:32:19.542Z · EA · GW

I was recently reading about the International Panel on Social Progress: https://www.ipsp.org/ I had never heard of it before, which surprised me, since it's kind of like the IPCC, but for social progress. I got the impression that it somehow failed - in reaching significant consensus, in influencing policy... but why?