Posts

[Linkpost] The Environment as an Obstacle 2020-08-31T17:15:07.273Z · score: 5 (2 votes)
[Linkpost] The Groundswell 2020-08-31T17:11:07.998Z · score: 9 (2 votes)
What is a pandemic compared to our sewer system? An example of how a society normalizes risks 2020-07-25T14:59:24.093Z · score: 25 (10 votes)
Is there anything like "green bonds" for x-risk mitigation? 2020-06-30T00:33:38.732Z · score: 21 (10 votes)
My amateur method for translations 2020-06-30T00:29:30.043Z · score: 11 (6 votes)
Indifference, racism and violence: what comes after justice for George Floyd? 2020-06-12T01:44:23.358Z · score: 38 (16 votes)
Who should / is going to win 2020 FLI award 2020? 2020-06-11T19:20:11.364Z · score: 11 (6 votes)
Is rapid diagnostic testing (RDT), such as for coronavirus, a neglected area in Global Health? 2020-03-17T22:24:05.915Z · score: 11 (4 votes)
Ramiro's Shortform 2019-10-17T13:16:14.822Z · score: 3 (2 votes)
Merging with AI would be suicide for the human mind - Susan Schneider 2019-10-03T17:55:07.789Z · score: 0 (2 votes)

Comments

Comment by ramiro on The end of the Bronze Age as an example of a sudden collapse of civilization · 2020-10-30T14:35:24.019Z · score: 4 (3 votes) · EA · GW

Guys, great post and discussion. I was skimming through the discussion about Hekla's role... even if the eruption followed the breakdown of those civilizations by half a century, it would likely have had an effect on their prospects for recovery.

Comment by ramiro on List of EA-related organisations · 2020-10-21T18:52:52.321Z · score: 2 (2 votes) · EA · GW

Thanks a million for that!

It would be so cool if someone put this on a map...

Comment by ramiro on Can my self-worth compare to my instrumental value? · 2020-10-14T03:05:02.521Z · score: 8 (5 votes) · EA · GW

First, of course, thanks, C Tilli, for the post, and thanks willbradshaw for these comments.
This pierced my mind:

As you say, I'm not sure EA will ever be as comforting as religion – it's optimising for very different things. But over time I hope we will generate community structures and wisdom literature to help manage this tension, care for each other, and create the emotional (as well as intellectual) conditions we need to survive and flourish.

I think my background is the opposite of C Tilli's: I have been an atheist for many years (and still am - well, maybe more of an agnostic, since we might be in a simulation...), but since I found out about EA, I think I have become a little more understanding not only of the need for comfort, but also of the idea, sought by religious people, of valuing something that goes way beyond one's own personal value and social circle (on the other hand, I also became a little bit suspicious of some cult-like traits we might be tempted to mimic).

I am sort of surprised we have written so much, so far, without talking about death and mortality. I know I have intrinsic value, but it's fragile and perishable (cryonics aside); and yet, the set of things I can value extends way beyond my perishable self - actually, my own self-worth depends a little bit on that (as Scheffler argues, it would be hard not to be nihilistic if we knew humanity was going to end right after us), and there's no necessary upper bound on what I can value. I reckon that, as much as I fear humanity falling into the precipice, I feel joy in thinking it may continue for eons, and that I may play a role, contribute and add my own personal experience to this narrative.

I guess that's the 'trick' played by religion that might be missing here: religion 'grants' me some sort of intrinsic value through a metaphysical cosmic privilege (or the love of God) - and this provides us some comfort. Without it, all that is left, though enjoyable and worthy, is perishable - transient love, fading joy, endured pain, limited virtue, pleasure... Like Dworkin (who considered this to be a religious conviction - though a non-theistic one), we can say that a life well lived is an achievement in itself, and stands for itself even after we die, like a work of art - but art itself will be meaningless when humanity is gone. Maybe altruism is just another way to trick (the fear of) death: when one realizes that "All those moments will be lost in time, like tears in rain. Time to die," one might see it not as realizing some external value, but as an important part of one's own self-worth. (If Blade Runner is too melodramatic, one can use the bureaucrat in Ikiru as an example of this reasoning.)

Comment by ramiro on Can my self-worth compare to my instrumental value? · 2020-10-14T00:12:07.556Z · score: 9 (4 votes) · EA · GW

For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.

I think this is still an instrumental reason for someone to place "substantial intrinsic value on themselves." Though I have no problem with that, I thought what C Tilli complained about was precisely this: that, for EAs, all self-concern is for the sake of the greater good, even when it is rephrased as a psychological need for a small amount of self-indulgence.
Second, I'd say that people who are "more successful and have a larger social impact in the long term" are indeed "people who place substantial intrinsic value on themselves," but that's just selection dynamics: if you have a large impact, then you (likely) place substantial intrinsic value on yourself. Even if that implies you're more likely to succeed if you place substantial intrinsic value on yourself (if only people who do so can succeed), it says nothing about failure - confident people fail all the time, and the worst kind of failure seems to be reserved for those who place substantial value on themselves and end up succeeding with the wrong values.

But I wonder if our sample of "successful people" is not too biased towards those who get the spotlight. Petrov didn't seem to place a lot of value on himself, and Arkhipov is often described as exceptionally humble; no one strives to be an unsung hero.

Comment by ramiro on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-12T20:30:09.192Z · score: 4 (3 votes) · EA · GW

Though I agree that the marginal utility of income drops a lot after some threshold, and I am not sure how long people take to adjust their lifestyles to a drop in income, I would like to see a study taking into account the effects of wealth, savings and uncertainty. So yeah, maybe you'll be equally happy whether you earn 75k or 100k, but with the latter you'll be better hedged against risks and able to get additional utility by investing in someone else's welfare (your relatives, or donations).

Comment by ramiro on Timeline Utilitarianism · 2020-10-10T05:13:28.531Z · score: 2 (2 votes) · EA · GW

Thanks for the post. Coincidentally, I was thinking about how I have a strong moral preference for a longer timeline when I saw it.
I feel attracted to total utilitarianism, but suppose we have N individuals, each living 80 years with the same constant utility U. These individuals can live either more concentrated in time (say, within 100 years) or more scattered (say, across 10,000 years); I strongly prefer the latter (I'd pay some utility for it) - even though this runs against any notion of (pure) temporal discounting. My intuition (though I don't trust it) is that, from the "point of view of nowhere", at some point length may trump population; but maybe it's just some ad hoc influence of a strong bias against extinction.
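To spell out the arithmetic (a minimal sketch, assuming each life contributes the same amount to the total regardless of when it occurs): pure total utilitarianism assigns both timelines the same value, W = N × 80 × U, whether those N lives are packed into 100 years or spread across 10,000 - so my preference for the longer timeline has to come from something the total doesn't capture.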
Please, let me know about any source discussing this (I admit I didn't search enough for it).

Comment by ramiro on Lumpyproletariat's Shortform · 2020-10-07T15:12:05.556Z · score: 2 (2 votes) · EA · GW

There's some theoretical work on Dominant Assurance Contracts.
A nice guy I know in EA who has thought a lot about this, and is quite accessible, is Dony Christie.

Comment by ramiro on Ramiro's Shortform · 2020-09-30T20:31:17.938Z · score: 1 (1 votes) · EA · GW

Thanks for this clarifying comment. I see your point - and I particularly agree with the need for evaluation systems for cross-species comparison. I just wonder if a scale designed for cross-species comparison might not be very well suited for interpersonal comparisons, and vice versa - at least not at the same time.
Really, I'm more puzzled than anything else - and also surprised that I haven't seen more people puzzled about it. If we are actually using this scale to compare societies, I wonder if we shouldn't change the way welfare economists assess things like quality of life. In the original post, the countries compared were Canada (pop.: 36 million, HDI: .922, IHDI: .841) and India (pop.: 1.3 billion, HDI: .647, IHDI: .538).

Finally, really, please, don't take this as a criticism (I'm a major fan of CE), but: 

We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)

First, I am not sure how people from developing countries (particularly India) would rate the welfare of current humans vis-à-vis chimps, but I wonder if it would differ much from your overall result. Second, I am not sure about the relevance of mentioning hunter-gatherers; I wouldn't know how to compare the hypothetical welfare of the world's apex predator before civilization either with current chimps or with current people. Even if I knew, I would take life expectancy as an important factor (a general proxy for how much someone is affected by health issues).
 

Comment by ramiro on 5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities · 2020-09-30T14:22:09.858Z · score: 4 (4 votes) · EA · GW

Thanks for this. I'm really glad about this milestone, and super proud to be part of it - tbh, it changed my life.
I'd like to see something about trends by year. I remember reading that some people were concerned that the number of new members was decreasing. Maybe, together with other info (e.g., from the EA Survey), we could get an idea of how EA as a whole tends to evolve.

Comment by ramiro on Ramiro's Shortform · 2020-09-28T21:57:29.270Z · score: 1 (1 votes) · EA · GW

True, thanks.
I inserted a link to CE's webpage on the Weighted Factor Model.

Comment by ramiro on Factors other than ITN? · 2020-09-27T19:23:05.484Z · score: 1 (1 votes) · EA · GW

I've seen people suggest Urgency as an additional dimension. I wonder if anyone has tried to integrate it into an ITN evaluation.

Comment by ramiro on Ramiro's Shortform · 2020-09-25T11:23:26.740Z · score: 1 (1 votes) · EA · GW

Thanks. I'm glad to see I wasn't profoundly misunderstanding it. Now, I think this is a very important issue: either there's something really wrong with Charity Entrepreneurship's assessment of welfare in different species, or I will really have to rethink my priorities ;)

Comment by ramiro on Ramiro's Shortform · 2020-09-25T01:57:53.245Z · score: 21 (6 votes) · EA · GW

Maybe I didn't understand it properly, but I guess there's something wrong when the total welfare score for chimps is 47 while, for humans in lower-middle-income countries, it's 32.
Depending on your population ethics, one may think "we should improve the prospects in poor countries," but others can say "we should have more chimps."
Or else this scale has serious problems for comparisons between different species.

Source: https://www.charityentrepreneurship.com/weighted-factor-model.html

Comment by ramiro on Keynesian Altruism · 2020-09-20T22:52:53.033Z · score: 2 (2 votes) · EA · GW

I wonder if exchange-rate volatility during global recessions (the US dollar and the euro usually rise against developing-country currencies) would add another point, at least for charities located in the developing world.
(Personally, since my job is very stable and investment opportunities are scarce, I have been increasing my own donations to account for my declining consumption.)

Comment by ramiro on EA Relationship Status · 2020-09-20T21:40:20.654Z · score: 3 (2 votes) · EA · GW

If you allow me a little joke, maybe this can be explained by people trying to follow the "marry to give" path?

Comment by ramiro on EA Relationship Status · 2020-09-20T15:22:23.450Z · score: 7 (3 votes) · EA · GW

Did anyone stratify the data by gender? We seem to have way more males in EA.

Comment by ramiro on When can Writing Fiction Change the World? · 2020-08-31T17:52:10.551Z · score: 3 (2 votes) · EA · GW

Nice post! I'd really like to see more on how fiction might publicize an idea and influence people - especially young people.

And that's why I couldn't stop thinking about Terry Pratchett while I was reading this post; I'm often surprised that he is not a more salient common reference in this community. When I started reading HPMOR, I thought, "Yudkowsky is doing to Rowling what Pratchett did to Tolkien etc." - and of course, Yudkowsky wrote some sort of elegy on the HPMOR blog the day Pratchett died.
You see, I can't help thinking I got here because, as a teenager, I wanted to read comic fantasy, and then... I got "empathically entangled" with some characters who became role models, like Dangerous Beans, Brutha, Vimes, Granny Weatherwax, even Death (at least in Hogfather). I think this might happen for some people (finding role models in fiction), but not for everyone, of course.

Comment by ramiro on A (Very) Short History of the Collapse of Civilizations, and Why it Matters · 2020-08-30T13:39:15.850Z · score: 7 (3 votes) · EA · GW

Thanks for the post. I wonder if one of the great gaps in education (at least for me), one that prevents people from becoming more concerned about the long-term future, is the lack of emphasis on civilization collapses - as much as the lack of emphasis on the progress and the risks of the last 110 years.

Comment by ramiro on It's Not Hard to Be Morally Excellent; You Just Choose Not To Be · 2020-08-25T19:38:43.482Z · score: 8 (5 votes) · EA · GW

I would agree that moral improvement is "easy," the way saving an extra $100 or running another 100m might be easy, but moral excellence? Yeah, Khorton is totally right.

What I realize is that moral excellence is really hard, not for the reasons most people invoke to justify not striving for it ("selfishness is natural," "it's just signaling"), but because, to extend the comparison with mountain climbing, it's like climbing without ever knowing where and when it will end.

Maybe hiking is a better metaphor. It's quite "easy & simple," but... really, can you climb Aconcagua right now? Without prep? What if there are no maps, compass or GPS? Wouldn't you prefer to do it with others you can count on?

Comment by ramiro on Kick someone's butt today · 2020-08-20T14:56:39.396Z · score: 2 (2 votes) · EA · GW

In that case an external person kicking your butt can be particularly useful, perhaps even more than in other situations. I think this butt kicking thing can be a way of acknowledging and avoiding your own biases and motivated reasoning to stay in harmful situations that stall your career.

This is true, but it might not generalize so well to everyone - I can imagine the risk of making the butt-kicked person feel even more pressured. But if you really master the Art of Butt-Kicking (I'd say "soft butt-kicking," but it sounds creepy), I can see how this could go well ;)

Comment by ramiro on Kick someone's butt today · 2020-08-17T12:26:56.727Z · score: 7 (5 votes) · EA · GW

Great post! We all know encouragement is often great, but I hadn't considered that it might be necessary, or more effective, in those specific situations.
One of the things that caught my attention in your personal experience is that the person was a recent acquaintance. I wonder how friendship might insert other nuances into the process of butt-kicking; I mean, that's what friends are for, but they may end up being more protective (like "Hey, you're a great ukulele player, but maybe you should get your Master's first"), and the butt-kicked person may end up discounting their feedback because of that ("Of course you think I can do anything, look at your Christmas card").

Comment by ramiro on Should we think more about EA dating? · 2020-07-26T15:17:09.327Z · score: 6 (3 votes) · EA · GW

I loved this post and its comments. I'd add:

1. You should totally tell that girl (and maybe everyone else) about the drowning child; the real challenge is to find the best way to do that. Now, instead of emphasizing how having a significant other aligned with your goals might improve your prospects, I wonder how it affects your own personal happiness. People don't have to identify as EAs to support you or share your ultimate goals, but it sure helps; this might be demanding, as other people emphasized above, but actually the effect of your personal lifestyle is usually not so big, so you can compromise a little bit if your acquaintances do it, too. The real problem, in my opinion, is that you'll probably live much better if your significant other understands why something is important to you, instead of just accepting it as some sort of peculiar hobby. Now, if that significant other loves you because of that...

Plus, the opposite is also true. You may fall in love with someone for their charm, wit & beauty, but passion fades; now if you're with someone because you love what they do and you can in some sense feel a part of it...

I'm definitely outside my expertise here (I can only provide negative examples); I wouldn't go as far as "Nuca Zaria: Effective Dating," but I'd advise young people to seriously entertain the idea that their choice of partner might be comparable (from a personal POV) to some decisions about career paths.

2. This problem extrapolates to friends, though in a milder way. I'm profoundly grateful to my EA friends for the way they make me feel comfortable. I've always felt like something of an outsider in my personal social life, but now, with other people, I'm often that guy who stops in the middle of a sentence to refrain from quoting The Precipice or shedding some tears over human suffering and dreams, etc. I don't want to be the one who lends EA a cult-like appearance.

3. I'd totally welcome EA tips on social life in general; not about how to be charming (that's useful, but I've learned a trick or two), but focused on how to be happy with this. Besides my own welfare, I believe it could make me more effective; even if I'm not always trying to "convert" my acquaintances, I want to have a positive impact on / through them. Personally, I sometimes admit to my old friends - at least those who I think can sort of understand it - that I'm trying to "use" them to maximize something like general expected utility through our interactions. I don't think that's the optimal strategy, but it's hard to lie to smart friends, and I sort of see this as a higher form of friendship; so they might forgive my lame or cynical comments like "Wow, this wine is totally worth 20 bednets" or "Now you face Global Warming, the Red Dragon, Destroyer of Worlds; roll initiative."

4. MacAskill is just too handsome; it's counterfactually more effective to pick less dreamy characters. I'd prefer Toby Ord, who sees the present as a more hingey moment.

Comment by ramiro on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-25T13:03:47.181Z · score: 7 (4 votes) · EA · GW

Discussions of local vs. global remind me of the contrast between the performance of two GiveDirectly programs: 100+ (cash transfers for American families), which received US$114.3 million, and Covid-19 Africa, which received US$53.7 million. I can see reasons for GD supporting 100+, and I'm not surprised that a dollar is more likely to be donated to poor Americans than to sub-Saharan Africa, but this made me (and other people, of course, but I speak for myself) wonder whether we can draw a line between "we're using parochialism to promote EA-like goals" and "we're compromising with parochialism, diverting scarce resources and giving up effectiveness." I don't think of this as a main issue, but as a puzzle; it would be interesting to have some research on public criteria or clues about this difference.

Comment by ramiro on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-25T12:59:29.820Z · score: 5 (3 votes) · EA · GW

Thanks for the post! Would you have any examples of causes that could be a local priority, but not a global one?

Comment by ramiro on Mike Huemer on The Case for Tyranny · 2020-07-16T17:28:35.279Z · score: 4 (3 votes) · EA · GW

The conclusion:

That’s the problem with freedom, in an advanced society. What can be done about it?
a. Targeted restrictions: The most natural thought is that we should tightly control just the really dangerous technologies, the ones that could be used to kill millions of people. So far, that’s worked because there aren’t that many such technologies (esp. nuclear weapons). It may not work in the future, though, when there are more such technologies. [...]
b. Defensive technologies: We’ll build defenses against the main threats. E.g., we’ll build defenses against nuclear weapons, we’ll engineer ourselves to resist genetically engineered viruses, etc. Problem: same as above; we may not be able to anticipate all the threats in advance. Also, defense is generally a losing game. It’s easier and cheaper to destroy things than to protect them. That’s why we have the saying “the best defense is a good offense”.
[...]
c. Tyranny/the End of Privacy: Maybe in the future, everyone will need to be closely monitored at all times, so that, if someone starts trying to destroy the world, other people can immediately intervene. Sam Harris suggested this in a podcast somewhere. Note: obviously, this applies as well (especially!) to government officials.
d. A better alternative . . . ?
Someone please fill in (d) for me. Thanks.

I don't think (c) works that much better than the others. It implies a single point of failure and bad incentives due to lack of accountability, besides the really hard problem of monitoring everyone.

Transhumanists would say (d) is a super AGI, but that's basically (c) with more tech.

(An interplanetary civilization would possibly solve it... but as Huemer remarked, we're closer to destruction than to spreading through the galaxy.)

Comment by ramiro on Ramiro's Shortform · 2020-07-15T17:40:33.232Z · score: 1 (1 votes) · EA · GW

Legal personality & AI systems

From the first draft of the UNESCO Recommendation on AI Ethics:

Policy Action 11: Ensuring Responsibility, Accountability and Privacy 94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.

I see that the point of the last sentence is to prevent individuals and companies from escaping liability for AI failures. However, the last bit also seems to prevent us from creating some sort of "AI DAO" - i.e., from creating a legal entity totally implemented by an autonomous system. This doesn't seem reasonable; after all, what is a company if not some sort of artificial agent?

Comment by ramiro on Collection of good 2012-2017 EA forum posts · 2020-07-13T00:52:21.691Z · score: 8 (3 votes) · EA · GW

We should have some sort of e-book with some of the "best picks" from each year.

Comment by ramiro on Is it possible, and if so how, to arrive at ‘strong’ EA conclusions without the use of utilitarian principles? · 2020-07-12T16:30:26.816Z · score: 2 (2 votes) · EA · GW

[epistemic status: very uncertain, but I've been thinking about it for a while; there's probably a more persuasive argument out there]

I think you can easily extrapolate from a Kantian imperfect duty to help others to EA (but I understand people seldom have the patience to engage with this point in Kantian philosophy); also, I remember seeing a recent paper that used normative uncertainty to argue, quite successfully, that a deontological conception of moral obligation, given uncertainty, would end up in some sort of maximization. Other philosophers (Shelly Kagan, Derek Parfit) have persuasively argued that plausible versions of the most accepted moral philosophies tend to collapse into each other.

It'd be wonderful if someone could easily provide an argument reducing consequentialism, deontology and virtue ethics to each other. People could stop arguing along the lines of "you can only accept that if you're an x-utilitarian..." and focus on how to effectively realize moral value (which is a hard enough subject).

My own personal and sketchy take here would be something like:

To consistently live with virtue in society, I must follow moral duties defined by social norms that are fair, stable and efficient - norms that, in some way, strive for general happiness (otherwise, society will change or collapse).

To maximize general happiness, I need to recognize that I am a limited rational agent, and devise a life plan that includes acquiring virtuous habits, and cooperating with others through rules and principles that define moral obligations for reasonable individuals.

To act taking Reason in me as an end in itself and according to the moral law, I need to live in society, and to recognize my own limitations and my dependence on other rational beings, thus adopting habits that prevent vice and allow me to be recognized as a virtuous cooperator. To consistently do this, at least in scenarios of factual and normative uncertainty, implies acting in a way that can be described as restrictedly optimizing a cardinal social welfare function.

Comment by ramiro on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-07-08T19:14:35.379Z · score: 1 (1 votes) · EA · GW

Where can we get the video?

Comment by ramiro on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-07T13:27:21.356Z · score: 3 (3 votes) · EA · GW

I think there's a small typo, probably from your previous post on prisons:

Note that each prison’s profit-maximizing bid is independent of the other prisons’ bids

I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?
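For readers unfamiliar with the mechanism, here is a minimal sketch of a sealed-bid second-price (Vickrey) auction - the property quoted above, that each bidder's profit-maximizing bid is independent of the others' bids, is exactly what makes truthful bidding a dominant strategy. The function name and example data are illustrative, not taken from the original post.

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins but pays only the
    second-highest bid, so over- or understating one's true value can
    never increase one's payoff.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]   # highest bid wins
    price = ranked[1][1]    # ...but pays the second-highest bid
    return winner, price

# Example: with truthful bids, B wins and pays A's bid of 80.
print(second_price_auction({"A": 80, "B": 100, "C": 60}))  # ('B', 80)
```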

Comment by ramiro on Ramiro's Shortform · 2020-07-04T23:39:22.401Z · score: 4 (3 votes) · EA · GW

Should donations be counter-cyclical? At least as a matter of when to give (I remember a previous similar conversation on Reddit, but it was mainly about deciding where to donate). I don't think patient philanthropists should "give now instead of later" just because of that (we'll probably face worse crises), but it seems like frequent donors (like GWWC pledgers) should consider bringing their donations forward (particularly if their personal spending has decreased) - and also take into account expectations about future exchange rates. Does it make any sense?

Comment by ramiro on Ramiro's Shortform · 2020-07-02T02:18:43.153Z · score: 3 (3 votes) · EA · GW

I just responded to the UNESCO Public Online Consultation on the draft of a Recommendation on AI Ethics - it was longer and more complex than I expected.

I'd really love to know what other EAs think of it. I'm very unsure about how useful it is going to be, particularly since the US left the organization in 2018. But it's the first Recommendation of a UN agency on this, the text addresses many interesting points (despite greatly emphasizing short-term issues, it does address "long-term catastrophic harms"), I haven't seen many discussions of it (except for the Montreal AI Ethics Institute), and the deadline is July 31.

Comment by ramiro on Prabhat Soni's Shortform · 2020-06-30T15:59:15.106Z · score: 1 (1 votes) · EA · GW

I think people already do some of this. I guess the rhetorical shift from x-risk reasoning ("hey, we're all gonna die!") to longtermist arguments ("imagine how wonderful the future can be after the Precipice...") is based on that.

However, I think that, besides cultural challenges, the greatest obstacle to longtermist reasoning in our societies (particularly in LMICs) is that we have an "intergenerational Tragedy of the Commons" aggravated by short-term bias (and hyperbolic discounting) and the representativeness heuristic (we've never observed human extinction). People don't usually think about the long-term future - but even when they do, they don't want to trade their individual, present, certain welfare for a collective (and non-identifiable), future and uncertain one.

Comment by ramiro on My amateur method for translations · 2020-06-30T15:41:28.907Z · score: 2 (2 votes) · EA · GW

Thanks!

I find DeepL more useful because, unlike with Google Translate, I don't have to slice my text into 5k-character chunks (though I often turn to Google and Linguee when I want to check small excerpts). It has also given me a better experience than Microsoft Word's translation tool.

Sure, I added some remarks on how we used it to translate some EA-related material. But, honestly, it's basically a handy guide.

Comment by ramiro on How should we run the EA Forum Prize? · 2020-06-24T14:50:17.521Z · score: 2 (2 votes) · EA · GW

I totally agree with this:

Also, as someone who doesn't read every single post on the Forum, I also find the Prize useful for highlighting what content is actually worth reading,

On the other hand, I think high-karma posts are already highlighted in the Forum Favorites section.

Comment by ramiro on Patrick Collison on Effective Altruism · 2020-06-24T13:56:22.988Z · score: 28 (10 votes) · EA · GW

But it's hard for me to see how, you know, writing a treatise of human nature would score really highly in an EA oriented framework. As assessed ex-post that looked like a really valuable thing for Hume to do.

Actually, there are a lot of EAs researching philosophy and human psychology.

I think Collison's conception of EA is something like "GiveWell charity recommendations" - this seems to be a common misunderstanding among non-EA people. I didn't watch the whole interview, but it seems odd that he doesn't account for the contrast between what he had just said about EA and his own comments on x-risks and longtermism.

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-06-23T01:18:15.048Z · score: 3 (2 votes) · EA · GW

Sorry, I should have been clearer: I think "treating attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners" is hard to build support for, and may imply some risk of abuse.

Comment by ramiro on Khorton's Shortform · 2020-06-22T19:43:29.040Z · score: 2 (2 votes) · EA · GW

Cool. Any special reason for 7?

There's even a specific term, which I can't recall, for intentional changes a social group makes to its environment in order to domesticate a landscape and provide services for the future. It will take me some time to find it.

On the other hand, setting aside the specifics of strong longtermism, I guess the conjunction of these ideas is pretty recent: a) concern for humanity as a whole; b) a scope longer than 150 years; c) the existence of a trade-off between present and future welfare; d) the balance being tipped in favor of the long term. [epistemic status: just an insight; it would take me too long to look for a counter-example]

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-06-22T17:07:14.363Z · score: 3 (2 votes) · EA · GW

I'd like to have read this before our discussion:

In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.

But their recommendations sound scary:

First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.

Comment by ramiro on My open-for-feedback donation plans · 2020-06-21T18:53:27.967Z · score: 2 (2 votes) · EA · GW

I'm not sure I can help you, but I thank you for this post - it made me include ALLFED in my donation plans.

Should I give more than 10% this year, due to COVID-19?

Well, it won't hurt anyone if you donate more than you pledged. I pondered a similar issue, and decided to donate to Covid-related charities what I've saved due to my decrease in consumption. It feels kind of "fair."

And there seem to be good arguments for mostly investing, letting interest compound, and giving a lot later (or setting up a trust or something to do so on one’s behalf).

Please let me know if you change your mind after reading Trammell's argument. At least for me, in my home country, it is very complicated to invest in such a volatile scenario. I'm probably biased here; I have already lost a significant portion of my savings (which was dumb, because I knew Covid was coming), and my first thought was "I should have given it all to AMF."

Comment by ramiro on EA considerations regarding increasing political polarization · 2020-06-21T18:40:10.797Z · score: 5 (4 votes) · EA · GW

Thanks a lot for this post. I have some questions:

1) I wonder if you think the current polarization might be somehow associated with these possible trends:

a) Increased use of social networks, misinformation and The Revolt of the Public;

b) The rise of a new cold war – where countries engage in memetic warfare, and elites become divided over international policy;

c) Something even broader, like Peter Turchin's secular cycles (or the more widely accepted Kondratieff cycles, if you don't like something resembling Hari Seldon's psychohistory). This inequality-polarization-populism-conflict trend seems to be as old as Urukagina's rule.

2) Do you think the current issues in American universities are more comparable to the Cultural Revolution than to May '68 in France (which led to social disruption) - or maybe to other examples of student activism? The latter seem to be historically more common. A very important disanalogy with the Cultural Revolution is that it was perceived to be fueled by the Great Leader, which is not happening in any current student activism I'm aware of.

Comment by ramiro on Geographic diversity in EA · 2020-06-16T14:24:09.956Z · score: 12 (7 votes) · EA · GW

Thanks for the post. My comments:

I have the intuition that with a volatile dollar price it doesn't always make sense to donate to EA recommended charities and perhaps donors could allocate better their donations by donating locally

1. Actually, if you're from a poor country and use the current TLYCS calculator, you likely have to be rich for it to recommend that you donate a significant portion of your income.

2. I have mixed intuitions here; maybe someone could disentangle them better: a) if the exchange rate between my currency and the US dollar goes from 1:2 to 1:4, my donations apparently lose half of their value; b) however, if this movement is global (because exchange markets overvalue the US dollar, due to the uncertainties caused by the pandemic), then the currencies of the countries receiving aid will probably drop too - so, on average, everything remains the same; c) due to the recession, people donate less, so saving money to donate later may have a cyclical effect.

EA recommends policy careers but I suspect that it's an even more important path in LMICs, where policies are weaker, policymakers are even less evidence based and where institutions have a lot more potential to improve.

I totally agree with that. But LMICs have their own peculiarities and serious governance issues; for instance, I haven't found 80,000 Hours advice on public policy that is applicable to someone beginning a civil service career in Brazil. It'd probably be impactful to find organizations with more local expertise.

I won't convince my friend's uncle to donate to Against Malaria but I could convince him to donate to a colombian charity

I don't know how well it scales, but in Brazil, Doebem offers to transfer donations to GiveWell charities (AMF, GD and SCI), and also to Brazilian charities that are recognized as transparent and whose impact has previously been evaluated by international researchers (though not with the same rigor as GW). Besides, they have experimented with direct transfers during the pandemic.

On the other hand, in LMICs, I think many people are often suspicious of local charities they don't have direct contact with, and might be more trusting of recognized foreign charities - with established reputations and rigorous evaluation. For example, when I talk about GD, people usually say "great idea"; but when I mention doedireto, I face all kinds of questions: "How can you ensure the money gets to the right person? Or that they won't spend it on drinks?" etc. This is not unjustified, considering the bad rep the charity sector may have in some circles.

I wonder if there is a bias when EA talks about problems not being “neglected” enough when dismissing some cause areas or focus topics

1. I think "neglectedness" is actually a proxy to assess the expected marginal impact of and additional contribution to a cause - . So, it might not be applicable to causes advocating for systemic change, where you should perform some sort of tipping point analysis instead. On the other hand, the true problem here is: how do you evaluate charities / projects aiming for systemic change?

2. This might lead to a selection bias - we'll end up focusing on projects that are easier to evaluate; this is often compared to the joke where an economist searches for her keys under the lamppost because that's the only place she can see. I think most people working on charity evaluation in EA are aware of that; on the other hand, requiring no evidence would likely lead to bad incentives, and you still need some evidence to assess the opportunity costs of a project.

3. I actually think improving women's participation in LMIC governments (and in leading positions in general) would be a good cause precisely because (epistemic status: a guess based on anecdotal experience and some light reading on organizations and management) it would improve institutional decision-making (besides, of course, mitigating discrimination). It would be interesting to see a more thorough assessment of this area.

Comment by ramiro on Geographic diversity in EA · 2020-06-16T12:49:36.177Z · score: 11 (9 votes) · EA · GW

I wonder what you'd think about having a network connecting South American EAs. In Brazil, we have considered many of the issues you are raising; I believe it would be, overall, mutually beneficial, and possibly even fun.

Comment by ramiro on EA and tackling racism · 2020-06-15T02:58:29.007Z · score: 4 (3 votes) · EA · GW

Actually, I think that, before BLM, I underestimated the impact of racism (probably because it's hard to evaluate and compare current interventions to GW's charity recommendations); also, given BLM and the possibility of systemic change, I now think it might be more tractable - it might even be a matter of social urgency.

But what bothered me most in your text was:

a) EA does not reduce everything to mosquito nets and AI - the problem is that almost no one else was paying attention to these issues before, and they're really important;

b) the reason most people don't think about it is that the populations concerned are neglected - they're seen as having less value than the average life in the developed world. Moreover, in the case of global health and poverty interventions in poor countries (mostly African countries), I think it's quite plausible that racism (i.e., ethnic conflicts, a brutal colonial past, indifference from developed countries) is partially responsible for those problems (neglected diseases and extreme poverty). For instance, racism was a key issue in previous humanitarian tragedies, such as the Great Famines in Ireland and Bengal.

Comment by ramiro on Who should / is going to win 2020 FLI award 2020? · 2020-06-12T23:03:20.918Z · score: 3 (2 votes) · EA · GW

Good point. I do think it has to be an expensive signal, but why not US$25k instead of US$50k?

Comment by ramiro on Who should / is going to win 2020 FLI award 2020? · 2020-06-12T22:56:04.826Z · score: 2 (2 votes) · EA · GW

A very strong candidate, indeed. But my nomination goes to a classic: Viktor Zhdanov, the Soviet bioweapons expert who convinced the WHO to eradicate smallpox. (I just realized he would be the third Soviet citizen to win the award.)

Comment by ramiro on Gordon Irlam: an effective altruist ahead of his time · 2020-06-11T20:45:42.395Z · score: 14 (9 votes) · EA · GW

Thanks for this post. Besides giving due recognition, I think that studying people who professed EA ideas before the movement began may provide insights on, e.g., what prevented these ideas from spreading earlier, what obstacles they faced, what actually worked, etc.

Comment by ramiro on Idea: statements on behalf of the general EA community · 2020-06-11T13:57:40.831Z · score: 7 (5 votes) · EA · GW

I think CEA often plays the role of expressing some sort of aggregate or social choice for the EA movement - as in the case of the guiding principles.

On the other hand, I take reputational risk really seriously, especially if we start criticizing policy decisions or specific institutions; so it would be more prudent to have particular organizations issue statements and open letters (like TLYCS, or FLI, etc.), so that any eventual backlash wouldn't spill over to EA as a whole.

Comment by ramiro on EA and tackling racism · 2020-06-10T03:15:06.210Z · score: 21 (12 votes) · EA · GW

Thanks for this post. I am trying to think about charities, like CEA's Groups team recommendations, in this light. Besides, I think someone should think deeply about how EAs should react to the possibility of social change - when we are more likely to reach a tipping point leading to a very impactful event (or, in a more pessimistic tone, where it can escalate into catastrophe). For instance, in situations like this, neglectedness is probably a bad heuristic - as remarked by A. Broi.

On the other hand, this sounds inaccurate, if not unfair, to me:

Is EA really all about taking every question and twisting it back to malaria nets and AI risk?

I don’t actually want to argue about “what should EAs do”. Just like you, all I want is to share a thought – in my case, my deep realization that attention is a scarce resource. I had this “epiphany” on Monday, when I read that a new Ebola outbreak had been detected in the Democratic Republic of Congo (DRC). In the same week, same country, the High Commissioner for Human Rights denounced the massacre about 1,300 civilians. Which reminded me this region has faced ethnic and political violence since the 1990s, when the First and the Second Congo Wars happened, leading to death more than 5 million people.

But most people have never even heard of it - I hadn't, until three years ago, when I had my first contact with EA. Likewise, while the refugee crisis in Europe is a hot topic in world politics, the fact that Uganda is home to more than 1.4 million refugees (mainly from the DRC and Sudan) is largely ignored - but not by GD.

So, I didn't really see your point with this:

Do we also need an “EA So White” too

Comment by ramiro on It's OK To Also Donate To Non-EA Causes · 2020-06-03T14:02:56.581Z · score: 8 (5 votes) · EA · GW

I'm really sorry for that, I didn't intend it at all. Thanks for pointing it out.

It's just that I was reading the Vox newsletter on this issue just now and thought, "Well, maybe Campaign Zero is really good and I should consider it, or this guy may want to check out those other charities."

(Even if it's all about fuzzies... when I'm purchasing a wine, I still want the best wine for the lowest cost, and I'd appreciate any info on how to obtain it - even though I can't change my past consumption)