Posts

Is there anything like "green bonds" for x-risk mitigation? 2020-06-30T00:33:38.732Z · score: 21 (10 votes)
My amateur method for translations 2020-06-30T00:29:30.043Z · score: 11 (6 votes)
Indifference, racism and violence: what comes after justice for George Floyd? 2020-06-12T01:44:23.358Z · score: 38 (16 votes)
Who should / is going to win 2020 FLI award 2020? 2020-06-11T19:20:11.364Z · score: 9 (5 votes)
Is rapid diagnostic testing (RDT), such as for coronavirus, a neglected area in Global Health? 2020-03-17T22:24:05.915Z · score: 11 (4 votes)
Ramiro's Shortform 2019-10-17T13:16:14.822Z · score: 3 (2 votes)
Merging with AI would be suicide for the human mind - Susan Schneider 2019-10-03T17:55:07.789Z · score: 0 (2 votes)

Comments

Comment by ramiro on Ramiro's Shortform · 2020-07-15T17:40:33.232Z · score: 1 (1 votes) · EA · GW

Legal personality & AI systems

From the first draft of the UNESCO Recommendation on AI Ethics:

Policy Action 11: Ensuring Responsibility, Accountability and Privacy 94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.

I see the point of the last sentence: to prevent individuals and companies from escaping liability for AI failures. However, the last bit also seems to prevent us from creating some sort of "AI DAO" - i.e., a legal entity totally implemented by an autonomous system. This doesn't seem reasonable; after all, what is a company if not some sort of artificial agent?

Comment by ramiro on Collection of good 2012-2017 EA forum posts · 2020-07-13T00:52:21.691Z · score: 8 (3 votes) · EA · GW

We should have some sort of e-book with some of the "best picks" by year

Comment by ramiro on Is it possible, and if so how, to arrive at ‘strong’ EA conclusions without the use of utilitarian principles? · 2020-07-12T16:30:26.816Z · score: 2 (2 votes) · EA · GW

[epistemic status: very insecure, but I've been thinking about it for a while; there's probably a more persuasive argument out there]

I think you can easily extrapolate from a Kantian imperfect duty to help others to EA (but I understand people seldom have the patience to engage with this point in Kantian philosophy); also, I remember seeing a recent paper that used normative uncertainty to argue, quite successfully, that a deontological conception of moral obligation, given uncertainty, would end up in some sort of maximization. Other philosophers (Shelly Kagan, Derek Parfit) have persuasively argued that plausible versions of the most accepted moral philosophies tend to collapse into each other.

It'd be wonderful if someone could easily provide an argument reducing consequentialism, deontology and virtue ethics to each other. People could stop arguing like "you can only accept that if you're an x-utilitarian...", and focus on how to effectively realize moral value (which is a hard enough subject).

My own personal and sketchy take here would be something like:

To consistently live with virtue in society, I must follow moral duties defined by social norms that are fair, stable and efficient – that, in some way, strive for general happiness (otherwise, society will change or collapse).

To maximize general happiness, I need to recognize that I am a limited rational agent, and devise a life plan that includes acquiring virtuous habits, and cooperating with others through rules and principles that define moral obligations for reasonable individuals.

To act taking Reason in me as an end in itself and according to the moral law, I need to live in society, and recognize my own limitations and my dependence on other rational beings, thus adopting habits that prevent vice and allow me to be recognized as a virtuous cooperator. To consistently do this, at least in scenarios of factual and normative uncertainty, implies acting in a way that can be described as restrictedly optimizing a cardinal social welfare function.

Comment by ramiro on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-07-08T19:14:35.379Z · score: 1 (1 votes) · EA · GW

Where can we get the video?

Comment by ramiro on Maximizing the Long-Run Returns of Forced Savings · 2020-07-07T13:27:21.356Z · score: 3 (3 votes) · EA · GW

I think there's a small typo, probably from your previous post on prisons:

Note that each prison’s profit-maximizing bid is independent of the other prisons’ bids

I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?
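The property in the quoted sentence (each bidder's profit-maximizing bid being independent of the other bids) is the classic dominant-strategy result for second-price (Vickrey) auctions. A minimal sketch with made-up numbers of my own, not from the post:

```python
# A hedged illustration of the Vickrey-auction property: in a second-price
# auction, bidding your true value is a (weakly) dominant strategy, so your
# best bid doesn't depend on what the other bidders do.

def vickrey_payoff(my_bid, my_value, other_bids):
    """Payoff if I win (I pay the highest competing bid), else zero."""
    top_other = max(other_bids)
    if my_bid > top_other:
        return my_value - top_other
    return 0

others = [30, 55, 80]
value = 100

# Truthful bidding does at least as well as any deviation, for these bids:
truthful = vickrey_payoff(value, value, others)
assert all(vickrey_payoff(b, value, others) <= truthful for b in range(201))
```

The point is that overbidding never raises the price you pay (the second-highest bid sets it), and underbidding only risks losing an auction you wanted to win.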

Comment by ramiro on Ramiro's Shortform · 2020-07-04T23:39:22.401Z · score: 4 (3 votes) · EA · GW

Should donations be counter-cyclical? At least as a "matter of when" (I remember a similar previous conversation on Reddit, but it was mainly about deciding where to donate). I don't think patient philanthropists should "give now instead of later" just because of that (we'll probably have worse crises), but it seems like frequent donors (like GWWC pledgers) should consider bringing their donations forward (particularly if their personal spending has decreased) - and also take into account expectations about future exchange rates. Does this make sense?

Comment by ramiro on Ramiro's Shortform · 2020-07-02T02:18:43.153Z · score: 3 (3 votes) · EA · GW

I just responded to the UNESCO Public Online Consultation on the draft of a Recommendation on AI Ethics - it was longer and more complex than I expected.

I'd really love to know what other EAs think of it. I'm very unsure about how useful it is going to be, particularly since the US left the organization in 2018. But it's the first Recommendation of a UN agency on this, the text addresses many interesting points (despite greatly emphasizing short-term issues, it does address "long-term catastrophic harms"), I haven't seen many discussions of it (except from the Montreal AI Ethics Institute), and the deadline is July 31.

Comment by ramiro on Prabhat Soni's Shortform · 2020-06-30T15:59:15.106Z · score: 1 (1 votes) · EA · GW

I think people already do some of this. I guess the rhetorical shift from x-risk reasoning ("hey, we're all gonna die!") to longtermist arguments ("imagine how wonderful the future can be after the Precipice...") is based on that.

However, I think that, besides cultural challenges, the greatest obstacle to longtermist reasoning in our societies (particularly in LMICs) is that we have an "intergenerational Tragedy of the Commons" aggravated by short-term bias (and hyperbolic discounting) and the representativeness heuristic (we've never observed human extinction). People don't usually think about the long-term future - but even when they do, they don't want to trade their individual, present, certain welfare for a collective (and non-identifiable), future, uncertain welfare.

Comment by ramiro on My amateur method for translations · 2020-06-30T15:41:28.907Z · score: 2 (2 votes) · EA · GW

Thanks!

I find DeepL more useful because, unlike Google Translate, I don't have to slice my text into 5k-character bits (though I often turn to Google and Linguee when I want to check small excerpts). It has also provided me with a better experience than Microsoft Word's translation tool.

Sure, I added some remarks on how we used it to translate some EA-related material. But, honestly, it's basically a handy guide.

Comment by ramiro on How should we run the EA Forum Prize? · 2020-06-24T14:50:17.521Z · score: 2 (2 votes) · EA · GW

I totally agree with this:

Also, as someone who doesn't read every single post on the Forum, I also find the Prize useful for highlighting what content is actually worth reading,

On the other hand, I think high-karma posts are already highlighted in the Forum Favorites section.

Comment by ramiro on Patrick Collison on Effective Altruism · 2020-06-24T13:56:22.988Z · score: 26 (8 votes) · EA · GW
But it's hard for me to see how, you know, writing a treatise of human nature would score really highly in an EA oriented framework. As assessed ex-post that looked like a really valuable thing for Hume to do.

Actually, there's a lot of EAs researching philosophy and human psychology.

I think Collison's conception of EA is something like "GiveWell charity recommendations" - this seems to be a common misunderstanding among non-EA people. I didn't watch the whole interview, but it seems weird that he doesn't account for the contrast between what he had just said about EA and his comments on x-risks and longtermism.

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-06-23T01:18:15.048Z · score: 3 (2 votes) · EA · GW

Sorry, I should have been more clear: I think "treating attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners" is hard to build support for, and may imply some risk of abuse.

Comment by ramiro on Khorton's Shortform · 2020-06-22T19:43:29.040Z · score: 2 (2 votes) · EA · GW

Cool. Any special reason for 7?

There's even a specific term I can't recall for the intentional changes a social group makes to its environment to domesticate a landscape and provide services for the future. It will take me some time to find it.

On the other hand, beyond the specifics of strong longtermism, I guess the conjunction of these ideas is pretty recent: a) concern for humanity as a whole; b) a scope longer than 150 years; c) the existence of a trade-off between present and future welfare; d) the balance being tipped in favor of the long term. [epistemic status: just an insight, it would take me too long to look for a counter-example]

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-06-22T17:07:14.363Z · score: 3 (2 votes) · EA · GW

I'd like to have read this before our discussion:

In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.

But their recommendations sound scary:

First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.
Comment by ramiro on My open-for-feedback donation plans · 2020-06-21T18:53:27.967Z · score: 2 (2 votes) · EA · GW

I'm not sure I can help you, but I thank you for this post - it made me include ALLFED in my donation plans.

Should I give more than 10% this year, due to COVID-19?

Well, it won't hurt anyone if you donate more than what you pledged. I pondered a similar issue, and decided to donate to Covid-related charities what I've saved through my decreased consumption. It feels kind of "fair".

And there seem to be good arguments for mostly investing, letting interest compound, and giving a lot later (or setting up a trust or something to do so on one’s behalf).

Please let me know if you change your mind after reading Trammell's argument. At least for me, in my home country, it is very hard to invest in such a volatile scenario. I'm probably biased here; I have already lost a significant portion of my savings (which was dumb, because I knew Covid was coming), and my first thought was "I should have given it all to AMF."

Comment by ramiro on EA considerations regarding increasing political polarization · 2020-06-21T18:40:10.797Z · score: 5 (4 votes) · EA · GW

Thanks a lot for this post. I have some questions:

1) I wonder if you think the current polarization might be somehow associated with these possible trends:

a) Increased use of social networks, misinformation and The revolt of the public;

b) The rise of a new cold war – where countries engage in memetic warfare, and elites become divided over international policy;

c) Something even broader, like Peter Turchin’s Secular cycles (or the more accepted Kondratieff cycles, if you don’t like something resembling Hari Seldon’s psychohistory). This inequality-polarization-populism-conflict trend seems to be as old as Urukagina's rule.

2) Do you think the current issues in American universities are more comparable to the Cultural Revolution than to May ’68 in France (which led to social disruption) - or maybe to other examples of student activism? The latter seem historically more common. A very important disanalogy with the Cultural Revolution is that it was perceived to be fueled by the Great Leader, which is not presently happening in any student activism I’m aware of.

Comment by ramiro on Geographic diversity in EA · 2020-06-16T14:24:09.956Z · score: 12 (7 votes) · EA · GW

Thanks for the post. My comments:

I have the intuition that with a volatile dollar price it doesn't always make sense to donate to EA recommended charities and perhaps donors could allocate better their donations by donating locally

1. Actually, if you're from a poor country and use the current TLYCS calculator, you likely have to be rich for it to recommend donating a significant portion of your income.

2. I have mixed intuitions here; maybe someone could disentangle them better: a) if my currency's exchange rate against the US dollar goes from 1:2 to 1:4, my donations apparently lose half their value; b) however, if this movement is global (because exchange markets overvalue the US dollar, due to the uncertainties caused by the pandemic), then the currencies of the countries receiving aid will probably drop too - so, on average, everything remains the same; c) due to recession, people donate less, so saving money to donate later may have a cyclical effect.
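Intuitions (a) and (b) can be checked with toy arithmetic (the rates below are hypothetical, just to make the cancellation visible):

```python
# Toy illustration (made-up rates): how exchange-rate moves affect the
# purchasing power of a donation made in a local currency.

def usd_value(local_amount, local_per_usd):
    """Convert a local-currency amount to USD at the given rate."""
    return local_amount / local_per_usd

donation = 1000  # in my local currency

# (a) My currency weakens from 2:1 to 4:1 against the dollar:
before = usd_value(donation, 2)  # 500 USD
after = usd_value(donation, 4)   # 250 USD -> apparently half the value

# (b) But if the recipient country's currency also halves against the dollar,
# each dollar buys twice as much there, roughly cancelling the loss:
recipient_before = before * 2  # recipient-currency purchasing power at 2:1
recipient_after = after * 4    # recipient-currency purchasing power at 4:1
assert recipient_before == recipient_after  # same purchasing power either way
```

Of course, real exchange-rate moves are rarely this symmetric, which is why the intuitions remain mixed.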

EA recommends policy careers but I suspect that it's an even more important path in LMICs, where policies are weaker, policymakers are even less evidence based and where institutions have a lot more potential to improve.

I totally agree with that. But LMICs have their own peculiarities and serious governance issues; for instance, I haven't found 80kh advice on public policy that is applicable to someone beginning a civil service career in Brazil. It'd probably be impactful to find organizations with more local expertise.

I won't convince my friend's uncle to donate to Against Malaria but I could convince him to donate to a colombian charity

I don't know how well it scales, but in Brazil, Doebem offers to transfer donations to GiveWell charities (AMF, GD and SCI), and also to Brazilian charities recognized as transparent and whose impact has been previously evaluated by international researchers (though not with the same rigor as GW). Besides, they have experimented with direct transfers during the pandemic.

On the other hand, in LMICs, I think many people are often suspicious of local charities they don't have direct contact with, and might be more trusting of recognized foreign charities - with established reputations and rigorous evaluation. For example, when I talk about GD, people usually say "great idea"; but when I mention doedireto, I face all kinds of questions: "how can you ensure the money gets to the right person? Or that they won't spend it on drinks?" etc. This is not unjustified, considering the bad rep the charity sector has in some circles.

I wonder if there is a bias when EA talks about problems not being “neglected” enough when dismissing some cause areas or focus topics

1. I think "neglectedness" is actually a proxy for assessing the expected marginal impact of an additional contribution to a cause. So, it might not be applicable to causes advocating for systemic change, where you should instead perform some sort of tipping-point analysis. On the other hand, the true problem here is: how do you evaluate charities / projects aiming for systemic change?

2. This might lead to a selection bias - we'll end up focusing on projects that are easier to evaluate; this is often compared to the joke where an economist searches for her keys under the streetlight because that's the only place she can see. I think most people working on charity evaluation in EA are aware of that; on the other hand, requiring no evidence would likely create bad incentives, and you still need some evidence to assess the opportunity costs of a project.

3. I actually think improving women's participation in LMIC governments (and in leading positions in general) would be a good cause precisely because (epistemic status: guess based on anecdotal experiences and some light reading on organizations and management) it would improve institutional decision-making (besides, of course, mitigating discrimination). It would be interesting to see a deeper assessment of this area.

Comment by ramiro on Geographic diversity in EA · 2020-06-16T12:49:36.177Z · score: 11 (9 votes) · EA · GW

I wonder what you'd think about having a network connecting South American EAs. In Brazil, we have considered many of the matters you are now posing; I believe it would be, overall, mutually beneficial, possibly even fun.

Comment by ramiro on EA and tackling racism · 2020-06-15T02:58:29.007Z · score: 3 (2 votes) · EA · GW

Actually, I think that, before BLM, I underestimated the impact of racism (probably because it's hard to evaluate and compare current interventions to GW's charity recommendations); also, given BLM and the possibility of systemic change, I now think it might be more tractable - it might even be socially urgent.

But what most bothered me in your text was:

a) EA does not reduce everything to mosquito nets and AI - the problem is that almost no one else was paying attention to these issues before, and they're really important;

b) the reason most people don't think about these issues is that the affected populations are neglected - they're seen as having less value than the average life in the developed world. Moreover, in the case of global health and poverty interventions in poor countries (mostly African countries), I think it's quite plausible that racism (i.e., ethnic conflicts, a brutal colonial past, indifference from developed countries) is partially responsible for those problems (neglected diseases and extreme poverty). For instance, racism was a key factor in previous humanitarian tragedies, such as the Great Famines in Ireland and Bengal.

Comment by ramiro on Who should / is going to win 2020 FLI award 2020? · 2020-06-12T23:03:20.918Z · score: 3 (2 votes) · EA · GW

Good point. I do think it has to be an expensive signal, but why not U$25k instead of 50k?

Comment by ramiro on Who should / is going to win 2020 FLI award 2020? · 2020-06-12T22:56:04.826Z · score: 1 (1 votes) · EA · GW

A very strong candidate, indeed. But my nomination goes to a classic: Viktor Zhdanov, the Soviet bioweapons expert who convinced the WHO to eradicate smallpox. (I just realized he would be the third Soviet citizen to win the award.)

Comment by ramiro on Gordon Irlam: an effective altruist ahead of his time · 2020-06-11T20:45:42.395Z · score: 14 (9 votes) · EA · GW

Thanks for this post. Besides due recognition, I think that studying people who professed EA ideas before the movement began may provide insights on, e.g., what prevented these ideas from spreading earlier, what setbacks they faced, what actually worked, etc.

Comment by ramiro on Idea: statements on behalf of the general EA community · 2020-06-11T13:57:40.831Z · score: 7 (5 votes) · EA · GW

I think CEA often plays the role of expressing some sort of aggregate or social choice for the EA movement - as in the case of the guiding principles.

On the other hand, I take reputational risk really seriously, especially if we start criticizing policy decisions or specific institutions; so it would be more prudent to have particular organizations issue statements and open letters (like TLYCS, or FLI, etc.), so that any eventual backlash wouldn't extrapolate to EA as a whole.

Comment by ramiro on EA and tackling racism · 2020-06-10T03:15:06.210Z · score: 21 (12 votes) · EA · GW

Thanks for this post. I am trying to think about charities, like CEA's Groups team recommendations, in this light. Besides, I think someone should think deeply about how EAs should react to the possibility of social changes – when we are more likely to reach a tipping point leading to a very impactful event (or, in a more pessimistic tone, where things can escalate into catastrophe). For instance, in situations like this, neglectedness is probably a bad heuristic - as remarked by A. Broi.

On the other hand, this sounds inaccurate, if not unfair, to me:

Is EA really all about taking every question and twisting it back to malaria nets and AI risk?

I don’t actually want to argue about “what EAs should do”. Just like you, all I want is to share a thought – in my case, my deep realization that attention is a scarce resource. I had this “epiphany” on Monday, when I read that a new Ebola outbreak had been detected in the Democratic Republic of Congo (DRC). In the same week, in the same country, the High Commissioner for Human Rights denounced the massacre of about 1,300 civilians. This reminded me that the region has faced ethnic and political violence since the 1990s, when the First and Second Congo Wars happened, leading to the deaths of more than 5 million people.

But most people have never even heard of it – I hadn't, until three years ago, when I had my first contact with EA. Likewise, while the refugee crisis in Europe is a hot topic in world politics, the fact that Uganda is home to more than 1.4 million refugees (mainly from the DRC and Sudan) is largely ignored - but not by GD.

So, I didn't really see your point with this:

Do we also need an “EA So White” too
Comment by ramiro on It's OK To Also Donate To Non-EA Causes · 2020-06-03T14:02:56.581Z · score: 8 (5 votes) · EA · GW

I'm really sorry for that, I didn't intend it at all. Thanks for pointing it out.

It's just that I was reading the Vox newsletter on this issue right now and thought "Well, maybe Campaign Zero is really good and I should consider it, or this guy may want to check those other charities."

(Even if it's all about fuzzies... when I'm purchasing a wine, I still want the best wine for the lowest cost, and I'd appreciate any info on how to obtain it - even though I can't change my past consumption)

Comment by ramiro on It's OK To Also Donate To Non-EA Causes · 2020-06-03T13:48:49.050Z · score: 6 (4 votes) · EA · GW
I'm taking my own advice, by the way; during the nationwide protests over the killing of George Floyd, I've donated $1000 to Campaign Zero, but I'm not counting it toward my 10% EA donation pledge

Have you considered Open Phil's suggestions, ASJ and the Bronx Freedom Fund?

Comment by ramiro on Expert Communities and Public Revolt · 2020-06-01T01:10:55.154Z · score: 1 (1 votes) · EA · GW

I should probably report that my credences have shifted a little more towards your claims - due to politicians making bizarre statements (such as Trump pulling the US out of the WHO), Tara Sell raising the issue on 80kh, and other readings about how misinformation may lead to generalized mistrust (e.g., here).

Comment by ramiro on Any good organizations fighting racism? · 2020-05-28T17:19:15.858Z · score: 6 (6 votes) · EA · GW

I’d like to see a more detailed explanation of this question, too. Particularly, I wonder how a specific intervention to fight racism would fare in tractability, neglectedness and impact.

On the other hand, I guess that, in a very broad sense, racism (broadly understood as ethnic prejudice and discrimination) likely has externalities affecting EA causes:

a) it fuels internal social strife (civil war, genocide, mistrust, immigration crises) and increases the odds of external conflict (and even nuclear warfare, as with India vs. Pakistan).

b) It may rationalize scope neglect: people fail to recognize the impact of interventions in other cultures, either because they think those lives are worth less, or because they think progress is unachievable (“what’s the point of saving a child from malaria, if she’ll starve?”). (This is a falsifiable claim, but I couldn't find anyone testing it.)

c) It raises suspicion over other areas. For instance, I think the past association between eugenics and racism may pose an obstacle to discuss improving humanity’s long-term prospects through genetic engineering.

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-05-28T14:35:19.878Z · score: 2 (2 votes) · EA · GW
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation.

Agreed. But I don't think we could do that without changing the environment a little bit. My point is that rationality isn’t just about avoiding false beliefs (maximal skepticism), but about forming them adequately, and it’s way more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a Wikipedia entry, in a newspaper, in a WhatsApp message...

The core issue isn't really “statements that are false”, or people who are actually fooled by them. The problem is that, if I’m convinced I’m surrounded by lies and nonsense, I’ll keep following the same path as before (because I have a high credence that my beliefs are OK); it will just fuel my confirmation bias. Thus, the real problem with fake news is an externality. I haven’t found any paper testing this hypothesis, though. If it is right, then most articles I’ve seen claiming “fake news didn’t affect political outcomes” might be wrong.

You can fool someone without telling any lies. To steal an example I once saw on LW (still trying to find the source): imagine a random sequence of 0s and 1s; now, an Agent feeds a Principal with information about the sequence, like “digit 1 in the nth position”. To make the Principal believe the sequence is mostly made of 1s, all the Agent has to do is select the information, like “digit 1 in positions n, m and o”.
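The example above can be sketched in a few lines (my own reconstruction, since the LW source is unknown): every statement the Agent makes is true, yet the Principal's estimate is badly biased.

```python
# Selective reporting: all reports are true, but the picture is false.
import random

random.seed(0)
sequence = [random.randint(0, 1) for _ in range(1000)]  # roughly half 1s

# The Agent reports only positions that contain a 1 -- each report is true.
reports = [(i, bit) for i, bit in enumerate(sequence) if bit == 1]

# A naive Principal who trusts only the reports concludes the sequence
# is all 1s, while the true fraction is about one half.
principal_estimate = sum(bit for _, bit in reports) / len(reports)  # 1.0
true_fraction = sum(sequence) / len(sequence)
```

No individual report needs to be checked by a fact-checker here, which is exactly why filtering for false statements alone can't catch this kind of manipulation.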

But why would someone hire such an agent? Well, maybe the Principal is convinced most other accessible agents are liars; it’s even worse if the Agent already knows some of the Principal's biases, and easier if Principals with similar biases are clustered in groups with similar interests and jobs - like social activists, churches, military staff and financial investors. Even denouncing this scenario does not necessarily improve things; I think that, at least in some countries, political outcomes were affected by common knowledge of statements like “military personnel support this, financial investors would never accept that”. If you can convince voters they’ll face an economic crisis or political instability by voting for candidate A, they will avoid it.

My personal anecdote on how this process may work for a smart and scientifically educated person: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my “rationality skills” in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn’t extrapolate to the atmosphere. I was astonished that I had ignored this so far (well, maybe it was mentioned en passant in a science class), and that he hadn’t taken 2 min to google it (and find out that, yes, “greenhouse” is an analogy; the problem is that CO2 deflects radiation back to Earth); but maybe I wouldn’t have done it myself if I didn’t already know that CO2 is pivotal in keeping Earth warm. However, after days of this, no happy ending: our discussion basically ended with me pointing out that: a) he couldn’t provide any scientific paper backing his overall thesis (even though I would have been happy to pay him if he could); b) he would raise objections against “anthropic global warming” without even caring to put a consistent credence on them - like first pointing to alternative causes for the warming, and then denying the warming itself. He didn't really believe (i.e., assign a high posterior credence to) the claims that there was no warming, or that it was a random anomaly, because these would be ungrounded, and so a target in a discussion. Since then, we have barely spoken.

P.S.: I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate - thus mitigating what I've been calling the "lemons problem in news". But who rates the raters? Besides the risk of capture, I don't know how to make people trust the agencies in the first place.

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-05-25T17:17:03.710Z · score: 2 (2 votes) · EA · GW

Thanks a lot!

I think "offense-defense balance" is a very accurate term here. I wonder if you have any personal opinion on how to improve our situation on that front. I guess that when it comes to AI-powered misinformation through media, it's particularly concerning how easily it can overrun our defenses - so that, even if we succeed in fact-checking every inaccurate statement, it'll require a lot of resources and probably lead to a situation of widespread uncertainty or mistrust, where people, incapable of screening reliable info, will succumb to confirmation bias or peer pressure (I feel tempted to draw an analogy with DDoS attacks, or even with the lemons problem).

So, despite everything I've read about the subject (though not very systematically), I haven't seen feasible, well-written strategies to address this asymmetry - except for some papers on moderation in social networks and forums (even so, it's quite time-consuming, unless moderators draw up clear guidelines - like in this forum). I wonder why societies (through authorities or self-regulation) can't agree to impose even minimal reliability requirements, like demanding captcha tests before spreading messages (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source - something even newspapers refuse to do (my guess is that they're afraid this norm would compromise source confidentiality and their protections against lawsuits). If people had this as an established practice, one could easily screen for (at least grossly) unreliable messages by checking their source (or pointing out its absence), besides deterring them.

Comment by ramiro on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-05-22T18:13:07.785Z · score: 1 (1 votes) · EA · GW

I imagined so; but the idea just kept coming to my head, and since I hadn't seen it explicitly stated, I thought it could be worth mentioning.

I think there might be some antitrust problems with that

I agree that, with current legislation, this is likely so.

But let me share a thought: even though we don't have a hedge for when one company succeeds so well that it ends up dominating the whole market (and ruining all competitors in the process), we do have some compensation schemes (based on specific legislation) for when a company fails, like deposit insurance. The economic literature usually presents it as a public good (it decreases the odds of a bank run and so increases macroeconomic stability), but it was only accepted by the industry because it solved a lemons problem. Even today, "green swan" talk in finance (s. section 2) often appeals to the risk of losses in a future global crisis (the Tragedy of the Horizon argument). My impression is that an innovation in financial regulation often starts with convincing banks and institutions that it's in their general self-interest, and only then does it become compulsory, to avoid free-riders.

(So, yeah, if tech companies get together with the excuse of protecting their investors (& everyone else in the process) in case of someone dominating the market, that's collusion; if banks do so, it's CSR)

(epistemic status about the claims on deposit insurance: I should have made a better investigation into economic history, but I lack the time, the argument is consistent, and I did have first-hand experience with the creation of a depositor insurance fund for credit unions - i.e., it didn't mitigate systemic risk, it just solved depositors' risk aversion)

Comment by ramiro on Thoughts on improving governance in developing countries · 2020-05-18T13:08:59.811Z · score: 2 (2 votes) · EA · GW

Thanks for your post. I'm from a developing country and, after living some months in Europe, I couldn't stop thinking about this subject. Things like "why don't they have any holes in the street pavement?"; it reminded me of Cixin Liu's Dark Forest, when humans first meet the "droplet" and are astonished by its perfect flatness.

Let me add some non-structured remarks:

  • I think your "three sources of bad governance" are still too vague to be useful; moreover, they're not totally endemic to developing/poor countries (and even among them, they work in very different ways), and they seem to relate to bad governance (and outcomes) in a feedback loop. I think a good path is to compare many different societies and how they've dealt with those problems, like Acemoglu & Robinson's Why Nations Fail. Focusing on culture (something really hard to change) and how those societies fill positions (which ends up depending too much on culture, education and politics) seems to be a better start.
  • I think you've mentioned what seems to me the real problem: developing countries' governments (exception: China) are less capable of engaging in long-term planning, particularly to address uncertain risks/outcomes. Some really bad feedback loops explain that: politics (they plan for the next election), urgencies (after you've paid for disaster relief, social security, debt and compulsory spending...) and, of course, culture (there's a beautiful Brazilian samba, Felicidade ("Happiness"), with a verse like "we work the whole year for this brief moment, the carnival"). This is reflected in education (which usually only pays off years after reforms are implemented), macroeconomics (higher interest rates), infrastructure (Brazil adopted an annual social discount rate of 10% for infrastructure projects; and, like the example of the pavement, low-quality infrastructure and maintenance are common), catastrophe prevention, etc.
  • One of the challenges of proposing a reform is how it affects the current status quo; for instance, think of political reform and pension legislation. I wonder if proposing reforms to be implemented years later, so leaving the current status quo unaffected, isn't usually more successful, and if it has been tried in these countries.
  • I think geography is underrated: development concentrates in clusters (it's easier to develop if your neighbor does it, too) and, though almost no one talks about it, seems to be highly correlated with climate (this might be partly explained by culture plus this neighbor effect, but the correlation remains even within the same country). I really hope Sachs's new book casts some light on that. (I found these spreadsheets by googling; I checked only a little of the data, so be careful with them)
  • I wonder if someone has ever made anonymous interviews with junior and senior civil officers (and maybe businessmen, too) from these countries about this subject. I'd really like to read it.
  • I think the idea of charter cities is that "it's so hard to change these current inadequate equilibria that we better start anew somewhere else". A micro-revolution. It might work out great in the long term, for some places - but I don't think it would solve

Comment by ramiro on Ramiro's Shortform · 2020-05-17T02:18:27.456Z · score: 1 (1 votes) · EA · GW

Did anyone see the spread of Covid through nursing homes coming? It seems quite obvious in hindsight - yet I didn't even mention it above. Some countries report that almost half of their deaths come from those environments.

(Would it have made any difference? I mean, would people have emphasized patient safety, etc.? I think it's implausible, but has anyone tested whether this isn't just some statistical effect, due to the concentration of old-aged people with chronic diseases?)

Comment by ramiro on Ramiro's Shortform · 2020-05-16T20:17:31.359Z · score: 1 (1 votes) · EA · GW

Why didn't we have more previous alarm concerning the spread of Covid through care and nursing homes? Would it have made any difference? https://www.theguardian.com/world/2020/may/16/across-the-world-figures-reveal-horrific-covid-19-toll-of-care-home-deaths

Comment by ramiro on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-05-16T02:49:50.525Z · score: 1 (1 votes) · EA · GW

I wonder if, in addition to section B.2, the clause could be framed as a compensation scheme in favor of a firm's shareholders - at least if it were adopted conditionally on other firms adopting it (a kind of "good cartel"). Since the ex ante probability of one specific firm A obtaining future windfall profits from an AGI is lower than the probability of any of its present or future competitors doing so (and thus driving A out of business), it might be in the interest of these firms' shareholders to hedge each other by committing to a windfall clause. (Of course, the problem with this argument is that it would only justify an agreement covering the shareholders of each agreeing firm)

Comment by ramiro on Ramiro's Shortform · 2020-05-12T17:26:47.070Z · score: 3 (3 votes) · EA · GW

Can Longtermists "profit" from short-term bias?

We often think of human short-term bias (and the associated hyperbolic discounting) and the uncertainty of the future as being among long-termism's main drawbacks; i.e., people won't think about policies concerning the future because they can't appreciate or compute their value. However, those features may actually provide some advantages, too - by evoking something analogous to the effect of the veil of ignorance:

  1. They allow long-termism to provide some sort of focal point where people with different allegiances may converge; i.e., being left- or right-wing inclined (probably) does not affect the importance someone assigns to existential risk – though it may influence the trade-off with other values (think about how risk mitigation may impact liberty and equality).
  2. And (maybe there’s a correlation with the previous point) it may allow for disinterested reasoning – i.e., if someone is hyperbolically less self-interested in what will happen in 50 or 100 years, then they would not strongly oppose policies to be implemented in 50 or 100 years – as long as they don’t bear significant costs today.

I think (1) is quite likely acknowledged among EA thinkers, though I don’t recall it being explicitly stated; some may even reply “isn’t it obvious?”, but I don’t believe outsiders would immediately recognize it.

On the other hand, I'm confident (2) is either completely wrong, or not recognized by most people. If it's true, we could use it to extract from people, in the present, conditional commitments to be enforced in the (relatively) long-term future; e.g., if present investors discount future returns hyperbolically, they wouldn't oppose something like a Windfall Clause. Maybe Roy's nuke insurance could benefit from this bias, too.

I wonder if this could be used for institutional design; for instance, creating or reforming organizations is often burdensome, because different interest groups compete to keep or expand their present influence and privileges - e.g., legislators will favor electoral reforms allowing them to be re-elected. Thus, if we could design arrangements to be enforced decades (how long?) after their adoption, without interfering with the current status quo, we would eliminate a good deal of the opposition; the problem then reduces to deciding what kind of arrangements would be useful to design this way, taking into account uncertainty, cluelessness, value shift…

Are there any examples of existing or proposed institutions that try to profit from this short-term vs. long-term bias in a similar way? Is there any research in this line I’m failing to follow? Is it worth a longer post?

(One possibility is that we can’t really do that - this bias is something to be fought, not something we can collectively profit from; so, assuming the hinge of history hypothesis is false, the best we can do is to “transfer resources” from the present to the future, as sovereign funds and patient philanthropy advocates already do)

Comment by ramiro on Are we entering an experiment in Modern Monetary Theory (MMT)? · 2020-05-12T00:00:00.558Z · score: 5 (4 votes) · EA · GW

I am as perplexed as you are, and I share the doubts you mention in the last paragraphs.

Did you check this at Open Philanthropy?

I'm not sure about how neglected this subject is. However, if there's some sort of optimal monetary theory, this is a very hingy time to find it and implement its policy recommendations.

Maybe you could cite some bibliography. I've recently read and can recommend M. King's The End of Alchemy, SEP Philosophy of Money, and Macmillan's The End of Banking (more focused on finances, but with some good insights and recommendations about monetary theories)

Comment by ramiro on Ramiro's Shortform · 2020-05-11T23:42:56.871Z · score: 1 (1 votes) · EA · GW

Thanks. Maybe it's just my blind spot. I couldn't find anyone discussing this for more than 5 min, except for this one. I googled it and found some blogs that are not about what I have in mind.

I agree that donating to my favorite charity instead of my friend's favorite one would be impolite, at least; however, I was thinking about friends who are not EAs, or who don't usually donate at all. It might be a better gift than a card or a lame souvenir, and perhaps interest this friend in EA charities (I try to think about which charity would interest this person most). Is there any reason against it?

Comment by ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. · 2020-05-11T17:58:44.430Z · score: 3 (3 votes) · EA · GW

I've seen some serious stuff on epistemic and memetic warfare. Do you think misinformation on the web has recently been, or is currently being, used as an effective weapon against countries or peoples? Is it qualitatively different from good old conspiracies and smear campaigns? Do you have some examples? Can standard ways of counteracting it (e.g., fact-checking) work effectively in the case of an intentional attack? (My guess: probably not; an attacker can spread misinformation more effectively than we can spread fact-checking - and warning about it will increase mistrust and polarization, which might be the goal of the campaign.) What would be your credences on your answers?

Comment by ramiro on Ramiro's Shortform · 2020-05-11T17:24:54.939Z · score: 3 (3 votes) · EA · GW

Is 'donations as gifts' neglected?

I enjoy sending 'donations as gifts' - i.e., donating to GD, GW or AMF in honor of someone else (e.g., as a birthday gift). It doesn't actually affect my overall budget for donations; but this way, I try to subtly nudge this person to consider doing the same with their friends, or maybe even becoming a regular donor.

I wonder if other EAs do that. Perhaps it seems very obvious (in some cultures where donations are common), but I haven't seen any remark or analysis about it (well, maybe I'm just wasting my time: only one friend of mine said he enjoyed his gift, but I don't think he has ever done it himself), and many organizations don't provide an accessible tool for this.

P.S.: BTW, my birthday is on May 14th, so if anyone wants to send me one of these "gifts", I'd rather have you donating to GCRI.

Comment by ramiro on Mati_Roy's Shortform · 2020-05-10T17:25:36.738Z · score: 3 (3 votes) · EA · GW

BTW, I have recently learned that ICJ missed an opportunity to explicitly state that using nukes (or at least a first strike) is a violation of international law.

Comment by ramiro on Ramiro's Shortform · 2020-05-10T15:25:31.323Z · score: 3 (2 votes) · EA · GW

Did UNESCO draft recommendation on AI principles involve anyone concerned with AI safety? The draft hasn't been leaked yet, and I didn't see anything in EA community - maybe my bubble is too small. https://en.unesco.org/artificial-intelligence

Comment by ramiro on Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels? · 2020-05-10T04:34:42.591Z · score: 2 (2 votes) · EA · GW

Thanks for the post. I’m very willing to read the whole sequel. I agree the TPNW probably has no positive “formal effect” for non-proliferation, and I am anxious to read about possible informal effects – mainly “shaming” nuclear powers.

I wonder if those treaties are mostly a matter of self-binding: these countries are unlikely to produce nuclear weapons in the next years, but they can’t trust their own future governments to remain this way; so, by ratifying the TPNW, they assure each other of this commitment.

Maybe I’m biased by Brazil’s example: during the military dictatorship, the country refused to ratify the NPT, pursued its own nuclear program, got into an arms race with Argentina, and (likely) exported uranium to Iraq – despite lacking political approval (from the people and from the legislature) for that. Since then, democratic governments seem to have ratified any nuclear weapons convention they can, just to signal that Brazil will never risk contributing to proliferation again.

Another point: isn’t the TPNW an attempt to change scholars’ interpretations concerning the international jus cogens on the use/threat of nuclear weapons? I don't think international legal opinions are a concern for potential supporters of nuclear weapons, though.

Comment by ramiro on Reducing long-term risks from malevolent actors · 2020-05-09T15:08:35.782Z · score: 7 (3 votes) · EA · GW

Thanks for the remarks concerning Hitler and Stalin.

I think it might be quite valuable, for the project as a whole, to better understand why people are drawn to leaders with features they would not tolerate in peers, such as dark traits.

For one, it’s very plausible that, as you mentioned, the explanation is (a) dark traits are very useful – these individuals are (almost) the only ones with incentives enough to take risks, get things done, innovate, etc. Particularly, if we do need things like strategies of mutually assured destruction, then we need someone credibly capable of “playing hawk”, and it's arguably hard to believe nice people would do that. This hypothesis really lowers my credence in us decreasing x-risks by screening for dark traits; malevolent people would be analogous to nukes, and it’s hard to unilaterally get rid of them.

A competing explanation is that (b) they’re not that useful, they’re parasitical. Dark traits are uncorrelated with achievement; they just make someone better at outcompeting useful pro-social people by, e.g., occupying their corresponding niches or getting more publicity - and so making people think (due to representativeness bias) that bad guys are more useful than they are. That's plausible, too; for instance, almost no one outside the EA and LW communities knows about Arkhipov and Petrov. If that’s the case, then a group could indeed unilaterally benefit from getting rid of malevolent / dark-trait leaders.

(Maybe I should make clear that I don't have anything against individuals with dark triad traits per se, and I'm as afraid of the possibility of witch hunts and other abuses as everyone else. And even if dark traits were uncorrelated with capacity for achievement, a group might deprive itself of a scarce resource by selecting against very useful dark-trait individuals, like scientists and entrepreneurs)

Comment by ramiro on Cause-specific Effectiveness Prize (Project Plan) · 2020-05-05T19:54:33.151Z · score: 8 (6 votes) · EA · GW

Thanks for the post, I really enjoyed this idea; please let us know about its progress. I'd like to see something analogous for academic research and published papers.

As for your open questions:

Personal counseling, or online comprehensive guides?

I guess comprehensive guidelines are better, but they’re not exclusive alternatives, right?

RCTs are the golden standard of measuring impact, but are expensive and complicated. What easier alternatives do we allow? What’s the level of evidence and reliability we accept?

Maybe you should let your contestants and committee have this discussion; of course you can't RCT everything - e.g., no one would do it for parachutes. Actually, it might even be an easier problem than finding a common metric to compare different interventions - e.g., take a look at GiveWell's blog.

Evaluation process: Are charities required to file documents, do we pay for third-party analysts? Should we have a committee? How transparent can/should we be about the results?

Yes, yes, a lot. Suggestion: evaluate contestants in a two, perhaps three-level process:

a) approval voting in an online platform: you can either have an open website where people can vote for free (after identifying themselves), or you can charge voters a fee (two pros: voters will have some skin in the game, and partially fund your costs);

b) individual reviewers rate / select the n most voted contestants;

c) your committee decides the winners.

Risks: I guess reputational risk might be your real issue. Some sort of worst-case scenario: you lose your money and resources to this awful winner... but the real problem is that it goes viral, and everyone starts associating "Effective Altruism" with something like "Canadian Satanists chanting religious poetry in rich schools". Even my illustrative example sucks. Is there any way of letting EA get the benefits of the exposure, but not its risks?

BTW, do you plan to do it just once, or do you intend people to expect it to become a periodic contest? Signalling it could be repeated could influence future projects (in case you end up being successful)

Comment by ramiro on Mati_Roy's Shortform · 2020-05-03T02:03:17.881Z · score: 1 (1 votes) · EA · GW

Look on the bright side: they don't have factory farming ;)

Or maybe the hidden premise of wild animal suffering is false: the net expected value of wild life is positive (there's probably some positive hedonic utility in basic vital functions) & something like the repugnant conclusion is true.

(By the way, I thought you were more of a preference utilitarian)

Comment by ramiro on Reducing long-term risks from malevolent actors · 2020-04-29T18:50:32.318Z · score: 17 (7 votes) · EA · GW

Personally, I think this is gold.

On the other hand, I’m not so sure that sadism is particularly worse for the long-term future than the dark triad traits. Yes, sadistic leaders may pose a special S-risk, but I am not sure they increase other x-risks so much – e.g., Hitler was kind of cautious concerning the risk of nuclear weapons igniting the atmosphere, and Stalin was partially useful in WW II and avoided WW III.

Contrast:

Some dark traits, such as Machiavellianism, can be beneficial under certain circumstances. It might be better if one could single out a dark trait, such as sadism, and only select against it while leaving other dark traits unchanged.

with these parts of the Appendix A:

Dark Triad traits predict increased intention to engage in political violence (Gøtzsche-Astrup, 2019)…
Historical evidence suggests that even many of their political adversaries—at least for some time—did not realize that Hitler, Mao, and Stalin were malevolent

Actually, open sadism often raises opposition; and I don't think the big three above were salient because they were sadistic – would they have killed fewer people if they were just functional psychopaths?

It might be hard to spot dark tetrad individuals, but it’s not so hard to realize an individual is narcissistic or manipulative. I don’t think people dispute that Stalin and Mao were so – they may argue that they were beneficial, or that they were excused because they were “great men”.

So why do such guys acquire power? Why do people support them? We often dislike acquaintances who exhibit one of the dark traits; so why do people tolerate it when it comes from the alpha-male boss?

My point is that the problem might be precisely that we often claim things like “Some dark traits, such as Machiavellianism, can be beneficial under certain circumstances. It might be better if one could single out a dark trait, such as sadism, and only select against it while leaving other dark traits unchanged.” If we could just acknowledge that individuals with dark triad traits are very dangerous, even if they’re sometimes useful (like a necessary evil), then perhaps we could avoid (or at least be particularly cautious with) malevolent leaders.

Comment by ramiro on Rejecting Supererogationism · 2020-04-21T00:04:49.830Z · score: 4 (3 votes) · EA · GW

Actually, the point doesn't seem to be "we cannot justify the existence of supererogatory actions", but rather something like "unless you are 100% sure the act is not obligatory, if you think that doing it is morally better than not doing it, then you should do it"

Comment by ramiro on Dying for a day at the beach · 2020-04-15T19:32:35.611Z · score: 1 (1 votes) · EA · GW
I assume the concerns about people visiting open spaces - even if social distancing - are largely about the other associated risks, from people using public transport to get there, going into shops to buy sandwiches/drinks, etc.

Why not restrict only those sources of contagion? Is it easier to prevent people from accessing parks than buses? (honestly, I don't know)

This confirms my prior that outdoor contagion must be really rare. Moreover, if that's true, then outdoor/environmental disinfection would be a waste, if not outright harmful.

Comment by ramiro on Dying for a day at the beach · 2020-04-12T01:20:26.348Z · score: 0 (2 votes) · EA · GW

0.5 years ≈ 182 days going to the beach. So she shouldn't go to the beach today - and we're not even discussing the odds of her infecting someone else.

However, I think limiting access to open spaces to a certain number of people per day would be a better policy than just shutting them - since it seems extremely unlikely you can catch Covid in sunny open spaces if you're careful enough.

Nevertheless, I totally agree with you that we should consider a little bit more how much is being sacrificed in terms of quality of life in the current lockdown, and try to find ways to mitigate this loss. Too often, the fiercest defenders of a strict lockdown are something like aspies who spend more time in front of a computer/phone than outdoors (and enjoy it), so they don't see it as a real sacrifice. I just think your example is not very good.