Posts

I scraped all public "Effective Altruists" Goodreads reading lists 2021-03-23T20:28:30.476Z
Funding essay-prizes as part of pledged donations? 2021-02-03T18:43:03.329Z
What do you think about bets of the form "I bet you will switch to using this product after trying it"? 2020-06-15T09:38:42.068Z
Increasing personal security of at-risk high-impact actors 2020-05-28T14:03:29.725Z

Comments

Comment by MaxRa on On Scaling Academia · 2021-09-25T14:43:09.118Z · EA · GW

Interesting points. Anecdotally, I also think that the research output of most PIs that I got to know better was significantly constrained by busywork that could be outsourced relatively easily.

A scientific prediction market would allow the Ph.D. with niche expertise to systematically beat the market on a few questions and earn a living wage that way

I'm spontaneously doubtful that a realistic version of a scientific prediction market would provide a living wage this way. E.g. with the most liquid Polymarket question, currently priced at $0.75, you can at most expect a profit of roughly $3,300 if you're sure the event will happen and wager $10,000.
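To spell out the rough arithmetic (the price and stake are just the illustrative numbers from above, not a claim about any specific market):

```python
# Rough upper bound, using the illustrative numbers above:
# a "Yes" share costs $0.75 and pays out $1 if the event happens.
price = 0.75
stake = 10_000
shares = stake / price             # number of shares the stake buys
max_profit = shares * 1.0 - stake  # payout minus the amount wagered
print(round(max_profit))           # ~3333
```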

Re setting up academic prediction markets: Eva Vivalt set up a prediction platform for social science studies some time ago that might be interesting to look into: https://socialscienceprediction.org/

The social sciences have witnessed a dramatic shift in the last 25 years. Rigorous research designs have drastically improved the credibility of research findings. However, while research designs have improved, less progress has been made in relating experimental findings to the views of the broader scientific community and policy-makers. Individual researchers have begun to gather predictions about the effects they might find in their studies, as this information can be helpful in interpreting research results. Still, we could learn more from predictions if they were elicited in a coordinated, systematic way. See this Science article for a short summary.

This prediction platform will allow for the systematic collection and assessment of expert forecasts of the effects of untested social programs. In turn, this should help both policy makers and social scientists by improving the accuracy of forecasts, allowing for more effective decision-making and improving experimental design and analysis.

Comment by MaxRa on The Importance-Avoidance Effect · 2021-09-23T13:15:52.936Z · EA · GW

Were you able to "share the load" (so to say) in some capacity with your PhD and research?

I was. I started working with a colleague and got a research assistant, which really made a big difference. It was very motivating to have another mind looking at the same problems and finding them interesting/challenging, plus I could focus more on the things that were most interesting to me by outsourcing some tasks.

And I don't think I have to do much more than schedule a meeting every two weeks or so to make it enjoyable and fun; at least for me, that is all that's needed.

Comment by MaxRa on The Importance-Avoidance Effect · 2021-09-17T08:24:56.564Z · EA · GW

Thanks, I could also relate to the general pattern. For example, during my PhD I tried really hard to find and work on the things that seemed most promising and to give it my all, because I wanted to do it as well as I could. But this was pretty stressful, and I think it noticeably decreased both the fun and my ability to let simple curiosity lead my research.

Share the load of the project with others. Get some trusted individuals to work with you.

This is a big one for me. Working with others on projects is usually much more fun and motivating to me.

Comment by MaxRa on Great Power Conflict · 2021-09-16T06:18:49.113Z · EA · GW

Thucydides's Trap by Graham Allison features a scenario of escalating conflict between the US and China in the South China Sea that I found very chilling. Iirc the scenario is just like you mentioned: each side making moves that are legitimate from its own perspective, protecting dearly held interests and drawing lines in the sand, and the outcome is escalation to war. The underlying theme is the conflict dynamics that arise when a reigning power is challenged by a rising power. You've probably seen the book mentioned; I found it very worth reading.

And you didn't mention cyber warfare, which is what pops into my mind immediately. I haven't looked into this, but I imagine that the potential damage is very high, while proper international peace-supporting and de-escalating norms lag much further behind than they do for physical conflicts.

Comment by MaxRa on Disentangling "Improving Institutional Decision-Making" · 2021-09-15T05:16:24.492Z · EA · GW

Really nice and useful exploration, and I really liked your drawings.

(a) Maybe the average/median institution’s goals are already aligned with public good

FWIW, I intuitively would’ve drawn the institution blob in your sketch higher, i.e. I’d have put fewer than (eyeballing) 30% of institutions in the negatively aligned space (maybe 10%?). In moments like this, including a quick poll in the forum post to get a picture of what others think would be really useful.

However, I don’t see a clear argument for how an abstract intervention that improves decision-making would also incidentally improve the value-alignment of an institution.

Other spontaneous ideas, besides choosing more representative candidates:

  • increased coherence of the institution could lead to an overall stronger link between its mandate and its actions
  • increased transparency and coherence could reduce corruption and rent-seeking

I know of efficient and technologically progressive institutions that seem extremely harmful, and of benign, well-meaning institutions that are slow to change and inefficient

Given what I said beforehand, I’d be interested in learning more about examples of harmful institutions that have generally high capacity.

Comment by MaxRa on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-06T19:47:45.388Z · EA · GW

Perfect, so he appreciated it despite finding the accompanying letter pretty generic, and thought he received it because someone (the letter listed Max Tegmark, Yoshua Bengio and Tim O’Reilly, though w/o signatures) believed he’d find it interesting and that the book is important for the field. Pretty much what one could hope for.

And thanks for the work trying to get them to take this more seriously; it would be really great if you could find more neuroscience people to contribute to AI safety.

Comment by MaxRa on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-06T14:21:25.421Z · EA · GW

Interesting anyway, thanks! Did you by any chance notice whether he reacted positively or negatively to being sent the book? I was a bit worried it might be considered spammy. On the other hand, I remember reading that Andrew Gelman regularly gets sent copies of books he might be interested in so that he'll write a blurb or review, so maybe it's just a thing that happens to scientists and one needn't be worried.

Comment by MaxRa on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-05T09:35:33.335Z · EA · GW

Maybe one could send a free copy of Brian Christian's „The Alignment Problem“ or Russell's „Human Compatible“ to the office addresses of all AI researchers who might find it potentially interesting?

Comment by MaxRa on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-05T09:32:54.527Z · EA · GW

At least the novel the movie is based on seems to have had significant influence:

Kubrick had researched the subject for years, consulted experts, and worked closely with a former R.A.F. pilot, Peter George, on the screenplay of the film. George’s novel about the risk of accidental nuclear war, “Red Alert,” was the source for most of “Strangelove” ’s plot. Unbeknownst to both Kubrick and George, a top official at the Department of Defense had already sent a copy of “Red Alert” to every member of the Pentagon’s Scientific Advisory Committee for Ballistic Missiles. At the Pentagon, the book was taken seriously as a cautionary tale about what might go wrong.

https://www.newyorker.com/news/news-desk/almost-everything-in-dr-strangelove-was-true

Comment by MaxRa on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-05T08:24:32.435Z · EA · GW

Another idea is replicating something like Hilbert's speech in 1900, in which he listed 23 open maths problems and which seems to have had some impact in setting the agenda for the whole scientific community. https://en.wikipedia.org/wiki/Hilbert's_problems

Doing this well for the field of AI might get some attention from AI scientists and funders.

Comment by MaxRa on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-05T08:02:41.856Z · EA · GW

I wonder if a movie about realistic AI x-risk scenarios might have promise. I have somewhere in the back of my mind that Dr. Strangelove possibly inspired some people to work on the threat of nuclear war (the Wikipedia article is surprisingly sparse on the topic of the movie’s impact, though).

Comment by MaxRa on When pooling forecasts, use the geometric mean of odds · 2021-09-03T09:09:27.986Z · EA · GW

Cool, that’s really useful to know. Can you also check how extremizing the odds with different parameters performs?
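In case it helps to make the question concrete, here is a minimal sketch of what I mean by extremizing with a parameter (the function name and the parameter d are just illustrative, not the aggregation code from the post):

```python
import math

def pool_forecasts(probs, d=1.0):
    """Pool probability forecasts via the geometric mean of odds, then
    extremize by raising the pooled odds to the power d
    (d = 1 means no extremizing; d > 1 pushes the aggregate away from 0.5)."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled_odds = math.exp(sum(log_odds) / len(log_odds)) ** d
    return pooled_odds / (1 + pooled_odds)

# Hypothetical example: three forecasters, with and without extremizing.
print(pool_forecasts([0.6, 0.7, 0.8]))        # plain geometric mean of odds, ~0.71
print(pool_forecasts([0.6, 0.7, 0.8], d=2.5)) # extremized aggregate, ~0.90
```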

Comment by MaxRa on EA Forum feature suggestion thread · 2021-09-03T05:15:59.302Z · EA · GW

Yeah, just a feature which displays the comments from LessWrong crossposts would save me some clicking.

Comment by MaxRa on The Governance Problem and the "Pretty Good" X-Risk · 2021-08-30T18:13:00.354Z · EA · GW

If we create aligned superintelligence, how we use it will involve political institutions and processes. Superintelligence will probably be controlled by a state or a group of states. This is more likely the more AI becomes popularly appreciated and the more legibly powerful AI is created before the intelligence explosion.


It seems really useful to me to better understand how likely it is that states will end up calling the shots. I wonder if there are potential options for big tech to keep sovereignty over AI. I'd suspect a company would prefer to stay in control and will consider all the options it has available. Just some random initial thoughts:

  • negotiate about moving headquarters with different countries to get as much autonomy as possible
  • somehow "diversify" across nations and continents and play them off against each other
  • construct advanced AI systems such that they only do a few narrow tasks that don't seem like obviously urgent targets for nationalization (something like Codex 3.0, or personal assistants, …) and internally assemble so much capability that they can resist nationalization?

Comment by MaxRa on The Governance Problem and the "Pretty Good" X-Risk · 2021-08-30T18:00:27.473Z · EA · GW

Really interesting post, thanks! Some random reactions.

"Pretty good" governance failure is possible. We could end up with an outcome that many or most influential people want, but that wiser versions of ourselves would strongly disapprove of. This scenario is plausibly the default outcome of aligned superintelligence: great uses of power are a tiny subset of the possible uses of power, the people/institutions that currently want great outcomes constitute a tiny share of total influence, and neither will those who want non-great outcomes be persuaded nor will those who want great outcomes acquire influence much without us working to increase it.

My first gut reaction is skepticism that this is a likely or stable state. The Earthly utopia scenario will likely not happen, given that seemingly most explorations of humanity's future prominently feature its expansion to space. Additionally, I suspect that a large fraction of people who seriously start thinking about the long-term future of humanity fall into the camp that you consider "people/institutions that currently want great outcomes". If this is true, one might suspect that this will become a much stronger faction, and an aligned AI will have to consider those ambitions, too?

Robin Hanson speculated that the debate between people who want to use our cosmic endowment and those who want to stay local might be the cultural debate of the future; he calls it becoming grabby vs. non-grabby. He worries that a central government will try to restrict grabby expansion because it would be nearly impossible to keep the growing civilization under its control:

If within a few centuries we have a strong world government managing capitalist competition, overpopulation, value drift, and much more, we might come to notice that these and many other governance solutions to pressing problems are threatened by unrestrained interstellar colonization. Independent colonies able to change such solutions locally could allow population explosions and value drift, as well as capitalist competition that beats out home industries. That is, colony independence suggests unmanaged colony competition. In addition, independent colonies would lower the status of those who control the central government.

So authorities would want to either ban such colonization, or to find ways to keep colonies under tight central control. Yet it seems very hard to keep a tight lid on colonies. The huge distances involved make it hard to require central approval for distant decisions, and distant colonists can’t participate as equals in governance without slowing down the whole process dramatically. Worse, allowing just one sustained failure, of some descendants who get grabby, can negate all the other successes. This single failure problem gets worse the more colonies there are, the further apart they spread, and the more advanced technology gets.

https://www.overcomingbias.com/2021/07/the-coming-cosmic-control-conflict.html

I'm kind of sceptical that the desire to have absolute control would be strong enough to stamp out any expansionary and exploratory ambitions. I suspect that humans and institutions will converge considerably towards the "making the most of our endowment" stance. With increasing wealth, we will learn more about how much value we will be able to create and how much more value is possible compared to our prosaic imaginations, so an aligned AI will also work towards helping us achieve those ambitions sooner or later.

Comment by MaxRa on Gifted $1 million. What to do? (Not hypothetical) · 2021-08-30T08:58:40.263Z · EA · GW

Wow, that's really cool! One idea is to look at the reports from winners of donor lotteries. They are also more or less ordinary people who got to decide where to donate a lot of money and shared their process and learnings: https://forum.effectivealtruism.org/tag/donor-lotteries  

Comment by MaxRa on An Informal Review of Space Exploration · 2021-08-21T18:03:48.979Z · EA · GW

Thanks, I found this very interesting and well written and am glad you took a deeper look into it.

Comment by MaxRa on Growth and the case against randomista development · 2021-08-17T07:26:07.474Z · EA · GW

Just saw this on Marginal Revolution and wondered what people here make of it, e.g. whether the recent slowdown or instability in major countries like Nigeria, Ethiopia and South Africa is a noticeable update for them against the promise of economic growth work in Africa.

One of the saddest stories of the year has gone largely unreported: the slowdown of political and economic progress in sub-Saharan Africa. There is no longer a clear path to be seen, or a simple story to be told, about how the world’s poorest continent might claw its way up to middle-income status. Africa has amazing human talent and brilliant cultural heritages, but its major political centers are, to put it bluntly, falling apart.

https://marginalrevolution.com/marginalrevolution/2021/08/is-africa-losing-its-growth-window.html

Comment by MaxRa on Report on Running a Forecasting Tournament at an EA Retreat · 2021-08-16T15:42:31.356Z · EA · GW

@Simon_Grimm and I ended up also organizing a forecasting tournament. It went really well and people seemed to like it a lot, so thanks for the inspiration and the instructions!

One thing we did differently

  • we hung posters for each question in the main hallway because we thought it would make the forecasts more visible/present and it would be interesting to see what others wrote down on the poster as their forecast - I would likely do this again, even though hammering all the numbers into an Excel sheet and scoring them (see the sketch after the question list below) took some effort

Questions we used

1. Will the probability of Laschet becoming the next German chancellor be higher than 50% on Hypermind on Sunday, 3pm?

2. Will more than 30 people make a forecast in this tournament?

3. Will one person among the participants do the Giving What We Can pledge during the weekend?

4. During scoring on Sunday before dinner, will two randomly chosen participants report having talked to each other during the weekend? Needs to be more than „Hi“!

5. At the end of Sunday‘s lunch, will there be leftover food in the pots?

6. At Sunday‘s breakfast at 9am, will more than 10 people wear an EA shirt?

7. Will a randomly chosen participant have animal suffering as their current top cause?

8. Will a randomly chosen participant have risks associated with AI as their top cause?

9. At breakfast on Sunday at 9am, will more than half have read at least half of Doing Good Better?

10. Will there be more packages of Queal than Huel at Sunday‘s breakfast?

11. Will there be any rain up until Sunday, 5pm?

12. Will you get into the top 3 forecasters for all other questions except this one?

13. Will participants overall overestimate their probability of getting into the top three?

14. Will more people arrive from the South of Germany than from the North?

15. On average, did people pay more than the standard fee of 100€?
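In case anyone wants to replicate the scoring step: the write-up above doesn't pin down the exact rule we used, but one simple option is to score each binary question with the Brier score and rank participants by their average. A minimal sketch with made-up names and forecasts:

```python
def brier_score(forecast, outcome):
    """Squared error between a probability forecast and the 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def rank_participants(forecasts, outcomes):
    """Sort participants from best (lowest mean Brier score) to worst."""
    scores = {
        name: sum(brier_score(f, o) for f, o in zip(fs, outcomes)) / len(outcomes)
        for name, fs in forecasts.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical example: two participants, three resolved binary questions.
print(rank_participants(
    {"Alice": [0.8, 0.3, 0.6], "Bob": [0.6, 0.5, 0.9]},
    outcomes=[1, 0, 1],
))
```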

Comment by MaxRa on Analyzing view metrics on the EA Forum · 2021-08-12T05:55:01.460Z · EA · GW

Cool! Just in case you have the data quickly at hand, I'd have been interested in more than just the top three articles; maybe you could add the top ten? Also, maybe minutes would be a more intuitive unit, compared to something like 2600 seconds.

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T22:02:17.628Z · EA · GW

I think you’re right. Even if the experts were paid really well for their participation, say $10k per year (maybe as a fixed sum or in expectation under some incentive scheme), with on the order of 50 experts each for 20(?) fields you'd end up with $10 million per year. But it probably wouldn't even require that, as long as it's prestigious and set up well with enough buy-in. Paying for their judgement would make the latter easier, I suppose.

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T18:04:17.915Z · EA · GW

Interesting, the Atlantic article didn't give this impression. I'd also be pretty surprised if you had to become essentially the cliché of a moderate politician just because you're part of the leadership team of a journalistic organization. In my mind, you're mostly responsible for setting and living the norms you want the organization to follow, e.g.

  • epistemic norms of charitability, clarity, probabilistic forecasts, scout mindset
  • values like exploring neglected and important topics with a focus on having an altruistic impact? 

And then maybe being involved in hiring the people who have shown promise and fit?

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T17:24:46.915Z · EA · GW

Thanks, I didn't see what he said about this. I just read an Atlantic article about it, and I don't see why it shouldn't be easy to avoid the pitfalls from his time with Vox, or why he wouldn't care a lot about starting a new project where he could offer a better way to do journalism.

Yglesias felt that he could no longer speak his mind without riling his colleagues. His managers wanted him to maintain a “restrained, institutional, statesmanlike voice,” he told me in a phone interview, in part because he was a co-founder of Vox. But as a relative moderate at the publication, he felt at times that it was important to challenge what he called the “dominant sensibility” in the “young-college-graduate bubble” that now sets the tone at many digital-media organizations.

https://www.theatlantic.com/ideas/archive/2020/11/substack-and-medias-groupthink-problem/617102/

Also, the idea is of course not at all dependent on him; I suppose there would be other great candidates. Yglesias just came to mind because I really like his work.

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T16:44:44.821Z · EA · GW

Urgent doesn't feel like the right word; the question to me is whether his contributions could be scaled up well with more money. I think his Substack deal is on the order of $300k per year, but maybe he could found and lead a new news organization, hire great people who want to work with him, and do more rational, informative and world-improvy journalism?

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T16:37:39.058Z · EA · GW

In the short term yes, but my vision was to see a news media organization under the leadership of a person like Kelsey Piper that is able to hire talented, reasonably aligned journalists to do great and informative journalism in the vein of Future Perfect. Not sure how scalable Future Perfect is under the Vox umbrella, or how freely it could scale up to its best possible form from an EA perspective.

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T07:42:40.606Z · EA · GW

Build up an institution that runs something like the IGM economic experts survey for every scientific field, with paid editors, and additionally probabilistic forecasts and maybe monetary incentives for the experts. https://www.igmchicago.org/igm-economic-experts-panel/

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-06T23:10:20.682Z · EA · GW

Hire ~5 film-studios to each make a movie that concretely shows an AI risk scenario which at least roughly survives the rationalist fiction sniff test. Goal: Improve AI Safety discourse, motivate more smart people to work on this.

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-06T23:00:29.387Z · EA · GW

Take some EAs involved in public outreach, some journalists who made probabilistic forecasts of their own volition (Future Perfect people, Matt Yglesias, ?), and buy them their own news media organization to influence politics and raise the sanity and altruism waterline.

Comment by MaxRa on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-06T22:45:35.169Z · EA · GW

Funding a „serious“ prediction market.

Not sure if $100M is necessary or sufficient if you want many people or even multiple organizations to seriously work full-time on forecasting EA-relevant questions. Maybe the money could also be used to spearhead the usage of prediction markets in politics.

Comment by MaxRa on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-06T11:48:01.105Z · EA · GW

Great comment!

I’m trying to imagine what global development charities EAs who believe HBD donate to, and I’m having a hard time.

I don’t totally follow why „the belief that races differ genetically in socially relevant ways“ would lead one not to donate to, for example, the Against Malaria Foundation or GiveDirectly. Assuming, for example, that there is on average a (slightly?) lower average IQ, it seems to me that less malaria or more money will still do most of what one would hope for and what the RCTs say they do, even if you might expect (slightly?) lower economic growth potential and, in the longer term, (slightly?) less potential for the regions to become hubs of highly specialized skilled labor?

Comment by MaxRa on How to Train Better EAs? · 2021-08-06T07:26:36.794Z · EA · GW

I used to listen to the podcast of a former Navy SEAL, and he argues that the idea of obedient drones is totally off for SEALs; I also got the impression they learn a lot of specialized skills for strategic warfare. Here's an article he wrote about this (haven’t read it myself): https://www.businessinsider.com/navy-seal-jocko-willink-debunks-military-blind-obedience-2018-6

Comment by MaxRa on [3-hour podcast]: Joseph Carlsmith on longtermism, utopia, the computational power of the brain, meta-ethics, illusionism and meditation · 2021-07-28T13:22:54.629Z · EA · GW

Really enjoyed listening to this. I relate a lot to your perspective on grounding value in our experiences and found Joseph's pushbacks really stimulating.

Comment by MaxRa on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T20:58:53.909Z · EA · GW

Is there already a handy way to compare the computation costs that went into training? E.g. compared to GPT-3, AlphaZero, etc.?

Comment by MaxRa on What novels, poetry, comics have EA themes, plots or characters? · 2021-07-25T10:17:59.097Z · EA · GW

I also really enjoyed the unofficial sequel, Significant Digits. http://www.anarchyishyperbole.com/p/significant-digits.html

It's easy to make big plans and ask big questions, but a lot harder to follow them through.  Find out what happens to Harry Potter-Evans-Verres, Hermione, Draco, and everyone else once they grow into their roles as leaders, leave the shelter of Hogwarts, and venture out into a wider world of intrigue, politics, and war.  Not official.

"The best HPMOR continuation fic." -Eliezer Yudkowsky

Comment by MaxRa on Books and lecture series relevant to AI governance? · 2021-07-23T14:15:55.442Z · EA · GW

Thanks a lot for compiling this, I'm thinking about switching my career into AI governance and the lists in your Google Doc seem super useful!

Comment by MaxRa on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T10:01:14.263Z · EA · GW

Thanks!

Comment by MaxRa on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T04:33:47.111Z · EA · GW

Cool questions! I am a bit hesitant to update much:

  • they don’t seem to be too active, e.g. few comments, interest count at around 10 (can you see the number of unique forecasters somehow?)
  • the people doing the forecasts are probably EA adjacent and if they did something akin to a formal analysis, they would share it with the EA community, or at least in the comments, as it seems relatively useful to contribute this

Comment by MaxRa on On what kinds of Twitter accounts would you be most interested in seeing research? · 2021-07-18T16:25:49.891Z · EA · GW

I’ve seen the claim that economist Alex Tabarrok was significantly ahead of the curve on COVID issues. Would be interesting to see how he did or did not reach and convince the people involved in policy making. https://www.twitter.com/ATabarrok

Comment by MaxRa on Increasing personal security of at-risk high-impact actors · 2021-07-18T14:48:17.063Z · EA · GW

Just saw this, which seems like a good step in the intended direction:

Canada will become one of the first countries to offer a dedicated, permanent pathway for human rights defenders, and will resettle up to 250 human rights defenders per year, including their family members, through the Government-Assisted Refugees Program.

https://www.canada.ca/en/immigration-refugees-citizenship/news/2021/07/minister-mendicino-launches-a-dedicated-refugee-stream-for-human-rights-defenders.html

Comment by MaxRa on Building my Scout Mindset: #1 · 2021-07-18T10:18:07.007Z · EA · GW

Seconded, I'd really like to read more stream-of-thought introspections like this. It seems like a great practice and also like a cool way to understand other people's thought processes around difficult topics.

Comment by MaxRa on People working on x-risks: what emotionally motivates you? · 2021-07-05T22:30:53.336Z · EA · GW

Good question. I think I’m maybe a quarter of the way to being internally/emotionally driven to do what I can to prevent the worst possible AI failures, but re this

Say I'm afraid of internalizing responsibility for working on important, large problems

I always thought it would be a great thing if my emotional drives lined up more with the goals that I deliberately thought through as likely the most important. It would feel more coherent, it would give me more drive and focus on what matters, and it would downregulate things like some social motivations that I don’t fully endorse. I suppose one might be worried that it’s overwhelming, but that hasn't been a thing for me so far. I wonder if humans mostly deal okay with great responsibilities, which is my spontaneous impression.

(btw, I really enjoyed reading your PhD retrospective, nice to see your name pop up here! I’m doing a PhD in CogSci and could relate to a lot)

Comment by MaxRa on Some AI Governance Research Ideas · 2021-07-05T11:18:34.078Z · EA · GW

Yeah. What I thought is that one might want to somehow use a term that also emphasizes the potentially transformative impact AI companies will have, as in „We think your AI research might fit into the reference class of the Manhattan Project“. And „socially beneficial“ doesn‘t really capture this either for me. Maybe something in the direction of „risk-aware“, „risk-sensitive“, „farsighted“, „robustly beneficial“, „socially cautious“…

Edit: Just stumbled upon the word „stewardship“ in the most recent EconTalk episode, from a lecturer wanting to kindle a sense of stewardship over nuclear weapons in military personnel.

Comment by MaxRa on A do-gooder's safari · 2021-07-05T08:50:32.778Z · EA · GW

There is a poll on the Effective Altruism Polls Facebook group on the question "With which archetype(s) from Owen's post "A do-gooder's safari" do you identify the most?"

https://www.facebook.com/groups/477649789306528/posts/1022081814863320/

Comment by MaxRa on Mauhn Releases AI Safety Documentation · 2021-07-03T15:55:24.623Z · EA · GW

I think it's great that you're trying to lead by example, and concrete ideas for how companies can responsibly deal with the potential of leading the development of advanced or even transformative AI systems are really welcome in my view. I skimmed three of your links and thought it all sounded basically sensible, though it will probably all end up looking very different from this, and I never want to put so much responsibility on anything called an "Ethics Board". (But I'm very basic in my thinking around strategic and governance questions around AI, so...)

One question I had was whether you think it's desirable for AGI to be developed and implemented by a single company, or a group of companies. I think it's probable, but wondered whether there are better institutions from which an AI safety person would try to influence the AI landscape (e.g. governments, non-profits, NGOs, international governmental bodies, ...).

Also, your alignment plan sounds like something that still requires mostly basic research, so I wondered whether you already have some ideas for concrete research projects to make progress here.

Alignment through Education: educate AI systems, just like we educate our children, to allow AI systems to learn human values, e.g. through trial and error.

Also, not sure why you didn't get feedback here so far; maybe consider crossposting it to lesswrong.com, too.

Comment by MaxRa on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-06-30T21:37:20.694Z · EA · GW

In case suggestions for new cards are still useful, just saw another useful number:

Q: What percentage of people across Europe think the world is getting better? [2015]
A: +/- 5%
Source: https://ourworldindata.org/optimism-pessimism

Comment by MaxRa on What are some key numbers that (almost) every EA should know? · 2021-06-28T07:56:25.840Z · EA · GW

Cool, really looking forward to adding them to my Anki!

Re: How many big power transitions ended in war

I had the work of Graham Allison in mind here; not sure how set in stone it is, but I had the impression it is a sufficiently solid rough estimate:

(2) In researching cases of rising powers challenging ruling powers over the last 500 years, Allison and the Thucydides Trap Project at Harvard University found 12 of 16 cases resulted in war. 

Re: roughly how much they value their own time

I would do a card where people are able to fill in their number, e.g. something like

"Given my current schedule, to get me to do a task that I don't intrinsically value for one hour I'd need to by payed ___ €/$/..."

And you might put Clearer Thinking's calculator as a link, so people can calculate it if they don't already know about this idea: https://programs.clearerthinking.org/what_is_your_time_really_worth_to_you.html

Comment by MaxRa on The unthinkable urgency of suffering · 2021-06-27T07:50:41.274Z · EA · GW

I'm also a bit surprised; if I'm not mistaken, the post had negative karma at one point. People of course downvote for reasons other than controversy, e.g. from the forum's voting norms section:

“I didn’t find this relevant.”
“I think this contains an error.”
“This is technically fine, but annoying to read.”

But I'd be sad if people got the impression that posts like this, which reflect on altruistic motivations, are not welcome.

Comment by MaxRa on The unthinkable urgency of suffering · 2021-06-27T07:41:19.239Z · EA · GW

Thanks for writing these words, they rekindled my deeply held desire to prevent intense suffering. It really is weird how quickly the desire fades into the background for me, too. In my case, one part is probably that my work and thinking nowadays are more directed at preserving what is good about humanity than at preventing the worst suffering, which was more of my focus when I thought more about global poverty and animal suffering.

Comment by MaxRa on WANBAM mentee applications are open until the end of July!  · 2021-06-26T19:52:27.776Z · EA · GW

It's Women and Non-Binary Altruism Mentorship. I also couldn't find it on the website, but googling did it.

Comment by MaxRa on What are the 'PlayPumps' of cause prioritisation? · 2021-06-26T09:21:14.231Z · EA · GW

Great share. Really hurt to read, oh man.

Here are some more details from the article that I found interesting, too:

Nonetheless, by the 1980s finding fault with high-yield agriculture had become fashionable. Environmentalists began to tell the Ford and Rockefeller Foundations and Western governments that high-yield techniques would despoil the developing world. As Borlaug turned his attention to high-yield projects for Africa, where mass starvation still seemed a plausible threat, some green organizations became determined to stop him there. "The environmental community in the 1980s went crazy pressuring the donor countries and the big foundations not to support ideas like inorganic fertilizers for Africa," says David Seckler, the director of the International Irrigation Management Institute.

Environmental lobbyists persuaded the Ford Foundation and the World Bank to back off from most African agriculture projects. The Rockefeller Foundation largely backed away too—though it might have in any case, because it was shifting toward an emphasis on biotechnological agricultural research. "World Bank fear of green political pressure in Washington became the single biggest obstacle to feeding Africa," Borlaug says. The green parties of Western Europe persuaded most of their governments to stop supplying fertilizer to Africa; an exception was Norway, which has a large crown corporation that makes fertilizer and avidly promotes its use. Borlaug, once an honored presence at the Ford and Rockefeller Foundations, became, he says, "a tar baby to them politically, because all the ideas the greenies couldn't stand were sticking to me."

Borlaug's reaction to the campaign was anger. He says, "Some of the environmental lobbyists of the Western nations are the salt of the earth, but many of them are elitists. They've never experienced the physical sensation of hunger. They do their lobbying from comfortable office suites in Washington or Brussels. If they lived just one month amid the misery of the developing world, as I have for fifty years, they'd be crying out for tractors and fertilizer and irrigation canals and be outraged that fashionable elitists back home were trying to deny them these things."