Posts

What success looks like 2022-06-28T14:30:37.358Z
Where would we set up the next EA hubs? 2022-03-16T13:37:21.242Z
Idea: Red-teaming fellowships 2022-02-02T22:13:28.566Z
EA Analysis of the German Coalition Agreement 2021–2025 2022-01-24T13:25:14.388Z
Research idea: Evaluate the IGM economic experts panel 2022-01-18T18:42:17.678Z
EA megaprojects continued 2021-12-03T10:33:53.467Z
I scraped all public "Effective Altruists" Goodreads reading lists 2021-03-23T20:28:30.476Z
Funding essay-prizes as part of pledged donations? 2021-02-03T18:43:03.329Z
What do you think about bets of the form "I bet you will switch to using this product after trying it"? 2020-06-15T09:38:42.068Z
Increasing personal security of at-risk high-impact actors 2020-05-28T14:03:29.725Z

Comments

Comment by MaxRa on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-29T03:41:53.514Z · EA · GW

Yeah, that's a fair worry and would be worth looking out for, though I spontaneously don't feel like it's among the most significant sources of the skew.

Some evidence that influential EAs have broader and less obviously biased views on what skills are most urgently needed is the career aptitudes advice from Holden Karnofsky, where he encourages EAs to significantly skill up in e.g. communication, politics, founding and running orgs, and community building.

Comment by MaxRa on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-28T02:01:09.958Z · EA · GW

While I didn't downvote, I think your interpretation of the reasoning behind the downvotes comes across as uncharitable, like they'd be thinking "Hey, my parents are wealthy and Caucasian, and I think it's great that a majority of attendees here have the same background as me". Or later you write as if the downvotes were meant to help with denying the fact of overrepresentation. In reality, I think all EAs I know would, all else being equal, prefer it if the broader movement had more diverse backgrounds. And there are people who work on reaching out to people who don't hear about EA through the existing outreach channels, or to people who would become involved given more refined communication and support, e.g. Magnify Mentoring, or translators.

And Jordan's initial comment doesn't contribute so much, imo:

  • admissions to EA Global are not intended to be based on cred or commitment, but on how much people would get out of the conference in terms of doing the most good, or how much they'd be able to help others with that (though I think this was never communicated particularly well) (some discussion about this here)
  • the gender ratio of EA is iirc around 70/30. Quick googling tells me that this is the same ratio as in Philosophy graduates and STEM workers, backgrounds that are fairly naturally overrepresented in EA due to the nature of the EA project and the careers the movement is focussing on most strongly. So I wouldn't agree that it's ridiculous. 
Comment by MaxRa on EA on nuclear war and expertise · 2022-09-20T18:19:47.131Z · EA · GW

Appreciate you explaining the downvote. While a more legible argument than "I don't trust X because of what I perceive to be a long pattern of bad behavior I'm not going to specify much" would be much more useful, I still find this more useful than not commenting at all, so others have at least a pointer to investigate further themselves.

I suppose the downside of purely ad hominem arguments is that they too often just smear the target for unjustified reasons. But for me a charitable interpretation is that the author of the ad hominem wants to be helpful/informative and just doesn't have the time (or maybe legible or non-confidential information) to do more than say they don't trust the person.

Comment by MaxRa on CEA Ops is now EV Ops · 2022-09-13T15:40:24.814Z · EA · GW

I had the same thought, only with Tyler Cowen's Emergent Ventures, which is an organisation that is even fairly closely associated with EA (e.g. I personally know two EAs who are among their fellows).

Comment by MaxRa on "Long-Termism" vs. "Existential Risk" · 2022-09-11T10:54:06.879Z · EA · GW

Thanks for explaining, really interesting and glad so much careful thinking is going into communication issues! 

FWIW I find the "meme" framing you use here off-putting. The framing feels kinda uncooperative, as if we're trying to trick people into believing in something, instead of making arguments to convince people who want to understand the merits of an idea. I associate memes with ideas that are selected for being easy and fun to spread, that likely affirm our biases, and that spread mostly without the constraint of whether the ideas are convincing upon reflection, true, or helpful for the brain that gets "infected" by the meme.

Some support for this interpretation from the Wikipedia introduction:

Proponents theorize that memes are a viral phenomenon that may evolve by natural selection in a manner analogous to that of biological evolution.[8] Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influences a meme's reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[9]

Comment by MaxRa on The Happiness Maximizer: Why EA is an x-risk · 2022-09-04T14:08:17.775Z · EA · GW

I listened to your article the other night and enjoyed it, thanks for submitting. :) I also listened to this article by Holden Karnofsky and (without listening to both super carefully) I had the impression you both covered fairly similar ground and you might be interested in the more active discussion over there. https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous

Comment by MaxRa on Open EA Global · 2022-09-03T09:47:53.881Z · EA · GW

Damn, that really sucks. :| Thanks for sharing.

Adding my three related cents:

  • I personally would very likely have felt really sad about being rejected from EAG as well, and knowing this played a role in me not being particularly excited about applying in the past.
  • A good friend of mine who's like a role model highly-engaged EA was told a year or so ago by a very senior EA (they knew each other well-ish) that he shouldn't take for granted being admitted to EAG, which IIRC felt pretty bad for him, as if he's still not doing "enough".
  • Another good friend of mine from my local chapter got rejected from one of the main local community events in Germany due to capacity limitations a few years ago, and that felt very bad to me and IIRC he said he was at least a little sad.
    • (IIRC the admission process afterwards switched to being fairly inclusive and adding a lottery in case of capacity limitations.)
Comment by MaxRa on Open EA Global · 2022-09-01T19:39:41.504Z · EA · GW

Cool, thanks for the extremely quick responses! :)

Comment by MaxRa on Open EA Global · 2022-09-01T18:49:13.022Z · EA · GW

Thanks a lot for taking the time to elaborate!

Two points of feedback on how EA Global currently comes across more as an event for the EA community than as a selective networking event:

The conference is called "EA Global" and is universally billed as the place where EAs meet one another, learn more about the movement, and have a good time together. It's possible we should rename the event, and I agree this confusion and reputation is problematic, but I would like to clarify that we don't define the event like this anywhere.

This is the headline description from https://www.eaglobal.org :

EA Global is the conference series for the effective altruism community.

Our events are designed for community members who already have a solid understanding of effective altruism but would like to make new connections, discuss ideas, develop their skills, or move into new roles.

I think from this description I personally take away more of Scott's caricature than of EAG being intended as a high-bar networking event:

  • "for the effective altruism community" -> makes it sound like it's a community event, which I'd expect to be inclusive
  • "community members who already have a solid understanding" -> does not sound particularly exclusive to me
  • "make connections, discuss ideas, develop skills" -> sounds somewhat vague and general for me, and "make connections" sounds to me like "connect to your fellow EAs"

Secondly, the first picture series on the website also makes it look to me more like a community event and less like a professional networking event. Half of them are photos of large groups and speakers. Only one of the pictures seems to be a 1-to-1 conversation.

Comment by MaxRa on I'm building an EA-University / Institute (and I need your help) · 2022-08-29T16:35:28.826Z · EA · GW

Thanks for engaging!

  1. Engaging more with EA Leipzig sounds reasonable, and it also seems like a very reasonable move to go to EAGx Berlin if you want to get more actively involved in EA projects generally.
  2. I think the comparison between IT and EA is a little off: EA is much smaller and younger than IT, so any new project will be a much bigger share of overall EA-inspired activity and have a much higher chance of being the first representative of EA that people encounter.

Comment by MaxRa on I'm building an EA-University / Institute (and I need your help) · 2022-08-29T13:28:20.800Z · EA · GW

Hi Benjamin! I think it's great that you want to support more people to work on the most pressing problems, and that you shared your plan about your project here for feedback. Thanks!

I have a lot of reservations after skimming this article; here are three major ones and two somewhat less major ones:

  • I think leading a large effort of introducing and teaching EA principles requires deep familiarity with the EA philosophy. It worries me that you don't mention your background with EA much in the reasons you'd be qualified, except your involvement with the Transhumane Partei Deutschlands, which (from my experience as a German community builder) doesn't seem particularly involved with the German EA community.
  • Relatedly, as Yi-Yang mentions, it worries me that you didn't immediately consider reputational risks to the EA movement in your "What could go wrong" section. Advertising this project as "The first EA-University" (as you've done in the German EA Slack) will likely catch attention and be taken as representative of the EA movement. This is a consideration that I think would be very salient to people who have spent sufficient time engaging with the EA movement to be well-equipped to lead such an effort.
  • The four initial courses seem like relatively odd choices compared to what I would expect from an effort to make people in their ~30s familiar with effective altruism: "Foundations of AI, How to manage the digital transformation, How to work in the world of “work 4.0 / new work” and How to manage innovation in times of emerging technologies" 
    • This sounds more like a selection of courses for tech entrepreneurs. I suppose these are topics you are more familiar with / interested in, but I think it would be misleading to call something based on this "EA university".
  • It's weird to me that you call this a university, as you mention in the comments that it's only supposed to last up to 16 weeks.
  • You write "As I own my own consulting company for ~ 2 years now, I know how to build a company." I looked you up on LinkedIn and it seems like the company is only you and one relative, who started working at your company 4 months ago and immidiately went on a sabbatical. This made your statement feel misleading and overconfident to me, as I expect people to expect more than that level of experience and success when reading "I know how to build a company".
Comment by MaxRa on Effective Thesis - Activities and Impact Evaluation [June 2021-June 2022] · 2022-08-27T11:03:21.754Z · EA · GW

Thanks for the update, really cool to see how fast you’re growing! Cheers to another great year!

Comment by MaxRa on Why scale is overrated: The case for increasing EA policy efforts in smaller countries · 2022-08-24T12:32:11.658Z · EA · GW

Thanks for the article, pretty interesting and really clearly written.

Minor aside: I found it interesting that (IIRC) this wrinkle of the history of abolition in Britain was not mentioned in William MacAskill's What We Owe The Future, where he goes into some more detail on the history of abolition in the UK.

However, for a long time the slave trade lobbiest managed to defeat the abolitionists bills. It was not until the war with France and Napoleon’s attempt to strike back against a slave uprise in Saint-Domingue, that the british nationalistic ideals prevailed and the public finally called for a ban on the exportation of slaves. This bill got passed because the war with France made the slave trade economically risky and the public did not want to be associated with french morals.

Might be that the relevance of the conflict with France is rated as less important by other scholars?

Comment by MaxRa on Erich_Grunewald's Shortform · 2022-08-23T10:57:00.739Z · EA · GW

The optimal solution is neither to allow no one in nor to allow everyone in, but somewhere in between.

I feel somewhat icky about the framing of "allowing people into EA". I celebrate everyone who shares the value of improving the lives of others, and who wants to do this most effectively. I don't like the idea that some people will be not allowed to be part of this community, especially since EA is currently the only community like it. I see the tradeoff more in who we're advertising towards and what type of activities we're focussing on as a community, e.g. things that better reflect what is most useful, like cultivating intellectual rigor and effective execution of useful projects.

Comment by MaxRa on What success looks like · 2022-08-22T06:58:40.768Z · EA · GW

Glad you liked it, and thanks for the pointer, it has been on my reading list for quite some time now. :)

Comment by MaxRa on How and when should we incentivize people to leave EA bubbles and explore? · 2022-08-18T14:46:40.937Z · EA · GW

Hmm, interesting. Is it really true that EAs are not exploring non-EA ideas sufficiently, and aren't taking jobs outside of EA sufficiently? 

I feel like the 80,000 Hours job board is stuffed with positions from non-EA orgs. And while a lot of my friends and I are highly-engaged EAs, I feel like we all fairly naturally explore(d) a bunch outside of EA. As you said, EA is not that big and there's so much other useful and interesting stuff to interact with. People study random things, people read vast literatures that are not from EA, have friends and family and roommates that are not EA. A datapoint might be EA podcasts, which I feel interview non-EAs in at least half of their episodes?

Your suggestions kind of feel like unnecessary top-down exercises, like "let's make X higher status", or "let's nudge new people towards X or Y". I feel like people naturally do what interests them, and that's so far going well? But it's plausible that I'm off, for example because I have spent very little time in the central EA hubs.

Comment by MaxRa on How technical safety standards could promote TAI safety · 2022-08-09T19:26:09.104Z · EA · GW

I think that's a valid worry and I also don't expect the standards to end up specifying how to solve the alignment problem. :P I'd still be pretty happy about the proposed efforts on standard setting because I also expect standards to have massive effects that can be more or less useful for 
a) directing research in directions that reduce longterm risks (e.g. pushing for more mechanistic interpretability),  
b) limiting how quickly an agentic AI can escape our control (e.g. via regulating internet access, making manipulation harder), 
c) enabling strong(er) international agreements (e.g. shared standards could become basis for international monitoring efforts of AI development and deployment).

Comment by MaxRa on What success looks like · 2022-08-08T15:16:42.042Z · EA · GW

Hey Matthijs :) Glad you found it interesting! 

Oh cool, just quickly skimmed the doc, that looks super useful. I'll hopefully find time to take a deeper look later this week.

Comment by MaxRa on Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination · 2022-08-04T08:42:13.094Z · EA · GW

Thanks again for writing this up! Just a random thought, have you considered what happens when you loosen this assumption:

Background assumption: Deploying unaligned AGI means doom. If humanity builds and deploys unaligned AGI, it will almost certainly kill us all. We won’t be saved by being able to stop the unaligned AGI, or by it happening to converge on values that make it want to let us live, or by anything else.

I'm thinking about scenarios where humanity is able to keep the first 1 to 2 generations of AGI under control (e.g. by restricting applications, by using sufficiently good interpretability to detect most deception, due to very gradual capability increases).

Some spontaneous thoughts on what pillars might additionally be interesting then:

  • Coordination, but focussed more on labs sharing incidents, insights, tools
  • Humanity's ability to detect and fight power-seeking agents
    • Generic state capacity
    • Generic international cooperation
    • Cybersecurity to prevent rogue agents getting access to resources and weapons, to prevent debilitating cyberattacks
    • Surveillance capabilities
    • Robustness against bioweapons
Comment by MaxRa on Why AGI Timeline Research/Discourse Might Be Overrated · 2022-07-16T14:41:24.327Z · EA · GW

That said, AI forecasting more broadly - that considers when particular AI capabilities might arise - can be more useful than examining timelines alone, and seems quite useful overall.

+1. My intuition was that forecasts on more granular capabilities would happen automatically if you want to further improve overall timeline estimates. E.g. this is my impression of what a lot of AI timeline related forecasts on Metaculus look like.

Comment by MaxRa on Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies · 2022-06-26T11:42:11.098Z · EA · GW

Nice, super interesting. Some very scattered thoughts:

  1. Scale shift seems significant to me.
    1. It would be really surprising if increased health, material comfort, and increased leisure did not all lead to increased well-being, right?
    2. A theme in some 20th century history podcasts I listened to: it's pretty astonishing how a new generation fully blanks out the horrors that happened only a few decades ago. Kinda points to your pet theory and to people having a different reference class for the "least happy a person could realistically be".
    3. Also anecdotally, a few people I know from LMIC grew up watching a lot of US movies and shows (there's probably some selection bias here as those people ended up living in Western countries), which plausibly affects what type of life seems normal or adequate to them?
  2. People don't only value well-being (or they underrate it?) and use their wealth for other things
    1. As you said relative wealth seems a big factor. 
      1. I somewhat buy the story that (given basic needs like sufficient diet and safety are met) relative wealth evolutionarily determined a lot about e.g. who you were able to mate with. Robin Hanson's main thesis in The Elephant in the Brain also points in this direction: a lot of our motivation is driven by signalling to others that we're better companions than others.
    2. Financial safety seems another desire that probably can swallow up a ton of money.
  3. People also do not seem super skilled at using their wealth to increase their well-being (yet, growth mindset!)
    1. I somewhat buy the Buddhist story that human psychology is to a large part driven by somewhat futile attempts at trying to avoid unpleasantness. E.g. I kinda sympathize with classic criticisms of consumerism that it doesn't bring lasting joy, that it's kinda a nice but short-lasting rush to get shiny new things, etc.
    2. Relatedly, I have the vague impression that it's somewhat recent that wellbeing is given a much more central position among educated & wealthier people? For example I imagine this group of people to spend more time in meditation retreats today compared to 30 years ago?
      1. Do you, or anybody, happen to know whether there are longitudinal surveys that ask "What do you most value in life?". Maybe then one could see what people use their wealth for?

It's kinda obvious, but I wanted to point out anyway that many of your suggestions for increasing well-being also seem to require significant levels of wealth to pull off:

In some sense, this is the story we all seem to accept: that we do need resources, but only up to a point, and after that point we're just showing off. Hence, we should focus on how society is organised, as opposed to how wealthy it is.

More concretely, in his 2021 book, An Economist’s Lessons on Happiness, Easterlin suggests that job security, a comprehensive welfare state, getting citizens to be healthy, and encouraging long-term relationships would increase average wellbeing. All of those seem fairly plausible to me. [...]

We should also take mental health and palliative care more seriously […] We could also consider improved air quality, reduced noise, more green and blue space (blue spaces being water), and getting people to commute smaller distances

Comment by MaxRa on EA Dedicates · 2022-06-24T09:43:22.605Z · EA · GW

For me, thinking of relationships and hobbies in an instrumental way takes away from how much joy and energy and meaning etc. I get from them. So in practice I expect most "EA dedicates" should instrumentally just live the life of a "non-dedicate", i.e. value their relationships with their parents, siblings, partners and friends for their own sake.

Other things make this distinction messy:

  • How strongly various psychological needs are expressed for an individual will have strong effects on what their most sustainable "EA dedicate" life looks like. For example:
    • the need for meaning,
    • the need for feeling connected to others, for feeling love,
    • the need for fun.
  • How strongly you wish to start a family is probably also not under your control.
  • Your stamina, e.g. I'd be surprised if I were ever able to productively work 80 hours for more than one week, so I'll probably never look like I'm sacrificing too much.
  • Plausibly somewhat innate character traits like risk-aversion, agreeableness, openness to experience, neuroticism will have a strong effect on what lifestyles you can sustainably live or even just explore without draining a lot of energy.
  • Plausibly how financially independent you are has a lot of psychological effects that affect how much of an "EA dedicate" you can look like. E.g. I heard that Maslow's hierarchy of needs is very disputed, but it also seems true that helping others is very commonly given less weight by our motivational systems than making sure that we are personally safe etc.

There is probably a distinction where some EAs would or wouldn't push the button that turns them into an omniscient utility maximizer who would always just take the action that is doing the most good. I would push this button because the lives and the suffering and the beauty that are at stake are so much more important than me and my other values. But in practice I think I will probably never need the distinction between EA dedicates and non-dedicates.

Comment by MaxRa on Disruptive climate protests in the UK didn’t lead to a loss of public support for climate policies · 2022-06-21T21:21:00.989Z · EA · GW

Thanks, interesting topic and glad you looked into this! (Just read the summary and skimmed the rest.) My spontaneous reaction to the results was that measuring only days after the protest might be a little too soon to observe a backlash?

Comment by MaxRa on How accurate are Open Phil's predictions? · 2022-06-16T10:22:37.277Z · EA · GW

Thanks for sharing, super interesting!

The organization-wide Brier score (measuring both calibration and resolution) is .217, which is somewhat better than chance (.250). This requires careful interpretation, but in short we think that our reasonably good Brier score is mostly driven by good calibration, while resolution has more room for improvement (but this may not be worth the effort). [more]

Another explanation for the low resolution, besides the limited time you spend on the forecasts, might be that you chose questions that you are most uncertain about (i.e. that you are around 50% certain about resolving positively), right?

This is something I noticed when making my own forecasts. To remove this bias I sometimes use a die to choose the number for questions like:

By Jan 1, 2018, the grantee will have staff working in at least [insert random number from a reasonable range] European countries
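
A minimal sketch of both points in Python (the forecast numbers and the 1-6 range are purely illustrative assumptions, not Open Phil's actual questions): always forecasting 50% yields the "chance" Brier score of 0.25, and rolling a die to fill in the threshold keeps the question from being written to sit near 50% in the first place.

```python
import random

def brier(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Always forecasting 50% gives the "chance" score of 0.25, whatever the outcomes are.
print(brier([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1]))  # 0.25

# Fill in the question's threshold with a die roll instead of hand-picking it,
# so the question isn't written to land near 50% in the first place.
threshold = random.randint(1, 6)  # illustrative range
print(f"By Jan 1, 2018, the grantee will have staff working in at least {threshold} European countries")
```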

Comment by MaxRa on Breaking Up Elite Colleges · 2022-06-12T14:58:10.249Z · EA · GW

I suppose all your points would be satisfied as long as the breaking up of colleges happens in what seems to me a pretty reasonable way, e.g. by not forcing the new colleges to stay small and non-elite? I understood the main benefit of this to be removing the current, possibly suboptimal, college administrations and replacing them with better management that avoids current problems.

Comment by MaxRa on What We Owe the Past · 2022-06-04T12:29:22.635Z · EA · GW

I had a somewhat related random stream of thoughts the other day regarding the possibility of bringing past people back to life to allow them to live the life they would like.

While I'm fairly convinced of hedonistic utilitarianism, I found the idea of "righting past wrongs" very appealing. For example, allowing a person that died prematurely to live out the fulfilled life that this person would wish for themself would feel very morally good to me.

That idea made me wonder if it makes sense to distinguish between persons who were born, and persons that could have existed but didn't, as it seemed somewhat arbitrary to distinguish based on random fluctuations that led to the existence of one kind of person over the other. So at the end of the stream of thought I thought "Might as well spend some infinitely small fraction of our cosmic endowment on instantiating all possible kinds of beings and allow them to live the life they most desire." :D 

Comment by MaxRa on Types of information hazards · 2022-06-02T23:28:18.775Z · EA · GW

Thanks for sharing the summary, I wasn’t aware of many of these. 

Comment by MaxRa on Bibliography of EA writings about fields and movements of interest to EA · 2022-06-01T16:01:26.359Z · EA · GW

Amnesty International seems like another case that would be worth understanding better:

  • cosmopolitan, secular, broad and somewhat abstract principles
  • strong presence as university groups (at least in Germany)
  • 10 million "supporters" according to the Wikipedia article
  • sobering reports of "toxic culture" in the main offices (bullying, sexism & racism) despite what I assume to be well-meaning people

Comment by MaxRa on AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk - Request for Participation [Linkpost] · 2022-05-31T12:41:30.614Z · EA · GW

Nice, thinking more about possible AI risk scenarios seems super important to me, thanks for working on this!

I'm super unfamiliar with your methodology, do you have a good example where this process is applied to a similar situation (sorry if I didn't spot this in the text)?

Comment by MaxRa on EA, Psychology & AI Safety Research · 2022-05-31T11:14:57.477Z · EA · GW

Thanks for sharing this list, a bunch of great people! I have a background in cognitive science and am interested in exploring the strategy of understanding human intelligence for designing aligned AIs.

Some quotes from Paul Christiano that I read a couple months ago on the intersection.

From The easy goal inference problem is still hard:

The possible extra oomph of Inverse Reinforcement Learning comes from an explicit model of the human’s mistakes or bounded rationality. It’s what specifies what the AI should do differently in order to be “smarter,” what parts of the human’s policy it should throw out. So it implicitly specifies which of the human behaviors the AI should keep. The error model isn’t an afterthought — it’s the main affair.

and

It’s not clear to me whether or exactly how progress in AI will make this problem [of finding any reasonable representation of any reasonable approximation to what that human wants] easier. I can certainly see how enough progress in cognitive science might yield an answer, but it seems much more likely that it will instead tell us “Your question wasn’t well defined.” What do we do then?

From Clarifying “AI alignment”:

“What [the human operator] H wants” is even more problematic [...]. Clarifying what this expression means, and how to operationalize it in a way that could be used to inform an AI’s behavior, is part of the alignment problem. Without additional clarity on this concept, we may not be able to build an AI that tries to do what H wants it to do.

Comment by MaxRa on Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey) · 2022-05-29T18:48:21.157Z · EA · GW

Random idea: I wonder if it would be useful to let people make forecasts about key results before conducting studies like this (e.g. by asking a few people directly or putting the question on Metaculus or Manifold Markets), so that we better understand how informative the study was without succumbing to hindsight bias.

Comment by MaxRa on Some unfun lessons I learned as a junior grantmaker · 2022-05-28T14:50:14.862Z · EA · GW

One solution could be to have a document with something like „The 10 most common reasons for rejections“ and send it to people with a disclaimer like „We are wary of giving specific feedback because we worry about [insert reasons]. The reason why I rejected this proposal is well covered among the 10 reasons in this list and it should be fairly clear which ones apply to your proposal, especially if you would go through the list with another person that has read your proposal.“

Comment by MaxRa on Notes on "The Myth of the Nuclear Revolution" (Lieber & Press, 2020) · 2022-05-27T08:26:32.726Z · EA · GW

Thanks for the summary! :) 

Nuclear stalemate predictions: (4) Reduced competition for strategic territory

While reading this part I wondered whether the book neglects economic concerns other than trade-routes and natural resources. In my head the cold war was in large part a competition between two economic systems, so I imagine having more profitable trading relationships should've been really valuable, too:

increased wealth -> increased stability and power, increased attraction of the political system / winning the "ideological battle", increased living standards

Comment by MaxRa on NegativeNuno's Shortform · 2022-05-19T12:42:16.988Z · EA · GW

Possible solution: I imagine some EAs would be happy to turn a rambly voice message about your complaints into a tactful comment now and then.

Comment by MaxRa on Norms and features for the Forum · 2022-05-16T11:19:08.046Z · EA · GW

Nice! Related to summary and limitation boxes in the editor, maybe the forum could offer post templates for different kinds of posts. For example a template for project proposals could involve a TLDR, summary, theory of change, cost-benefit estimate, next steps, key uncertainties, requests for feedback/co-founders. Other template candidates might be cause area explorations, criticism of EA, question posts, book reviews, project reviews. 

Edit: An MVP version of this might be suggesting a "role model" post for different content categories.

Comment by MaxRa on What are some high-EV but failed EA projects? · 2022-05-13T21:49:21.909Z · EA · GW

See: How many EAs failed in high risk, high reward projects? https://forum.effectivealtruism.org/posts/JtE2srazz4Yu6NcuQ/how-many-eas-failed-in-zhigh-risk-high-reward-projects

Comment by MaxRa on EA and the current funding situation · 2022-05-11T13:42:09.066Z · EA · GW

I think I'm less worried about the risk of increased deception.

you won't have the time for long 10-hour conversations when you hang out in the evening.

The analogy breaks down somewhat because the number of 10-hour conversations also scales with the size of the movement, right? And I think it's relatively discernible whether somebody actually cares about doing good when you talk to them a lot. I don't think you need to be a particularly senior EA to notice altruistic and impact-driven intentions.

we could actually bring any measurable fraction of humanity's ingenuity and energy to bear on preventing humanity's extinction and steering us towards a flourishing future, and most of those people of course will be more motivated by their own self-interest than their altruistic motivation.

Additionally, I'm also less worried because I think most people actually also care about doing good and doing things efficiently. EA will still select for people who are less motivated to work in industry, where I expect wages to still be higher for somebody capable enough to scheme up a great grant proposal.

Comment by MaxRa on The Future Fund’s Project Ideas Competition · 2022-05-04T13:25:11.633Z · EA · GW

Thanks, I think that's a really interesting and potentially great idea. I'd encourage you to post it as a short stand-alone post, I'd be interested in hearing other people's thoughts.

Comment by MaxRa on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-27T15:10:18.393Z · EA · GW

Thanks for writing this, I think that attempts to get more people to work on AI Safety seem pretty worth exploring.

One thought that came to my mind was that it would be great if we could identify AI researchers who have the most potential to contribute to the most bottlenecked issues in AI safety research. One skill seems to be something like "finding structure/good questions/useful framing in a preparadigmatic field". Maybe we could also identify researchers from other technical fields who have shown to be skilled at this type of work and convince them to give AI safety a shot. Furthermore it would maybe help with scouting junior research talent.

Comment by MaxRa on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-26T16:15:55.749Z · EA · GW

Maybe an "honorable mentions" category.

If they already get a prize, I wouldn't call it "honorable mentions" because that unnecessarily diminishes it in my eyes. Just have anything that seems like it would get a B- in school be in the same category as the $250 prize?

Ah, I think my worry is that it feels difficult for me to find a standard to rate that actually tracks quality.

Ah, interesting, I have the opposite intuition!:D I completely agree that you shouldn't give advice about the length of the distillations, but the criteria you mention here just seem really useful and like I'd be surprised if e.g. you find something clearly presented and accessible, and I wouldn't.

  • Depth of understanding
  • Clarity of presentation
  • Rigor of work
  • Concision/Length (longer papers will need to present more information than shorter papers)
  • Originality of insight
  • Accessibility

And I feel like somebody who has spent like ~40 hours reading and discussing AI Safety material (e.g. as part of the AGI Safety Fundamentals course) could do a reasonably coherent job at rating the understanding and rigor. Originality seems maybe the trickiest, as you probably have to have some grasp of what ideas/framings are already in the water and which aren't.

Comment by MaxRa on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-25T09:48:47.970Z · EA · GW

You probably already have seen that the contest was featured on AstralCodexTen, so you might get more obviously good submissions than you have prizes for, and it would kinda feel like a wasted opportunity to not clearly signal (i.e. with money) to those authors that their work is highly appreciated and that we would love for them to do more of this work.

Comment by MaxRa on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-25T09:43:27.202Z · EA · GW

I think I will try to add more texts and find more readers, as you suggest. 

I've been thinking of going into working on creating contests in the future as a potentially serious work project

Nice and nice! :) 

Do you have any ideas of clear cutoffs that would retain quality (for future contests if nothing else)? 

Hmm, is your worry that distillations that in hindsight seem to be fairly sub-optimal (e.g. with major mistakes or confusing explanations) end up receiving the lowest-tier prize because there is some noise introduced by the people who rate the distillations? I think this might happen only rarely, for maybe 2 in 100 distillations. I think your list of scoring criteria already goes a long way towards giving raters a good idea of what solid work looks like. The money for the lowest tier would also not be a lot, maybe $200. Giving a prize to in-hindsight subpar work would maybe reduce the prestige of the prize a little bit, but I think it's a fairly junior prize anyway that mostly encourages and rewards initial solid efforts. Also, you would still have the higher tiers for especially good work, which would lose little prestige.

Comment by MaxRa on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-24T15:24:51.435Z · EA · GW

Really cool, just last week I was thinking about whether the alignment community should (massively) scale up prizes with relatively low barriers to entry!

Have you considered making this bigger? E.g. with more prizes and more active outreach to other universities?

  • I initially thought that ideally every contribution that clears a certain bar should be rewarded accordingly; that way there's less uncertainty about payoffs and more people will contribute
  • I think you likely could find more texts to recommend, but even duplicated distillations are still valuable for getting students into thinking about alignment research and identifying particularly promising candidates
  • Evaluation time is a likely bottleneck, but probably you could find a handful of e.g. AGI Safety Fundamentals alumni to volunteer a few hours, or many more if you offer compensation for helping out
Comment by MaxRa on I burnt out at EAG. Let's talk about it. · 2022-04-23T21:58:28.975Z · EA · GW

Hmm, I associate retreats with being relaxing and with a lot of down-time for reflection, very different from conferences.

Comment by MaxRa on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T10:48:24.087Z · EA · GW

Thanks for sharing your experience!

I have a relatively mild version of imposter syndrome and a "hack" that helped once or twice is being upfront about how I think about my abilities.  If people still want to work with me I feel like the expectations are at least more in line with reality and nobody will be particularly disappointed. And then I might as well make the best out of it.

Oh, and another hack that comes to mind is making predictions about my performance. After writing down "Okay, I got to application round 2. Maybe there's a 10% chance that I'll get to round 3, and a 1% chance that I'll get a job offer..." I feel more decoupled from the whole process, like I'm doing an experiment and am being #reasonable no matter what happens because I'm making predictions. xD 

Comment by MaxRa on Ought's theory of change · 2022-04-20T11:23:30.283Z · EA · GW

Thanks a lot for elaborating, makes sense to me.

I was fuzzy about what I wanted to communicate with the term "careful", thanks for spelling out your perspective here. I'm still a little uneasy about the idea that generally improving the ability to plan better will also make sufficiently many actors more careful about avoiding problems that are particularly risky for our future. It just seems so rare that important actors care enough about such risks, even for things that humanity is able to predict and plan for reasonably well, like pandemics.

Comment by MaxRa on FTX/CEA - show us your numbers! · 2022-04-18T18:02:45.923Z · EA · GW

Institutions like Facebook, Mckinsey, and Goldman spend ~ $1 million per school per year at the institutions they recruit from trying to pull students into lucrative careers that probably at best have a neutral impact on the world.

That's really interesting to me because I'm currently thinking about potential recruitment efforts at CS departments for AI safety roles. I couldn't immediately find a source for the numbers you mention, do you remember where you got them from?

Comment by MaxRa on Ought's theory of change · 2022-04-17T13:44:59.860Z · EA · GW

Cool, thanks for sharing, I'm a big fan of Elicit! Some spontaneous thoughts:

We want AI to be more helpful for qualitative research, long-term forecasting, planning, and decision-making than for persuasion, keeping people engaged, and military robotics.

Are you worried that your work will be used for more likely regretable things like

  • improving the competence of actors who are less altruistic and less careful about unintended consequences (e.g. many companies, militaries and government institutions), and
  • speeding up AI capabilities research, and speeding it up more than AI safety research?

I suppose it will be difficult to have much control over insights you generate and it will be relatively easy to replicate your product if you make it publicly available?

Have you considered deemphasizing trying to offer a commercially successful product that will find broad application in the world, and focussing more strongly on designing systems that are safe and aligned with human values?

Regarding the competition between process-based vs. outcome-based machine learning

Today, process-based systems are ahead: Most systems in the world don’t use much machine learning, and to the extent that they use it, it’s for small, independently meaningful, fairly interpretable steps like predictive search, ranking, or recommendation as part of much larger systems. [from your referenced LessWrong post]

My first reaction was thinking that today's ML systems might not be the best comparison, and instead you might want to include all information-processing systems, which include human brains. I guess human brains are mostly outcome-based systems with process-based features:

  • we're monitoring our own thinking and adjust it if it fails to live up to standards we hold, and 
  • we communicate our thought processes for feedback and to teach others

But most of it seems outcome-based and fairly inscrutable?

Comment by MaxRa on This innovative finance concept might go a long way to solving the world's biggest problems · 2022-04-10T01:10:31.511Z · EA · GW

Thanks for writing this up, I've been super interested in this since Matt Levine started discussing asset managers like BlackRock having an impact through their climate related investing strategies. It would be so great if this would turn out to be a mechanism to coordinate patient and safe AI development among AI companies and governments.

Random things:

  • recent working paper finding that big asset managers are currently voting against environmentally friendly actions [Tweet] (I suppose it's likely that with discount rates & predominant investment in regions relatively less affected by climate change, this might be profit maximizing even as a relatively universal owner) 

A fine-grained analysis shows that the combined voting decisions of the Big Three are more likely to lead to the failure of environmental resolutions and that, whether they succeed or fail, these resolutions tend to be narrow in scope and piecemeal in nature

  • somebody mentioned that it might be surprising that those big money asset managers didn't seem to get much involved in Covid, e.g. by making investments in vaccine research and rollouts, as they internalized a big chunk of the economic fallout
  • I wondered how much less promising this strategy is for coordinating with Chinese firms, as I have the superficial impression that investors have much less influence there