Posts

Selecting a 'better' threshold for p-values being 'statistically significant' 2022-01-07T11:23:42.943Z
Apply for UK Labour Party Policy Advisor Roles to Shadow Chancellor and Shadow Home Secretary 2022-01-01T17:57:01.808Z
Technocracy vs populism (including thoughts on the democratising risk paper and its responses) 2021-12-29T03:08:50.394Z
Does anyone know of any work that investigates whether private schools add value to society vs only change *who* attains socioeconomic success? 2021-12-19T21:55:51.645Z
Does anyone have a list of summer internship opportunities that are a particularly good fit for EAs? 2021-12-02T18:45:40.869Z
One-year masters degrees related to biosecurity? 2021-10-29T13:30:39.459Z
Effective altruism merchandise to advertise the movement 2021-10-27T11:46:33.403Z
Is there any work on how best to protect young / emerging democracies from becoming autocracies? 2021-10-25T12:04:58.508Z
The expected value of funding anti-aging research has probably dropped significantly 2021-09-05T15:32:11.677Z
EA cause areas are just areas where great interventions should be easier to find 2021-07-17T12:16:42.918Z
What are good institutes to conduct a research-based Masters in ageing research? 2021-06-23T15:56:19.570Z
Should someone start a grassroots campaign for USA to recognise the State of Palestine? 2021-05-11T15:29:10.555Z
Are global pandemics going to be more likely or less likely over the next 100 years? 2021-05-06T23:48:19.019Z
Has anyone done any work on how donating to lab grown meat research (https://new-harvest.org/) might compare to Giving Green's recommendations for fighting climate change? 2021-04-28T12:02:00.999Z

Comments

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-04T12:20:38.768Z · EA · GW

Yes I did, apologies, just corrected it.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T20:09:44.657Z · EA · GW

Thanks for your comment. 

I have updated away from considering technocracy vs populism to be a crucial consideration, based on arguments that EAs using expertise to influence policymakers are mostly replacing other expert opinion rather than public opinion.

I think the best example of EA activity coming into conflict with public opinion would be the campaign against the decrease in the UK's foreign aid budget.

And here's a public poll on this question, where 66% supported the decrease.

To clarify, I'm not criticising the campaign; I'm quite strongly in favour of more technocratic decision-making and of more foreign aid.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T20:02:20.634Z · EA · GW

Thanks for pointing out the assumptions. I was aware of them but thought that my statement was true despite them, and didn't want to lengthen the post too much. In the future I will add assumptions like these as a footnote, so that people who disagree on the assumptions can think about how that should affect their views on the post as a whole.

I agree that the negative connotations of 'technocracy' are probably a good explanation of why proponents of expert-opinion-based policy don't use the word that often.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T19:56:15.609Z · EA · GW

Thanks for your comment.

I think the issue of trust is very interesting when thinking about technocracy vs populism in the longer term. 

However, I think the risk of the population rejecting decisions is only significant if decisions are extremely technocratic, and this would only be a concern if we conclude that extremely technocratic decisions are ideal. I think we are unlikely to conclude this.

But if we do conclude that extremely technocratic decisions are ideal, I think the ideal approach would be to seek to increase population trust in experts, and aim for a gradual increase in technocracy corresponding to increasing population trust. But it is certainly possible that population trust can't be increased enough to accept an extreme level of technocracy.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T19:10:58.067Z · EA · GW

Update: Having read another comment, it seems likely that expert opinion mostly replaces other expert opinion in the context of policymaking. That changes my mind on whether technocracy vs populism is a crucial consideration, since it is only relevant to 'promoting evidence-based policy', a very minor EA cause area.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T19:08:23.176Z · EA · GW

Thanks for your insight. 

Assuming you're right and experts who seek to influence policy do mostly just replace other expert opinion, then the "let's use our expertise to influence policymakers" aspect of EA does not meaningfully make decision-making more technocratic, making the debate between technocracy and populism relevant to EA only in the context of 'promoting evidence-based policy', but not to the major EA cause areas. That changes my mind and makes me think the technocracy vs populism debate is not a crucial consideration for EA, since it is only important for a minor EA cause area.

If anyone else reading this has also worked in government and has an opinion on whether experts seeking to influence policymakers mostly replace the opinion of other experts, I'd be interested to hear it!

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T18:59:16.776Z · EA · GW

Thanks!

Comment by freedomandutility on Hits-based development: funding developing-country economists · 2022-01-02T18:58:14.480Z · EA · GW

I wonder how this idea relates to this initiative on educational migration to sponsor visas for Ugandan students studying BA programs in Germany. 

Perhaps they should focus on Economics BA programs so that Uganda can benefit from return migration later on? (Also, AFAIK economics graduates earn more than graduates from other programs, so the students' incomes and remittances would both be higher.)

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T18:41:16.882Z · EA · GW

Thanks for the clarification. I agree that this would be a good explanation for why the term 'technocracy' doesn't come up that often in EA.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T18:39:19.463Z · EA · GW

I agree that populism as a tool for dealing with moral uncertainty has obvious weaknesses (thank you for explaining some of these in detail), but in my view the weaknesses are not so large that a systematic exploration of this question wouldn't be worth the time.

I also agree that other EAs viewing these weaknesses as too severe would be a good explanation for why this hasn't been done yet.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T18:33:47.749Z · EA · GW

I think examples and better wording might help:

With overseas aid budgets, the set of plausible policy options, such as decreasing and increasing the budget by different amounts, has a large range of expected values, and the uncertainty surrounding the expected value of each policy option is low. For this, I think more technocratic approaches are preferable.

With income tax rates, the set of plausible policy options, such as decreasing and increasing income tax rates by different amounts, has a smaller range of expected values, and the uncertainty surrounding the expected value of each policy option is high. For this, I think more populist approaches are preferable.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-02T18:20:32.395Z · EA · GW

Thank you for explaining all of this.

I think we are disagreeing in a general sense about the usefulness of imprecise and unreliable, but systematically obtained answers to big questions, when trying to answer smaller sub-questions. If we think these answers are less useful, we are less likely to decide that 'technocracy vs populism in general' is a crucial consideration. If we think these answers are more useful, we are more likely to decide that 'technocracy vs populism in general' is a crucial consideration.

I do agree the conclusion of Acemoglu's paper (admittedly, it is too long for me to read) is only weak evidence in favour of more technocracy, but if other papers were able to identify more natural experiments and came to similar conclusions, in theory I think that could generate enough evidence for 'more technocracy' (or 'more populism') to be a sufficiently strong prior / heuristic to be useful when looking at individual cases, which is why I still think 'technocracy vs populism' is a crucial consideration.

Comment by freedomandutility on An EA case for interest in UAPs/UFOs and an idea as to what they are · 2021-12-31T01:01:57.370Z · EA · GW

Thanks for the interesting post! 

 

One disagreement I have:

"Indeed, from this perspective perhaps we should have a strong prior that von Neumann probes have visited Earth."

I don't think this should be a strong prior.

The probability of von Neumann probes visiting earth seems heavily constrained by the age of the universe, the time taken for intelligent / superintelligent life to emerge elsewhere, the distance of this life from us, the accelerating expansion of the universe, the speed of replication of von Neumann probes (including the abundance of viable resources) and the speed of travel of von Neumann probes.

Comment by freedomandutility on Democratising Risk - or how EA deals with critics · 2021-12-30T00:40:45.235Z · EA · GW

Strong upvote from me - you’ve articulated my main criticisms of EA.

I think it’s particularly surprising that EA still doesn’t pay much attention to mental health and happiness as a cause area, especially when we discuss pleasure and suffering all the time, Yew Kwang Ng focused so much on happiness, and Michael Plant has collaborated with Peter Singer.

Comment by freedomandutility on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T16:59:02.860Z · EA · GW

Thanks for your well thought-out comment.

 

"I was mostly a populist pre-EA, gradually became a technocrat because the people around me who shared my values were technocrats.” Same!

You're correct in reading my post as "technocracy vs populism is a crucial consideration".


I think social science is unlikely to offer us a good, general answer to technocracy vs populism, but I think it can offer us a better answer than we currently have, because I feel that we have mostly skipped attempting to take a scientific approach to the question, but nonetheless have accepted 'more technocracy than the status quo' as the answer.

Also, I am confident that social science can offer us useful heuristics for when we look at specific cases.

For example, Scott’s article (thank you for linking it) looks at some positive examples of historical policy changes that (he claims) were mostly technocratic. 

I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other. 

We could also look specifically at how public opinion and expert opinion may have differed at the time as policymakers approached these decisions, to work out if more technocracy or more populism has a better track record under the conditions of a large disagreement between public and expert opinion. 

 

Also, based on the pros and cons of technocracy and populism that I outlined, it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values. 

I think part of what makes existential risk studies so difficult is that these heuristics don’t help, because existential risk studies involves both extremely high uncertainty and plausible policy options with an extremely large range of expected values. 

Possibly, these situations most suit a 'third' approach, where experts lobby the public rather than policymakers directly. If this is successful, public and expert opinion could become very similar, and the more similar they are, the more similar technocratic and populist approaches become, meaning that striking the right balance between them matters considerably less. (This would mean that Nick Bostrom and Toby Ord were way ahead of me by publishing Superintelligence and The Precipice).

 

I like that EA actively thinks about the risks associated with moral uncertainty, but I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests, and I don't think there has been much thinking about whether equally distributed political power should or should not be an end in itself. I also think that EAs haven't sufficiently considered populism as a tool to deal with moral uncertainty. (I think the focus of moral uncertainty has generally been on experts themselves trying to account for various moral theories when forming opinions).

 

Also, to clarify, I am not arguing against more technocracy. I think it's entirely reasonable for EAs to conclude that more technocracy is better than the status quo even after considering the risks, but I think it's important for this conclusion to be made in a "scientific / rational / systematic / evidence and careful reasoning" way. Currently, I don't think this is generally the case even if EAs do think about moral uncertainty, for the reasons that I outlined in the paragraph above this one.


I agree that the word 'populism' is very prone to misunderstandings but I think the term 'technocracy' is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.

 

Finally, thanks for all the links!

Comment by freedomandutility on Democratising Risk - or how EA deals with critics · 2021-12-29T03:56:15.545Z · EA · GW

Ah okay.

I think I interpreted this as ‘pressure’ to not publish, and my definition of ‘shutting down ideas’ includes pressure / strong advice against publishing them, while yours is restricted to forcing people not to publish them.

Comment by freedomandutility on Democratising Risk - or how EA deals with critics · 2021-12-29T03:17:34.638Z · EA · GW

I agree with most of what you say other than it being reasonable for some people to have acted in self-interest. 

While I do think it is unavoidable that there will be attempts to shut down certain ideas and arguments out of the self-interest of some EAs, I think it's important that we have a very low tolerance of this.

Comment by freedomandutility on Who are the most well known credible people who endorse EA? · 2021-12-27T20:10:55.567Z · EA · GW

Ezra Klein?

Comment by freedomandutility on Improving science: Influencing the direction of research and the choice of research questions · 2021-12-20T21:43:29.010Z · EA · GW

"There is a possibility that it (a more explicit discussion regarding values and prioritization in science) could backfire if complex questions become politicized and reduced to twitter discussions that in turn makes science policy more political and less tractable to work with."

Strongly agree with the risk of backfiring, and I think this is more likely than things going well.

I think if we promoted explicitly value-driven science or discussion of it, the values that drive research priorities are more likely to become 'social justice values' than effective altruist values, leading to a focus on unsystematically selected, crowded and intractable cause areas, such as outcome inequalities amongst ethnic groups and sexes in rich English-speaking democracies. This is because these are the values more likely to be held by the people setting research priorities, not effective altruist values. I also think a change in this direction would be very difficult to reverse.

I think a better idea would be to selectively and separately campaign for research priorities to shift in predefined directions (e.g. one campaign for more focus on the problems affecting the global poor, another for future generations, and another for animals).

Comment by freedomandutility on evelynciara's Shortform · 2021-12-19T21:53:20.591Z · EA · GW

I think this is a great idea. A related idea I had is a competition for "intro to EA" pitches because I don't currently feel like I can send my friends a link to a pitch that I'm satisfied with.

A simple version could literally just be an EA forum post where everyone comments an "intro to EA" pitch under a certain word limit, and other people upvote / downvote.

A fancier version could have a cash prize, narrowing down entries through EA forum voting, and then testing the top 5 through online surveys. 

I think in a more general sense, we should create markets to incentivise and select persuasive writing on EA issues aimed at the public.

Comment by freedomandutility on 80,000 Hours wants to talk to more people than ever · 2021-12-17T23:48:45.996Z · EA · GW

What would you say to people who aren’t sure about what specific aspects of career planning they want advice with? Would you suggest spending more time solo thinking about things first?

Comment by freedomandutility on What is good? · 2021-12-17T23:44:35.478Z · EA · GW

I think if you ask “Why?” enough times, any goal is based on a subjective opinion without underlying rationale (a terminal value). “I want to do the most good possible” is just the subjective opinion without underlying rationale that I like the most, so I set some of my goals based on this.

Comment by freedomandutility on High School Seniors React to 80k Advice · 2021-12-17T23:38:31.451Z · EA · GW

In my opinion, objection 4 is a result of some people taking the (good) idea that “all people are equal” and developing the (bad) intuition that “all professions / causes are equally important”.

Subsequently, they’re offended by EA ideas of some career paths / causes being higher impact than others, because “professions / causes aren’t equally important” starts to sound like “people aren’t equally important” to them.

I have noticed this line of thinking with one friend but I don't know how prevalent it is. We could consider adding clarifying statements like "people in lower impact careers do not have less intrinsic value as human beings" to EA careers advice. But my guess is that it would not be worth it, because I think people whose intuitions are against prioritising between careers and cause areas are very unlikely to ever be influenced by EA ideas.

Comment by freedomandutility on Has anything in the EA global health sphere changed since the critiques of "randomista development" 1-2 years ago? · 2021-12-03T15:00:23.349Z · EA · GW

I second the need to focus on growth in LMICs, partly on the grounds of more money better translating to happiness for poorer people than for richer people.

But also, it seems like HICs benefit from having more think tanks and people working on policy specific to their country, whereas LMICs seem to have fewer think tanks based in their own countries working on country-specific policy. I might be wrong about the numbers of think tanks and the benefits they provide, though.

Comment by freedomandutility on EA megaprojects continued · 2021-12-03T14:50:01.673Z · EA · GW

Even then it would seem preferable to me to fund something like a “department of AI safety” at an existing university, since the department (staff and graduates) could benefit from the university’s prestige. I assume this is possible since FHI and GPI exist.

Comment by freedomandutility on EA megaprojects continued · 2021-12-03T13:14:37.976Z · EA · GW

Compared to the other ideas here, I think the benefits of an explicitly EA university seem small (compared to the current set-up of EA institutes at normal universities, EAs doing EA-relevant degrees at normal universities and EA university societies).

Are there other major benefits I’m missing other than more value-alignment + more co-operation between EAs?

One downside of EA universities I can think of is that it might slow movement growth since EAs will be spending less time with people unfamiliar with the movement / fewer people at normal universities will come across EA.

Comment by freedomandutility on Does anyone have a list of summer internship opportunities that are a particularly good fit for EAs? · 2021-12-03T11:16:01.980Z · EA · GW

Thanks, that looks really good! 😃

Comment by freedomandutility on Does anyone have a list of summer internship opportunities that are a particularly good fit for EAs? · 2021-12-02T23:51:41.758Z · EA · GW

I'm in college, looking at global health / pandemics / biomedical research, but I thought it might be useful to have a general list of EA-relevant summer opportunities for EAs in college.

Comment by freedomandutility on What Small Weird Thing Do You Fund? · 2021-11-25T17:15:50.921Z · EA · GW

This opinion is mine and doesn't represent EA:

I would fund Soch (https://youtube.com/c/SochYoutube), a Vox-like Indian YouTube channel which sometimes presents a technocratic / academic perspective on Indian politics and news so that they can do more of that, and AltNews, an Indian anti-fake-news fact-checking website to expand their work.

This would be with the idea of decreasing populism in India.

I think this is suited to small donors because the effects of these seem ridiculously hard to measure or estimate, even with back of the envelope calculations. But I guess I could also consider funding small experiments to see how effective these mediums are at informing people.

Comment by freedomandutility on Don’t wait – there’s plenty more need and opportunity today · 2021-11-24T19:01:22.943Z · EA · GW

I agree with most of your comment.

However, given that GiveWell want to use a bar of 5-7x GiveDirectly, I think accounting for a study that at best demonstrates GiveDirectly to be 2.6 times more effective than previously thought will not influence GiveWell's decision to wait for better opportunities, since that still doesn't meet the 5-7x GiveDirectly bar.

Comment by freedomandutility on New Effective Thesis services and opportunities to get involved · 2021-11-15T19:29:20.812Z · EA · GW

Hi, amazing to see that Effective Thesis is expanding its services! 

Personally I think Effective Thesis could become one of the highest impact EA initiatives, since hundreds of thousands of theses and dissertations are written every year, and steering these towards pressing global problems seems to have very high expected value.

I may start an EA society at my university in the future, and I was wondering whether, and to what extent, you actively collaborate with university EA societies to promote Effective Thesis to students.

Comment by freedomandutility on Is there any work on how best to protect young / emerging democracies from becoming autocracies? · 2021-10-29T13:21:01.798Z · EA · GW

Thanks!

Comment by freedomandutility on Initial thoughts on malaria vaccine approval · 2021-10-13T13:28:55.359Z · EA · GW

Thanks for the reply, that answers my question perfectly :)

Comment by freedomandutility on Initial thoughts on malaria vaccine approval · 2021-10-10T13:14:43.196Z · EA · GW

Apologies if I’ve missed this in the post, but I don’t think it discusses a potential decrease in the marginal value of LLINs and SMC due to RTS,S, instead focusing on a comparison between LLIN and SMC vs RTS,S.

Do GiveWell intend to explore the effect on marginal value at a later point in time / in more detail? It seems plausible to me that despite LLIN and SMC being more cost effective than RTS,S, a decrease in their marginal value could mean that donors would prefer to donate to other GiveWell top charities over AMF.

Comment by freedomandutility on remittances:wave as immigration:startup x? · 2021-10-08T16:46:12.865Z · EA · GW

I’m not very well versed on what good methods would be to increase migration, but I think there’s need for an international organisation that advocates for / researches policy change towards more lenient immigration policies, focused on making it easier to migrate from the poorest to the richest countries.

For example, such an org could try to identify which rich country would be the best within which to push for more lenient immigration rules.

I hope to do a post about this at some point after having given the idea more thought.

Comment by freedomandutility on The expected value of funding anti-aging research has probably dropped significantly · 2021-09-06T14:13:04.476Z · EA · GW

In my opinion, the public seems to dislike the idea of rejuvenation biotechnology, but doesn't dislike it enough that public opinion would significantly hamper the progress of this field.

I think the billionaire space race may be a good example of the public disliking weird stuff that billionaires are doing, but public opinion not significantly impacting their ability to do the weird stuff.

I am also not too worried about bad PR keeping good scientists away since I think high salaries should help to overcome their fears / misunderstandings surrounding anti-ageing research.

Comment by freedomandutility on The expected value of funding anti-aging research has probably dropped significantly · 2021-09-06T13:51:19.648Z · EA · GW

Thanks for your comment.

I'm agnostic. (EDIT: I personally do not think funding certain types of research within anti-ageing research could still have similar EV to EA priorities despite the EV being lower than it was before, but I think this is plausible.)

I'm also hopeful that Altos Labs is more open and collaborative than Calico Labs.

While I'm seeing some criticism of the idea that billionaires want to live longer, I think it's unlikely to be widespread enough or draw enough attention to noticeably damage Altos Labs, or cause much further damage to anti-ageing research in general.

Comment by freedomandutility on The expected value of funding anti-aging research has probably dropped significantly · 2021-09-06T13:46:48.618Z · EA · GW

Yes, you're right. Now that I think about Harrison's comment, I think both apply: a) "the industry is already/now getting lots of money from billionaires, so the marginal value of donating additional money is smaller", and b) donating money to anti-ageing research will lead to billionaires donating less money to anti-ageing research.

Comment by freedomandutility on The expected value of funding anti-aging research has probably dropped significantly · 2021-09-06T06:54:04.621Z · EA · GW

The first! (And not the second.) I'm not 100% sure if 'subsidising billionaires' is the correct term, but I mean that money donated towards ageing research is probably going to be donated by billionaires anyway.

Comment by freedomandutility on More EAs should consider “non-EA” jobs · 2021-08-20T11:42:34.240Z · EA · GW

Same! I think neglectedness is more useful for identifying impactful “just add more funding” style interventions, but is less useful for identifying impactful careers and other types of interventions since focusing on neglectedness systematically misses high leverage careers and interventions.

Comment by freedomandutility on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-19T20:48:26.676Z · EA · GW

Technological developments in the biotech / pharma industry are notoriously expensive, and my (fairly subjective) impression is that the industry is riddled with market failures.

Especially when applied to particularly pressing problems like pandemic prevention / preparedness, infectious diseases in LMICs, vaccines, ageing and chronic pain, I think EA for-profits and non-profits in this industry could absorb 100 million dollars of annual funding while providing high expected value in terms of social impact.

Comment by freedomandutility on What is the closest thing you know to EA that isn't EA? · 2021-08-15T15:03:12.756Z · EA · GW

FWIW, I do think Reddit neoliberalism has important differences to EA (mainly that it has a strong preference for free markets and deregulation), but I think this is still compatible with considering Reddit neoliberalism to be “close to EA but not EA”.

Comment by freedomandutility on How students, groups, and community members can use funding · 2021-08-13T09:13:42.450Z · EA · GW

Hi, thank you for your post!

As a student involved with some community building work and some other voluntary EA-aligned work, I’m still a bit reluctant (perhaps irrationally so) to apply for “converting energy to time” funding and thought that I’d share what I think my reservations are.

  1. I think it feels too self-centred for things like healthy ready meals and Ubers for me to be considered worth EA funding when the money could theoretically go to AMF instead.

  2. I’m worried that I won’t end up using the time saved for EA work.

  3. I’m worried that getting funding will make me feel a stronger external obligation towards EA work than I’d like (over say, just relaxing).

  4. I’m personally a little paranoid about EA falling into a meta trap (https://forum.effectivealtruism.org/posts/J3gZxFqsCFmzNosNa/ea-risks-falling-into-a-meta-trap-but-we-can-avoid-it) which slightly biases me against funding + asking to be funded for meta work.

Having thought about this, I think it could make sense for me to try spending more for a while to convert money into time, then determine how much of this time I’m using for EA work, and then try to work out what a reasonable amount of funding to request would be based on this.

Comment by freedomandutility on Should someone start a grassroots campaign for USA to recognise the State of Palestine? · 2021-07-18T12:21:14.143Z · EA · GW

That’s great to hear! I too am quite skeptical about finding many good interventions in this area for the reasons you describe. I think most good interventions here would be along the lines of "improving the efficiency with which resources are being used" rather than "adding more resources".

Comment by freedomandutility on EA cause areas are just areas where great interventions should be easier to find · 2021-07-18T10:20:47.967Z · EA · GW

Hi, thanks for providing those reasons, I can totally see the rationale!

One general point I'd like to make is if a proposed intervention is "improving the efficiency of work on cause X", a large amount of resources already being poured into cause X should actually increase the EV of the proposed intervention (but obviously, this is assuming that the work on cause X is positive in expectation, and as you say, some may not feel this way about some pro-Palestinian activism).

Comment by freedomandutility on EA cause areas are just areas where great interventions should be easier to find · 2021-07-18T10:15:19.993Z · EA · GW

I think the interventions would be very specific to the domain. I mentioned an intervention to direct pro-Palestinian activism towards a tangible goal. As for redirecting western anti-racism work towards international genocide prevention, this could possibly be done by getting western anti-racism organisations to partner with similar organisations in countries at greater risk of genocide, which could lead to resource / expertise sharing over a long period of time.

Comment by freedomandutility on EA cause areas are just areas where great interventions should be easier to find · 2021-07-18T09:00:12.135Z · EA · GW

Yep exactly that!

Comment by freedomandutility on EA cause areas are just areas where great interventions should be easier to find · 2021-07-17T17:22:52.617Z · EA · GW

So in both of the examples provided, EAs would be funding / carrying out interventions that improve the effectiveness of other work, and it is this other work that would improve well-being / preserve lives in expectation.

Because I suspect that these interventions would be relatively cheap, and because this other work would already have lots of resources behind it, I think these interventions would slightly improve the effectiveness with which a large amount of resources are spent, to the extent that the interventions could compare with GW top charities in terms of expected value.

Comment by freedomandutility on EA cause areas are just areas where great interventions should be easier to find · 2021-07-17T15:38:08.788Z · EA · GW

Thanks for the suggestion, I've added an attempt at this to the post.

Comment by freedomandutility on What are good institutes to conduct a research-based Masters in ageing research? · 2021-07-16T21:08:15.098Z · EA · GW

Thank you!