Posts

Stefan_Schubert's Shortform 2019-10-04T18:32:56.962Z
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate 2017-05-31T22:41:27.190Z
Effective altruism: an elucidation and a defence 2017-03-22T17:06:50.202Z
Hard-to-reverse decisions destroy option value 2017-03-17T17:54:34.688Z
Understanding cause-neutrality 2017-03-10T17:43:51.345Z
Should people be allowed to ear-mark their taxes to specific policy areas for a price? 2015-09-13T11:01:32.358Z
Effective Altruism’s fact-value separation as a weapon against political bias 2015-09-11T14:58:04.983Z
Political Debiasing and the Political Bias Test 2015-09-11T14:52:47.510Z
Why the triviality objection to EA is beside the point 2015-07-20T19:29:13.261Z
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T14:35:32.973Z
The effectiveness-alone strategy and evidence-based policy 2015-05-07T10:52:36.891Z

Comments

Comment by Stefan_Schubert on Khorton's Shortform · 2021-04-21T11:48:40.214Z · EA · GW

Or "fill the universe/galaxy with life".

Comment by Stefan_Schubert on Ben Garfinkel's Shortform · 2021-04-19T09:02:35.934Z · EA · GW

Interesting ideas. Some similarities with qualitative research, but also important differences, I think (if I understand you correctly).

Comment by Stefan_Schubert on Can the moral circle be expanded through economic security and/or political trust? · 2021-04-13T11:05:05.688Z · EA · GW

I think one should distinguish between whether wealthier countries are more progressive, and whether wealthier individuals within a country are more progressive.

Wealthier countries do seem more progressive, on plausible definitions of those notions (though that leaves open the issue of causality).

Whether wealthier individuals are more progressive than their compatriots is a tricky issue. One factor is education, which is associated with both wealth and progressive views. See this interesting paper by Piketty.

Comment by Stefan_Schubert on How much does performance differ between people? · 2021-03-26T11:09:41.497Z · EA · GW

> However, my memory is that for a while there was some more specific work in psychology that was allegedly identifying properties that predicted team success better than the individual abilities of its members, which then largely didn't replicate.

Woolley et al. (2010) was an influential paper arguing that individual intelligence doesn't predict collective intelligence well. Here's one paper criticising them. I'm sure there are plenty of other relevant papers (I seem to recall one paper providing positive evidence that individual intelligence predicted group performance fairly well, but can't find it now).

Comment by Stefan_Schubert on How much does performance differ between people? · 2021-03-26T00:37:21.350Z · EA · GW

Fwiw, I wrote a post explaining such dynamics a few years ago.

Comment by Stefan_Schubert on Proposed Longtermist Flag · 2021-03-25T10:13:20.001Z · EA · GW

An alternative is to just have the hourglass as a symbol/logo, and not a flag. There is an EA symbol (the lightbulb) but no flag.

Also, one might consider making the hourglass less stylised and dropping the X-risk symbolism. Longtermism isn't intrinsically tied to X-risk. One approach would be to strictly focus on the long time duration, and drop associations with X-risk, space colonisation, and so on. It depends on how one conceives of longtermism.

Comment by Stefan_Schubert on Some quick notes on "effective altruism" · 2021-03-24T23:17:21.764Z · EA · GW

I agree that changing names is hard and costly (you can't do it often), something that definitely should be taken into account.

Comment by Stefan_Schubert on Some quick notes on "effective altruism" · 2021-03-24T19:44:25.516Z · EA · GW

To some extent, I think that what those who dislike effective altruism dislike isn't that term, but rather the set of ideas it expresses. As such, replacing it with another term that's supposed to express broadly the same set of ideas (like "priorities" or "global priorities") might make less of a difference than one might think at first glance (though it likely makes some difference).

What might make a greater difference, for better or worse, is choosing a term that expresses a quite different set of ideas. E.g. I think that people have substantially different reactions to the term "longtermism".

Comment by Stefan_Schubert on Proposed Longtermist Flag · 2021-03-24T19:29:50.249Z · EA · GW

Another consideration is that one may want the flag or symbol to have relatively direct temporal associations (one way or the other), since longtermism concerns time. It seems to me that Ryan's suggestion doesn't have that; at least not very directly - it's more about us being small relative to the vastness of the universe, which is something spatial rather than temporal.

Greg's suggestion has stronger and more direct temporal associations, I'd say.

Generally, it's of course not very straightforward to represent something temporal visually.

Comment by Stefan_Schubert on Proposed Longtermist Flag · 2021-03-24T12:10:28.639Z · EA · GW

To me it seems that longtermism is a quite simple idea. In a relevant sense it's just one idea or value. And it seems to me that a longtermist flag should capture or express that simplicity. Therefore, I might favour a flag with just one symbol and two colours, or so. 

That's similar to the utilitarian flag. Utilitarianism is simple, and the flag is correspondingly simple (or broadly so).

Another example of correspondence between the simplicity/complexity of the flag and the values it expresses is the French Tricolour. One interpretation of it (not the only one, but let's ignore that) is that the three colours stand for Liberty, Equality, and Brotherhood.

Comment by Stefan_Schubert on Name for the larger EA+adjacent ecosystem? · 2021-03-19T01:26:12.877Z · EA · GW

My sense is that the forecasting community overlaps more with the PEARL communities than, e.g., the fact-checking community does.

Comment by Stefan_Schubert on Name for the larger EA+adjacent ecosystem? · 2021-03-18T16:11:37.900Z · EA · GW

Another adjacent community you might want to mention is the forecasting community.

Comment by Stefan_Schubert on Is Democracy a Fad? · 2021-03-16T17:15:10.449Z · EA · GW

That's an interesting consideration.

I just came across a paper that argued that prehistoric hunter-gatherers likely on average lived in less egalitarian societies than previously thought (though there was substantial variation).

> Many researchers assume that until 10-12,000 years ago, humans lived in small, mobile, relatively egalitarian bands composed mostly of kin. This “nomadic-egalitarian model” informs evolutionary explanations of behavior and our understanding of how contemporary societies differ from those of our evolutionary past. Here, we synthesize research challenging this model and propose an alternative, the diverse histories model, to replace it. We outline the limitations of using recent foragers as models of Late Pleistocene societies and the considerable social variation among foragers commonly considered small-scale, mobile, and egalitarian. We review ethnographic and archaeological findings covering 34 world regions showing that non-agricultural peoples often live in groups that are more sedentary, unequal, large, politically stratified, and capable of large-scale cooperation and resource management than is normally assumed. These characteristics are not restricted to extant Holocene hunter-gatherers but, as suggested by archaeological findings from 27 Middle Stone Age sites, likely characterized societies throughout the Late Pleistocene (until c. 130 ka), if not earlier. These findings have implications for how we understand human psychological adaptations and the broad trajectory of human history.

See also this Twitter thread and this Aeon article. I don't know what the consensus of the field is, however.

Comment by Stefan_Schubert on What Makes Outreach to Progressives Hard · 2021-03-16T13:18:25.479Z · EA · GW

There might be a risk that some view the (very) long-run future as a "luxury problem", and that focusing on that, rather than short-term problems in your own country, reveals your privilege. (That attitude may be particularly common concerning causes like AI risk.) My guess is that people are less likely to have such an attitude towards someone who is focusing on global poverty. 

Comment by Stefan_Schubert on Why do so few EAs and Rationalists have children? · 2021-03-15T10:18:50.360Z · EA · GW

Of course - I'm not suggesting otherwise. My point is just to say that you can cut other forms of spending as well, just as you can cut spending on raising a child.

Comment by Stefan_Schubert on Why do so few EAs and Rationalists have children? · 2021-03-15T10:03:56.101Z · EA · GW

Sure, there are multiple ways of reducing these costs. But the same could be said about consumption among people who don't have children. So I'd say that raising children is relatively expensive compared with other forms of consumption.

Comment by Stefan_Schubert on Why do so few EAs and Rationalists have children? · 2021-03-14T22:56:06.219Z · EA · GW

One study found that raising a child on average cost £10,822 per year in the UK in 2014. I don't know how they calculated this, however. It looks like they didn't deduct child benefits from the cost, which one presumably should.

Comment by Stefan_Schubert on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2021-03-09T15:17:31.453Z · EA · GW

Figure 2 looks at the top two parties, but the legend to Figure 1 doesn't say it's restricted to the top two parties. And Figure 1 also shows decreasing polarisation in Germany. However, I haven't looked into this research in depth.

Comment by Stefan_Schubert on The ten most-viewed posts of 2020 · 2021-01-20T14:23:27.442Z · EA · GW

Sounds great, and like the right call.

Comment by Stefan_Schubert on The ten most-viewed posts of 2020 · 2021-01-15T01:40:16.633Z · EA · GW

I'd be interested in the total number of pageviews and unique pageviews per year for the whole forum, plus yearly growth (unless there already is a post with that info).

Comment by Stefan_Schubert on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-25T20:32:19.521Z · EA · GW

Thanks for this interesting piece.

> To illustrate, consider that the 10 million Ashkenazi Jews living today descended from a population of just 350 individuals who lived between 600 and 800 million years ago (Commun, 2014).

Should this be "600 to 800 years ago"?

Comment by Stefan_Schubert on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-12-20T16:56:07.709Z · EA · GW

It seems that there haven't been that many major insights in macrostrategy/global priorities research recently.

One potential negative conclusion, which might seem natural, is that recent macrostrategy/global priorities research has been lacking in quality.

But a more positive conclusion is that early macrostrategy/global priorities research had high quality, and that most of the major insights were therefore quickly identified. 

On this view, the recent lack of insights isn't a sign of recent lack of research quality, but rather a sign of early high research quality.

In my view, the positive conclusion is more warranted than the negative conclusion.

Comment by Stefan_Schubert on My mistakes on the path to impact · 2020-12-08T02:15:05.414Z · EA · GW

Thanks, David, for that data.

There was some discussion about the issue of EA intellectual stagnation in this thread (like I say in my comment, I don't agree that EA is stagnating).

Comment by Stefan_Schubert on My mistakes on the path to impact · 2020-12-08T02:09:10.178Z · EA · GW

I guess it depends on what topics you're referring to, but regarding many topics, the bar for being seen as an expert within EA seems substantially higher than 100 hours.

Comment by Stefan_Schubert on Long-Term Future Fund: November 2020 grant recommendations · 2020-12-03T14:11:37.009Z · EA · GW

It says that:

> Richard is also applying for funding from other sources, and will return some of this grant if his other applications are successful.

Comment by Stefan_Schubert on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-27T09:54:06.496Z · EA · GW

> Similarly in the UK, the relatively authoritarian May was replaced with the much more libertarian Johnson.

I'm not sure everyone would agree that that change of leadership was a move in a less authoritarian direction. At any rate, I think the default view would be that it says little about global trends in levels of authoritarianism. Also, May seems quite different from the leaders and parties that Haydn discusses in that section.

I think it would have been better if you had given an argument for this view, instead of just stating it (since it's likely far from obviously true to most readers).

Comment by Stefan_Schubert on List of EA-related email newsletters · 2020-10-23T10:35:49.737Z · EA · GW

The Long-termist's Field Guide, a newsletter from BBC journalist Richard Fisher.

Comment by Stefan_Schubert on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T12:05:49.758Z · EA · GW

Some argue, however, that the rise of partisan TV and radio was helped by the abolition of the FCC fairness doctrine in 1987. That amounts to saying that polarisation was driven at least partly by legal changes rather than by technological innovations.

Obviously media influences public opinion. But the question is whether specific media technologies (e.g. social media vs TV vs radio vs newspapers) cause more or less polarisation, fake news, partisanship, filter bubbles, and so on. That's a difficult empirical question, since all those things can no doubt be mediated to some degree through each of these media technologies.

Comment by Stefan_Schubert on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T09:40:27.852Z · EA · GW

This study looked at nine countries and found that polarisation had decreased in five. The US was an outlier, having seen the largest increase in polarisation. That may suggest that American polarisation is due to US-specific factors, rather than universal technological trends.

Here are some studies suggesting the prevalence of technology-driven echo chambers and filter bubbles may be exaggerated.

Comment by Stefan_Schubert on Open and Welcome Thread: October 2020 · 2020-10-06T23:11:47.262Z · EA · GW

Yeah, this has been discussed before. I think that it should not be possible to strongly upvote one's own comments.

Comment by Stefan_Schubert on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T13:18:57.650Z · EA · GW

Interesting. It may be worth noting how support for consequentialism is measured in this paper.

> In our first study, we use a self-report measure of consequentialist (vs. deontological) thinking to examine participant responses to a range of morally questionable actions (beyond sacrifice), many of which people are likely to encounter in real life (e.g., lying, breaking a promise, engaging in malicious gossip, or breaking the law).
> [Study 2] ... a series of moral dilemmas—analogous to trolley/footbridge problems—that were either congruent or incongruent in terms of their representation of deontological and consequentialist principles.
> [W]e caution that our inferences are warranted for consequentialism, but perhaps not for utilitarianism. We have shown that intellect predicts moral judgments based upon a consideration of consequences (Study 1) and the acceptability of instrumental harm in increasing aggregate welfare (Study 2). Neither of these capture additional aspects of utilitarianism concerned with impartial maximization of the greater good (see Kahane et al., 2018). Future research might thus extend our present focus to explore the role of personality in predicting multiple dimensions of utilitarianism (e.g., impartiality versus instrumental harm; Kahane et al., 2018) and, indeed, different forms of consequentialism (e.g., those grounded in hedonistic versus non-hedonistic conceptions of the good) and deontology (e.g., agent-centered versus patient-centered).

Comment by Stefan_Schubert on RyanCarey's Shortform · 2020-09-30T14:05:42.001Z · EA · GW

A quite obvious point that may still be worth making is that the balance of the considerations will look very different for different people. E.g. if you're able to have a connection with a top university while being a professor elsewhere, that could change the calculus. There could be numerous idiosyncratic considerations worth taking into account.

Comment by Stefan_Schubert on Suggestions for Online EA Discussion Norms · 2020-09-24T19:52:49.847Z · EA · GW

The extraordinary value of ordinary norms by Emily Tench is a bit related. Several of the norms she covers concern good discussions and adjacent issues.

Comment by Stefan_Schubert on Stefan_Schubert's Shortform · 2020-09-19T12:40:29.278Z · EA · GW

Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).

Comment by Stefan_Schubert on Stefan_Schubert's Shortform · 2020-09-19T12:13:14.944Z · EA · GW

On encountering global priorities research (from my blog).


People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.

This can happen for many reasons, and there’s some merit to several of them. First, as global priorities researchers themselves acknowledge, there is much more uncertainty in global priorities research than in most other fields. Second, global priorities research is a young and not very well-established field.

But there are other factors that may make people defer less to existing global priorities research than is warranted. I think I did, when I first encountered the field.

First, people often have unusually strong feelings about global priorities. We often feel strongly for particular causes or particular ways of improving the world, and don’t like to hear that they are ineffective. So we may not listen to rankings of causes that we disagree with.

Second, most intellectually curious people have put some thought into the questions that global priorities research studies, even if they've never heard of the field itself. This is especially so since most academic disciplines have some relation to global priorities research. So people typically have a fair amount of relevant knowledge. That's good in some ways, but can also make them overconfident in their abilities to judge existing global priorities research. Identifying the most effective ways of improving the world requires much more systematic thinking than most people will have done prior to encountering the field of global priorities research.

Third, people may underestimate how much thinking global priorities researchers have done over the past 10-20 years, and how sophisticated that thinking is. This is to some extent understandable, given how young the field is. But if you start to truly engage with the best global priorities research, you realize that researchers have an answer to most of your objections. And you'll discover that they've come up with many important considerations that you've likely never thought of. This was definitely my personal experience.

For these reasons, people who are new to global priorities research may come to dismiss existing research prematurely. Of course, that’s not the only mistake you can make. You can also go too far in the other direction, and be overly deferential. It’s a tricky balance to strike. But in my experience, premature dismissal is relatively common - and maybe especially so among smart and experienced people. So it’s something to watch out for.

Thanks to Ryan Carey for comments.

Comment by Stefan_Schubert on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-18T18:44:42.037Z · EA · GW

I'd say most PhD students don't publish in the Journal of Philosophy or other journals of a similar or better quality (it's the fourth best general philosophy journal according to a poll by Brian Leiter).

This blog post seems to suggest it has an acceptance rate of about 5%.

Comment by Stefan_Schubert on Long-Term Future Fund: September 2020 grants · 2020-09-18T13:48:48.257Z · EA · GW

Yes. Also, regarding this issue:

> you could find someone with a similar talent level ... who could produce many more videos

It seems that the Long-Term Future Fund isn't actively searching for people to do specific tasks, if I understand the post correctly. Instead, it's reviewing applications that come to it. (It's more labour-intensive to do an active search.) That means that it can be warranted to fund an applicant even if it's possible that there could be better candidates for the same task somewhere out there.

Comment by Stefan_Schubert on How do political scientists do good? · 2020-09-15T23:19:02.195Z · EA · GW

Great suggestions.

Tyler John and Will MacAskill also have this paper, "Longtermist Institutional Reform" (in the forthcoming book The Long View, edited by Natalie Cargill).

Comment by Stefan_Schubert on Are social media algorithms an existential risk? · 2020-09-15T21:09:22.688Z · EA · GW

There are some studies suggesting fake news isn't quite the problem some think.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3316768

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3107731

There are also a number of papers which are sceptical of there being pervasive social media "echo chambers" or "filter bubbles".

http://eprints.lse.ac.uk/87402/

https://www.sciencedirect.com/science/article/abs/pii/S0747563216309086

Cf also this recent book by Hugo Mercier, which argues that people are less gullible than many think.

I don't know this literature well and am not quite sure what conclusions to draw. My impression is, however, that some claims of the dangers of fake news on social media are exaggerated.

Cf also my comment on the post on recommender systems, relating to other effects of social media.

Comment by Stefan_Schubert on Stefan_Schubert's Shortform · 2020-09-15T16:06:11.470Z · EA · GW

I've written a blog post on naive effective altruism and conflict.


A very useful concept is naive effective altruism. The naive effective altruist fails to take some important social or psychological considerations into account. Therefore, they may end up doing harm, rather than good.

The standard examples of naive effective altruism are perhaps lying and stealing for the greater good. But there are other and less salient examples. Here I want to discuss one of them: the potential tendency to be overly conflict-oriented. There are several ways this may occur.

First, people may neglect the costs of conflict - that it's psychologically draining for them and for others, that it reduces the potential for future collaboration, that it may harm community culture, and so on. Typically, you enter into a conflict because you think that some individual or organisation is making a poor decision - e.g. one that reduces impact. My hunch is that people often decide to enter the conflict because they exclusively focus on this (supposed) direct impact cost, and don't consider the costs of the conflict itself.

Second, people often have unrealistic expectations of how others will react to criticism. Rightly or wrongly, people tend to feel that their projects are their own, and that others can only have so much of a say over them. They can take a certain amount of criticism, but if they feel that you’re invading their territory too much, they will typically find you abrasive. And they will react adversely.

Third, overconfidence may lead you to think that a decision is obviously flawed, where there’s actually reasonable disagreement. That can make you push more than you should.

*

These considerations don't mean that you should never enter into a conflict. Of course you sometimes should. Exactly when to do so is a tricky problem. All I want to say is that we should be aware that there's a risk that we enter into too many conflicts if we apply effective altruism naively.

Comment by Stefan_Schubert on How have you become more (or less) engaged with EA in the last year? · 2020-09-11T17:36:35.989Z · EA · GW

In contrast to some of the responses here, I think that EA has become more intellectually sophisticated in recent years. It's true that there were many new ideas at the beginning. But it feels a bit unfair to just look at the number of new ideas, given that it's easier at the start - when there's more low-hanging fruit.

Relatedly, it seems to me that EA organisations also are getting more mature and skilled. There are several new impressive organisations, and others have expanded considerably.

Comment by Stefan_Schubert on Asking for advice · 2020-09-09T18:19:17.458Z · EA · GW

Maybe one option would be to both send the Calendly and write a more standard email? E.g.:

"When would suit you? How about Tuesday 3pm or Wednesday 4pm? Alternatively, you could check my Calendly, if you prefer."

Maybe some find that overly roundabout.

Comment by Stefan_Schubert on Asking for advice · 2020-09-09T14:27:58.329Z · EA · GW

I think that for many, it's primarily the act of sending a Calendly link that is off-putting (for social, potentially status-related, reasons), rather than the experience of interacting with the software. My hunch is that people don't have the same aversion to, e.g., Doodle, which is more symmetric (it's not that one person sends their preferences to the other, but everyone lists their preferences). (But you may be different.)

Comment by Stefan_Schubert on Misha_Yagudin's Shortform · 2020-09-07T23:59:45.009Z · EA · GW

Fwiw, I started reading this book but found it long-winded and not carefully argued, so I put it aside.

Comment by Stefan_Schubert on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-03T11:06:05.992Z · EA · GW
> I don't actually believe the naively extrapolated future is the most plausible outcome—more on that later—but I do think if you asked most people what they expect the world to look like a thousand years from now, they'd predict something like it.

In a recent paper, we asked participants:

> Suppose that humanity does not go extinct, but survives for a very long time. How good do you think that the world will become in that future, compared with the present world?

Results:

> Participants believed that provided that humanity will not go extinct, the future is going to be roughly as good as the present (1 = much worse than the present world, 4 = about as good as the present world, 7 = much better than the present world; M = 4.48, SD = 1.57)...

Comment by Stefan_Schubert on Some history topics it might be very valuable to investigate · 2020-08-28T14:02:17.639Z · EA · GW

Thanks, yes I'd be interested.

Comment by Stefan_Schubert on Some history topics it might be very valuable to investigate · 2020-08-28T11:12:09.498Z · EA · GW
"Will humanity achieve its full potential, as long as existential catastrophe is prevented?"
I think an argument in favour of "Yes" is that it might be highly likely that, if we don’t suffer an existential catastrophe, there will be positive trends across the long-term future in all key domains.

That there will be positive trends doesn't necessarily entail that humanity (or some other entities) will achieve its full potential, however. It's possible that the future will be better than the present, without humanity achieving its full potential. And the value difference between such a future and a future where humanity achieves its full potential may be vast.

I agree that there is an historical argument for positive future trends, but it seems that one needs additional steps to conclude that humanity will achieve its full potential.

Comment by Stefan_Schubert on A curriculum for Effective Altruists · 2020-08-28T09:57:58.233Z · EA · GW

Julia Wise provided a list of EA syllabi and teaching materials here.

(RSP = Future of Humanity Institute's Research Scholars Programme.)

Comment by Stefan_Schubert on How can good generalist judgment be differentiated from skill at forecasting? · 2020-08-22T10:44:01.134Z · EA · GW

Cambridge Dictionary defines judgement as:

> the ability to form valuable opinions and make good decisions

Forecasting isn't (at least not directly) about decision-making (cf. instrumental rationality) but just about knowledge and understanding (epistemic rationality).

A bit tangential, but may still be of interest: a recent paper argued that there are two competing standards of good judgement: rationality and reasonableness.

> Normative theories of judgment either focus on rationality (decontextualized preference maximization) or reasonableness (pragmatic balance of preferences and socially conscious norms). ... [L]ay rationality is reductionist and instrumental, whereas reasonableness integrates preferences with particulars and moral concerns.

Comment by Stefan_Schubert on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-21T19:44:41.471Z · EA · GW

It could be good if someone wrote an overview of the growing number of fellowships and scholarships in EA (and maybe also other forms of professional EA work). It could include the kind of info given above, and maybe draw inspiration from Larks' overviews of the AI Alignment landscape. I don't think I have seen anything quite like that, but please correct me if I'm wrong.