AMA: Future of Life Institute's EU Team 2022-01-31T17:14:30.461Z
Effective Altruism Stipend: A Short Experiment by EA Estonia 2020-06-12T10:48:22.428Z
New Report Claiming Understatement of Existential Climate Risk 2019-06-09T16:47:46.592Z
How to Make Short EA Videos? 2019-05-22T09:15:39.457Z
How to Get the Maximum Value Out of Effective Altruism Conferences 2019-04-24T07:57:40.440Z
Bjørn Lomborg About Prioritization on Jordan Peterson's Podcast 2019-03-21T13:05:04.267Z
What Are Some Disagreements in the Area of Animal Welfare? 2019-03-11T14:12:16.332Z
What Courses Might Be Most Useful for EAs? 2019-02-02T09:04:40.640Z
Community Builders, Watch EAG Videos with Your Members 2018-11-10T16:18:31.082Z
Reading group guide for EA groups 2018-03-12T18:59:08.793Z


Comment by Risto Uuk (Risto_Uuk) on AMA: Future of Life Institute's EU Team · 2022-02-03T08:01:59.028Z · EA · GW

Thank you for the questions. Regarding emotions-based advertisement, you might find our recent EURACTIV (a top EU policy media network) op-ed about AI manipulation relevant and interesting: The EU needs to protect (more) against AI manipulation. In it, we invite EU policymakers to expand the definition of manipulation and also consider societal harms from manipulation in addition to individual psychological and physical harms. And here's a bit longer version of that same op-ed. 

Comment by Risto Uuk (Risto_Uuk) on AMA: Future of Life Institute's EU Team · 2022-02-02T10:30:14.212Z · EA · GW

Thank you, these are some really big questions! Most of them are beyond what we work on, so I'm happy to leave these to other people in this community and have them guide our own work. For example, the Centre for Long-Term Resilience published the Future Proof report in which they refer to a survey where the median prediction of scientists is that general human-level intelligence will be reached around 35 years from now. 

I'll try to answer the last question about where our opinions might differ. Many academics and policymakers in the EU probably still don't think much about the longer-term implications of AI; they don't believe that AI progress can have as significant an impact (negative or positive) as we do, or don't think it is reasonable to focus on it right now. That said, I don't think there is necessarily a very big gap between us in practice. For example, many people who are interested in bias, discrimination, fairness, and other issues that are already prevalent can also be concerned about the more general-purpose AI systems that will become available on the market in the future, as these systems can present even bigger challenges and have more significant consequences in terms of bias, discrimination, fairness, etc. In the paper On the Opportunities and Risks of Foundation Models, it was stated that, "Properties of the foundation model can lead to harm in downstream systems. As a result, these intrinsic biases can be measured directly within the foundation model, though the harm itself is only realized when the foundation model is adapted, and thereafter applied."

Comment by Risto Uuk (Risto_Uuk) on AMA: Future of Life Institute's EU Team · 2022-02-02T10:23:49.995Z · EA · GW

Thank you, a lot of great questions. In response to question (3), some of our work focuses on EU member states as well. Because we are a small team, our ability to cover many member states is limited, but hopefully, with the new hire we can do a lot more on this front as well. If you know anybody suitable, please let us know. For example, we have engaged with Sweden, Estonia, Belgium, Netherlands, France, and a few other countries. Right now, the Presidency of the Council of the EU is held by France, next up are Czechia and Sweden, so work at the member state level in these countries is definitely important. 

Comment by Risto Uuk (Risto_Uuk) on AMA: Future of Life Institute's EU Team · 2022-02-02T10:21:46.503Z · EA · GW

Regarding your 2nd question, I think it is an important argument, and it's good that some people are thinking through both the arguments for and against working on EU AI governance. That said, there are many ways for EU AI governance to play a major role regardless of whether the EU is an AI superpower. Some of these are mentioned in the post you referred to, like the Brussels Effect and the excellent opportunities for policy work right now. Other ideas are mentioned in the comments under the post about the EU not being an AI superpower, like the value of experimenting in the EU and its role in the semiconductor supply chain. Personally, I am much better placed to work on EU AI governance than on this type of work in the US, China, or elsewhere in the world. Even if other regions were more important in absolute terms, considering how neglected this space is, I think the EU matters a lot. And many other Europeans would be much better placed to work on this rather than, say, try to become Americans.

Comment by Risto Uuk (Risto_Uuk) on AMA: Future of Life Institute's EU Team · 2022-02-02T10:20:30.460Z · EA · GW

Thank you for the questions. I think that the biggest bottleneck right now is that very few people work on the issues we are interested in (listed here). We are trying to contribute by hiring a new person, but the problems are vast and there's a lot of room for additional people. Another issue is a lack of policy research that considers the longer-term implications while remaining very practical. We are happy that, in addition to the Future of Life Institute, a few other organizations such as the Centre for the Governance of AI and the Centre for Long-Term Resilience are contributing more here or starting to do so. I'm not sure about the next 5-10 years, so I'll leave that to someone else who might have some tentative answers.

Comment by Risto Uuk (Risto_Uuk) on Should you work in the European Union to do AGI governance? · 2022-01-31T19:07:49.358Z · EA · GW

If anyone reading this post thinks that the arguments in favor outweigh the arguments against working on EU AI governance, then consider applying for the EU Policy Analyst role that we are hiring for at the Future of Life Institute: If you have any questions about the role, you can participate in the AMA we are running:

Comment by Risto Uuk (Risto_Uuk) on What is the EU AI Act and why should you care about it? · 2021-09-13T13:22:05.650Z · EA · GW

Thank you for writing this summary!

I wanted to share this new website about the AI Act we have set up together with colleagues at the Future of Life Institute: You can find the main text, annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you'd like to discuss the proposal or have suggestions for the website. We'd like it to be a good resource for the general public but also for people interested in the regulation more closely. 

Comment by Risto Uuk (Risto_Uuk) on Why do EAs have children? · 2021-03-16T13:20:50.455Z · EA · GW

Yeah, I feel that too. My daughter is just 1 year and 9 months old. We are constantly high-fiving and fist-bumping.

Comment by Risto Uuk (Risto_Uuk) on Why do EAs have children? · 2021-03-15T09:10:46.027Z · EA · GW

Because (i) my wife wanted to have a child and I thought it would strengthen our relationship, (ii) I assumed my child was likely to become a happy person and possibly an EA, (iii) I'd potentially have a very close friend for life.

Comment by Risto Uuk (Risto_Uuk) on Database of existential risk estimates · 2021-02-26T15:50:19.031Z · EA · GW

Existential risks are not something they have worked on before, so my project is a new addition to their portfolio. I didn't mention this but I intend to have a section for other risks depending on space. The reason climate change gets prioritized in the project is that arguably the EU has more of a role to play in climate change initiatives compared to, say, nuclear risks. 

Comment by Risto Uuk (Risto_Uuk) on Database of existential risk estimates · 2021-02-24T11:39:16.747Z · EA · GW

Thanks for this database! I'm currently working on a project for the Foresight Centre (a think-tank at the Estonian parliament) about existential risks and the EU's role in reducing them. I cover risks from AI, engineered pandemics, and climate change. For each risk, I discuss possible scenarios, probabilities, and the EU's role. I've found a couple of sources in your database on some of these risks that I hadn't seen before.

Comment by Risto_Uuk on [deleted post] 2020-10-03T14:08:22.361Z

The same is the case with the effective altruism course at the LSE titled Effective Philanthropy: Ethics and Evidence. The reason for that was that the teacher Luc Bovens moved to work for another institution. I don't know about UCL.

Comment by Risto Uuk (Risto_Uuk) on A tool to estimate COVID risk from common activities · 2020-08-30T07:45:07.631Z · EA · GW

It would also be more informative to assess risks of death from COVID-19. A 'micromort' normally stands for a one-in-a-million chance of death, since the word combines 'micro' and 'mortality'. If 1000 μCoV were a thousand-in-a-million chance of death, then engaging in activities with such a risk would be quite reckless indeed; that would be roughly comparable to climbing quite high mountains and doing a couple of base jumps.

I have calculated COVID-19 risks for myself in the context of Estonia, where I currently am. My numbers right now are roughly: risk of getting COVID-19: 10^-4, and risk of dying of COVID-19: 4×10^-6 (about 4 micromorts). These are probably overestimates, as I'm young, healthy, and very cautious, and I'm using nasal swab data rather than antibody data; the latter indicates an infection rate about 10 times larger than the nasal swab data (and hence a death rate in Estonia about 10 times smaller). These numbers are of course smaller in Estonia than in the Bay Area.

Another interesting question here is: what counts as too risky? I think my risk threshold is about that of traveling 10 km by motorbike, which is about 1 micromort. I would engage in such activities once in a while, but in general 1 micromort seems too large for activities that are easily substitutable. Can't ride a motorbike for entertainment? Easy, just play some less risky sport and get just as much pleasure.
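The micromort arithmetic above can be made explicit with a few lines of Python. A minimal sketch, using the figures from my comment as inputs (the ~10^-4 infection probability and the implied ~4% death rate conditional on infection are my rough personal estimates, not authoritative data):

```python
def micromorts(p_event: float, p_death_given_event: float) -> float:
    """One micromort = a one-in-a-million chance of death."""
    return p_event * p_death_given_event * 1_000_000

# Rough figures from the comment: ~1e-4 chance of infection, and a death
# rate of ~4% conditional on infection, giving ~4e-6 chance of death.
risk = micromorts(1e-4, 0.04)
print(f"{risk:.1f} micromorts")  # -> 4.0 micromorts
```

The same function works for comparison activities, e.g. `micromorts(1.0, 1e-6)` for a 10 km motorbike trip treated as 1 micromort.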

Comment by Risto Uuk (Risto_Uuk) on Should you do a PhD? · 2020-07-26T11:56:33.747Z · EA · GW

"You should not do a PhD just so you can do something else later. Only do a PhD if this is something you would like to do, in itself."

Why do you think this is the case? For example, I have noticed in my search that nearly 60% of people in research roles at European think-tanks hold PhDs, and that proportion is greater for senior research roles and more academic think-tanks. This does not account for the unmeasurable benefits of a PhD, such as being taken more seriously in policy discussions. Isn't it possible that 4-6 years of PhD work gives you more impressive career capital than the same amount of time progressing from more junior roles to slightly more senior ones?

Comment by Risto Uuk (Risto_Uuk) on Effective Altruism Stipend: A Short Experiment by EA Estonia · 2020-06-20T08:36:45.488Z · EA · GW

This post was actually first published in 2018, but for some reason I wasn't able to share the link with some people, as it showed up as a draft. I resubmitted it, and it has received some interest from the community again.

I think that the longer term evidence right now indicates that the impact of this was lower than the short-term evidence made me anticipate. I expected to have several highly engaged new members in the EA community longer term, but currently it appears that these people are only weakly involved with effective altruism. Hence, I would say that the cost-effectiveness of this project was not high. But there are some indirect effects this might have had related to marketing and reaching more people indirectly, which I don't have a good understanding of.

Comment by Risto Uuk (Risto_Uuk) on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-04T13:00:40.399Z · EA · GW

Why did you decide to move from Global Priorities Institute to 80,000 Hours?

Comment by Risto Uuk (Risto_Uuk) on Local EA Group Organizers Survey 2019 · 2019-11-17T14:04:51.095Z · EA · GW

Estonia actually has two local groups, one in Tallinn and the other in Tartu.

Comment by Risto Uuk (Risto_Uuk) on Understanding and evaluating EA's cause prioritisation methodology · 2019-10-16T21:03:55.205Z · EA · GW

Do you think there's more useful research to be done on this topic? Are there any specific questions you think researchers haven't yet answered sufficiently? What are the gaps in the EA literature on this?

Comment by Risto Uuk (Risto_Uuk) on Keeping everyone motivated: a case for effective careers outside of the highest impact EA organizations · 2019-08-23T14:01:53.022Z · EA · GW

It actually might be more complicated than what you say here, alexherwix. If a research analyst role at the Open Philanthropy Project receives 800+ job applications, then you might reasonably think that it's better for you to continue building a local community even if you were a great candidate for that option.

In addition, for the reasons you mention, every possible local community builder might be constantly looking for new job options in the EA community, making someone who doesn't do that a highly promising candidate. Furthermore, being a community builder is actually a surprisingly difficult job.

Another consideration is that preparation and training for a specific job at an EA organization and gaining skills leading a local group might be quite different. It might suit you more to do tasks related to community building in a local context.

Comment by Risto Uuk (Risto_Uuk) on Keeping everyone motivated: a case for effective careers outside of the highest impact EA organizations · 2019-08-23T13:56:49.687Z · EA · GW

This is slightly relevant: in a recent 80,000 Hours blog post, they suggest the following for people applying for EA jobs:

We generally encourage people to take an optimistic attitude to their job search and apply for roles they don’t expect to get. Four reasons for this are that, i) the upside of getting hired is typically many times larger than the cost of a job application process itself, ii) many people systematically underestimate themselves, iii) there’s a lot of randomness in these processes, which gives you a chance even if you’re not truly the top candidate, and iv) the best way to get good at job applications is to go through a lot of them.
Comment by Risto Uuk (Risto_Uuk) on Strategy-development for EA groups: Lessons learned from EA Denmark · 2019-08-19T19:37:24.179Z · EA · GW

You can decide it by asking who wants to be the leader of a particular activity (the way that your group did) as well as inquire what resources and capital people have available to successfully lead that activity. Sometimes people have the motivation to lead activities, but they don't actually have the necessary resources to do it successfully yet.

Agreed on the failure-mode thinking. I guess if you only take the best-case scenario into consideration, then you forget to assess the risks involved. On the other hand, I'm not sure it should be included in this initial brainstorming session or later when a possible activity is selected as a top candidate.

Comment by Risto Uuk (Risto_Uuk) on Strategy-development for EA groups: Lessons learned from EA Denmark · 2019-08-17T19:06:38.584Z · EA · GW

So here are some of the main takeaways from this for me:

  • Involve the main volunteers/group members in the strategy development process.
  • Use the strategy template made available by CEA.
  • Share EA Denmark's list of project ideas with other community builders.

We recently had a several-hour strategy meeting. I can attest that when community members participate in developing the strategy, they understand better what's going on and feel more motivated, since they are now responsible for the vision. And they can come up with wonderful ideas that you hadn't thought of!

We have also used a simple three-dimensional thinking tool for deciding which projects/activities to focus on. Every participant scores activities on some scale according to how many resources the activity requires, what the best outcome it could lead to is, and how high the personal fit of the leader is for that particular activity.
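The scoring tool described above could be sketched roughly as follows. The 1-5 scale, the example activities, and the aggregation rule (average each dimension across participants, then sum) are illustrative assumptions, not something our actual process prescribes:

```python
from statistics import mean

# Each participant scores every activity on three dimensions (1-5):
# low resource cost (higher = cheaper), best-case outcome, and leader fit.
scores = {
    "reading group": [(4, 3, 5), (5, 3, 4)],  # (resources, outcome, fit) per participant
    "career event":  [(2, 5, 3), (3, 4, 3)],
}

def total(activity_scores):
    # Average each dimension across participants, then sum the three averages.
    return sum(mean(dim) for dim in zip(*activity_scores))

ranking = sorted(scores, key=lambda a: total(scores[a]), reverse=True)
print(ranking)  # -> ['reading group', 'career event']
```

In practice one might also weight the dimensions differently, e.g. giving personal fit more weight than resource cost.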

Comment by Risto Uuk (Risto_Uuk) on Latest EA Updates for July 2019 · 2019-07-29T05:58:29.795Z · EA · GW

Great overview as always. I think Open Philanthropy Project's Funding for Study and Training Related to AI Policy Careers should be up here as well:

This program aims to provide flexible support for individuals who want to pursue or explore careers in AI policy (in industry, government, think tanks, or academia) for the purpose of positively impacting eventual societal outcomes from “transformative AI,” by which we mean potential future AI that precipitates a transition at least as significant as the industrial revolution ...
Comment by Risto Uuk (Risto_Uuk) on How Europe might matter for AI governance · 2019-07-14T18:08:13.378Z · EA · GW

I think this accusation is uncalled for. There are more statistics in the report I linked to, including things like citation impact. But a comprehensive overview of European AI research is, of course, very welcome.

Comment by Risto Uuk (Risto_Uuk) on How Europe might matter for AI governance · 2019-07-13T06:38:12.218Z · EA · GW

For what it's worth, according to Artificial Intelligence Index published in 2018:

Europe has consistently been the largest publisher of AI papers — 28% of AI papers on Scopus in 2017 originated in Europe. Meanwhile, the number of papers published in China increased 150% between 2007 and 2017. This is despite the spike and drop in Chinese papers around 2008.

(I'd post the graphs here, but I don't think images can be inserted into comments.)

Comment by Risto Uuk (Risto_Uuk) on Advice for an Undergrad · 2019-07-03T06:09:42.105Z · EA · GW

Here's an article by 80,000 Hours literally titled "Advice for undergraduates". It does not answer all of your questions, but hopefully it helps a little bit.

Comment by Risto Uuk (Risto_Uuk) on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T10:25:57.095Z · EA · GW

William MacAskill says the following in a chapter in The Palgrave Handbook of Philosophy and Public Policy:

As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good.

But then he continues to highlight various normative commitments, which indicate that it is, in addition to being a question, an ideology:

The project is: • Maximizing. The point of the project is to try to do as much good as possible. • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models. • Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals. • Impartial. Everyone’s welfare is to count equally.
Comment by Risto Uuk (Risto_Uuk) on [Link] Ideas on how to improve scientific research · 2019-06-21T08:09:44.642Z · EA · GW

Open Philanthropy Project's link doesn't work.

Comment by Risto Uuk (Risto_Uuk) on Effective Altruism London Landscape · 2019-05-18T05:45:02.764Z · EA · GW

Thank you for writing this! This is a useful overview of active groups for me, because I intend to move to London in September to study at LSE and now need to think about ways to engage with the community there.

Comment by Risto Uuk (Risto_Uuk) on EA Still Needs an Updated and Representative Introductory Guidebook · 2019-05-12T14:32:03.579Z · EA · GW

In addition, what do you think should be updated in Doing Good Better?

Comment by Risto Uuk (Risto_Uuk) on EA Still Needs an Updated and Representative Introductory Guidebook · 2019-05-12T14:30:57.969Z · EA · GW

Your link referring to bdixon and climate change leads to Joey's post "Problems with EA representativeness and how to solve it". Can you share the post that discusses how Doing Good Better appears to underrate the degree of warming of climate change?

Comment by Risto Uuk (Risto_Uuk) on EA Research Organizations Should Post Jobs on · 2019-05-03T20:45:42.189Z · EA · GW

I found the part about philosophers being well-suited to many aspects of EA research especially interesting. You said this:

Contrary to popular stereotypes, philosophers often excel at quantitative thinking. Many philosophy PhDs have an undergraduate background in math or science. For subfields of philosophy like formal epistemology, population ethics, experimental philosophy, decision theory, philosophy of science, and, of course, logic, a strong command of quantitative skills is essential. Even beyond these subfields, quantitative acumen is prized. In analytic philosophy in particular, papers with a lot of math and formalism are more likely to be taken seriously than comparable papers explained informally.

Do you have any data on philosophy PhDs often having an undergraduate background in math or science? I, for example, have chosen a lot of courses in mathematical economics, data analysis, and social science research methodology to support my philosophy degree, but this is very uncommon in my experience. However, this depends a lot on the region, and surely the US and UK differ from continental Europe on this.

Comment by Risto Uuk (Risto_Uuk) on How to Get the Maximum Value Out of Effective Altruism Conferences · 2019-04-25T04:37:21.458Z · EA · GW

Can you expand on 3a and 3b? I guess 3b justifies 3a, but is that all? Watching and discussing a video with your local group appears to me to be more valuable than asking one question at a talk, but I may be missing some important benefits that you are aware of. I would also add that these are not mutually exclusive. I have heard that some people struggle to set aside time to watch talks on their own; that is also something to consider.

Comment by Risto Uuk (Risto_Uuk) on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T08:56:52.388Z · EA · GW

You received almost 100 applications as far as I'm aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn't have time to vet them all. What other reasons did you have for rejecting applications?

Comment by Risto Uuk (Risto_Uuk) on EA London Community Building Lessons Learnt - 2018 · 2019-03-19T18:44:18.976Z · EA · GW
Realising that attendance and events are just part of a community, and potentially not the most important part

Agreed. Research and study groups, for example, seem to be a lot more useful than events. First and foremost, participants commit to longer-term attendance in advance, so you don't need to persuade them to participate every time. I dislike having to personally invite people to events; if they don't come in response to a mere FB invitation, I assume they don't care enough about EA.

Regarding attendance, we recently organized a public AI safety event that was attended by roughly 80 people. When a former community builder heard that, he congratulated us, as it sounded like a big success to him. Of course, it was nice to have that many people come to the event, but compared to some more in-depth projects we had going on, I didn't feel as accomplished.

That said, how do you get feedback from your community with respect to online-based content? Your newsletter, for example, could easily be much more valuable than events and even other in-person activities, but as far as I'm aware very few people actually communicate how much value they receive to authors and content creators. For instance, you probably didn't know this but I find useful content for EA Estonia's newsletter every month from EA London's newsletter.

Comment by Risto Uuk (Risto_Uuk) on The case for building expertise to work on US AI policy, and how to do it · 2019-02-07T06:23:15.128Z · EA · GW

If you're a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)

What do you think about similar work within the European Union? Could it potentially be a high-impact career path for those who are not American?

Comment by Risto Uuk (Risto_Uuk) on EA Boston 2018 Year in Review · 2019-02-06T06:28:32.242Z · EA · GW

This post increased my interest in visiting the Boston area. Unfortunately, I cannot come to the EAGx this year, but perhaps another time. I'm quite surprised that you'd have the issue of brain drain as the area seems to be a very impressive place with top universities, lots of people interested in EA, and even a few great EA-aligned organizations. Do you have other ideas besides a full-time paid community builder to improve that?

Comment by Risto Uuk (Risto_Uuk) on You Should Write a Forum Bio · 2019-02-01T06:55:17.113Z · EA · GW

Nice idea. I wrote my bio in the third person like you did, even though on my website I have it in the first person: Usually, I feel weird about the third-person narrative when I'm the one talking about myself, but it feels right for the forum.

Comment by Risto Uuk (Risto_Uuk) on Cost-Effectiveness of Aging Research · 2019-01-31T12:09:38.995Z · EA · GW
As an application of this model, the Global Priorities Project estimates that research into the neglected tropical diseases with the highest global DALY burden (diarrheal diseases) could be 6x more cost-effective, in terms of DALYs per dollar, than the 80,000 Hours recommended top charities.

What are 80,000 Hours' recommended top charities? I think you mean some other organization here.

Comment by Risto Uuk (Risto_Uuk) on What has Effective Altruism actually done? · 2019-01-19T20:13:32.379Z · EA · GW

It would be nice if someone updated it regularly and had a note at the top of the page about when it was last updated. For example, according to Julia Wise there were 3855 Giving What We Can members at the beginning of 2019, whereas the number here is outdated at 1800+ members.

Comment by Risto Uuk (Risto_Uuk) on A guide to effective altruism fellowships · 2019-01-19T19:44:38.911Z · EA · GW

Let’s face it. Long-termism is not very intuitively compelling to most people when they first hear of it. Not only do you have to think in very consequentialist terms, you also have to be extremely committed to acting and prioritizing on the basis of fairly abstract philosophical arguments. In my view, that’s just not very appealing - sometimes even off-putting - if you’ve never even thought in terms of cost-effectiveness or total-view consequentialism before.

I agree. Because of this, the 2nd edition of the EA handbook doesn't seem appealing at all as an EA introduction. I don't want to hijack this thread, but along these lines, what do you think about the following content as an introduction to effective altruism?:

Week 1:

  • MacAskill's intro: “How can you do the most good?” (14 pages)
  • MacAskill's 1st chapter: “Just how much can you achieve?” (11 pages)
  • Addition: “Famine, Affluence, and Morality”: (15 pages)

Week 2:

  • MacAskill's 2nd chapter: “How many people benefit, and by how much?” (14 pages)
  • MacAskill's 3rd chapter: “Is this the most effective thing you can do?” (12 pages)
  • Addition: “How can we do the most good for the world”: (12 min)

Week 3:

  • MacAskill's 4th chapter: “Is this area neglected?” (12 pages)
  • MacAskill's 5th chapter: “What would have happened otherwise?” (12 pages)
  • Addition: “Prospecting for Gold”:

Week 4:

  • MacAskill's 6th chapter: “What are the chances of success and how good would success be?” (21 pages)
  • Addition: Introductions to expected value theory:

Week 5:

  • MacAskill's 7th chapter: “What charities make the most difference?” (24 pages)
  • Addition: Read one review from here: and skim GiveWell's methodology:

Week 6:

  • MacAskill's 8th chapter: “How can consumers make the most difference?” (19 pages)
  • Addition: “Conscious consumerism is a lie. Here’s a better way to help save the world”:

Week 7:

  • MacAskill's 9th chapter: “Which careers make the most difference?” (32 pages)
  • Addition: Explore 80,000 Hours' career guide:

Week 8:

  • MacAskill's 10th chapter: “Which causes are most important?” (17 pages)
  • Addition: Explore the list of the most pressing problems:

Week 9:

  • MacAskill's conclusion: “What should you do right now?” and “The five key questions of effective altruism” (8 pages)
  • Addition: Reflect on the stipend

We are about to run our stipend with this content in mind. Compared to your reading list, I feel that the content we have planned is more beginner-level. What do you think? What seems to be missing in terms of EA basics?

Comment by Risto Uuk (Risto_Uuk) on A guide to effective altruism fellowships · 2019-01-19T19:44:17.258Z · EA · GW

Thank you for writing this summary!

  • Altruism: Passionate about helping others
  • Effectiveness: Ambitious in their altruism, with a drive to do as much good as they can. Potential to be aligned with the central tenets of EA.
  • Potential: Excited to dedicate their career to doing good or to donate a significant portion of their income to charity
  • Open-mindedness: Open-minded and flexible, eager to update their beliefs in response to persuasive evidence
  • Enthusiasm: Willing and able to commit ~3-4 hours per week
  • Fit: How good a fit are they with the fellowship format? Will they be good in discussions? Will they do good work for the Impact Challenge?

I appreciate that you explicitly listed all the traits you were looking for in the applicants. We have done that more intuitively, but it's very useful to make them explicit. These traits align well with my intuitions for what we also look for in applicants.

Comment by Risto Uuk (Risto_Uuk) on The Global Priorities of the Copenhagen Consensus · 2019-01-08T21:09:26.893Z · EA · GW

I subscribe to CCC's newsletter and these are the latest stories in the newsletters:

  • The climate debate needs less hyperbole and more rationality
  • The media got it wrong on the new US climate report
  • Don't panic over U.N. climate change report
  • Don't blame global warming for hurricane damages
  • The Paris climate treaty fails to fight global warming

I just wanted to provide more context on what they are focusing on.

Comment by Risto Uuk (Risto_Uuk) on EA syllabi and teaching materials · 2019-01-03T14:16:28.498Z · EA · GW

If you were to organize an effective altruism course around William MacAskill's book Doing Good Better, what additional readings would you give to students to fill in the holes of the book?

Comment by Risto Uuk (Risto_Uuk) on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T13:53:44.529Z · EA · GW

This might be slightly off-topic, but you may have some insight into it. If a donor donates money to, for example, global health, they can find pretty concrete numbers about impact based on GiveWell's estimates or information from specific organizations such as AMF. How can someone donating money to Meta justify those donations quantitatively and via concrete indicators?

Comment by Risto Uuk (Risto_Uuk) on EA Concepts: Inside View, Outside View · 2018-10-03T09:36:27.093Z · EA · GW

1. I prefer "we".

2. I'm not sure what kind of references you are supposed to add here. Should they be accessible to everyone or can books, etc. be included as well? If the latter, then I'd add Daniel Kahneman's book Thinking Fast and Slow to the list. There are good parts about these concepts in the book. (e.g. Kindle version location 4220)

3. To me, it seems that the definitions of "inside view" and "outside view" are not clear enough, whereas the examples are very good. had nice slides about this, however, I'm not able to find their material to share here. Anyway, their definitions and explanations are the following:

  • Inside view: focus on the unique qualities of the case at hand.
  • Outside view: connect the case at hand to a reference class and rely on base rate information.
  • Reference classes refer to similar events from the past.
  • Base rates are relative frequencies of an outcome given a defined set. For example, the chance of selecting a red card from a deck of cards is 50%.
Comment by Risto Uuk (Risto_Uuk) on RPTP Is a Strong Reason to Consider Giving Later · 2018-10-03T08:54:36.487Z · EA · GW

You didn't mention anything about (a) the risk of becoming less altruistic in the future, (b) increasing your motivation to learn more about effective giving by giving now, and (c) supporting the development of the culture of effective giving. How much the giver learns over time isn't the only consideration. I'm referring to this forum post by listing these other considerations:

Comment by Risto Uuk (Risto_Uuk) on Ten Commandments for Aspiring Superforecasters · 2018-04-27T11:32:00.126Z · EA · GW

I feel that the book contains too much fluff, and even these commandments, despite appearing sensible, lack enough specificity to be actionable. Does anyone have other book recommendations or guidelines for improving one's forecasting and probabilistic thinking? At the end of the day, it's important to actually practice forecasting and thinking probabilistically, but specific guidance on how to do that would be useful. E.g. how do you actually determine 40/60 versus 45/55, or even 43/57, probabilities?
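One standard practice (a common tool from the forecasting literature, not something the commandments spell out) is to record your forecasts as probabilities and score them against outcomes with the Brier score, where lower is better and 0 is a perfect record. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.4, 0.45, 0.57, 0.9]  # your stated probabilities (illustrative)
outcomes  = [0,   1,    1,    1]    # what actually happened
print(round(brier_score(forecasts, outcomes), 3))  # -> 0.164
```

Tracking this score over many forecasts is one concrete way to see whether distinctions like 40/60 vs. 43/57 are actually improving your accuracy.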

Comment by Risto Uuk (Risto_Uuk) on Reading group guide for EA groups · 2018-04-26T05:18:28.176Z · EA · GW

Thanks for putting it on EA Groups Resource Map! I think it'd be better if the link was to the Google Docs document rather than to this forum post, because we might edit it in the future.

Comment by Risto Uuk (Risto_Uuk) on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:28:47.224Z · EA · GW

If someone can't apply right now due to other commitments, do you expect there to be new roles for generalist research analysts next year as well? What are the best ways one could make oneself a better candidate meanwhile?