Posts

Impact of Charity Evaluations on Evaluated Charities' Effectiveness 2021-01-25T13:24:59.265Z
Is Earth Running Out of Resources? 2021-01-02T20:08:59.452Z
Requests on the Forum 2020-12-22T10:42:51.574Z
What are some potential coordination failures in our community? 2020-12-12T08:00:25.858Z
On Common Goods in Prioritization Research 2020-12-10T10:25:10.275Z
Does Qualitative Research improve drastically with increasing expertise? 2020-12-05T18:28:55.162Z
Summary of "The Most Good We Can Do or the Best Person We Can Be?" - a Critique of EA 2020-11-28T07:41:28.010Z
Proposal for managing community requests on the forum 2020-11-24T11:14:18.168Z
Prioritization in Science - current view 2020-10-31T15:22:07.289Z
What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? 2020-10-21T04:44:57.757Z
Criteria for scientific choice I, II 2020-07-29T10:21:30.000Z
Small Research Grants Program in EA Israel - Request for feedback 2020-07-21T08:35:16.729Z
A bill to massively expand NSF to tech domains. What's the relevance for x-risk? 2020-07-12T15:20:21.553Z
EA is risk-constrained 2020-06-24T07:54:09.771Z
Workshop on Mechanism Design requesting Problem Pitches 2020-06-02T06:28:04.538Z
What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z
What is the size of the EA community? 2019-11-19T07:48:31.078Z
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z
Off-Earth Governance 2019-09-06T19:26:26.106Z
edoarad's Shortform 2019-08-16T13:35:05.296Z
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z

Comments

Comment by edoarad on Impact of Charity Evaluations on Evaluated Charities' Effectiveness · 2021-01-27T17:51:57.604Z · EA · GW

Thank you!

I've searched and found this post describing it. The summary:

Evidence Action is terminating the No Lean Season program, which was designed to increase household food consumption and income by providing travel subsidies for seasonal migration by poor rural laborers in Bangladesh, and was based on multiple rounds of rigorous research showing positive effects of the intervention. This is an important decision for Evidence Action, and we want to share the rationale behind it.  

Two factors led to this, including the disappointing 2017 evidence on program performance coupled with operational challenges given a recent termination of the relationship with our local partner due to allegations of financial improprieties. 

Ultimately, we determined that the opportunity cost for Evidence Action of rebuilding the program is too high relative to other opportunities we have to meet our vision of measurably improving the lives of hundreds of millions of people. Importantly, we are not saying that seasonal migration subsidies do not work or that they lack impact; rather, No Lean Season is unlikely to be among the best strategic opportunities for Evidence Action to achieve our vision.

Comment by edoarad on Everyday Longtermism · 2021-01-27T09:07:16.177Z · EA · GW

In this 2017 post, Emily Tench talks about "The extraordinary value of ordinary norms", which (I think) she wrote during an internship at CEA, where she got feedback and comments from Owen and others.

Comment by edoarad on How EA Philippines got a Community Building Grant, and how I decided to leave my job to do EA-aligned work full-time · 2021-01-27T07:20:11.595Z · EA · GW

I love that you decided to take a well-thought-out career risk to work on something more meaningful and impactful. Thanks for doing that, and thanks for sharing it in detail! 

I may have missed it, but I'm curious about how you view the career capital you gain from working on EA Philippines. Do you think that this might allow you to have additional Plan Bs outside of direct EA work?

Comment by edoarad on Why "cause area" as the unit of analysis? · 2021-01-26T14:40:25.938Z · EA · GW

(thanks! fixed) 

Comment by edoarad on Khorton's Shortform · 2021-01-26T11:56:28.211Z · EA · GW

Thanks, it's good to hear that enough people seem to be working on it :)

If you have some notes on it that you can share, it would be nice if you could collect them and add them to a post together with these shortform posts, so that they could be tagged and be more discoverable 🙂 (no need to edit anything, and even this bottom line seems important)

Comment by edoarad on Why "cause area" as the unit of analysis? · 2021-01-26T11:47:50.687Z · EA · GW

A contrasting approach is to choose the next steps in a career based on opportunities rather than causes, as Shay wrote:

Another important point that I wish to emphasize is that I was looking for promising options or opportunities, rather than promising cause areas. I believe that this methodology is much better suited when looking at the career options of a single person. That is because while some cause area might rank fairly low in general, specific options which might be a great fit for the person in question could be highly impactful (for example, climate change and healthcare [in the developed world] are considered very non-neglected in EA, while I believe that there are promising opportunities in both areas). That said, it surely is natural to look for specific options within a promising cause area.

Comment by edoarad on Why "cause area" as the unit of analysis? · 2021-01-26T11:45:13.455Z · EA · GW

Yea, sorry for trying to rush it and not being clear. The main point I took from what you said in the comment I replied to was something like "Early on in one's career, it is really useful to identify a cause area to work in and over time to filter the best tasks within that cause area". I think that it might be useful to understand better when that statement is true, and I gave two examples where it seems correct.

I think that there are two important cases where that is true:

  1. If the cause area is one where working in it generally improves one's understanding of the whole area and one's ability to identify and shift direction to the most promising tasks later on. 
    1. For example, Animal Welfare might arguably not be such a cause because it is composed of at least three different clusters which might not intersect much in their related expertise and reasons for prioritization (alternative proteins, animal advocacy and wild animal welfare). However, these clusters might score well on that factor as sub-cause areas. 
  2. If it is generally easy to find promising tasks within that cause area. 
    1. Here I mostly agree with the overlapping bell curves picture, but want to explicitly point out that we are talking about task-prioritization done by novices. 

Comment by edoarad on Why "cause area" as the unit of analysis? · 2021-01-26T10:32:00.053Z · EA · GW

This seems to be true if it is possible to gradually grow within a cause area, or if different tasks within a promising cause area are generally good. This might lead to a good working definition of cause areas.

Comment by edoarad on Why "cause area" as the unit of analysis? · 2021-01-26T10:28:49.737Z · EA · GW

I really agree with this kind of distinction. It seems to me that there are several different kinds of properties by which to cluster interventions, including:

  1. Type of work done (say, Political Advocacy)
  2. Instrumental subgoals (say, Agriculture R&D (which could include supporting work, not just research)). (I'm not sure if it's reasonable to separate these from cause areas as goals)
  3. Epistemic beliefs (say, interventions supported by RCTs for GH&D)

(It seems harder than I thought to think about different ways to cluster. Absent contrary arguments, I might propose defining intervention areas by the type of work done)

Comment by edoarad on Khorton's Shortform · 2021-01-26T07:42:09.575Z · EA · GW

Do you have any updates here?

Comment by edoarad on Materials regarding RCTs and SWB · 2021-01-23T17:07:24.483Z · EA · GW

Happier Lives Institute has the sequence "Measuring Happiness", and conducted a meta-analysis on SWB outcomes of cash transfers.

Regarding how to run an RCT, I searched J-PAL (which is amazing) and found this list of resources on RCT execution (which looks great and probably contains what you want).

Comment by edoarad on Megaproject Management · 2021-01-21T20:22:32.518Z · EA · GW

Of course!  :)

Comment by edoarad on Megaproject Management · 2021-01-21T06:01:09.303Z · EA · GW

Thank you for writing this! I'll also throw in the EconTalk episode with Flyvbjerg on the topic 🙂

Comment by edoarad on Training Bottlenecks in EA (professional skills) · 2021-01-19T12:30:00.019Z · EA · GW

Thank you, love it! 

You might be interested in Readwise (which can integrate with Pocket, Kindle, and others) - it collects pages and highlights and has a tagging system. Also, it has an automatic system for spaced repetition/recall.

Comment by edoarad on Aidan O'Gara's Shortform · 2021-01-19T04:30:11.296Z · EA · GW

Ah! Richard Ngo has just written something related to the CAIS scenario :)

Comment by edoarad on Aidan O'Gara's Shortform · 2021-01-19T04:23:51.267Z · EA · GW

I like the intuitive analysis of the no-takeoff scenario, and find that I also haven't really imagined it as a concrete possibility. Generally, I like that you have presented clearly distinct scenarios and that the logic is explicit and coherent. Two thoughts that came to mind:

In the CAIS scenario, I also somehow expect the rapid growth and the delegation of some economic and organizational work to AI to carry some weird risks, something like humanity getting pushed out of the economic ecosystem while many autonomous systems are self-sustaining and stuck in a stupid, lifeless revenue-maximizing loop. I couldn't really pinpoint an x-risk scenario here.

Recursive self-improvement can also happen over long periods of time, not necessarily leading to a fast takeoff, especially if the early gains are much easier than later gains (which might make more sense if we think of AI capability development as resulting mostly from computational improvements rather than algorithmic ones).

Comment by edoarad on Training Bottlenecks in EA (professional skills) · 2021-01-17T20:17:35.806Z · EA · GW

Weirdly not that much off-topic, but I'm curious about what else you are doing to "improve at forming views on difficult amorphous topics"?

Comment by edoarad on How do you balance reading and thinking? · 2021-01-17T14:53:29.930Z · EA · GW

I just want to thank you for taking the time to make this sequence. I think that the format is clear and beautiful and I'm interested to learn more about EA researchers' approach to doing research.

Comment by edoarad on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T19:26:59.515Z · EA · GW

Cool, I'll check it out

Comment by edoarad on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T10:58:31.707Z · EA · GW

Exactly, a private hypothes.is group is one where you only see annotations from members of the same group, and only annotations that were tagged as annotations for that group. 

Definitely agree that doing something like that should be hooked up to the forum, and that it is a bit of a technical challenge. 

I am not sure if engagement is the right metric to use here, though. Not sure if it isn't. I'm also not sure if that's an important point so I'll just keep this in the back of my head and maybe something will come up in the future. 

Comment by edoarad on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T08:29:41.943Z · EA · GW

I think it was Aaron who raised a related suggestion - to add points for discussion of a post in the comment section.

Comment by edoarad on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T08:12:58.445Z · EA · GW

Daydreaming a little bit. 

Imagine that there was an EA Browser that acts just like your favorite browser but also has options for up/downvoting, tagging, and writing comments on any web page.

Imagine all the people in the EA community using that browser as they go through their day, casually upvoting some webpages or writing some comments.

(Imagine there's no spamming.. 🎶)

How would you design a forum feed based on those web annotations then? Probably have some default high bar (or quantity? or perhaps randomly??) on what goes to the main feed, and an option to view all web annotations.

This could be implemented rather easily by building a Chrome add-on (or starting out with a private EA group on https://hypothes.is/ and feeding that into the forum).
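For the hypothes.is route, a minimal sketch of pulling a private group's annotations through their public API might look like the following (the group ID and API token here are placeholders, and actually feeding the results into the forum is left out):

```python
import requests

# Placeholder values - a private EA group on hypothes.is and a personal API token
HYPOTHESIS_GROUP_ID = "YOUR_GROUP_ID"
HYPOTHESIS_API_TOKEN = "YOUR_API_TOKEN"

def fetch_group_annotations(limit=50):
    """Fetch the most recent annotations made in the private group."""
    response = requests.get(
        "https://api.hypothes.is/api/search",
        params={"group": HYPOTHESIS_GROUP_ID, "limit": limit, "sort": "created", "order": "desc"},
        headers={"Authorization": f"Bearer {HYPOTHESIS_API_TOKEN}"},
    )
    response.raise_for_status()
    return response.json()["rows"]

if __name__ == "__main__":
    for annotation in fetch_group_annotations():
        # Each annotation records the annotated page, the free-text comment, and its tags
        print(annotation["uri"], annotation.get("text", ""), annotation.get("tags", []))
```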

I imagine that this would surely be useful if people used something like it, and I don't see major drawbacks. That makes me think that this is a solvable design problem.

Comment by edoarad on Everyday Longtermism · 2021-01-15T07:46:28.369Z · EA · GW

💖

Comment by edoarad on How do you approach hard problems? · 2021-01-11T12:34:53.684Z · EA · GW

Another strategy that comes at the problem from the side is what Tiago Forte of Building a Second Brain calls The Slow Burn approach (9 min audio explanation). It's basically the approach of letting hard and motivating problems flow with you for a long period of time, collecting insights, ideas, resources, and different viewpoints along the way.

Richard Feynman supposedly gave the advice of always keeping in mind 12 favorite questions, and seeing whether anything new that comes up shines a light on any of them:

You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, “How did he do it? He must be a genius!”

In How to Take Smart Notes, the author discusses the Zettelkasten method. It is based on the research method of a prolific social scientist (Niklas Luhmann); something like: have a trusted system for storing and reviewing notes, and engage with whatever you find interesting (keeping everything in the system). Once in a while, some ideas will develop into something coherent which could be published.

[This book] describes how [Luhmann] implemented [the tools of note-taking] into his workflow so he could honestly say: “I never force myself to do anything I don’t feel like. Whenever I am stuck, I do something else.” A good structure allows you to do that, to move seamlessly from one task to another – without threatening the whole arrangement or losing sight of the bigger picture.

Comment by edoarad on Buck's Shortform · 2021-01-11T09:33:42.710Z · EA · GW

I tried searching the literature a bit, as I'm sure that there are studies on the relation between rationality and altruistic behavior. The most relevant paper I found (from about 20 minutes of search and reading) is The cognitive basis of social behavior (2015). It seems to agree with your hypothesis. From the abstract:

Applying a dual-process framework to the study of social preferences, we show in two studies that individuals with a more reflective/deliberative cognitive style, as measured by scores on the Cognitive Reflection Test (CRT), are more likely to make choices consistent with “mild” altruism in simple non-strategic decisions. Such choices increase social welfare by increasing the other person’s payoff at very low or no cost for the individual. The choices of less reflective individuals (i.e. those who rely more heavily on intuition), on the other hand, are more likely to be associated with either egalitarian or spiteful motives. We also identify a negative link between reflection and choices characterized by “strong” altruism, but this result holds only in Study 2. Moreover, we provide evidence that the relationship between social preferences and CRT scores is not driven by general intelligence. We discuss how our results can reconcile some previous conflicting findings on the cognitive basis of social behavior.

Also relevant is This Review (2016) by Rand:

Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games. My meta-analysis was guided by the social heuristics hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is not in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions.

And This Paper (2016) on Belief in Altruism and Rationality claims that 

However, contra our predictions, cognitive reflection was not significantly negatively correlated with belief in altruism (r(285) = .04, p =.52, 95% CI [-.08,.15]).

Here, belief in altruism is a measure of how much people believe that other people act out of care or compassion for others, as opposed to self-interest.

Note: I think that this might be a delicate subject in EA, and it might be useful to be more careful about alienating people. I definitely agree that better epistemics is very important to the EA community and to doing good generally, and that the ties to the rationalist community probably played (and still play) a very important role; in fact, I think that it is sometimes useful to think of EA as rationality applied to altruism. However, many amazing altruistic people have a totally different view on what good epistemics would be (never mind the question of "are they right?"), and many people already involved in the EA community seem to have a negative view of (at least some aspects of) the rationality community, both of which call for a more kind and appreciative conversation.

In this shortform post, the most obvious point where I think that this becomes a problem is the example

For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them. 

This is supposed to be an example of a case where people are not behaving rationally since that would stop them from having fun. You could have used many abstract or personal examples where people in their day-to-day work don't take the time to think something through, seek negative feedback, or update their actions when they (notice that they) update their beliefs.

Comment by edoarad on Quantifying the Value of Evaluations · 2021-01-11T08:15:30.234Z · EA · GW

In the summary, you wrote:

I have greatly upped my estimate of how difficult it is to create really useful assessments

Do you mean useful assessments of evaluations or useful evaluations? 

Comment by edoarad on edoarad's Shortform · 2021-01-10T10:59:58.916Z · EA · GW

Fund projects, people, or organizations? 
A thought that I keep coming back to.

An analysis from Nintil of funding people over projects in academia.

Comment by edoarad on Progress Open Thread: January 2021 · 2021-01-10T08:56:30.969Z · EA · GW

Caution - negative outlook!

The IEA's annual report on access to electricity highlights that the pandemic had a huge negative impact on progress, and raises concerns about the potential for recovery. Furthermore, it predicts that if the relevant SDG policies continue as they are, about 62% of people in sub-Saharan Africa will have electricity (today we are at 48%). It suggests that a further $35 billion per year is needed to achieve worldwide access to electricity by 2030.

Comment by edoarad on Progress Open Thread: January 2021 · 2021-01-10T08:45:03.205Z · EA · GW

I like it, and look forward to seeing what you write on privacy (I'm feeling somewhat conflicted on this subject, and I'm curious about how much we can gain from privacy-preserving computation technologies). I want to encourage you to linkpost these to the forum! Is there a particular reason you aren't doing so?

Comment by edoarad on What does it mean to become an expert in AI Hardware? · 2021-01-09T06:19:17.104Z · EA · GW

Thank you so much for taking the time to publicly write this up! I'd love it if more people who are doing some research for their career planning would post their results.

Comment by edoarad on Practical ethics given moral uncertainty · 2021-01-08T10:59:47.004Z · EA · GW

The second is here (paywalled), and I am not sure what the first one was. If you find it, I can use my moderator privileges to edit the post and fix the links.

Comment by edoarad on Legal Priorities Research: A Research Agenda · 2021-01-07T07:48:14.752Z · EA · GW

This is amazingly comprehensive and I'm glad you took the time to make it accessible to people who are not involved with EA ideas.

Two interesting quotes from a brief glance:

Let us define “pleasure risks (p-risks)” as risks where an adverse outcome would prevent pleasure on an astronomical scale, vastly exceeding all pleasure that has existed on Earth so far

[This references Michael Dickens' post on "Disappointing Futures", and I love the term "p-risks" 😊]

At present, very few people are working on animal law from a longtermist perspective, and very few people are working on longtermism from a multi-species perspective. [...]

We believe this separation between animal law and longtermism is a mistake, in both directions. [...]

As a preliminary matter, a question remains as to whether incremental or fundamental legal change would plausibly do the most good for animals. We believe that the answer to this question, which hinges on many difficult empirical and normative judgments, is highly uncertain. We also doubt that these strategies are mutually exclusive; for instance, some incremental changes for animals might also help make fundamental changes more feasible. Thus, we believe the optimal approach will likely include a mixture of both strategies.

Comment by edoarad on Why EA meta, and the top 3 charity ideas in the space · 2021-01-07T06:16:15.318Z · EA · GW

I love all three ideas and I hope to see them come to life in the coming years :)

Regarding Exploratory altruism, I want to make explicit one (perhaps obvious) failure case - the explored ideas might not be adequately taken up by the community. 

There seem to have been many proposals made by people in the community, but a lack of follow-up with deeper research and action in these fields. Further down the line, Improving Institutional Decision Making has existed as a promising cause area for many years and there are various organizations working within that cause, but only recently has an effort begun to improve coordination and develop a high-level research agenda.

Both of these could be fine - it might make sense that most early ideas are discarded quickly, and it might make sense that a field needs a decade to find a common roof. However, they present further opportunities for meta-work, which might be better than focusing on generating a new cause-X, and might suggest that a lot of value for such an organization could come from better ways of engaging with the community.

A different concern, related to "The Folly of EA Should", is that there could be too much filtering out of cause areas. I think that a setup like CE's funnel from 300 ideas down to the few that are most promising might discourage the community from (supporting people who are) working in weeded-out causes, which could be a problem if we want to allow and support a highly diverse set of worldviews and normative views.

(I'm sure that these kinds of concerns would arise (or perhaps already have) while developing the report further, and when the potential future founders get to work on fleshing out their theory of change more explicitly, but I think that it might be valuable to voice these concerns publicly [and these kinds of ideas are important for me to understand more clearly, so I want to write them up and see the community's reaction])

Comment by edoarad on 10 Habits I recommend (2020) · 2021-01-06T16:10:52.816Z · EA · GW

If you already use Roam and don't intend to use your phone for Anki (which might be a mistake, because the app is fun to use), then Roam Toolkit is great. RemNote looks more promising if you want to structure your learning and memorization with it, but I haven't actually used it.

Comment by edoarad on 10 Habits I recommend (2020) · 2021-01-06T10:40:23.103Z · EA · GW

Re Roam and Anki - Roam Toolkit allows for using spaced repetition in Roam and RemNote is a cool new tool to use the best of both worlds (I had some usability problems with it when I tried it, but they may be solved now). 

Comment by edoarad on Is Earth Running Out of Resources? · 2021-01-05T08:19:51.753Z · EA · GW

That's great, thank you! 

I've found this review (2015) of "critical metals" - roughly, those metals that are most needed and most likely to be in short supply - and this recent review (2020) of studies on likely future (2050) demand for these metals. I'm not that sure whether these would be crucial in their impact on economic growth, even if their supply were limited; what would be the problem with having fewer jet engines?

Regarding solar energy, I haven't checked the calculations or extrapolated to the future, but taking a look at this I feel optimistic. They say that the Sahara desert, for example, would have enough space to supply energy to the whole world 20 times over (although they didn't take into account energy lost in transmission, and naturally it's not that practical).

I'd love it if someone would take a deeper dive into this topic. Toby Ord talks a bit about related issues of resource scarcity in The Precipice (Chapter 4, "Anthropogenic Risks", Section "Environmental Damage") and also thinks further research is needed (from an x-risk point of view, though).

Comment by edoarad on Idea: "SpikeTrain" for lifelogging · 2021-01-05T07:51:35.954Z · EA · GW

Sorry, my comment seems too harsh. The reason I think that this wouldn't be useful is that it is a suggestion/request for someone in the EA community to pretty much build a business around physical QS devices (which probably already exists). If you had written something similar but concluded with specific suggestions of devices and how to use them, it would have been awesome.

I think that posts on QS, and self-improvement generally, would be awesome on the forum if they gave readers ideas on how to improve themselves or their productivity, or if the post writer were looking for an actionable answer to something. It might also be nice if a post simply served as a vehicle to start a conversation around some aspect of QS. This post seems a bit too aimed at persuasion and doesn't generate anything actionable.

Comment by edoarad on Everyday Longtermism · 2021-01-05T05:43:51.284Z · EA · GW

Yea, I don't know. I think that it may even be worthwhile to linkpost every such journal article if you also write your notes on these and cross-link different articles, but I agree that it would be weird. I'm sure that there must be a better way for EA to coordinate on such knowledge building and management.

Comment by edoarad on Everyday Longtermism · 2021-01-04T13:28:57.934Z · EA · GW

Yea, I think the tag is great! I was surprised that I couldn't find a resource from the forum, not that the tag wasn't comprehensive enough :)

It might be nice if someone would collect resources outside the forum and publish each one as a forum linkpost so that people could comment and vote on them and they'd be archived in the forum. 

Comment by edoarad on Everyday Longtermism · 2021-01-04T11:29:53.841Z · EA · GW

Hmm. There are many studies on "friend of a friend" relationships (say, this one on how happiness propagates through the friendship network). I think that it would be interesting to research how moral behaviors or beliefs propagate through friendship networks (I'd be surprised if there isn't a study on the effects of a transition to a vegetarian diet, say). Once we have a reasonable model of how that works, we could make a basic analysis of the impact of such daily actions. (Although I expect some non-linear effects that would make this very complicated)
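As a very rough illustration of what such a basic analysis could look like, here is a toy cascade simulation on a random friendship network (all parameters are made up for illustration and not taken from any study, and this ignores the non-linear effects mentioned above):

```python
import random
import networkx as nx

# Made-up toy parameters, not estimates from any study
N_PEOPLE = 1000      # size of the social network
AVG_FRIENDS = 10     # average number of friends per person
P_ADOPT = 0.05       # chance a friend adopts the behavior after observing it
N_SIMULATIONS = 500

def simulate_cascade(graph, seed):
    """One cascade: the seed adopts a behavior, and each newly exposed friend
    independently adopts it with probability P_ADOPT (one exposure per adopter)."""
    adopted = {seed}
    frontier = [seed]
    while frontier:
        next_frontier = []
        for person in frontier:
            for friend in graph.neighbors(person):
                if friend not in adopted and random.random() < P_ADOPT:
                    adopted.add(friend)
                    next_frontier.append(friend)
        frontier = next_frontier
    return len(adopted) - 1  # others influenced, excluding the seed

graph = nx.erdos_renyi_graph(N_PEOPLE, AVG_FRIENDS / N_PEOPLE)
results = [simulate_cascade(graph, random.randrange(N_PEOPLE)) for _ in range(N_SIMULATIONS)]
print("Average number of others influenced per seed:", sum(results) / len(results))
```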

Comment by edoarad on Everyday Longtermism · 2021-01-04T11:15:03.090Z · EA · GW

Thanks! I didn't see any post under that tag that had the type of argument I had in mind, but I think that this article by Brian Tomasik was what I intended (which I just found via the reading list you linked).

Comment by edoarad on Everyday Longtermism · 2021-01-04T11:04:49.341Z · EA · GW

Gidon Kadosh, from EA Israel, is drafting a post with a suggested pitch for EA :) 

Comment by edoarad on 10 Habits I recommend (2020) · 2021-01-02T20:10:33.621Z · EA · GW

🙂

Comment by edoarad on 10 Habits I recommend (2020) · 2021-01-02T18:52:08.494Z · EA · GW

I liked this list, thank you for posting it!

Inspired, I've downloaded Pocket, and ordered smart lightbulbs to wake up with and a strong light-therapy lamp to use in the morning. 

In EA Israel we have a weekly online co-working session, and sometimes hold them spontaneously. We started by using a Complice room (which I've now opened to everyone). Because some people didn't really like working in pomodoros and preferred video chat, we now have a Jitsi link we use (it is simpler than Zoom/Hangouts because the link is always on and there is no need to register).

Regarding social calls, I've recently started to manage my relationships with more intention. I basically have a list of people I want to keep in contact with or check up on, each one with an associated task scheduled sometime in the future. 

Comment by edoarad on Everyday Longtermism · 2021-01-02T11:29:38.520Z · EA · GW

Love it! And I love the series of posts you have written lately.

I think that the suggestions here, and most of the arguments, should apply to "Everyday EA", which isn't necessarily longtermist. I'd be interested in your thoughts about where exactly we should draw the distinction between everyday longtermist actions and non-longtermist everyday actions.

Some further suggestions:

  1. Be more cooperative. (There are arguments about increasing cooperation, especially from people working on reducing S-risks, but I couldn't find any suitable resource in a brief search)
  2. Take a strong stance against narrow moral circles. 
  3. Have a good pitch prepared about longtermism and EA broadly. Balance confidence with adequate uncertainty.
  4. Have a well-structured methodology for getting interested acquaintances more involved with EA.
  5. Help friends in EA/longtermism more. 
  6. Strengthen relationships with friends who have a high potential to be highly influential in the future.

Comment by edoarad on What are the best places to share one-off opportunities for impact? · 2020-12-29T08:10:29.576Z · EA · GW

I encourage you to post as many of these as you'd like. Similar to what I've written in Requests on the Forum, I think that this could be useful and wouldn't be too demanding of the readers' attention if you use correct tags (we might need to create something like "job opportunity", but for now "Get Involved" and "Community" might be enough) and a clear title. The tags allow people to remove these automatically from their feed.

You should definitely also post in the Facebook group(s) and the subreddit. Perhaps also consider notifying 80k or similar orgs when relevant.

Comment by edoarad on How modest should you be? · 2020-12-28T20:32:52.344Z · EA · GW

Thanks for writing this post. I found it interesting, and I love that you suggest practical takeaways. Overall, my one-line takeaway is something like the one suggested in Michael Plant's comment: "defer to the experts, except those that seem to have poor epistemics or unreasonable object-level beliefs".

It seems to me like the arguments presented in section 2 leave us with a slightly weaker version of the Object-level Reasons Restriction, but still keep us very constrained in our use of object-level considerations.

Let's model experts as having a knowledge base (that includes broad beliefs like "homeopathy can't work" and more detailed facts like particular ways in which serotonin interacts with melatonin) and some level of epistemic quality (how well they can derive new information from their knowledge base). I take your argument to basically be "we should consider their underlying knowledge base when assessing how much we should defer to them, and give a heavy penalty for unreasonable beliefs that relate to our object of inspection".  

  •  An expert who believes in homeopathy has a wrong model of how medicine works. We know this because there is an expert consensus against homeopathy (sort of). This means that their deduction about our statement of interest would potentially be clouded by false facts and intuitions.
    • My point here is that this is not exactly what I'd describe as an object-level claim. Or at least, it is far enough away that we can find a different set of experts to check against, or might be experts in it ourselves (so again, acting from modesty).

Comment by edoarad on A list of EA-related podcasts · 2020-12-27T13:26:42.031Z · EA · GW

Two more podcasts:

Increments by Ben Chugg and Vaden Masrani

Vaden Masrani, a PhD student in machine learning at UBC and Ben Chugg, a research fellow at Stanford Law School, get into trouble arguing about everything except machine learning and law. Coherence is somewhere on the horizon. Love, bribes, suggestions, and hate-mail all welcome at incrementspodcast@gmail.com.

Clearer Thinking by Spencer Greenberg of Spark Wave.

Clearer Thinking is the brand-new podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, or wish you had more deep, intellectual conversations in your life, then we think you'll love this podcast!

Comment by edoarad on What's a good reference for finding (more) ethical animal products? · 2020-12-27T07:37:45.213Z · EA · GW

I'd be very interested if you could share your references. Especially about alpacas, as I've searched a bit and found only this rather horrible PETA footage :(

Also, can you specify exactly what type of reference you are looking for? Is it for clothing products? Only for wild-animal alternatives?

Comment by edoarad on edoarad's Shortform · 2020-12-24T15:45:14.862Z · EA · GW

A recent study shows how correcting misperceived norms leads to behavior change. From the abstract (edited):

Through the custom of guardianship, husbands typically have the final word on their wives’ labor supply decisions in Saudi Arabia. We provide incentivized evidence that the vast majority of young married men in Saudi Arabia privately support women working outside the home, while they substantially underestimate the level of support for women working outside the home by other similar men – even men from their same social setting, such as their neighbors. We then show that randomly correcting these beliefs about others increases married men’s willingness to help their wives search for jobs.

I find that it lends support to efforts to show that giving effectively is a normal behavior, that there are many people who think animal suffering is important, etc.