What do you think about bets of the form "I bet you will switch to using this product after trying it"? 2020-06-15T09:38:42.068Z · score: 8 (4 votes)
Increasing personal security of at-risk high-impact actors 2020-05-28T14:03:29.725Z · score: 18 (10 votes)


Comment by meerpirat on EA Meta Fund Grants – July 2020 · 2020-08-13T08:50:58.407Z · score: 6 (4 votes) · EA · GW

Anonymized, aggregate thoughts sound like the perfect solution, and thanks for the pointers!

Comment by meerpirat on EA Meta Fund Grants – July 2020 · 2020-08-12T19:40:16.305Z · score: 8 (3 votes) · EA · GW

Thanks for the work and the concise summaries! I’m really happy with the EA funds.

While risks and reservations for these organizations have been taken into account, we do not discuss them below in most cases.

Reading this, I thought it unfortunate that this really valuable information is not communicated as well. Or does this stem from the private and/or person-affecting nature of those reservations? For example, I think I have little intuition about why some projects are considered to have downside risk and are therefore better not funded/undertaken. Reading more about these kinds of considerations could be useful.

Comment by meerpirat on Why accelerating economic growth and innovation is not important in the long run · 2020-08-11T17:35:45.847Z · score: 4 (3 votes) · EA · GW

In case you missed it, Leopold Aschenbrenner wrote a paper on economic growth and existential risks, suggesting that future investments in prevention efforts might be a key variable that may in the long run offset increased risks due to increasing technological developments.

Comment by meerpirat on BenMillwood's Shortform · 2020-07-10T15:34:39.519Z · score: 1 (1 votes) · EA · GW

After reading this I thought that a natural next step for the self-interested rational actor that wants to short nuclear war would be to invest in efforts to reduce its likelihood, no? Then one might simply look at the yearly donation numbers of a pool of such efforts.

Comment by meerpirat on Will AGI cause mass technological unemployment? · 2020-06-29T06:36:30.440Z · score: 2 (2 votes) · EA · GW

Hmm, might the lawn-mowing analogy break down with increasing speed differences and dependencies? Imagine if the lawn had to be ready for Tiger to play golf, and Tiger were 1000 times faster than Joe.

Not sure if related, but I looked up Robin Hanson's predictions of the role of humans in the Age of Em, where brain emulations (ems) would become feasible and increasingly perform most of the economic activities on Earth. Summary of chapter 27 on the book website:

As humans are marginal to the em world, their outcomes are harder to predict. Humans can’t earn wages, but might become like retirees today, who we rarely kill or steal from. The human fraction of wealth falls, but total human wealth rises fast. Humans are objects of em gratitude, but not respect.

Unfortunately, I don’t recall how he arrived at that conclusion; maybe somebody else can chime in.

Comment by meerpirat on Differential technological development · 2020-06-28T20:32:48.167Z · score: 0 (3 votes) · EA · GW

Thanks, I also think writing this was a good idea.

Growth can’t continue indefinitely, due to the natural limitations of resources available to us in the universe.

This reminded me of arguments that economic growth on Earth must necessarily be diminished by the limits of natural resources, which seem to forget that with increasing knowledge we can do more with fewer resources. E.g. compare how much more value we can get out of a barrel of oil today than 200 years ago.

Comment by meerpirat on How should we run the EA Forum Prize? · 2020-06-25T08:25:12.612Z · score: 1 (1 votes) · EA · GW

Me too. Maybe a normal downvote by a very high-karma member? And I also remember one instance where someone accidentally clicked downvote without noticing.

Comment by meerpirat on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-19T15:14:57.871Z · score: 5 (3 votes) · EA · GW

Thanks for continuing the series, this is one of the most stimulating philosophical issues for me.

After the AI asks Bob if it should do what an ideally informed version of him would want, Bob replies:

Bob: Hm, no. [...] I don’t necessarily care about my take on what’s good. I might have biases. No, what I’d like you to do is whatever’s truly morally good; what we have a moral reason to do in an… irreducibly normative sense. I can’t put this in different terms, but please discount any personal intuitions I may have about morality—I want you to do what’s objectively moral.

I think that part paints a slightly misleading picture of (at least my idea of) moral realism. As if the AI shouldn't mostly study humans like Bob when finding out what is good in this universe, and instead focus on "objective" things like physics? Logic? My Bob would say:

Hm, kinda. I expect my idealized preferences to have many things in common with what is truly good, but I'm worried that this won't maximize what is truly good. I might, for example, carry around random evolutionary and societal biases that will waste astronomical resources on things of no real value, like my preference for untouched swaths of rainforest. Maybe start with helping us understand what we mean by the qualitative feeling of joy; there might be something going on there that you can work with, because it just seems like something that is unquestionably good. Vice versa with pain and sorrow and suffering, those seem undeniably bad. Of course I'm open to being convinced otherwise, but I expect there's a there there.

Comment by meerpirat on EA Forum feature suggestion thread · 2020-06-17T14:39:57.195Z · score: 13 (10 votes) · EA · GW

I'd like to have the option to make polls within a post. I recently wrote a short question post to see if an idea seems promising and I got a couple of upvotes and no comments. Having the option to get quick and cheap feedback from the community would've been useful.

Comment by meerpirat on What do you think about bets of the form "I bet you will switch to using this product after trying it"? · 2020-06-15T21:40:32.957Z · score: 1 (1 votes) · EA · GW

One of the benefits of the proposed scheme is that it’s a costly signal that I expect to actually not be costly at all. And from the perspective of others it’s also a win-win ("Either I win the bet and waste some time, or I lose a bit of money but will improve my productivity/wellbeing/etc.").

Comment by meerpirat on What do you think about bets of the form "I bet you will switch to using this product after trying it"? · 2020-06-15T13:06:16.676Z · score: 3 (2 votes) · EA · GW

True. I expect this to matter less with the amount of money I had in mind (on the order of 50€), given that I expect marginal improvements in something like note-taking will seem like a big win to most EAs.

A friend of mine had the idea of donating the money to a preferred EA charity instead of paying out, which might further reduce those incentives (at least it would for my not-quite-there-yet lizard brain).

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-12T15:36:24.401Z · score: 1 (1 votes) · EA · GW
What does it entail when you say that your subjective experience "is real"?

Hmm, that my introspective observation at time 14:04:38 EST corresponds to something that exists? (I notice feeling like I'm talking in circles, sorry.) Say I tell you that I just added two numbers in my head. I believe this is a useful description of some aspect of my cognitive processes, and it is possible to find shared patterns in other cognitive systems when they do addition.

I feel like Alice is uncharitable in this conversation. A lack of sharp boundaries is, in my mind, no strong argument for denying the existence of a claimed aspect of reality. Okay, this also feels uncharitable, but it felt like Alice was arguing that the moon doesn't exist because there are edge cases, like the big rock that orbits Pluto. I wish she had made the argument for why, in this particular case, Bob's observation does not correspond to anything that exists. Bob would say:

I think I notice something that is real but hard to grasp, has a character of 'wanting to be ended', and which sounds a lot like what other people talk about when they are hurt. I've observed this "experience" many times now.

Then Alice would maybe say:

I can relate to the feeling like there is something to be explained. Like the thing that you call "your experience" has certain features that correspond to something. For example like the different color labels for apples and bananas correspond to something real: a different mix of photons being emitted. I claim that what you call a qualitative experience does not correspond to anything real, there is no pattern of reality where it's useful to call it being conscious. Now let's go meditate for a year and you will realize that you were totally confused about this part of your observations.

Comment by meerpirat on Idea: statements on behalf of the general EA community · 2020-06-12T13:53:38.151Z · score: 2 (2 votes) · EA · GW

Those open letters could also be accompanied by the option to sign them and thereby signal the support of a larger group of people. Though I think there is more traction with an open letter signed by relevant experts, like the Open Letter on Artificial Intelligence (I would be interested to see data on this). So this would probably not be a particularly useful way of using the voice of a particular community.

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T16:30:04.393Z · score: 3 (2 votes) · EA · GW

Hmm, okay, thanks for trying to help me understand. How is "reality doesn't come with labels" different from the earlier "all interpretations are wrong [but some are useful]"? I still struggle with understanding where my discomfort with the anti-realist stance comes from. I agree that categories are made by us. But what am I missing when I say

"There is a reality that is governed by simple laws that result in complex patterns. We try to understand this reality by imposing coarse and often misleading labels. Some concepts like elan vital are more misleading, some are less misleading, like atoms. For some concepts it's still very unclear what they correspond to, for example conscious experiences, but it seems premature to conclude that they are so misleading as being worth abandoning."

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T15:37:20.814Z · score: 2 (2 votes) · EA · GW
The way I think about it, when I'm suffering, this is my brain subjectively "disvaluing" (in the sense of wanting to end or change it) the state it's currently in. This is not the same as saying that there exists a state of the world that is objectively to be disvalued.

But you would agree that this state in your brain can accurately be described as "a state of wanting to end or change something"? For me, I quickly go from

1) saying that something like this state corresponds to something real, to

2) saying that your subjective experience is real (that is, it exists in some form and is not just a delusion), to

3) saying that there exist states that are apparently connected to an experience (whatever that is) that is generally agreed to be disvaluable.

Sorry if this is too confusing.

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T15:19:28.898Z · score: 1 (1 votes) · EA · GW

Thanks. Hmm, just to test if I got it right:

Realists: "There is a reality and some of our concepts, like consciousness or the color blue, directly map to features of this reality."

Non-Realists: "There is a reality, but we only observe a filtered and transformed reflection of it, and (therefore?) all our concepts are inescapably laden with additional assumptions. Because of this, our concepts can never directly map to features of this reality and should never be called objective or real."

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-09T17:16:06.809Z · score: 3 (2 votes) · EA · GW
Some people equate anti-realism with the nihilistic sentiment “all interpretations are wrong.” However, the way I think about it, anti-realism is best summarized as follows:
Insofar as interpretations can be right or wrong, there can be more than one right interpretation.

Ah, I have just committed this error. Though also here, doesn't there need to exist something relative to which an interpretation is right? I'm confused.

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-09T17:08:53.480Z · score: 1 (1 votes) · EA · GW

I was confused by this sentence:

According to my anti-realist perspective, reality simply is, but interpretations always add something

Doesn't this concede the point that something exists (i.e. something is real) and the issue is rather that we can't put it into words without adding baggage that does not exist?

Comment by meerpirat on Cause Prioritization in Light of Inspirational Disasters · 2020-06-09T09:31:45.502Z · score: 2 (2 votes) · EA · GW

I agree that "inspirational" is still not optimal because of its positive connotation, but I think it is fair to say that stecas was trying to improve it, and that the update successfully removed the possibility of reading the title as a call to action (the old title was something like "Cause Prioritization by Inspiring Disasters", where "Inspiring" was meant as an adjective but could be read as a gerund).

Some ideas:

  • "A model of how preventing enduring catastrophes could backfire"
  • "Would a Utilitarian go back in time and prevent a cautionary catastrophe?"
  • "Cause Prioritizaion in light of cautionary disasters"

Comment by meerpirat on Cause Prioritization in Light of Inspirational Disasters · 2020-06-08T22:11:09.512Z · score: 5 (4 votes) · EA · GW

Hmm, FWIW I didn’t think for one second that the author was suggesting inspiring a disaster, and I think it’s completely fine to post a short argument that doesn’t go full moral uncertainty. It’s not like the audience is unfamiliar with utilitarian reasoning, and a sketch of an original utilitarian argument should never be understood as an endorsement or call to action. No?

Comment by meerpirat on 2019 - Year in Review · 2020-06-08T16:23:50.485Z · score: 4 (3 votes) · EA · GW

Regarding the topic of potentially harmful mass media outreach:

As a consequence, we do not emphasize the EA brand in our activities and communications but rather focus on building a separate brand identity for effective giving.

I thought that this statement is in conflict with you also being

engaged in a project to update and redesign, which is the most prominent EA website for German speaking individuals.

This does sound like ES will be intimately linked to the EA movement, no?

Also, I'd be interested in hearing more about your positive experiences with German media. My rough impression from a few interactions about this topic with EAF some time ago was that they were generally both very careful about, and not very happy with, the accuracy of media portrayals in Germany.

Comment by meerpirat on 2019 - Year in Review · 2020-06-08T15:29:51.746Z · score: 2 (2 votes) · EA · GW

Thanks for sharing! Just stumbled across a typo: Animal Welfare (Total) in the table should be 14.130,33 €

Comment by meerpirat on Cause Prioritization in Light of Inspirational Disasters · 2020-06-07T22:31:16.844Z · score: 4 (4 votes) · EA · GW

Thanks for making this case, and for directly putting your idea in a concrete model. I share the intuition that humanity (unfortunately) relies way too much on recent and very compelling experience to prioritise problems.

Some thoughts:

1) Catastrophes as risk factors: humanity will be weakened by a catastrophe and less able to respond to potential x-risks for some time

2) In many cases we don't need the whole of humanity to realise the need for action (like almost everyone does with the current pandemic), but instead convincing small groups of experts is enough (and they can be convinced based on arguments)

3) Investments in field building and "practice" catastrophes might be very valuable for a cause like pandemic preparedness to get off the ground, and be worth the lack of buy-in of bigger parts of humanity

4) You may expect that, even without global catastrophes, humanity as a whole will come to terms with the prospect of x-risks in the coming decades. It might then not be worth it to accept a slight risk of fatally underestimating an unlikely x-risk.

Comment by meerpirat on Forecasting Newsletter: May 2020. · 2020-06-05T10:31:41.882Z · score: 2 (2 votes) · EA · GW

Thanks, I found some of the links super useful. I would totally love to see you continue this newsletter (and would support it e.g. on Patreon (though student budget)).

Comment by meerpirat on Increasing personal security of at-risk high-impact actors · 2020-05-29T12:53:41.023Z · score: 2 (2 votes) · EA · GW

Polling the community in the forum

While thinking about what kind of feedback I would find useful for this question (after a couple of people having upvoted and no comments so far), I would've found a cheap poll with options like "Definitely not worth looking into this further" or "Probably uninteresting, but an interested EA might look into this more" very useful here. I wonder if this was discussed before, seems like an easy to implement and useful feature for quick and dirty feedback. Maybe downsides could be giving a wrong impression of EA consensus due to selection effects (e.g. most informed EAs being less active readers of the forum), or less in-depth discussion because people that would otherwise have shared their thoughts now only participate in the poll?

Comment by meerpirat on Increasing personal security of at-risk high-impact actors · 2020-05-29T12:51:21.809Z · score: 2 (2 votes) · EA · GW

Some further thought:

Security threats in places less safe than Europe

I initially hadn't thought about security threats to altruists in places less safe than my own. From my time at Amnesty International I know that activists and journalists in, for example, basically all of Latin America, Russia, Turkey, Saudi Arabia and China face severe risks to their personal safety. It might furthermore be less costly to hire security in lower-income places. I have no further insight here, but it might be fairly easy to find key people in such areas who do highly valuable work and, only due to a lack of funding, can't afford the level of safety that would be efficient here.

Comment by meerpirat on Project Proposal: Gears and Aging · 2020-05-15T10:34:42.510Z · score: 1 (1 votes) · EA · GW

I think this is a good point. I wonder if there are examples where writing a textbook led to key insights.

I think that the data required to figure out the gears of most major human age-related diseases is probably already available, online, today. And I don’t mean that in the sense of “a superintelligent AI could figure it out”; I mean that humans could probably figure it out without any more data than we currently have.

I noticed that I have a vague sense that this is also true for AGI based on human cognition. I wonder if you think that polling a research community on questions like "Do we already know enough to derive X with a lot of smart effort?" would give a good sense of tractability.

Comment by meerpirat on Effective Altruism and Free Riding · 2020-05-14T11:00:27.649Z · score: 7 (2 votes) · EA · GW

Your post paints a picture of differences in values where I only see differences in careful thinking. The general public supports local charities and animal shelters not because they have different values, but because they have not spent much time thinking carefully about their altruistic aspirations. I think most people would find causes like poverty in developing countries and global catastrophic risks very much within their altruistic priorities if they used tools like prioritization and cost-effectiveness analysis. Those are not EA-specific tools; they are tools that people already use in their personal lives.

Others have alluded to it, I just wanted to make this point into its own comment because this part of your essay seems so off to me.

Comment by meerpirat on EA Forum Prize: Winners for March 2020 · 2020-05-13T19:28:34.267Z · score: 11 (7 votes) · EA · GW

Thanks! I think the prizes are a great idea and I'm glad there is so much great content that well deserves them.

I noticed that you stopped explaining why the individual people are part of the committee, and that you added one more person, and I got curious.

Comment by meerpirat on 162 benefits of coronavirus · 2020-05-12T22:46:47.860Z · score: 5 (3 votes) · EA · GW

Some weeks ago I stumbled across this collaborative Google doc where people brainstorm second- and third-order effects of the pandemic. I didn’t think it was especially careful, but it contains a lot of ideas and areas and might offer some further relevant effects.

Comment by meerpirat on Cause-specific Effectiveness Prize (Project Plan) · 2020-05-12T15:48:37.308Z · score: 1 (1 votes) · EA · GW

I think I should have emphasized that this was just my gut reaction as a non-native speaker. Maybe a competition is more what I meant? I think of competitions in school where you had no choice but to take part (e.g. maths or reading or sports), or the competition from the movie Hunger Games. Somehow a prize sounds more like something I get as a bonus, while competing against others is to me more associated with "There is not enough for everyone, this is zero-sum, who is this evil person that would make charities compete against each other for their financial survival?".

Comment by meerpirat on Cause-specific Effectiveness Prize (Project Plan) · 2020-05-05T08:04:05.805Z · score: 5 (5 votes) · EA · GW

Hey guys! I think it's a cool idea, and I think it's great form to share such a concise summary for feedback. Just some uncertain comments off the cuff, because nobody started the conversation yet:

1) It seems like a resource-intensive project, e.g. working out reasonable metrics, evaluating them, reaching out, getting enough funding. I'd be worried that a bad execution might lead to bad press. For example, somewhere in the back of my mind I remember a discussion among charity representatives in Germany who were very dismissive of the idea that their impact could be measured.

2) "Contest" vs. "Prize": Maybe their is less risk at bad press when it is framed as a prize. Just a feeling that nobody ever forces you to compete for a prize, but it's sometimes mandatory to take part in a contest.

3) Maybe you could try to connect interested researchers with charities and let them work out a way to measure their impact. Then prizes go out to the best reports/papers (the money should probably go to the charity, to incentivize them). I think there is already an existing research field around impact measurement, so you could worry less about counseling the charities and let the researchers work this out with them.

Comment by meerpirat on Update from the Happier Lives Institute · 2020-05-02T09:24:30.350Z · score: 4 (3 votes) · EA · GW

I overlooked your comment thread with Max Daniel in your launch post last year. Have you thought more about this? Is this a fair summary of June last year?

  • There seemed to have been strategic uncertainty about near- and long-term work
  • There is no consensus regarding population ethics, but Michael personally leans towards a person-affecting view, which tends to favor the near term
  • For broader appeal outside of EA, nearer-term work might be preferable

Relevant quotes:

I expect that HLI's primary audience to be those who have decided that they want to focus on near-term human happiness maximization. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer-term, as well as non-humans in the nearer- and longer-term.

Internally, we did discuss whether we should make this explicit or not. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making people happy [I suppose this is supposed to say "making happy people"]. In the end, we decided not to mention this. One reason is that, as noted above, it's not (yet) totally clear what HLI will focus on, hence we don't know what our colours are so as to be able to nail them to the mast, so to speak.

Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we were making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project; outside EA we doubt many people will have these alternatives to making people happier in mind.

Comment by meerpirat on Update from the Happier Lives Institute · 2020-05-02T09:07:51.398Z · score: 5 (4 votes) · EA · GW

Thanks for the update and the great work! As someone whose values revolve around some broad conception of well-being and happiness, I'm very happy that you are gaining so much traction.

As someone who also takes well-being and happiness in the long run very seriously, I wonder how you are thinking about this. I only found this quote from the last update:

80,000 Hours primarily focuses on the long-term; we intend to provide guidance to those who careers will focus on (human) welfare-maximisation in the nearer-term.

Is that a strategic decision because you think a long-term focus on happiness would also converge on reducing the risk of extinction?

Comment by meerpirat on Brief update on EA Grants · 2020-04-23T08:12:29.141Z · score: 5 (5 votes) · EA · GW

Thanks for the update. I hope you get well soon!

Comment by meerpirat on Is anyone working on a comparative COVID-19 policy response dataset? · 2020-04-11T07:41:17.192Z · score: 9 (6 votes) · EA · GW

Robert Wiblin shared this on Facebook this week, might be useful:

"Policy responses to the coronavirus are vast, varied in scope, and changing every day. COVID-19 Policy Watch summarises these measures and presents them in an accessible and comparable form. You can explore policies by country or topic."

Comment by meerpirat on Hiring Process and Takeaways from Fish Welfare Initiative · 2020-04-06T11:03:31.191Z · score: 4 (9 votes) · EA · GW

Thanks for summarizing your insights, I think it's great that you enable others to benefit from these learning opportunities.

Up to 8 hours is a long time for a test task, and is more than most people will be accustomed to. While two applicants gave us negative feedback about this, we think the insight we gained into the applicant’s output ability and desire for the job well outweighs this time cost.

Maybe I missed it, but did you think about compensating the applicants for the work they put into this? The Open Philanthropy Project did this, giving me the impression that they value my time (and the time of EAs generally). I can imagine that this might be too costly for smaller orgs, though you could set the amount lower than theirs (around $300 for 8 hours, IIRC). Even a $50 Amazon gift card would have left me with the impression that an org thinks about my opportunity cost of spending 8 hours on a work test.

Comment by meerpirat on Virtual EA Global: News and updates from CEA · 2020-03-18T11:21:37.044Z · score: 7 (6 votes) · EA · GW

Such a cool idea, thanks for making this happen! :)

We are piloting the use of “virtual meeting rooms” for attendees to connect with each other via the Grip app. Attendees should have received a Grip invitation a while ago after having been accepted to EA Global; if you have not received an invitation, please contact us at

Does that mean that only people that were going to EAG SF can join the virtual meeting rooms?

Comment by meerpirat on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-09T15:56:11.206Z · score: 4 (4 votes) · EA · GW

The downside of "see ell are", as mentioned by JasperGeh, would be that, as I understand it, CEEALAR is supposed to be pronounced "see ale-are", so the two would sound similar.

Comment by meerpirat on What Do Unconscious Processes in Humans Tell Us About Sentience? · 2020-03-04T16:15:04.635Z · score: 1 (1 votes) · EA · GW

Super interesting, I really like seeing this work being done.

I wonder if there is a meaningful difference between how you define consciousness:

‘conscious processes’ as those that meet the following conditions:
(i) They can be claimed by the individual to be intentional,
(ii) They can be reported and acted upon…
(iii) …with verifiable accuracy.

and conscious states that are associated with positive or negative experienced value. One example that came to my mind is dreams: sometimes I remember having had very negative or positive experiences, but mostly I don’t remember anything. I strongly suspect I still have those dreams (right?), but those states seem to involve no intentionality; they cannot be acted upon and have no connection to verifiability.

Another candidate process that just came to mind (very uncertain) that might be indicative of experiencing evaluative states is planning. You are mentally laying out paths into the future and need a flexible evaluation function that gives you feedback to guide your planning.

P.S.: Have you thought about posting it on the LessWrong forum? I think they are also a very informed crowd with respect to the topic of consciousness and might give you valuable feedback.

Comment by meerpirat on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-24T17:25:33.180Z · score: 13 (7 votes) · EA · GW

Thanks, I agree that my comment would be much more helpful if stated less ambiguously, and I also felt frustrated about the article while writing it (and still do). I also agree that we don't want to annoy such authors.

1) I interpreted your first comment to say it would not be a good use of resources to be critical of the author. I think that publicly saying "I think this author wrote a very uncharitable and unproductive piece and I would be especially careful with him or her going forward" is better than not doing it, because it will a) warn others and b) slightly change the incentives for journalists: there are costs to writing very uncharitable things, such as people being less willing to invite you and to give you information that might be reported on uncharitably.

2) Another thing I thought you were saying: authors have no influence on the editors, and it's wasted effort to direct criticism towards them. I think that authors can talk to editors, and their unhappiness with changes to their written work will be heard and will influence how it is published. But I'm not super confident in that, for example if it's common to lose your job for being unhappy with the work of your editors, and there are few other job opportunities. On the other hand, there seem to be many authors and magazines that allow themselves to report honestly and charitably. So it seems useful to at least know who does and does not tend to do that.

Comment by meerpirat on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-23T18:34:24.144Z · score: 6 (5 votes) · EA · GW

Hmm, I agree that this might’ve happened, but I still think it is reasonable to hold both the author and the magazine with its editors accountable for hostile journalism like this.

Comment by meerpirat on It's OK to feed stray cats · 2020-01-28T21:45:52.197Z · score: 2 (2 votes) · EA · GW

Thank you for writing this. I can relate well to the refreshing and restorative effect of small acts of kindness.

I think there are way too many narratives encouraging people to practice small acts of kindness that produce equally small benefits.

Thanks for helping me notice that I have one of those narratives floating around in my head without being questioned. Questioning it right now feels kind of sad; I really liked the idea that my small acts of considerateness might some day turn out to have been very important for the future of everything.

Comment by meerpirat on Is vegetarianism/veganism growing more partisan over time? · 2020-01-24T10:34:04.332Z · score: 4 (3 votes) · EA · GW

I am still confused about the 60% of veg*ns who selected a meat choice. I found some further evidence for your hypothesis that many of those buy the meat for family members in Oklahoma State University's report, on page 7:

Preceding the set of questions was the verbiage: “Imagine you are at the grocery store buying the ingredients to prepare a meal for you or your household. For each of the nine questions that follow, please indicate which meal you would be most likely to buy.”

Maybe if many of those veg*ns selected the hamburger, they confused it with a veggie burger? Though the 2013 veggie burgers didn't look at all like today's meaty veggie burgers, at least in Germany.

Comment by meerpirat on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-23T21:53:54.175Z · score: 11 (4 votes) · EA · GW

A while ago, Peter McIntyre and Jesse Avshalomov compiled a list of concepts they deemed worth knowing. I can imagine that many are pretty well known within EA, but I'll go out on a limb and say I wouldn't be surprised if most EAs find more than one useful new concept.

Comment by meerpirat on Institutions for Future Generations · 2020-01-20T21:48:29.622Z · score: 2 (2 votes) · EA · GW

That was fun to read and seems like a promising project. One thanks from me and one thanks on behalf of our descendants! Some ideas that came to my mind:

  • A global index that rates countries on their contributions to future generations
  • I expected to see some form of prediction markets. I wonder if there are ways to make them work for predictions that lie farther in the future.
  • A coalition of private organizations that e.g. think about best practices, analogue to Partnership on AI
  • Founding a newspaper/news site on future generations
  • Something like the Rotary club for future generations, where rich and influential people come together and discuss "how to profit most by serving future generations best"
  • More out there: Funding of art projects that motivate the importance of future generations (e.g. movies and books)

Comment by meerpirat on EA Forum Prize: Winners for November 2019 · 2020-01-16T17:07:01.325Z · score: 4 (3 votes) · EA · GW

Even though I was pretty actively reading the forum in recent months, I've missed one of the posts and all of the really great comments, so thanks a lot!

I'm wondering if there is some reasonable way to search for highly upvoted comments that were made after reading a post. The forum seems to keep track of which comments were made after a user last opened a post, so maybe one could sort those comments by their upvotes? Or maybe by relative upvotes, so the ranking is not dominated by the most popular posts.
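A minimal sketch of the "relative upvotes" idea, assuming a hypothetical comment structure (the `Comment` fields and function name are invented for illustration, not the forum's actual API): normalize each comment's upvotes by its parent post's total, so strong comments on less popular posts can still surface.

```python
from dataclasses import dataclass


@dataclass
class Comment:
    text: str
    upvotes: int
    post_total_upvotes: int  # total upvotes on the parent post


def rank_by_relative_upvotes(comments):
    """Sort comments by upvotes relative to their post's popularity."""
    return sorted(
        comments,
        # max(..., 1) guards against division by zero on unvoted posts
        key=lambda c: c.upvotes / max(c.post_total_upvotes, 1),
        reverse=True,
    )


comments = [
    Comment("on a popular post", upvotes=30, post_total_upvotes=300),
    Comment("on a niche post", upvotes=8, post_total_upvotes=20),
]
ranked = rank_by_relative_upvotes(comments)
# The niche-post comment (8/20 = 0.4) outranks the popular-post one (30/300 = 0.1)
```

The same normalization could be combined with the "made after last visit" filter mentioned above: first restrict to new comments, then rank the remainder this way.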

Comment by meerpirat on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2020-01-08T12:55:42.037Z · score: 2 (2 votes) · EA · GW

I just looked it up, you're right. Here's the full citation:

F. Bailey Norwood and Jayson L. Lusk, Compassion, by the Pound: The Economics of Farm Animal Welfare (New York: Oxford University, 2011), 223.

Comment by meerpirat on Coordinating Commitments Through an Online Service · 2020-01-05T10:45:03.572Z · score: 4 (3 votes) · EA · GW

Congrats on your first post! I think it's well written: I like the "problem, proposed solution, possible issues" structure, your writing is short and clear, and you stated what kind of input you want from the community.

It was useful for me that you provided the example of meat eating as a coordination problem. I would have found more examples even more useful for thinking about the potential applications where a coordination platform is among the most promising approaches (btw, I think for meat eating it is not among the most promising).

I like your idea, but I'm also worried about your 2nd issue: nobody will use it. It seems to me like people are just not motivated enough by being part of improving the world. Meat eating seems like a case in point: there is already a veggie community you can be part of (at least in every bigger city in Germany), and the marginal impact you have doesn't even depend that much on coordination. Still, it's a tiny movement.

I think it's reasonable that you are trying to think about the landscape and bottlenecks of behavior change and coordination before moving to action. There is probably much more to learn. For example, I've read this short report about change platforms in the context of changing organizations, which seems to have some success stories and lessons learned that are also relevant for you. This might be a much more tractable pathway if there are smaller-scale important coordination problems.

Comment by meerpirat on EA Survey 2019 Series: Cause Prioritization · 2020-01-02T23:11:32.565Z · score: 3 (3 votes) · EA · GW

Thanks for this. I like the ribbon plots!

Did you by chance look at cause prioritization differences between countries and see anything interesting? I dimly remember there used to be a trend along the lines of a bit more animal welfare in continental Europe, global poverty in the UK, and x-risk in the US.