Comments

Comment by meerpirat on Hiring Process and Takeaways from Fish Welfare Initiative · 2020-04-06T11:03:31.191Z · score: 5 (4 votes) · EA · GW

Thanks for summarizing your insights, I think it's great that you enable others to benefit from those learning opportunities.

"Up to 8 hours is a long time for a test task, and is more than most people will be accustomed to. While two applicants gave us negative feedback about this, we think the insight we gained into the applicant's output ability and desire for the job well outweighs this time cost."

Maybe I missed it, but did you think about compensating the applicants for the work they put into this? The OPP did this, giving me the impression that they value my time (and the time of EAs generally). I can imagine that this might be too costly for smaller orgs, though you could set the amount lower than theirs (around $300 for 8 hours, IIRC). Even a $50 Amazon gift card would have left me with the impression that an org thinks about my opportunity cost of spending 8 hours on a work test.

Comment by meerpirat on Virtual EA Global: News and updates from CEA · 2020-03-18T11:21:37.044Z · score: 7 (6 votes) · EA · GW

Such a cool idea, thanks for making this happen! :)

"We are piloting the use of “virtual meeting rooms” for attendees to connect with each other via the Grip app. Attendees should have received a Grip invitation a while ago after having been accepted to EA Global; if you have not received an invitation, please contact us at hello@eaglobal.org."

Does that mean that only people who were going to attend EAG SF can join the virtual meeting rooms?

Comment by meerpirat on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-09T15:56:11.206Z · score: 4 (4 votes) · EA · GW

The downside of "see ell are", as mentioned by JasperGeh, is that CEEALAR is, as I understand it, supposed to be pronounced "see ale-are", so the two would sound similar.

Comment by meerpirat on What Do Unconscious Processes in Humans Tell Us About Sentience? · 2020-03-04T16:15:04.635Z · score: 1 (1 votes) · EA · GW

Super interesting, I really like seeing this work being done.

I wonder if there is a meaningful difference between how you define consciousness:

‘conscious processes’ as those that meet the following conditions:
(i) They can be claimed by the individual to be intentional,
(ii) They can be reported and acted upon…
(iii) …with verifiable accuracy.

and conscious states that are associated with positive or negative experienced value. One example that came to my mind is dreams: Sometimes I remember having had very negative or positive experiences, but mostly I don't remember anything. I strongly suspect I still have those dreams (right?), but those states seem to involve no intentionality, they cannot be acted upon, and they have no connection to verifiability.

Another candidate process that just came to mind (very uncertain) that might be indicative of experiencing evaluative states is planning: you are mentally laying out paths into the future and need a flexible evaluation function that gives you feedback to guide your planning.

P.S.: Have you thought about posting it on the LessWrong forum? I think they are also a very informed crowd with respect to the topic of consciousness and might give you valuable feedback.

Comment by meerpirat on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-24T17:25:33.180Z · score: 13 (7 votes) · EA · GW

Thanks, I agree that my comment would have been much more helpful if stated less ambiguously, and I did feel frustrated by the article while writing it (and still do). I also agree that we don't want to annoy such authors.

1) I interpreted your first comment as saying it would not be a good use of resources to be critical of the author. I think that publicly saying "I think this author wrote a very uncharitable and unproductive piece and I would be especially careful with him or her going forward" is better than not doing it, because it will a) warn others and b) slightly change the incentives for journalists: there are costs to writing very uncharitable things, such as people being less willing to invite you or to give you information that might then be reported on uncharitably.

2) Another thing I thought you were saying: authors have no influence on the editors, and it's wasted effort to direct criticism towards them. I think that authors can talk to editors, and their unhappiness with changes to their written work will be heard and will influence how it is published. But I'm not super confident in that, for example if it's common to lose your job for being unhappy with the work of your editors and there are few other job opportunities. On the other hand, there seem to be many authors and magazines that allow themselves to report honestly and charitably, so it seems useful to at least know who does and does not tend to do that.

Comment by meerpirat on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-23T18:34:24.144Z · score: 6 (5 votes) · EA · GW

Hmm, I agree that this might've happened, but I still think it is reasonable to hold both the author and the magazine with its editors accountable for hostile journalism like this.

Comment by meerpirat on It's OK to feed stray cats · 2020-01-28T21:45:52.197Z · score: 2 (2 votes) · EA · GW

Thank you for writing this. I can relate well to the refreshing and restorative effect of small acts of kindness.

"I think there are way too many narratives encouraging people to practice small acts of kindness that produce equally small benefits."

Thanks for helping me notice that I have one of those narratives floating around in my head without being questioned. Questioning it right now feels kind of sad; I really liked the idea that my small acts of considerateness might some day turn out to have been very important for the future of everything.

Comment by meerpirat on Is vegetarianism/veganism growing more partisan over time? · 2020-01-24T10:34:04.332Z · score: 2 (2 votes) · EA · GW

I am still confused about the 60% of veg*ns who selected a meat choice. On page 7 of Oklahoma State University's report, I found some further evidence for your hypothesis that many of them buy the meat for family members:

Preceding the set of questions was the verbiage: “Imagine you are at the grocery store buying the ingredients to prepare a meal for you or your household. For each of the nine questions that follow, please indicate which meal you would be most likely to buy.”

Maybe many of the veg*ns who selected the hamburger confused it with a veggie burger? Though veggie burgers in 2013 didn't look at all like today's meaty veggie burgers, at least in Germany.

Comment by meerpirat on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-01-23T21:53:54.175Z · score: 11 (4 votes) · EA · GW

A while ago, Peter McIntyre and Jesse Avshalomov compiled a list of concepts they deemed worth knowing. I can imagine that many are pretty well known within EA, but I'll go out on a limb and say I wouldn't be surprised if most EAs found more than one useful new concept there. https://conceptually.org/concepts

Comment by meerpirat on Institutions for Future Generations · 2020-01-20T21:48:29.622Z · score: 2 (2 votes) · EA · GW

That was fun to read and seems like a promising project. One thanks from me and one thanks on behalf of our descendants! Some ideas that came to my mind:

  • A global index that rates countries on their contributions to future generations
  • I expected to see some form of prediction markets. I wonder if there are ways to make them work for predictions that lie farther in the future.
  • A coalition of private organizations that e.g. think about best practices, analogous to the Partnership on AI
  • Founding a newspaper/news site on future generations
  • Something like the Rotary Club for future generations, where rich and influential people come together and discuss "how to profit most by serving future generations best"
  • More out there: funding of art projects that convey the importance of future generations (e.g. movies and books)

Comment by meerpirat on EA Forum Prize: Winners for November 2019 · 2020-01-16T17:07:01.325Z · score: 4 (3 votes) · EA · GW

Even though I was reading the forum pretty actively over the last few months, I missed one of the posts and all of the really great comments, so thanks a lot!

I'm wondering if there is some reasonable way to search for highly upvoted comments that were made after one has read a post. The forum seems to keep track of which comments were made after a user last opened a post, so maybe one could sort those comments by their upvotes? Or maybe by relative upvotes, so the list is not dominated by comments on the most popular posts, as in the sketch below.
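
A minimal sketch in Python of what such a relative-upvote ranking could look like (the Comment fields here are hypothetical, not the Forum's actual data model):

    from dataclasses import dataclass

    @dataclass
    class Comment:
        text: str
        score: int       # upvotes on the comment itself
        post_score: int  # upvotes on the post it was made on

    def relative_score(c: Comment) -> float:
        # Normalize by the post's score so comments on very popular
        # posts don't automatically dominate the ranking.
        return c.score / max(c.post_score, 1)

    def rank_recent_comments(comments: list[Comment]) -> list[Comment]:
        # 'comments' would be those made since the user last opened
        # each post; sort best-first by relative upvotes.
        return sorted(comments, key=relative_score, reverse=True)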

Comment by meerpirat on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2020-01-08T12:55:42.037Z · score: 2 (2 votes) · EA · GW

I just looked it up, you're right. Here is the full reference:

F. Bailey Norwood and Jayson L. Lusk, Compassion, by the Pound: The Economics of Farm Animal Welfare (New York: Oxford University Press, 2011), 223.

Comment by meerpirat on Coordinating Commitments Through an Online Service · 2020-01-05T10:45:03.572Z · score: 4 (3 votes) · EA · GW

Congrats on your first post! I think it's well written: I like the "problem, proposed solution, possible issues" structure, your writing is concise and clear, and you state what kind of input you want from the community.

It was useful for me that you provided meat eating as an example of a coordination problem. I would have found more examples even more useful for thinking about the potential applications where a coordination platform is among the most promising approaches (btw, for meat eating I don't think it is).

I like your idea, but I'm also worried about your second issue: that nobody will use it. It seems to me that people are just not motivated enough by being part of improving the world. Meat eating seems like a case in point: there is already a veggie community you can be part of (at least in every larger city in Germany), and the marginal impact you have doesn't even depend that much on coordination. Still, it's a tiny movement.

I think it's reasonable that you are trying to think about the landscape and bottlenecks of behavior change and coordination before moving to action. There is probably much more to learn. For example, I've read this short report about change platforms in the context of changing organizations, which seems to include some success stories and lessons learned that are also relevant for you. This might be a much more tractable pathway if there are important smaller-scale coordination problems. https://www.mckinsey.com/business-functions/organization/our-insights/build-a-change-platform-not-a-change-program

Comment by meerpirat on EA Survey 2019 Series: Cause Prioritization · 2020-01-02T23:11:32.565Z · score: 3 (3 votes) · EA · GW

Thanks for this. I like the ribbon plots!

Did you by chance look at cause prio differences between countries and see anything interesting? I dimly remember there used to be a trend along the lines of a bit more animal welfare in continental Europe, global poverty in the UK, and x-risk in the US.

Comment by meerpirat on Thoughts on doing good through non-standard EA career pathways · 2020-01-01T14:45:47.749Z · score: 3 (3 votes) · EA · GW

I found this post very useful for thinking about my own career, thanks for writing it up. My prospects also don't fall neatly into the top recommended paths, so I'd be interested in more discussion of how to train my "good judgement".

Summarizing your ingredients of good judgment:

  1. Spotting the important questions (e.g. what do I need to learn to improve my decision the most?)
  2. Having good research intuitions (good quick guesses, think critically about evidence)
  3. Having good sense about how the world works and what plans are likely to work.
  4. Knowing when they’re out of their depth, knowing who to ask for help, knowing who to trust.

What do you think about participating in a forecasting platform, e.g. Good Judgement Open or Metaculus? It seems to cover all the ingredients, and it could even serve as a signal for others to evaluate the quality of your judgement. When I participated in GJO for a couple of months, I was demotivated by the lack of feedback on the reasoning behind my forecasts; I could only look at the reasoning of other forecasters and at my Brier score, of course.
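
As an aside, the Brier score is just the mean squared error between your forecast probabilities and the binary outcomes, lower being better; a tiny illustration with made-up numbers:

    def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
        # Mean squared difference between forecast probabilities (0..1)
        # and resolved outcomes (0 or 1); 0 would be a perfect score.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Three hypothetical forecasts of 80%, 30%, 60%; the first two resolved "yes".
    print(brier_score([0.8, 0.3, 0.6], [1, 1, 0]))  # ~0.297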

P.S.: Your thinking appears very clear and you seem quite competent, so I wonder if your bar for "good enough judgement" to reasonably pursue non-standard paths is too high. I also wonder if people whose judgement you trust would agree with your diagnosis that your judgement isn't good enough for a non-standard path.

Comment by meerpirat on Effective Altruism Foundation: Plans for 2020 · 2019-12-30T23:22:44.571Z · score: 3 (3 votes) · EA · GW

Thank you for the update, and for your work.

Comment by meerpirat on Genetic Enhancement as a Cause Area · 2019-12-29T18:24:57.650Z · score: 2 (2 votes) · EA · GW

Thank you for writing this, I find the idea very interesting.

Your argument against worrying about negative consequences didn't quite land with me.

What if increasing these supposedly positive traits results in negative consequences?
It’s important to avoid status quo bias. To quote Bostrom and Ord:
"Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias."

I take issue with the Reversal Test because it seems very difficult to have a realistic picture of a world where a parameter like IQ is increased or decreased significantly. Therefore, the "status quo bias" might be a reasonable attitude if we have very little understanding of the dynamics of our system and there is little chance of reversing the intervention (e.g. you mention a slippery-slope dynamic where, once genetic enhancement is in use, everyone will more or less have to use it).

Imagine that I currently feel more or less content with the broad trends of humanity and would like to be very careful before supporting profound interventions like the widespread availability of genetic enhancement.

Comment by meerpirat on EA Survey 2019 Series: Community Demographics & Characteristics · 2019-12-27T22:56:37.713Z · score: 4 (3 votes) · EA · GW

Thanks a lot for this, I found the post very informative and look forward to the next one.

One note: I would have found a more extensive list of countries in the geography section useful.


Comment by meerpirat on EA Meta Fund November 2019 Payout Report · 2019-12-14T10:10:37.586Z · score: 13 (11 votes) · EA · GW

I feel frustrated by the lack of feedback. The EA Hotel seems to be one of the most discussed projects born out of the EA community in recent months, and it prominently struggles for funding. I think I've read most of the related discussions on the forum and haven't seen a case made for why the project isn't as promising as it might sound after reading the EA Hotel funding pledges. I understand that your time is limited, but for the sake of cooperation within EA I'm saddened that there seems to have been no communication between you guys at all. :l

Comment by meerpirat on Update on CEA's EA Grants Program · 2019-11-09T06:53:57.719Z · score: 8 (6 votes) · EA · GW

Thanks for the update, I appreciate the transparency on the project's shortcomings.

"Upon my initial review, it had a mixed track record. Some grants seemed quite exciting, some seemed promising, others lacked the information I needed to make an impact judgment, and others raised some concerns."

I'd be interested in what (kind of) grants you think seem great and not so great.

Comment by meerpirat on [Link] A Future for Neuroscience · 2019-09-14T18:56:29.604Z · score: 4 (3 votes) · EA · GW

This received some discussion on LessWrong last year.

https://www.lesswrong.com/posts/2pnvgwGkMZkQzCvwi/a-future-for-neuroscience

Comment by meerpirat on Our forthcoming AI Safety book · 2019-08-31T08:18:20.068Z · score: 8 (5 votes) · EA · GW

"Chapter 7 discusses the risk of AI race, and the constraints that this AI race implies. We argue that, because of this inevitable race, we cannot demand too constraining constraints on AIs."

I've heard arguments that one should be careful when talking about AI races, as quotes like yours, arguing that the race is inevitable, might fuel race dynamics further. Race dynamics are worrying and we should discuss the risk, of course. But a book for a general audience, which I expect will jump on issues like the associated political conflicts because they are much easier to understand than the technical challenges, might not be the best place for this discussion.

Comment by meerpirat on Our forthcoming AI Safety book · 2019-08-31T08:11:27.047Z · score: 20 (9 votes) · EA · GW

I appreciate you seeking feedback here. A book targeted at the general public seems very well placed to shape the discussion in many unintended ways.

Evidently, the "AI kills" part is designed to be clickbait. We do hope to reach a wide audience, which we regard as desirable to accelerate the spread of AI ethics in all sorts of corporations. Evidently, we expect the title to be misused and abused by many people. But we are confident that a 3-minute discussion with a calm person is sufficient to convince them of the relevancy of the title (see Chapter 3).

I wonder why you think the sensationalist title is worth it. A less sensationalist title would probably mean

  • less unproductive debate
  • a lower risk of unintentionally sending the discussion around AI down a confused and politicized path
  • fewer people reading it
  • but the people who do read it probably being more informed and interested in a sober discussion

Can you explain why you think it is very useful for AI ethics to be discussed in all sorts of corporations? My impression is that AI Safety mostly needs more academic talent directed at research, not to become another hot topic in corporations that don't even do AI research.

Comment by meerpirat on How to Make Billions of Dollars Reducing Loneliness · 2019-08-26T21:58:28.738Z · score: 2 (2 votes) · EA · GW

I believe that the feeling of loneliness is probably one big contributor to mental health issues, and I like your idea of tackling it pragmatically/for-profit.

My gut feeling is that this won't be used and that people are happy enough with the Craigslist solution. Anecdotally, my roommates (intelligent people) thought it was a joke when I proposed designing a questionnaire for people applying to live with us. Another platform, OkCupid, tries to offer meaningful matching scores for romantic partners, and it seems to be rather fringe, at least in Germany.

Comment by meerpirat on Rationality, EA and being a movement · 2019-06-22T15:45:44.080Z · score: 5 (6 votes) · EA · GW

Thanks for the summary! I don't know if that came up during your discussion, but I would have found concrete examples useful for judging the arguments.

"ii) This causes EA to prioritise building relationships with high-status people, such as offering them major speaking slots at EA conferences, even when they aren't particularly rigorous thinker"

I'd hope that bad arguments from high-status people will be pointed out and that the discussion will move forward (e.g. Steven Pinker strawmanning worries about x-risks).

"iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to."

For example, I find it unlikely that an anonymous writer with good ideas and comments won't be read and discussed on the forum. Maybe it's different at conferences and behind the scenes at EA orgs, though?

"iv) By acquiring resources and status EA had drawn the attention of people who were interested in these resources, instead of the mission of EA. These people would damage the epistemic norms by attempting to shift the outcomes of truth-finding processes towards outcomes that would benefit them."

EAs seem to mostly interact with research groups (part of the institution with the best track record in truth-finding) and non-profits. I'm not worried that research groups pose a significant threat to EA's epistemic standards; rather, I expect researchers to 1) enrich them and 2) be a good match for altruistic/ethical motivations and for being rigorous about them. An example that comes to mind is OpenPhil causing/convincing biorisk researchers to shift their research in the direction of existential threats.

Does someone know of examples or mechanisms of how non-profits might manipulate or have manipulated discussions? Maybe they find very consequential & self-serving arguments that are very difficult to evaluate? I believe some people think about AI Safety in this way, but my impression is that this issue has enjoyed a lot of scrutiny.