Posts

What do you think about bets of the form "I bet you will switch to using this product after trying it"? 2020-06-15T09:38:42.068Z
Increasing personal security of at-risk high-impact actors 2020-05-28T14:03:29.725Z

Comments

Comment by meerpirat on Privacy as a Blind Spot: Are There Long Term Harms in Using Facebook, Google, Slack etc.? · 2021-01-18T11:23:13.937Z · EA · GW

Cool! I also think it's great you set up those predictions. For me, the data leakage & government repression predictions don't closely track the issues you're arguing for.

data leak: someone who shouldn't have the data, has them. This is generally non-reversible.

I was once accidentally added to an internal Slack workspace and I was confused about it and read a bunch of stuff I wasn't supposed to read. Also, people unilaterally leaking screenshots from Slack seems to happen regularly. According to your definition, shouldn't this be data leakage? That seems very likely to happen in the next 20 years.

Government repression of NGOs seems to be pretty common in some countries, at least judging by news from human rights groups, e.g. in Turkey, China and Russia. If I were to predict here, I'd focus on countries in this class and much less on the UK, US or central Europe.

Comment by meerpirat on The German Effective Altruism Network - recap 2020 · 2021-01-18T00:05:58.903Z · EA · GW

Nice overview! I think it's great that you're supporting EA communities in Germany and I wish you guys a great 2021. Some random thoughts while reading:

These observations [that Germany is generally fairly decentralized] led us to think that we can best grow and support a sustainable community if NEAD's structure reflects the situation in Germany

I didn't totally connect the dots here. Your goal of supporting scattered chapters with project-based services seems similar enough to a startup's. A Verein seems to revolve around having members, which doesn't seem directly necessary, no? Later you mention democratic oversight over the board, which I'm kind of skeptical about. It seems like you guys actively try to get feedback on your reasoning and strategy and try to be public and transparent about it here, and spontaneously I think this might go a sufficiently long way for a small new EA org.

Who is NEAD working with today?

I was surprised not to see former EAF people here, I suppose they have very useful expertise but still decided to fully focus on their new projects.

the height and opacity of the background coloring of each contributor shows what parts of the community (audience, followers, etc. in the funnel model on the left) the groups and individuals consider their main target audience

Minor point: The two German local groups I'm familiar with focus most on their community of actively engaged people, and I suspect this is true for most other groups (instead of mostly on the "audience" category).

[Projects & Plans] Create a first set of career guides based on the recommendations from the Local Career Advice Network. This will be done in collaboration with CEA, and 80000 hours.

That’s a good plan! I think career coaching tailored to Germany would be very useful to many EAs living here.

Besides, did you think about organizing local group meetups or an EAGx? I thought it could fit well with your community building goals, and I've read about at least one other org making initial plans in case it'll be possible this year.

Comment by meerpirat on EA Course Syllabus: David Manley's "Changing the World" · 2021-01-14T23:12:31.647Z · EA · GW

Sounds good, and if those Archive posts are novel and useful for enough readers, you could consider also adding them to the Frontpage.

Besides, maybe at some point one could also consider enabling users to "unfollow" individual accounts so those don’t even show up in Recent Discussion if one isn't interested.

Comment by meerpirat on EA Course Syllabus: David Manley's "Changing the World" · 2021-01-14T11:15:32.551Z · EA · GW

Note to other moderators: I've moved this to "Personal Blog" so it won't clutter the frontpage, and I'll do the same with other "posting for later reference" posts.

FWIW I'm glad I saw it and don’t consider seeing a "new" EA syllabus as cluttering.

Comment by meerpirat on The Sense-Making Web · 2021-01-13T14:22:59.167Z · EA · GW

Thanks for bringing this up, I'd never heard of the Sensemaking scene. I enjoyed the "The War on Sensemaking" talk by Daniel Schmachtenberger that you linked at the end. I liked the idea that we should treat communicating biased or wrong information like pollution.

Comment by meerpirat on Tommy Raskin: In Memoriam · 2021-01-12T20:50:43.117Z · EA · GW

:(

Comment by meerpirat on Retrospective on Teaching Rationality Workshops · 2021-01-05T08:19:33.196Z · EA · GW

I and two or three others also went to a CfAR workshop, so mostly things from there. Productive disagreements and Hamming circles, where people split in small groups and confidentially talk about their biggest personal bottlenecks, stick out as most valuable in memory right now. Oh, and I remember people from a later iteration finding the bug hunt from the Hammertime Sequence most valuable, where people are guided to find things in their lives that could use improvement. I remember that this was a minor mind-blow for one person. https://www.lesswrong.com/posts/rFjhz5Ks685xHbMXW/hammertime-day-1-bug-hunt

Comment by meerpirat on Can I have impact if I’m average? · 2021-01-04T22:45:15.383Z · EA · GW

Random thought related to activities that require little skill but feel meaningful and arguably contribute to the broad project of EA: I sometimes find myself wondering how I can be one among at best a handful of people who publicly celebrate and cheer for some EA-related work or project. For example a couple of months ago I was part of a small audience for a public talk by an AI researcher who shared advice and talked about her work. The talk itself was interesting and useful for me, but it also felt really meaningful and positive to spend some thoughts and emotions just celebrating her work, feeling grateful that there’s one more person with altruistic intentions and the competence to do difficult things, and me being able to radiate some positivity and gratitude in her general direction. I have the impression that we would be an even nicer community if we did more of this and that simply doing this is already a meaningful and noteworthy contribution.

ETA: Thanks a lot for writing this, Fabienne. I relate a lot to experiencing drops in self-worth because of comparisons with people who in expectation are able to have much more positive impact than I can, and I also know others who experience the same, which as you say is really unfortunate.

Comment by meerpirat on Retrospective on Teaching Rationality Workshops · 2021-01-04T20:44:18.540Z · EA · GW

Wow, really solid work, thanks for sharing! I'm really impressed by how systematically and intentionally you went about this. I vaguely remember us being much more on the half-assing and blindly-copying end of the spectrum when we organized our rationality and EA workshops.

Comment by meerpirat on One’s Future Behavior as a Domain of Calibration · 2021-01-02T22:38:43.134Z · EA · GW

so at the end of the day you're assigning a number representing how productive the day was, and you consider predicting that number the day before?

Almost, I've now integrated the prediction into my morning routine. Yeah, I could actively try to do nothing about it the first month, e.g. if I predict an unproductive day. Or I think I will randomize if I consciously think about altering my plans, so I get immediate benefits from the practice (fixing unproductive days) and also learn to have better calibration.

If you end up doing this, I'd be very interested in how things go. May I message you in a month or so?

For sure! :)

Comment by meerpirat on Forecasting of Priorities: a tool for effective political participation? · 2021-01-01T18:04:24.526Z · EA · GW

Very cool idea, would love to see this implemented. Some thoughts while reading:

the government is happy because it effectively harnesses a lot of inputs about what to fund

Don't they have plenty of that already? And aren't further pressures actually negative if they think they know best?

Non-populist politicians are happy because they can tell their voters “your opinion matters” and now it's believable

FWIW I feel like I rarely observe non-populist politicians running on a direct democracy platform, and the one time I remember they weren’t successful (the German Piratenpartei).

Who are the experts? I expect this to cause controversy. Even without this mechanism I already perceive a lot of controversy on fairly clear-cut questions, even in countries less polarized than the US, like Germany (e.g. on nuclear energy, GMOs, rent control). Maybe this could be circumvented by letting the population decide it? Or at least their elected representatives? I've stumbled upon the "Bayesian Truth Serum" mechanism that might be useful for eliciting non-measurable outcomes. https://www.overcomingbias.com/2017/01/surprising-popularity.html

My intuition is, that around 70% of citizens in the western societies, who decide to participate, would go for the "Activist" strategy and 30% for the "Forecaster" strategy, but there is no evidence.

Maybe this could be a good use case for integrating an Elicit forecast like described here: https://forum.effectivealtruism.org/posts/YgbPSyTvft6EcKnWm/forum-update-new-features-december-2020#Elicit_predictions FWIW I'd expect even more "Activists", more like 95% maybe? Only being able to predict which of ~20 categories some group of "experts" will find most important feels demotivating to me, even though I do forecasts regularly. This might change with the amount of monetary compensation, though I expect this wouldn't motivate too many people, maybe 1 in 100?

Since you can comment only while allocating credit, you can write hoaxes to actively cause harm, but you actually have to fund (give some of your credit to) the causes that you want to harm.

If the granularity is on the level of "longevity, corruption, legalization of drugs, mental health, better roads, etc.", I don't expect people to get hoaxy on those. I assume they'd get filtered beforehand to be kind of common-sensical and maybe even boring to think about. Speaking of granularity, I wonder if this would even be minimally sufficient to distinguish good from lucky forecasters, especially when there are so many participants.

ETA: Right, and congrats on getting the grants approved! Do they come from the Czech government?

Comment by meerpirat on One’s Future Behavior as a Domain of Calibration · 2021-01-01T16:22:33.459Z · EA · GW

Do you only make forecasts that resolve within the week? I imagine it would also be useful to sharpen one’s predictive skills for longer timeframes, e.g. achieving milestones of a project, finishing a chapter of your thesis, etc.

Comment by meerpirat on One’s Future Behavior as a Domain of Calibration · 2021-01-01T16:20:01.359Z · EA · GW

Cool, I definitely feel motivated to integrate something like this into my routines. E.g. every night I rate how productive my day was, so I’m now thinking about making a forecast every morning about that. Of course my influence on the outcome will be really high, but it seems like I’ll get useful info from this anyway.
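A routine like this is easy to score with a few lines of code. As a purely illustrative sketch (the 1–10 rating scale matches the comment, but the "productive day" threshold of 7 and the example numbers are my own assumptions), one could compare each morning's probability forecast against the evening rating using a Brier score:

```python
# Score morning forecasts of "today will be productive" (probability in
# [0, 1]) against the evening productivity rating (1-10 scale, with a
# rating >= 7 counted as "productive" -- an arbitrary example threshold).

def to_outcomes(ratings, threshold=7):
    """Convert 1-10 productivity ratings into binary outcomes."""
    return [1 if r >= threshold else 0 for r in ratings]

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes.

    0.0 is perfect; always guessing 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical week of morning forecasts and evening ratings.
morning_forecasts = [0.8, 0.6, 0.9, 0.3, 0.7, 0.5, 0.8]
evening_ratings   = [8,   5,   9,   4,   7,   7,   6]

outcomes = to_outcomes(evening_ratings)
print(round(brier_score(morning_forecasts, outcomes), 3))  # -> 0.211
```

Tracking this weekly average over a month or two would show whether the forecasts are actually getting sharper, separately from whether the days themselves get more productive.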

Comment by meerpirat on Against GDP as a metric for timelines and takeoff speeds · 2020-12-31T13:52:52.027Z · EA · GW

Thanks for this, I haven't yet thought much about the concrete period surrounding AI points of no return, and I think this is getting increasingly important.

Some thoughts:

  • even if we don’t expect actual output to increase, could we maybe expect that stocks of Google and co. will rise because investors also think about potential for AI windfalls? Similarly, do you think forecasting platforms might be informative enough to be kept in mind here, too?
  • do you think that the level of cooperation/cooperativeness between all stakeholders should be another factor we should care about in your list regarding takeoff speeds? It might help slow everything down if all stakeholders listen to and care about the perspective of one another and can agree on being more careful

Comment by meerpirat on Big List of Cause Candidates · 2020-12-26T17:22:02.836Z · EA · GW

Aw, really glad to hear that!

Comment by meerpirat on Propose and vote on potential tags · 2020-12-26T17:16:17.597Z · EA · GW

Yes, I also had something like 5-15 tags in mind. Your proposal for China makes sense to me, though I had a more "internal" perspective in mind, where EAs from the US/UK/Australia/Germany/Canada/etc. could get an overview of articles that are relevant for their specific country and are maybe indirectly encouraged to add something. So I'd write it as

The [country/region] tag is for posts that are about [country/region], that are especially relevant for EAs from and EA communities in [country/region] or that are relevant for projects that involve [country/region].

Looking at the EA Survey results on geographic distribution, I'd maybe do

  • US
  • UK
  • Australia-NZ
  • Germany-Austria-Switzerland
  • Canada
  • Netherlands
  • France
  • Scandinavia
  • Southeast Asia
  • Latin America

Comment by meerpirat on Propose and vote on potential tags · 2020-12-25T21:58:03.489Z · EA · GW

Country-specific tags

I just saw "creation of country specific content" as an example among the higher-rated meta EA areas in the recent article What areas are the most promising to start new EA meta charities - A survey of 40 EAs. What do you think about introducing tags for specific countries? E.g. I'd already have a couple of articles in mind that would be specifically interesting for members of German/Austrian/Swiss communities.

Comment by meerpirat on Big List of Cause Candidates · 2020-12-25T20:38:48.429Z · EA · GW

Cool! Like the list, like the forecasting project idea.

Tiny suggestion, I think "Air Purifiers Against Pollution" shouldn't go into the Climate Change basket, and instead probably to Global Health & Development.

Comment by meerpirat on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-23T18:30:46.477Z · EA · GW

Thanks a lot for the review, I was looking forward to reading it. It updated me further towards donating to the LTFF this year, and you reviewed some really interesting papers that I hadn't noticed before.

In donating to the LTFF, I think (many) donors are hoping to be funding smaller projects that they could not directly access themselves. As it is, such donors will probably have to consider such organisation allocations a mild ‘tax’ – to the extent that different large organisations are chosen than they would have picked themselves.

I think that's a reasonable point. It was also touched upon in the recent AMA of the LTFF, and the fund managers seem to agree that funding individuals and smaller projects is their comparative advantage. I personally won't feel taxed if they decided that some established org can use my money best as long as a meaningful fraction of the money goes towards projects I wouldn't have heard of.

Comment by meerpirat on Incentivizing forecasting via social media · 2020-12-19T16:23:28.350Z · EA · GW

Also, I think one can become better at forecasting on one’s own? (I think most people get better calibrated when they do calibration exercises on their own—they don’t need to watch other people do it.)

You also get feedback in the form of the community median prediction on Metaculus and GJ Open, which in my experience is usually useful. I do think that following the reasoning of competent individuals is in general very useful, but the comments and the helpful people who enjoy teaching their skills do a solid job covering that.

Comment by meerpirat on Incentivizing forecasting via social media · 2020-12-19T13:12:52.817Z · EA · GW

Hm, regarding sports and election betting, I think you're right that people find it enjoyable, but then again I'd expect no effect on epistemic skills from this. Looking at sports betting bars in my town, they don't seem to be places for people who would e.g. ever track their performance. But I also think the online Twitter crowd is different. I'm not sure how much I'd update on YouTubers investing time into gaming YouTube's algorithms. This seems to be more a case of investing 2h watching stuff to get a recipe to implement?

Just in case you didn't see it, Metaculus' binary forecasts are implemented with exactly those 0%-100% sliders. 

I agree that this approach is more realistic. :) However, it would require many more resources and would take longer.

Not sure I think it would require that many more resources. I was surprised that Metaculus' AI forecasting tournament was featured on Forbes the other day with "only" $50k in prizes. Also, from the point of view of a participant, the EA groups forecasting tournament seemed to go really well and introduced at least 6 people I know of to more serious forecasting (being run by volunteers, with prizes in the form of $500 of donation money). The Coursera course sounds like something that's just one grant away. Looking at Good Judgement Open, ~half of their tournaments seem to be funded by news agencies and research institutes, so reaching out to more (for-profit) orgs that could make use of forecasts and hiring good forecasters doesn't seem so far off, either.

I also imagined that the effect on epistemic competence would mostly be that most people learn they should defer more to the consensus of people with better forecasting ability, right? I might expect the same effect from having a prominent group of people who perform well in forecasting. E.g. barely anyone who's not involved in professional math or chess or poker will pretend they could play as well as the professionals. Most people would defer to them on math or poker or chess questions.

Comment by meerpirat on Incentivizing forecasting via social media · 2020-12-18T13:58:14.726Z · EA · GW

Thanks, stimulating ideas!

My quick take: Forecasting is such an intellectual exercise that I'd be really surprised if it became a popular feature on social media platforms, or had effects on the epistemic competencies of the general population.

I think I'd approach it more like making math or programming or chess a more widely shared skill: lobby to introduce it in schools, organize prestigious competitions for high schools and universities, convince employers that this is a valuable skill, and make it easy to verify the skill (I like your idea of a Coursera course + forecasting competition).

Comment by meerpirat on 80k hrs #88 - Response to criticism · 2020-12-12T21:32:09.439Z · EA · GW

I also really appreciate your comments. I didn't downvote your initial comment, but my first reaction upon seeing it was something like "Hey, I felt really positive about a researcher coming to the forum and explaining why he disagrees with Tristan. I don't want someone to discourage this from happening!" I initially read the parts you cited partly as tongue-in-cheek and maybe as a little unnecessary, but far from wanting to signal that the overall contribution was not welcome.

I appreciate a lot that you explained your negative reaction, especially given how rarely people do that. I read over the parts you cited without even wondering much how Tristan would react to them, and I think it's great someone brought it up, as I now think that new users of our forum should strive to communicate disagreements less confrontationally than is common on other platforms. So I think it'd be unfortunate if you felt discouraged by this experience.

Comment by meerpirat on Long-Term Future Fund: November 2020 grant recommendations · 2020-12-11T08:04:40.507Z · EA · GW

Thanks, I'm really really impressed with the projects in this write-up and predict now that I'm more likely than not to donate to the LTFF this year. 

Comment by meerpirat on My mistakes on the path to impact · 2020-12-10T22:14:42.154Z · EA · GW

I relate to what you wrote a lot, thanks for sharing.

Comment by meerpirat on Center on Long-Term Risk: 2021 Plans & 2020 Review · 2020-12-09T11:16:48.077Z · EA · GW

Thanks for the update, I feel good about your work and reasoning and wish you even more success in 2021! The cooperation with CERR seems pretty exciting, looking forward to reading their announcement.

Comment by meerpirat on Introducing High Impact Athletes · 2020-12-02T05:38:01.524Z · EA · GW

From my perspective this is more just a complicated and controversial topic where people disagree a lot. You both framed your comments in a way that doesn’t acknowledge that reasonable people may disagree, which might make it more disagreeable for people with different perspectives. And critical feedback might be sparse because it’s so time and energy consuming to hash it out. I think it might be a bit uncharitable to think the people who downvoted are just social/racial justice detectors, no? And I agree, I also wish this wouldn’t make anyone feel anxious (and I definitely would feel anxious, too, even responding here and only implying that I disagree with you feels scary to me).

Comment by meerpirat on Where are you donating in 2020 and why? · 2020-12-01T12:23:56.558Z · EA · GW

Haven't read into it, but this LessWrong essay contest was about the relationship between EA and cryonics. 

I also have the gut feeling that I wouldn't view a cryonics contract for myself as following my altruistic ideal of helping others, as it would have strong benefits for myself. Maybe one would want to tell Alcor to randomly choose a new customer and pay the contract for them instead. If I then weren't so excited about cryonics anymore, my excitement probably came from something other than altruistic impact.

Comment by meerpirat on Introducing High Impact Athletes · 2020-12-01T10:51:17.607Z · EA · GW

Thanks a lot for doing this, I'm really happy about it. The website looks great and I love the testimonials, they are very inspiring to me and I imagine to other athletes, too. 

I imagine that this project could potentially get a lot of sudden attention within the sports community in the coming months. I wonder if you or others think this is plausible and how important it is to prepare for this, e.g. by quickly developing the capacity to manage this well, and by being especially careful to avoid any potentially offputting initial impressions.

Comment by meerpirat on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-28T11:01:12.596Z · EA · GW

Fair. One might consider that there could currently be a difference in the reasonableness of different factions, with progressive voices being earlier adopters and better at using social media to make their points, which would become more balanced with increased general sophistication. Anecdotally, many in my bubble seem kind of clueless about how to argue against anti-capitalist ideas (which at least I get confronted with regularly), which may be explained by this imbalance.

Comment by meerpirat on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-27T11:37:41.457Z · EA · GW

I was thinking more that religion and politics become increasingly truth-constrained as content providers in a new medium multiply. People with other politics bring up supporting scientific results and get attention because people have an inborn truth orientation. That might have happened when more moderate voices entered the printed-word business, when more radio stations opened that were not aligned with one faction, and maybe will happen too when more reasonable voices with different biases enter social media.

Comment by meerpirat on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-26T10:45:48.968Z · EA · GW

I think [persuasion tools making debate more truth-oriented] is a possibility but I do not think it is probable. After all, it doesn’t seem to be what’s happened in the last two decades or so of widespread internet use, big data, AI, etc.

One might want to compare this to writing, radio and TV. At least for the former two, their misuse seems to have become less common and truth-oriented communication more common, no? I have the story in my head that initially there is a political monopoly or oligopoly on the new media form, and with the addition of new perspectives there begins a slow process of cultural debate that is overall truth-oriented, because the truth is most convincing and people have enough of an intrinsic desire to know the truth.

Comment by meerpirat on How can I do the most good with my 3D printer? · 2020-11-25T05:33:47.802Z · EA · GW

Cool! Random idea: maybe you could look into what the preppers/survivalists are doing with their 3D printers and look out for innovation ideas from a "humanity recovering from global catastrophes" lens. What would be likely production bottlenecks, how could we use 3D printers, and how could we make them more useful for this already today?

Comment by meerpirat on How can I bet on short timelines? · 2020-11-23T18:40:01.788Z · EA · GW

Robin Hanson would probably take a bet: https://twitter.com/robinhanson/status/1330552185738899460?s=21

Comment by meerpirat on How to best address Repetitive Strain Injury (RSI)? · 2020-11-22T12:05:44.542Z · EA · GW

Small change I found useful: using my little finger or ring finger instead of the index finger when I have to use my laptop touchpad. This way my wrist is less twisted, which used to feel uncomfortable after a while.

Comment by meerpirat on What are some quick, easy, repeatable ways to do good? · 2020-11-15T16:21:51.143Z · EA · GW

Related to doing something nice for people close to you, I was reminded of Neel Nanda's post where he encourages the idea that helping friends to become more effective might be really useful. Doing this probably gives warm fuzzies and doesn't need to take longer than a call or taking a walk together.

Comment by meerpirat on Introducing Probably Good: A New Career Guidance Organization · 2020-11-15T15:08:57.997Z · EA · GW

Sorry if this is not helpful, but I felt like brainstorming some names.

  • Worthwhile/Worthy Pursuits
  • Paths of Impact
  • Good Callings
  • Careers for Good/Change
  • Good Careers Advice
  • Altruistic Career Support
  • (Impactify, WorkWell seem already taken... and for the latter GiveWell might not appreciate the association)

Comment by meerpirat on Election scenarios · 2020-11-13T08:38:12.918Z · EA · GW

Luke Muehlhauser mentions another argument that we haven't mentioned yet: who would replace the US in its global leadership roles? China seems most likely to me, given its economic and military growth, and it also seems much worse in terms of human rights standards.

One of my least controversial views is that both the US in particular and humanity in general will probably be better off if the US (despite its many deep flaws) remains the world’s leading power, given the available alternatives for global leadership.

http://lukemuehlhauser.com/one-billion-americans/

Comment by meerpirat on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T17:53:32.454Z · EA · GW

Thanks for making the case, I think this is well written and will make it easy for readers more sceptical than me to disagree concretely. I come away most convinced that this looks like a great opportunity to flesh out international cooperation infrastructure on AI. I expect rapid increases in AI capabilities in the next decades, capabilities that will go far beyond AWS and require a ton of good people having difficult conversations on the international stage.

One question I had when I read about "drawing a line": I wonder if pushing for such a strong stance will make it harder to reach agreement, as I suppose there is currently a lot of investment going on. And even if countries sign the agreement, maybe they will have little trust in other countries following it, because spontaneously it seems relatively easy to work on this secretly (compared to chemical and nuclear weapons).

Lastly, through Gwern's Twitter I found a thread on a study which found that AI researchers are much more positive about working for the Department of Defense than one would think from following the public discussions around working for them.

Comment by meerpirat on How can I bet on short timelines? · 2020-11-08T16:47:33.065Z · EA · GW

Yes, I'd be really interested, I'll send you a PM.

Comment by meerpirat on How can I bet on short timelines? · 2020-11-07T14:25:48.111Z · EA · GW

Can I buy these things with money? I don't think so... As the linked post argues, knowledge isn't something you can buy, in general. On some topics it is, but not all, and in particular not on the topic of what needs to be done to save the world. As for help, I've heard from various other people that hiring is net-negative unless the person you hire is both really capable and really aligned with your goals.

What do you think about setting up a bunch of research prizes? Raise some questions that seem relevant (I suppose those could go from vague deconfusion to specific empirical questions) and put enough money on them, and repeat. I've seen some good results of this approach on LessWrong, with prizes on the order of a couple hundred dollars.

I'm also really interested in this, as I also expect AGI sooner than others do (50% by 2030 is also what I thought last time), and am also unsure what to do about it. My PhD in Cognitive Science seems only rather vaguely useful so far.

Comment by meerpirat on Introducing Probably Good: A New Career Guidance Organization · 2020-11-07T09:38:29.990Z · EA · GW

That was also my first thought. My brain autocompleted something like "Probably good, but wouldn't be surprised if bad". I think I don't mind much how informative a name is, though, as long as the name is unique and sounds nice (though the EA standard seems to be more descriptive rather than less).

(And thanks to the founders, I would really love to see new orgs cover what 80,000 Hours doesn't!)

Comment by meerpirat on EA Updates for November · 2020-11-03T07:22:24.545Z · EA · GW

Thanks, lots of cool things that slipped my other content aggregation nets.

Just in case others were also curious about where OpenPhil's "Scientific research" grants went:

Comment by meerpirat on Investing to Give: FP Research Report · 2020-10-31T22:34:49.114Z · EA · GW

Investment-like giving opportunities also seem most relevant to me, and I'd love to see more thought on this topic. My current intuition leans towards giving now, though my understanding feels pretty simplistic, mostly based on the impressions that

  1. many very smart people are currently directing their intellectual and altruistic ambitions suboptimally because they never thought about prioritisation and longtermism
  2. there is not too much competition in offering them positions in an intellectual and altruistic environment where they can work on priority issues

I likely got this wrong, but as a concrete (plausibly not actually true) example, I'm thinking about a researcher like David Roodman, independently doing excellent work that caught the attention of GiveWell, influencing him down the road to direct his attention to longtermist issues. I expect there are many more people like him at different stages of their careers, and I'd love to see more Open Philanthropy Projects and Future of Humanity Institutes and MIRIs (maybe Edward Kmett as a similar example) and so forth where they can work or with whom they'd want to collaborate.

Comment by meerpirat on Desperation Hamster Wheels · 2020-10-31T15:13:44.884Z · EA · GW

Thanks a lot, I feel like this post could prove really useful for me, especially by giving this pattern a nice handle. I very much relate to stressing myself about having impact with my research. This led to me feeling averse towards trying to think about new "useful" research projects, which plausibly decreases my research productivity quite a bit.

Relatedly, I'm currently reading "Why Greatness Cannot Be Planned: The Myth of the Objective" by Ken Stanley and Joel Lehman [1], where they argue that, among other things, innovation and research are best achieved by aiming for what's interesting rather than for progress on a more concrete objective. I haven't yet formed an opinion on whether I should avoid having impact at the forefront of my day-to-day thinking about research, but I found the idea refreshing that I might just focus on my interests and apply the impact filter much more sparingly.

[1] a nice interview about the book can be found here: https://braininspired.co/podcast/86/

Comment by meerpirat on The end of the Bronze Age as an example of a sudden collapse of civilization · 2020-10-28T14:07:16.143Z · EA · GW

Wow, a volcano erupting, a famine, an earthquake, a pandemic, civil wars and raiding Sea Peoples, that's quite a task. Really interesting read, thanks for writing it! And the graph turned out really nicely.

But this climatic approach does not explain everything. The civilizations in this part of earth already survived similar events in the past. For example the destruction of the Minoan civilization on Crete (which is in the middle of the eastern Mediterranean) was caused by another major volcanic eruption (Marinatos, 1939). However, all other civilizations survived mostly unharmed. This indicates that also the societal structure comes into play.

This argument didn't seem super watertight to me. There seems to be a lot of randomness involved, and causal factors at play that are unrelated to societal structure, no? For example, maybe the other eruption was a little weaker, or the year before yielded enough food to store, or maybe the wind was stronger in that year or something. It would be interesting to hear why the mono- and/or some of the duo-causal historians disagree that societal structure matters.

However, we also have more resources and more knowledge than the people in the Bronze Age.

I wondered how much of an understatement this is. I have no idea how people thought back then, only the vague idea that the people who spent the most time trying to make sense of events like this were religious leaders who were highly confused about basically everything?

Lastly, your warnings of tipping points and of the problems around a breakdown of trade reminded me of these arguments from Tyler Cowen, who warns that the current trade war between China and the US and the strains from the current pandemic could lead to a sudden breakdown of international trade, too.

Comment by meerpirat on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-27T19:18:51.732Z · EA · GW

Really really cool idea, and great to see it executed already! :)

I'm a little bit unsure how I feel about the name. It's concise and informative, but it sounds a bit odd and un-fuzzy to my ears... hard to put into words. You probably put a lot of thought into this; I'd be interested in what you and others think.

Comment by meerpirat on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-27T13:42:04.047Z · EA · GW

Hmm, do you maybe mean "based on a real effect" when you say significant? We already know that 10 of the 55 tests came out significant, so I don't understand why we would want to calculate the probability of these results being significant. What I was calculating is the probability of seeing the 10 significant differences that we saw, assuming all the differences we observed are not based on real effects but on random variation, or basically

p(observing differences in the comparisons so high that the t-test with a 5% threshold says 'significant' ten out of 55 times | the differences we saw are all just based on random variation in the data).

In case you find this confusing, that is totally on me. I find significance testing very unintuitive and maybe shouldn't even have tried to explain it. :') Just in case, chapter 11 of Doing Bayesian Data Analysis introduces the topic from a Bayesian perspective and was really useful for me.
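For anyone who wants to check a number like this themselves: under the null hypothesis that none of the 55 differences are real, and assuming the tests are independent (which they may well not be here), the count of "significant" results follows a Binomial(55, 0.05) distribution. A minimal sketch:

```python
from math import comb

n, alpha, k = 55, 0.05, 10  # 55 tests, 5% threshold, 10 observed hits

# P(X >= k) for X ~ Binomial(n, alpha): the probability of seeing
# at least k "significant" results purely by chance if no effect is real
p_tail = sum(
    comb(n, i) * alpha**i * (1 - alpha) ** (n - i)
    for i in range(k, n + 1)
)
print(f"P(at least {k} of {n} significant by chance) = {p_tail:.5f}")
```

The tail probability comes out well under 1%, which is why 10 out of 55 looks surprising under the null, though correlated tests would weaken that conclusion.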

Comment by meerpirat on Use resilience, instead of imprecision, to communicate uncertainty · 2020-10-23T08:42:08.581Z · EA · GW

if you have X% credence in a theory that produces 30% and Y% credence in a theory that produces 50%, then your actual probability is just a weighted sum. Having a range of subjective probabilities does not make sense!

Couldn't those people just not (yet) be able to sum/integrate over those ranges? I think about it like this: for very routine cognitive tasks, like categorization, there might be some rather precise representation of p(dog|data) in our brains. This information is useful, but we are not trained in consciously putting it into precise buckets, so it's as if we look at our internal p(dog|data)=70% through a really unclear lens and can't say more than "something in the range of 60-80%". With more training in probabilistic reasoning, we get better lenses and end up being Superforecasters who can reliably distinguish 1% differences.
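As a toy illustration of the "weighted sum" view in the quoted passage (the credences and theory outputs below are made-up numbers, not anything from the post):

```python
# Hypothetical: 60% credence in a theory implying p = 0.30,
# 40% credence in a theory implying p = 0.50.
credences = [0.6, 0.4]       # weights on each theory (assumed values)
theory_probs = [0.30, 0.50]  # probability each theory assigns

# The all-things-considered probability is just the mixture
p_overall = sum(c * p for c, p in zip(credences, theory_probs))
print(p_overall)  # 0.6*0.30 + 0.4*0.50 = 0.38
```

On this view, the "range" 30-50% collapses to a single point estimate once you commit to credences over the theories.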

Comment by meerpirat on Life Satisfaction and its Discontents · 2020-10-21T08:32:05.152Z · EA · GW

I found the claim of animals not making assessments of their lives interesting.

[One might] insist that all sentient creatures can make overall assessments of their lives.

This is not credible. To make progress, let’s try to be a bit more precise about where the line is. Plausibly, self-awareness is a necessary condition for being able to make an overall evaluation of one’s life—if a creature lacks a sense of itself, it cannot have a view on how its life is going.

I just skimmed that part of your paper, so I apologize if this point is moot given how you define having a view on one's life. What do you think about a hypothetical animal that has an internal system for tracking how well everything is going? For example, the animal might take accumulated information about the whole last year into account when considering an option to drastically change its circumstances. More concretely, an animal might decide to change territory because finding food and mates has been rough and the neighborhood is getting worse. This doesn't seem to entail self-awareness.