Posts

What do you think about bets of the form "I bet you will switch to using this product after trying it"? 2020-06-15T09:38:42.068Z · score: 8 (4 votes)
Increasing personal security of at-risk high-impact actors 2020-05-28T14:03:29.725Z · score: 18 (10 votes)

Comments

Comment by meerpirat on The end of the Bronze Age as an example of a sudden collapse of civilization · 2020-10-28T14:07:16.143Z · score: 6 (3 votes) · EA · GW

Wow, a volcano erupting, a famine, an earthquake, a pandemic, civil wars and raiding Sea Peoples, that's quite a task. Really interesting read, thanks for writing it! And the graph ended up really nicely. 

But this climatic approach does not explain everything. The civilizations in this part of earth already survived similar events in the past. For example the destruction of the Minoan civilization on Crete (which is in the middle of the eastern Mediterranean) was caused by another major volcanic eruption (Marinatos, 1939). However, all other civilizations survived mostly unharmed. This indicates that also the societal structure comes into play.

This argument didn't seem super watertight to me. There seems to be a lot of randomness involved, and causal factors at play that are unrelated to societal structure, no? For example, maybe the other eruption was a little bit weaker, or the year before yielded enough food to store? Or maybe the wind was stronger in that year or something? It would be interesting to hear why the mono- and/or duo-causal historians disagree with societal structure mattering.

However, we also have more resources and more knowledge than the people in the Bronze Age.

I wondered how much of an understatement this is. I have no idea how people thought back then, only the vague idea that the people who spent the most time trying to make sense of events like this were religious leaders who were highly confused about basically everything?

Lastly, your warnings of tipping points and the problems around the breakdown of trade reminded me of these arguments from Tyler Cowen, warning that the current trade war between China and the US and the strains from the current pandemic could lead to a sudden breakdown of international trade, too.

Comment by meerpirat on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-27T19:18:51.732Z · score: 2 (2 votes) · EA · GW

Really really cool idea, and great to see it executed already! :)

I'm a little bit unsure how I feel about the name. It's concise and informative, but it sounds a bit odd and un-fuzzy to my ears... hard to put into words. You probably put a lot of thought into this; I'd be interested in what you and others think.

Comment by meerpirat on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-27T13:42:04.047Z · score: 1 (1 votes) · EA · GW

Hmm, do you maybe mean "based on a real effect" when you say significant? Because we already know that 10 of the 55 tests came out significant, so I don't understand why we would want to calculate the probability of these results being significant. I was calculating the probability of seeing the 10 significant differences that we saw, assuming all the differences we observed are not based on real effects but on random variation, or basically 

p(observing differences in the comparisons so high that the t-test with a 5% threshold says 'significant' ten out of 55 times | the differences we saw are all just based on random variation in the data).

In case you find this confusing, that is totally on me. I find significance testing very unintuitive and maybe shouldn't even have tried to explain it. :') Just in case, chapter 11 in Doing Bayesian Data Analysis introduces the topic from a Bayesian perspective and was really useful for me.

Comment by meerpirat on Use resilience, instead of imprecision, to communicate uncertainty · 2020-10-23T08:42:08.581Z · score: 1 (1 votes) · EA · GW

if you have X% credence in a theory that produces 30% and Y% credence in a theory that produces 50%, then your actual probability is just a weighted sum. Having a range of subjective probabilities does not make sense!

Couldn't those people just not be able to sum/integrate over those ranges (yet)? I think about it like this: for very routine cognitive tasks, like categorization, there might be some rather precise representation of p(dog|data) in our brains. This information is useful, but we are not trained in consciously putting it into precise buckets, so it's as if we look at our internal p(dog|data)=70% through a really unclear lens and can't say more than "something in the range of 60-80%". With more training in probabilistic reasoning, we get better lenses and end up being Superforecasters who can reliably see 1% differences.
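
For concreteness, the weighted sum from the quoted passage can be spelled out in a couple of lines (the credences here are made-up numbers, just to illustrate):

```python
# Hypothetical numbers: 60% credence in a theory that says 30%,
# 40% credence in a theory that says 50%.
theories = [(0.6, 0.30), (0.4, 0.50)]  # (credence in theory, probability it assigns)

# The "actual" probability is the credence-weighted average of the theories' outputs.
p = sum(credence * prob for credence, prob in theories)
print(f"{p:.2f}")  # 0.38
```

The imprecise-lens picture above would then correspond to reporting a range around this number rather than the number itself.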

Comment by meerpirat on Life Satisfaction and its Discontents · 2020-10-21T08:32:05.152Z · score: 5 (3 votes) · EA · GW

I found the claim of animals not making assessments of their lives interesting.

[One might] insist that all sentient creatures can make overall assessments of their lives.

This is not credible. To make progress, let’s try to be a bit more precise about where the line is. Plausibly, self-awareness is a necessary condition for being able to make an overall evaluation of one’s life—if a creature lacks a sense of itself, it cannot have a view on how its life is going.

I just skimmed that part of your paper, so I apologize if this point is moot because of how you define having a view on one's life. What do you think about a hypothetical animal that has an internal tracking system for how well everything is going? For example, the animal might take accumulated information about the whole last year into account when considering an option to drastically change its circumstances. More concretely, an animal might decide to change territory because finding food and mates has been rough and the neighborhood is getting worse. This doesn't seem to entail self-awareness.

Comment by meerpirat on [Link] "Where are all the successful rationalists?" · 2020-10-18T01:30:46.098Z · score: 6 (4 votes) · EA · GW

I found Roko's Twitter thread in response interesting, arguing that

  • being very successful requires very high conscientiousness, which is very rare, so it's no surprise that a small group hasn't seen much of it
  • the rationalist community makes people focus less on what their social peer groups consider appropriate/desirable, which is key to being supported by them

Personally, what comes to mind here: I always felt uneasy about not having a semi-solid grasp of *everything* from the bottom up, and the rationalist project has been great for helping me in that regard.

Comment by meerpirat on Five New EA Charities with High Potential for Impact · 2020-10-17T06:58:07.349Z · score: 3 (3 votes) · EA · GW

That's really cool, thanks to all participants and those who made this happen!

In countries with developing health systems, unintended pregnancies can have severe health consequences. Unintended pregnancies lead to an estimated 96 million deaths due to unsafe abortions and other complications.

What does the 96 million mean? Over all of the past? I expected a yearly estimate that would inform us about the situation today and was confused by this huge number.

Comment by meerpirat on Buck's Shortform · 2020-10-16T07:14:55.106Z · score: 1 (1 votes) · EA · GW

Huh, interesting thoughts, have you looked into the actual motivations behind it more? I'd've guessed that there was little "big if true" thinking in alchemy and mostly hopes for wealth and power.

Another thought: I suppose alchemy was more technical than something like magical potion brewing and in that way attracted other kinds of people, making it more proto-scientific? Another similar comparison might be sincere altruistic missionaries who work on finding the "true" interpretation of the bible/koran/..., sharing their progress in understanding it and working on convincing others in order to save them.

Regarding pushing chemistry being easier than longtermism, I'd have guessed the big reasons why pushing scientific fields is easier are the possibility of repeating experiments and the profitability of the knowledge. Are there really longtermists who find it plausible we can only work on x-risk stuff around the hinge? Even patient longtermists seem to want to save resources and, I suppose, invest in other capacity building. Ah, or do you mean "it's only possible to *directly* work on x-risk stuff", vs. indirectly? It just seemed odd to suggest that everything longtermists have done so far has not affected the probability of eventual x-risk; at the very least it has set the longtermism movement in motion earlier, shaping its culture and thinking style via institutions like FHI.

Comment by meerpirat on Max_Daniel's Shortform · 2020-10-14T14:49:03.958Z · score: 5 (3 votes) · EA · GW

Thanks for putting your thoughts together, I only accidentally stumbled on this and I think it would make a great post, too.

I was really surprised that you give ~20% for TAI this century, and am still curious about your reasoning, because it seems to diverge strongly from your peers'. Why do you find inside-view based arguments less convincing? I've updated pretty strongly on the deep (reinforcement) learning successes of the last years, and on our growing computational and algorithmic-level understanding of the human mind. I've found AI Impacts' collection of inside- and outside-view arguments against current AI leading to AGI fairly unconvincing, e.g. the listed "lacking capacities" seem to me (as someone following CogSci, ML and AI Safety related blogs) to get a lot of productive research attention.

Comment by meerpirat on Against prediction markets · 2020-10-14T10:50:42.311Z · score: 1 (1 votes) · EA · GW

That's what I thought, too. Hiring the top 30 superforecasters seems much less scalable than a big prediction market like you describe, where becoming a superforecaster would suddenly become a valid career. I wonder if it's not too far off to expect some more technocratic government to set one up at some point in the coming years. I also wonder what the OP and others here would think about lobbying for prediction markets from an EA perspective.

Comment by meerpirat on Review and Summary of 'Moral Uncertainty' · 2020-10-08T08:18:56.561Z · score: 2 (2 votes) · EA · GW

Thanks for your summary and review, really enjoyed it.

And clearly this is a book many years in the making — for instance, MacAskill's BPhil thesis about moral uncertainty was published in 2000!

I had to double-check that, and I think it wasn't quite that long ago, probably 2010. But on the topic of taking a long time, I also found this in Will's CV from ~2015-2016:

Books
Under Contract: Moral Uncertainty, Oxford University Press.
First author; co-authored with Toby Ord and Krister Bykvist. Expected publication: Feb 2017

Comment by meerpirat on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-03T10:49:04.451Z · score: 1 (1 votes) · EA · GW

Not sure if that is what you asked for, but here is my attempt to spell this out, almost more to order my own thoughts:

  • assuming the null-hypothesis "There is no personality difference in a [personality trait Y] between people prioritizing vs. not prioritizing [cause area X].", the false-positive rate of the t-test is designed to be 5%
    • i.e. even if there is no difference, due to random variation we expect differences in the sample averages anyway and we only want to decide "There is a difference!" if the difference is big enough/unlikely enough when assuming there is no difference in reality
    • we decide to call a difference "significant" if the observed difference is less than 5% likely due to random variation only
  • so, if we do one hundred t-tests where there is actually no difference in reality, we expect, by random variation alone, 5% of them to show significant differences in the sample averages
    • same goes for 55 t-tests, where we expect 55*5%=2.75 significant results if there is no difference in real life
  • so seeing 10 significant results instead is very unlikely when we assume the null hypothesis
    • how unlikely can be calculated with the cumulative distribution function of the binomial distribution: 55 repetitions with p=5% gives a probability of 0.04% that 10 or more tests would be significant due to random chance alone
  • therefore, given the assumptions of the t-test, there is a 99.96% probability that the observed personality differences are not all due to random variation
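
The tail probability from the binomial CDF step can be checked with a few lines of Python (a quick sketch; `binom_tail` is my own helper name):

```python
from math import comb

def binom_tail(k_min, n, p):
    """P(X >= k_min) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# 10 or more significant results out of 55 tests, each with a 5% false-positive rate
print(f"{binom_tail(10, 55, 0.05):.6f}")  # roughly 0.0003, i.e. ~0.04%
```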

Comment by meerpirat on How should we run the EA Forum Prize? · 2020-09-30T15:37:31.535Z · score: 7 (4 votes) · EA · GW

I really like this idea. CEA could let people it trusts use the Forum Prize money to announce prizes for exploring a topic or a specific question and writing a post about it.

Why this might be a good idea

1. incentivize useful content (and others could chime in and add to the prize pool, signalling higher usefulness)

2. direct the attention of people looking for useful work they might do as a side project

Some possible downsides

1. Draws in people that are not primarily altruistically motivated

  • could be an upside, too, if they do great work and get drawn in, as I expect most smart people will be happy to be able to contribute to improving the world

2. Disagreements about fulfilling the goal criteria might cause frustration

3. Increase the noise in the forum, i.e. more posts of lower quality


Skimming a bit through the forum, this idea has been tried a few times! But less often than I'd have expected if this were a good idea. Some examples:

I just found out that there is a tag on LW for active and closed bounties, listing 4 and 24 prizes in total. My impression is that experimenting with bounties on the EA forum would be a good idea and I wonder what others here think about it.

Comment by meerpirat on Election scenarios · 2020-09-26T10:32:34.573Z · score: 2 (2 votes) · EA · GW

Though surely you both would also agree that the stability of democracy in the United States is much more valuable, both in the short and long term, compared to, say, Brazil's or India's, no? I don't know what a reasonable allocation of attention is, but headcount seems to be just one of many factors, alongside things like contribution to scientific and technological innovation, wealth creation, military power, political influence, cultural influence, and ethical and human rights standards.

Of course in general US politics seem much less neglected. But then again, there are also comparative advantages because so many EAs are US citizens and consequently have more knowledge of and influence over that system. And the non-EAs who pay attention to US politics do not necessarily have the same goals as EAs (making their tribe win vs. averting instability).

Comment by meerpirat on What are some low-information priors that you find practically useful for thinking about the world? · 2020-09-23T19:54:14.568Z · score: 1 (1 votes) · EA · GW

Hm, but if we don't know anything about the possible colours, the natural prior seems to me to give all colours the same likelihood. It seems arbitrary to group a subsection of colours under the label "other" and pretend it should be treated as a hypothesis on equal footing with the others in your given set, which are single colours.

Yeah, Jeffreys prior seems to make sense here.

Comment by meerpirat on What are some low-information priors that you find practically useful for thinking about the world? · 2020-09-22T11:51:46.497Z · score: 1 (1 votes) · EA · GW

Thanks a lot for the pointers! Greaves' example seems to suffer the same problem, though, doesn't it?

Suppose, for instance, you know only that I am about to draw a book from my shelf, and that each book on my shelf has a single-coloured cover. Then POI seems to suggest that you are rationally required to have credence ½ that it will be red (Q1=red, Q2 = not-red; and you have no evidence bearing on whether or not the book is red), but also that you are rationally required to have credence 1/n that it will be red, where n is the ‘number of possible colours’ (Qi = ith colour; and you have no evidence bearing on what colour the book is).)

We have information about the set and distribution of colors, and assigning 50% credence to the color red does not use that information.

The cube factory problem does suffer less from this, cool!

A factory produces cubes with side-length between 0 and 1 foot; what is the probability that a randomly chosen cube has side-length between 0 and 1/2 a foot? The classical intepretation’s answer is apparently 1/2, as we imagine a process of production that is uniformly distributed over side-length. But the question could have been given an equivalent restatement: A factory produces cubes with face-area between 0 and 1 square-feet; what is the probability that a randomly chosen cube has face-area between 0 and 1/4 square-feet? Now the answer is apparently 1/4, as we imagine a process of production that is uniformly distributed over face-area.

I wonder if one should simply model this hierarchically, assigning equal credence to the idea that the relevant measure in cube production is side length or volume. For example, we might have information about cube-bottle customers who want to fill their cubes with water. Because the customers vary in how much water they want to fit in their cube bottles, it seems to me that we should put more credence on partitioning by volume. Or, if we had some information that people often want to glue the cubes under their shoes to appear taller, the relevant measure would be side length. Currently, we have no information like this, so we should assign equal credence to both measures.
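
A minimal sketch of what I mean (the 50/50 credences and the volume parameterisation are just illustrative assumptions, not from the original problem statement):

```python
# Two candidate hypotheses about what the factory's process is uniform over,
# each given equal credence; we then average the answer each hypothesis implies.
p_given_uniform_side = 0.5       # side ~ Uniform(0, 1): P(side <= 1/2) = 1/2
p_given_uniform_volume = 0.5**3  # volume ~ Uniform(0, 1): side <= 1/2  <=>  volume <= 1/8

credence = {"side": 0.5, "volume": 0.5}
p = credence["side"] * p_given_uniform_side + credence["volume"] * p_given_uniform_volume
print(p)  # 0.3125
```

With more information about the customers, the credences in the two parameterisations would shift, and the mixture answer would shift with them.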

Comment by meerpirat on What are some low-information priors that you find practically useful for thinking about the world? · 2020-09-22T09:59:53.578Z · score: 2 (2 votes) · EA · GW

I'm confused about the partition problem you linked to. Both examples in that post seem to be instances where, in one partition, available information is discarded.

Suppose you have a jar of blue, white, and black marbles, of unknown proportions. One is picked at random, and if it is blue, the light is turned on. If it is black or white, the light stays off (or is turned off). What is the probability the light is on?
There isn’t one single answer. In fact, there are several possible answers.
[1.] You might decide to assign a 1/2 probability to the light being on, because you’ve got no reason to assign any other odds. It’s either on (50%) or off (50%).
[2.] You could assign the blue marble a 1/3 probability of being selected (after all, you know that there are three colors). From this it would follow that you have a 1/3 chance of the light being on, and 2/3 chance of the light being off.

Answer 1. seems to simply discard information about the algorithm that produces the result, i.e. that it depends on the color of the marbles. The same holds for the other example in the blogpost, where the information about the number of possible planets is ignored in one partition.

Comment by meerpirat on The Cost Of Wasted Motion · 2020-09-22T09:09:38.794Z · score: 7 (2 votes) · EA · GW

Thanks for writing this! I notice that this post seems to have received some downvotes. I remember that when I read it for the first time I felt a little irritated, because the post is not clear about whether it should be thought of (as I suppose) as a summary of the concept of "wasted motion" from the Replacing Guilt series, or as an independently reached original contribution.

Comment by meerpirat on Getting Excited about Efficiency · 2020-09-22T08:07:09.992Z · score: 8 (3 votes) · EA · GW

Thanks for your writing, Lynette. This really resonated with me and put me in a delighted mood to get some work done!

Comment by meerpirat on How much does it cost to save a life in the mediterranean sea? · 2020-09-16T10:59:46.817Z · score: 5 (3 votes) · EA · GW

Hm, I'm surprised you're surprised. It's noteworthy and sad that the murder rate in the US is so high. I'd also guess the overall murder rate is not representative of the safety of refugees, and that murder rates might be underestimated in countries like Libya, which is literally in a civil war right now. Have you read the FAQ? Quoting:

Libya is known as a “failed state”, particularly since the start of the civil war. The German Federal Foreign Office writes (as of March 2019) of Libya: “The population and foreign refugees and migrants suffer criminality, kidnappings, irregular detention, arbitrary executions, torture and oppression of freedom of speech by the various actors due to the prevailing lack of rights.”

I unfortunately haven't found numbers when googling "murder rates in refugee camps", but here are some more quotes from a DW article last year that gave me a strong impression that those places are clearly not safe:

According to Amnesty, the already calamitous conditions in Libyan camps worsened since the outbreak of fighting in early April; those detained were caught between the warring fronts and were left without food for days. On July 3, more than 50 refugees and migrants were killed during an airstrike on the Tajoura prison camp in Tripoli.
According to Julien Raickmann, the head of Doctors Without Borders in Libya, people in the camps continue to die from hunger and disease and the situation is "catastrophic."
Amnesty has reported instances of torture, serious violence and exploitation — including through sexual means — and forced labor. Amnesty also documented cases of people being murdered while trying to escape. Primarily, however, militias and traffickers are using refugees to make money by threatening them with violence or death — in some cases by making torture videos to send to their families.

Comment by meerpirat on How much does it cost to save a life in the mediterranean sea? · 2020-09-12T15:10:03.706Z · score: 3 (2 votes) · EA · GW

Just skimmed the FAQ from the discussed organization about returning rescued people to Africa:

  • Sea rescue conventions and international human rights law: people rescued at sea must be brought to a safe place, and these African countries clearly disqualify (at least the organization says so about Tunisia and Libya)
  • African countries refuse to accept the rescued people; at least Tunisia has done so repeatedly
  • African countries don't have a procedure for taking asylum seekers; at least Tunisia doesn't
  • In Libya they'd be put in detention centers, where they'd lose their money and would return to the Mediterranean again afterwards

Comment by meerpirat on How much does it cost to save a life in the mediterranean sea? · 2020-09-11T12:21:51.542Z · score: 3 (3 votes) · EA · GW

This situation is such a tragedy and I appreciate that you looked at it with an EA eye. I found it surprising that the cost per saved life might be that low, thanks for sharing your calculation.

I find the political implications complicated here. In an 80,000 Hours interview about open borders, Peter Singer seemed very worried that a nation's sense of controlling its border might strongly determine support for right-wing parties and politicians. I wonder if refugee boats in the Mediterranean Sea also contribute to this, and how this should properly be weighed in. If this is indeed a significant problem, are there options that avoid this effect?

Peter Singer: Well, it’s not numbers, I agree it’s not [the numbers of immigrants that count]. But it is the sense of losing control of the borders. I think that’s the common thing [that leads to catastrophic events like Brexit and the vote of Trump]. And the Syrian refugee crisis has had an effect in Europe. It had no effect on the United States. It was minuscule numbers. And it wasn’t the focus. The focus there was people coming across the Mexican border, and that’s why Trump wants to build a wall, et cetera.
And it was the sense that we are losing control of our nation. And going back to a little earlier, Australia went through this as well with the boat people, the so called ‘asylum seekers’ coming across in small boats from Indonesia, which again helped to elect a conservative government in the 90s, the Howard government, rather than labor governments during that era, and maybe even contributed to the re-election of the Morrison government just recently which is also a very bad government on climate change.
So you’re right that it’s perceptions and the perceptions don’t depend on numbers, but they do depend on, do we have control of our borders, right? That’s the issue. And of course, if you really advocate open borders, you’re saying there should be no control of the borders, and that’s going to frighten people.

Comment by meerpirat on Are there any other pro athlete aspiring EAs? · 2020-09-10T09:47:47.630Z · score: 5 (4 votes) · EA · GW

My intuition was that pro athletes have more "cognitive horsepower" than average (and are much more able/willing to work hard, which also seems like a really valuable trait). I searched "average iq of athletes" on Google Scholar and found this meta-analysis from 2019 that looks at cognitive function in elite vs. non-elite athletes, seemingly supporting this. From the abstract:

An extraordinary physiological capacity combined with remarkable motor control, perception, and cognitive functioning is crucial for high performance in sports. [...] Moreover, a growing area of research evolved in the recent past that is particularly concerned with the basic cognitive functions by means of neurocognitive tests in experts and elite athletes. The aim of this meta‐analysis (k = 19) is to quantify differences among experts and nonexperts as well as elite athletes and non‐elite athletes. In addition, it aims to assemble and compare previous research and analyze possible differences in cognitive functions depending on age, skill level, and used cognitive tasks. Overall, the mean effect size was small to medium (r = 0.22), indicating superior cognitive functions in experts and elite athletes.

Comment by meerpirat on How have you become more (or less) engaged with EA in the last year? · 2020-09-09T10:30:56.524Z · score: 48 (18 votes) · EA · GW

I've been really excited about EA since I found out about it ~6 years ago. I think I always had the underlying impression that I'm not smart enough to contribute anything except by donating and being a welcoming and generally knowledgeable cheerleader in my local group. Maybe two years ago I started realizing that this mindset, while keeping me from feeling bad about making mistakes, was also keeping me from growing. Since then I try to push myself to make up my own mind more, and I started taking part in discussions on the EA Forum, mostly when I feel like I wouldn't increase the noise too much with my comments, e.g. when nobody else has commented, or when I feel strongly that my comment adds something. (Reading this, I feel like the true story is messier, but it describes a facet of how my engagement changed.)

Comment by meerpirat on An argument for keeping open the option of earning to save · 2020-09-09T09:51:03.808Z · score: 1 (1 votes) · EA · GW

Thanks for sharing your thoughts, I never thought about this question before and it challenges my intuitive outlook.

Larks mentioned a factor that seems central to me and that I don't know how to fit into your argument:

Diminishing returns due to running out of good people to employ.

My gut's perspective is this: by investing resources to employ/engage/convince smart people today, we are investing in "capacity building", and that's key for long-lasting impact. OPP's excellent "direct work" contributions to both long- and short-term cause areas will pay off immensely by drawing in more excellent people, further growing the pool of smart minds and resources available to longtermist causes. So as long as there are a lot of EA- and longtermism-sympathetic smart minds out there, we should try to reach them with excellent public work that they naturally would want to be part of.

Comment by meerpirat on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T11:44:43.212Z · score: 22 (5 votes) · EA · GW

Two other reasons to be "edgy" came to my mind:

Signalling frank discussion norms: when the host of a discussion now and then uses words and phrases that would be considered insensitive among a general audience, people in the discussion can feel permitted to talk frankly without having to worry about how the framing of their argument might offend anybody.

Relatedly, I noticed feeling relieved when a person higher in status made a "politically incorrect" joke. I felt like I could relax some part of my brain that worries about saying something that in some context could cause offense and me being punished socially (e.g. being labeled "problematic", which seems to be happening much quicker than I'd like, also in EA circles).

Only half joking: if somebody leaked the chats I have had with my best friend over the years, there is probably something in there to deeply offend every person on Earth. So maybe another reason to be "edgy" is just that it's fun for some people to say things in a norm-violating way? I remember laughing out loud at two of Hanson's breaches of certain norms. Some part of me is worried about how this makes me look here. I think I laughed because it violated some norm in a surprising way (which would relate it to signalling intelligence), and not because I didn't find the topic serious or wasn't interested in serious discussion. I don't want to imply this was intended by Hanson, though. But I can imagine that it draws in some people, too.

Comment by meerpirat on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T17:58:12.986Z · score: 25 (10 votes) · EA · GW

I'm glad you came back to look at this discussion again because I found your comments here (and generally) really valuable. I refrained from upvoting your comment because you called the comparison "pretty ridiculous". I would feel attacked if you called my reasoning ridiculous and would be less able to constructively argue with you.

I think you are right in pointing out that some topics are much more sensitive to many more people, and EAs being more careful around those topics makes our community more welcoming to more people. That said, I understood vaniver's point to be to take an example where most people reading it would not feel like it is a sensitive topic, and *even there* you might upset some people (e.g. if they stumble on a discussion comparing the death of five vs. one). So the solution should not be to punish/deplatform somebody who discussed a topic in a way that was upsetting for someone, and going forward stop people from thinking publicly when touching potentially upsetting topics, but something else.

Comment by meerpirat on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T12:06:03.225Z · score: 13 (5 votes) · EA · GW

Thanks for the pushback, I'm still confused and it helped me think a bit better (I think). What do you think about the idea that the issue revolves around what Kelsey Piper called competing access needs? I explained how I think about it in this comment. I feel like I want to protect edgy think-aloud spaces like Hanson's. I feel like I benefit a lot from them, and (not being on any EA inside) I'm already excluded from many valuable but potentially offending EA think-aloud spaces, because people are not willing to bear the costs like Hanson does.

Comment by meerpirat on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T11:37:01.283Z · score: 1 (1 votes) · EA · GW

One idea in the direction of making discussion norms explicit that just came to my mind are Crocker's rules.

By declaring commitment to Crocker's rules, one authorizes other debaters to optimize their messages for information, even when this entails that emotional feelings will be disregarded. This means that you have accepted full responsibility for the operation of your own mind, so that if you're offended, it's your own fault.

I've heard that some people are unhappy with those rules. Maybe because they seem to signal what Khorton alluded to: "Oh, of course I can accommodate your small-minded irrational sensitivities if you don't want a message optimized for information". I know that they are/were used at the LessWrong Community Weekends in Berlin, where you would wear a "Crocker's rules" sticker on your nametag.

Comment by meerpirat on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T11:32:07.980Z · score: 4 (3 votes) · EA · GW

Thank you for writing it and keeping this up. I think it's really valuable that people share the discomfort they feel around the way some people discuss. I wonder if Kelsey Piper's discussion of competing access needs and safe spaces captures the issue at hand.

Competing access needs is the idea that some people, in order to be able to participate in a community, need one thing, and other people need a conflicting thing (source)
  • For some people it is really valuable to have a space where one can discuss sensitive topics without caring about offense, where taking offense is discouraged because it would hinder progressing the arguments. Maybe even a space where one is encouraged to let one's mind go to places that are uncomfortable, to develop one's thinking around topics where social norms discourage you to go.
  • For others, a space like this would be distressing, depressing and demotivating. A space like this might offer a few insights, but they seem not worth the emotional costs and there seem to be many other topics to explore from an EA perspective, so why spend any time there.

I also hope that it is very easy for people to avoid spaces like this at EA conferences, e.g. to avoid a talk by Robin Hanson (though from the few talks of his that I saw, I think his talks are much less "edgy" than the discussed blog posts). I wonder if it would be useful to tag sessions at an EA conference that would belong in the described space, or if people already mostly avoid sessions they would find discomforting.

Comment by meerpirat on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-28T13:13:19.708Z · score: 36 (21 votes) · EA · GW

Thanks for writing about this. This incident bothered me and I really appreciate your thoughts and find them clarifying. I also tend to feel really frustrated with people finding offense in arguments (and notice this frustration right now), just to flag this here.

To the extent that I support some of Hanson’s ideas and want to see them become better-known, I am annoyed that this is less likely to happen because of Hanson’s missteps.

I found it improper that you call these "missteps", as if he made mistakes. As you said, openly discussing sensitive topics will cause offense if you don't censor yourself a lot. You mention that his colleagues do a better job at making controversial ideas more palatable, but again, as you suggested, maybe they actually spent more time editing their work. This seems like a tradeoff to me, and I'm not convinced that Hanson is making missteps and that we should encourage him to change how he runs his blog to have a more positive impact. Not saying this is true for Hanson, but for some thinkers it might be draining to worry about people taking offense at their thoughts. I'm worried about putting pressure on an important thinker to direct mental resources to things other than having smart thoughts about important topics.

Comment by meerpirat on Which properties does the EA movement share with deep-time organisations? · 2020-08-28T12:28:32.642Z · score: 2 (2 votes) · EA · GW
Do you mean with your comment that it would be ok (or even good) if EA ceases to exist if the reason would be that EA ideas were widespread?

That's what I thought. For example, we don't have democratic movements in a lot of countries because democratic values are already widespread. That's not a failure of the democracy movement, but a sign that it succeeded.

Comment by meerpirat on Which properties does the EA movement share with deep-time organisations? · 2020-08-28T09:41:13.747Z · score: 4 (4 votes) · EA · GW

Thanks, I found this really informative. I like the term "deep-time organisation".

EA has neither a monopoly on anything, nor social culturally supported dissemination.

I wonder if EA orgs or the EA communities currently are the single supplier of some goods, for example

  • (relatively) cause-neutral career advice,
  • incubation of impact-focused charities,
  • careers in global priorities research,
  • being part of a utilitarian-ish community (where else would I go other than to the local EA chapter?)

But in all those cases, it seems like a big win if there were competition from outside, right? This would happen on the path where EA ideas become more widespread, so it wouldn't hurt if the likelihood of ceasing to exist went up due to this factor.

Comment by meerpirat on "Good judgement" and its components · 2020-08-26T09:10:55.241Z · score: 5 (3 votes) · EA · GW

Thanks for writing this up, I found it very stimulating.

(One way that models can be useful without requiring any trust is giving you clues about where to look in building up your own models.)

Probably an edge case, but I wonder if an adversary could purposefully divert your attention away from important considerations. Thinking about it, I actually remember doing something like this in an adversarial board game, where I used "helpful" clues to direct the attention of somebody I was plotting against.

Another thought regarding incentives for good judgement came up: People seem to automatically develop good judgement in areas where they get feedback and have "skin in the game", for example when deciding how to get a tasty meal or with whom to talk about a personal problem. So I wondered how much attention we should put on the surrounding incentives to develop good judgement. Will we get feedback, e.g. from peers who look at our reasoning, or from failed predictions because we develop the habit of making forecasts? Are we betting on our beliefs and making our track record public? There is probably much more to say about how we could better incentivize good judgement.

Comment by meerpirat on Common ground for longtermists · 2020-08-18T08:33:43.720Z · score: 2 (2 votes) · EA · GW
This has sometimes led to tensions between effective altruists, especially over the question of how valuable pure extinction risk reduction is.

Sorry to hear that. I think your post is a very good idea that might usefully be replicated for any tense disagreement within EA.

I was wondering a bit what might lead to tensions here, without having any inside view into the longtermist community. Your essay seems to suggest that proponents of both philosophies might forget, or not immediately see, the room for cooperation, and slide into a competitive spirit over EA resources.

I've also heard somewhere that the negative utilitarian "If you could kill everybody painlessly in their sleep, would you do it?" thought experiment can give an initial impression of uncooperative attitudes, like one would work on opposing strategies.

Personally, I also vaguely remember reading an expected value estimate for the long-term future by Brian Tomasik and finding it a bit depressing. I can imagine that some people worry that a pessimistic outlook on the future might be draining. For example, I understood the "existential hope" work of the FLI as a reaction to a worry like that.

Comment by meerpirat on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T21:34:40.503Z · score: 6 (4 votes) · EA · GW

Completely out of my depth here, but I wondered if Robin Hanson's "Age of Em" would be considered a new insight for longtermists, along the lines of making the case that brain emulations could also "be a technology with impacts comparable to the Industrial Revolution, and [whose] impacts may not be close-to-optimal by default".

Comment by meerpirat on Forecasting Newsletter: July 2020. · 2020-08-14T09:30:22.021Z · score: 2 (2 votes) · EA · GW

Thanks, I still find this very useful. Haven't seen Ought's Ergo before, really cool.

Comment by meerpirat on APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament · 2020-08-14T09:13:22.894Z · score: 6 (4 votes) · EA · GW

Thanks for the writeup. I think this is really impressive and inspiring. The high-level advice on top is also great.

I am unconvinced the EA community gets policy and has prioritised the correct policy areas. [...] (I will hopefully write more on all of this shortly)

I think more thoughts along this line would be super useful.

Do you have more thoughts on what personal traits would indicate a great fit for pulling something like this off in another country? Besides a solid understanding of the political arena, I imagine, for example, that one would have to be exceptionally sociable.

Comment by meerpirat on EA Meta Fund Grants – July 2020 · 2020-08-13T08:50:58.407Z · score: 6 (4 votes) · EA · GW

Anonymized, aggregate thoughts sound like the perfect solution, and thanks for the pointers!

Comment by meerpirat on EA Meta Fund Grants – July 2020 · 2020-08-12T19:40:16.305Z · score: 10 (4 votes) · EA · GW

Thanks for the work and the concise summaries! I’m really happy with the EA funds.

While risks and reservations for these organizations have been taken into account, we do not discuss them below in most cases.

Reading this I thought that it’s unfortunate that this really valuable information is not communicated as well. Or does this stem from the private and/or person-affecting nature of those reservations? For example, I think I have little intuition about why some projects are considered to have some downside risk and are therefore better not funded/undertaken. Reading more about these kinds of thoughts could be useful.

Comment by meerpirat on Super-exponential growth implies that accelerating growth is unimportant in the long run · 2020-08-11T17:35:45.847Z · score: 6 (4 votes) · EA · GW

In case you missed it, Leopold Aschenbrenner wrote a paper on economic growth and existential risks, suggesting that future investments in prevention efforts might be a key variable that may in the long run offset increased risks due to increasing technological developments.

https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1

Comment by meerpirat on BenMillwood's Shortform · 2020-07-10T15:34:39.519Z · score: 1 (1 votes) · EA · GW

After reading this I thought that a natural next step for the self-interested rational actor that wants to short nuclear war would be to invest in efforts to reduce its likelihood, no? Then one might simply look at the yearly donation numbers of a pool of such efforts.

Comment by meerpirat on Will AGI cause mass technological unemployment? · 2020-06-29T06:36:30.440Z · score: 2 (2 votes) · EA · GW

Hmm, might the lawn mowing analogy break down with increasing speed difference and dependencies? Imagine if the lawn had to be ready for Tiger to play golf and Tiger being 1000 times faster than Joe.

Not sure if related, but I looked up Robin Hanson‘s predictions of the role of humans in the Age of Em, where brain emulations (ems) would become feasible and increasingly perform most of the economic activities on Earth. Summary of chapter 27 on the book website:

As humans are marginal to the em world, their outcomes are harder to predict. Humans can’t earn wages, but might become like retirees today, who we rarely kill or steal from. The human fraction of wealth falls, but total human wealth rises fast. Humans are objects of em gratitude, but not respect.

Unfortunately, I don’t recall how he arrived at that conclusion, maybe somebody else can chime in.

Comment by meerpirat on Differential technological development · 2020-06-28T20:32:48.167Z · score: 0 (3 votes) · EA · GW

Thanks, I also think writing this was a good idea.

Growth can’t continue indefinitely, due to the natural limitations of resources available to us in the universe.

This reminded me of arguments that economic growth on Earth would necessarily be diminished by limits of natural resources, which seem to forget that with increasing knowledge we will be able to do more with fewer resources. E.g. compare how much more value we can get out of a barrel of oil today compared to 200 years ago.

Comment by meerpirat on How should we run the EA Forum Prize? · 2020-06-25T08:25:12.612Z · score: 1 (1 votes) · EA · GW

Me too. Maybe a normal downvote by a very high karma member? And I also remember one instance where someone accidentally clicked on downvote without noticing.

Comment by meerpirat on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-19T15:14:57.871Z · score: 5 (3 votes) · EA · GW

Thanks for continuing the series, this is one of the most stimulating philosophical issues for me.

After the AI asks Bob if it should do what an ideally informed version of him would want, Bob replies:

Bob: Hm, no. [...] I don’t necessarily care about my take on what’s good. I might have biases. No, what I’d like you to do is whatever’s truly morally good; what we have a moral reason to do in an… irreducibly normative sense. I can’t put this in different terms, but please discount any personal intuitions I may have about morality—I want you to do what’s objectively moral.

I think that part paints a slightly misleading picture of (at least my idea of) moral realism. As if the AI shouldn't mostly study humans like Bob when finding out what is good in this universe, and instead focus on "objective" things like physics? Logic? My Bob would say:

Hm, kinda. I expect my idealized preferences to have many things in common with what is truly good, but I'm worried that this won't maximize what is truly good. I might, for example, carry around random evolutionary and societal biases that will waste astronomical resources on things of no real value, like my preference for untouched swaths of rainforest. Maybe start with helping us understand what we mean by the qualitative feeling of joy, there might be something going on that you can work with, because it just seems like something that is unquestionably good. Vice versa with pain and sorrow and suffering, those seem undeniably bad. Of course I'm open to being convinced otherwise, but I expect there's a there there.
Comment by meerpirat on EA Forum feature suggestion thread · 2020-06-17T14:39:57.195Z · score: 13 (10 votes) · EA · GW

I'd like to have the option to make polls within a post. I recently wrote a short question post to see if an idea seems promising and I got a couple of upvotes and no comments. Having the option to get quick and cheap feedback from the community would've been useful.

Comment by meerpirat on What do you think about bets of the form "I bet you will switch to using this product after trying it"? · 2020-06-15T21:40:32.957Z · score: 1 (1 votes) · EA · GW

One of the benefits of the proposed scheme is that it’s a costly signal that I expect to actually be not costly at all. And from the perspective of others it’s also a win-win („Either I win the bet and waste some time, or I lose a bit of money but will improve my productivity/wellbeing/etc“).

Comment by meerpirat on What do you think about bets of the form "I bet you will switch to using this product after trying it"? · 2020-06-15T13:06:16.676Z · score: 3 (2 votes) · EA · GW

True. I expect this to matter less with the amount of money I had in mind (on the order of 50€), given that I expect marginal improvements in something like note-taking will seem like a big win to most EAs.

A friend of mine had the idea of donating the money to a preferred EA charity instead of paying out, which might further reduce those incentives (at least it would for my not-quite-there-yet lizard brain).

Comment by meerpirat on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-12T15:36:24.401Z · score: 1 (1 votes) · EA · GW
What does it entail when you say that your subjective experience "is real"?

Hmm, that my introspective observation at time 14:04:38 EST corresponds to something that exists? (I notice feeling like I'm talking in circles, sorry) Say I tell you that I just added two numbers in my head. I believe this is a useful description of some aspect of my cognitive processes, and it is possible to find shared patterns in other cognitive systems when they do addition.

I feel like Alice is uncharitable in this conversation. Lack of sharp boundaries is, in my mind, no strong argument for denying the existence of a claimed aspect of reality. Okay, this also feels uncharitable, but it felt like Alice was arguing that the moon doesn't exist because there are edge cases, like the big rock that orbits Pluto. I wished she would make the argument why, in this particular case, the observation of Bob does not correspond to anything that exists. Bob would say

I think I notice something that is real but hard to grasp, has a character of 'wanting to be ended', and which sounds a lot like what other people talk about when they are hurt. I've observed this "experience" many times now.

Then Alice would maybe say

I can relate to the feeling like there is something to be explained. Like the thing that you call "your experience" has certain features that correspond to something. For example like the different color labels for apples and bananas correspond to something real: a different mix of photons being emitted. I claim that what you call a qualitative experience does not correspond to anything real, there is no pattern of reality where it's useful to call it being conscious. Now let's go meditate for a year and you will realize that you were totally confused about this part of your observations.