Comments

Comment by markus_over on A small observation about the value of having kids · 2020-01-24T16:35:38.422Z · score: 1 (1 votes) · EA · GW

Most EAs I know are not planning to have children, as far as I can tell (which I admit is not very far - I haven't explicitly spoken about the topic with most of them). Even if they were, it seems like a really slow and expensive way to build a movement. It may be one factor among others for EAs considering building a family, but I doubt it is decisive for a considerable number of individuals.

If we simplify the possible outcomes to two scenarios - a) children raised by EAs will overwhelmingly become EAs themselves, or b) this effect is much weaker and very few children will end up sharing the same values - I'd argue the value of information appears to be low.

Firstly, it seems highly unlikely to me that having children is anywhere near the most effective thing an EA can do. It is of course fine to make that plan for other, personal reasons, but I doubt many EAs reach the conclusion "the best use of my time on this planet in my pursuit to make this world a better place is to raise my own altruistic children". Growing the movement can certainly be done more quickly without first growing your own little humans.

So given that assumption, scenario a), i.e. the "positive" outcome, could actually turn out harmful in a sense, as it might convince a few additional EAs to have children who otherwise wouldn't. Scenario b), on the other hand, would do the opposite and possibly keep a few EAs from having children who, without that evidence, would have done so. In both cases it seems we're better off simply assuming the children we have will not turn into EAs, as opposed to spending decades and hundreds of thousands of dollars on an experiment conducted to gain some value of information.

Of course, this line of argumentation only works if you agree with my assumption that having children is a very ineffective way to grow a movement.

Comment by markus_over on Physical Exercise for EAs – Why and How · 2020-01-17T18:52:29.286Z · score: 1 (1 votes) · EA · GW

Thanks for this! Very useful.

One tiny nitpick:

Marie commutes daily by bicycle to the chemistry lab where she works.

Sorry for taking things a little too literally here, but most people (that I know of) work 5 days a week, have 2-6 weeks off per year, and call in sick something like 5-15 days per year, plus there may be some nationwide holidays on top. That leaves us with a range of around 210-245 actual commuting days, or 57-67% of all days of the year. There are also likely days where rain/snow/wind cause Marie to get to work some other way, so effectively even somebody who pretty much always takes the bike to work will still end up at something like 50% of all days, but would probably tend to describe it as "every day".
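As a minimal sanity check, the range above can be reproduced like this (all figures are the comment's stated assumptions, not measured data):

```python
# Back-of-the-envelope estimate of actual commuting days per year.
# Vacation, sick-day, and holiday figures are assumptions, not data.
workdays = 52 * 5  # ~260 weekdays per year

best = workdays - 10 - 5        # 2 weeks vacation, 5 sick days
worst = workdays - 30 - 15 - 5  # 6 weeks vacation, 15 sick days, ~5 holidays

print(worst, best)  # 210 245
print(int(worst / 365 * 100), int(best / 365 * 100))  # 57 67 (% of all days)
```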

I'm not so much intending to criticize the example here, just pointing to the fact that such simplification makes it rather easy to delude oneself. I thought of myself as someone who takes the bike to work "almost always", yet when I actually tracked it, I only got to around 100 days per year, which was somewhat surprising.

Maybe the recommendations already take this into account however, and exceptions (even a lot of them, as naturally tend to happen) are tolerable as long as "the typical week" goes according to plan?

Comment by markus_over on Applied Rationality Workshop in Münster, Germany · 2019-09-24T12:54:34.399Z · score: 4 (4 votes) · EA · GW

Neat! The workshop in Cologne was quite good, and this one apparently will even include resolve cycles and hamming circles which I'm very much in favor of (and which as far as I remember weren't part of the Cologne workshop).

I'd probably recommend participating to anyone who lives even remotely close and feels like they could benefit from marginal improvements in their applied rationality, which realistically is probably almost everyone. Plus you'll surely get to know a lot of great people.

Thanks for organizing this!

Comment by markus_over on Alien colonization of Earth's impact the the relative importance of reducing different existential risks · 2019-09-06T17:43:45.346Z · score: 1 (1 votes) · EA · GW

I'm not that deep into AI safety myself, so keep that in mind. But that being said, I haven't heard that thought before and basically agree with the idea of "if we fall victim to AI, we should at least do our best to ensure it doesn't end all life in the universe" (which is basically how I took it - correct me if that's a bad summary). There certainly are a few ifs involved though, and the outlined scenario may very well be unlikely:

  • probability of AI managing to spread through the universe (I'd intuitively assume that from the set of possible AIs ending human civilization the subset of AIs also conquering space is notably smaller; I may certainly be wrong here, but it may be something to take into account)
  • probability of such an AI spreading far enough and in a way as to be able to effectively prevent the emergence of what would otherwise become a space colonizing alien civilization
  • probability of alien civilizations existing and ultimately colonizing space in the first place (or at least developing the potential to do so, and realizing it were it not for our ASI preventing them)
  • probability of aliens having values sufficiently similar to ours

I guess there's also a side to the Fermi paradox that's relevant here - it's not only that we don't see alien civilizations out there, we also don't see any signs of an ASI colonizing space. And while there may be many explanations for that, we're still here, and seemingly on the brink of becoming/creating just the kind of thing an ASI would instrumentally like to prevent, which is at least some evidence that such an ASI does not yet exist in our proximity, which again is minor evidence that we might not create such an ASI either.

In the end I don't really have any conclusive thoughts (yet). I'd be surprised though if this consideration were a surprise to Nick Bostrom.

Comment by markus_over on Local Community Building Funnel and Activities - EA Geneva · 2019-09-01T10:11:09.476Z · score: 1 (1 votes) · EA · GW

Hi Konrad,

given your comment is now a year old, could you very briefly provide an update on whether anything significant has changed since then (maybe there are some updates to how you run EA Geneva that wouldn't justify an entire new post, but are still noteworthy)?

Also, I'd be interested to know how accurate the growth assumptions were, and whether your member count and advanced workshop participation went up roughly as you expected.

This whole post seems very valuable by the way, so thank you!

Comment by markus_over on What is the effect of relationship status on EA impact? · 2019-06-28T20:51:12.530Z · score: 3 (3 votes) · EA · GW

While I don't have an actual answer of any kind, I'd argue that a relationship can have "positive externalities" on altruistic endeavours, e.g. by discussing EA ideas much more frequently than you otherwise would (depending on your circumstances), and, in case the other person is into EA as well, keeping each other motivated. I personally would assume that my long term engagement in EA would drop quite a bit were it not for my relationship. That's certainly different for other people however, so this isn't anything more than one random data point.

Comment by markus_over on Is this a valid argument against clean meat? · 2019-05-19T16:34:38.870Z · score: 2 (2 votes) · EA · GW

Even if there are minor negative short-term effects (and while there are almost certainly >0 people in the world following the cited logic, I'm sure they're responsible for far less than even 0.1% of global meat consumption), cultured meat still seems to me like the most likely long-term solution to factory farming, and thus its expected benefits vastly outweigh the cost implied by that argument.

1) I believe most ethical vegetarians avoid meat in order to not actively cause any harm to animals, and not so much in order to solve factory farming. And for the former, the advent of cultured meat in the future doesn't make that much of a difference for their present behaviour.

2) People committed enough to actually think about how their actions contribute to creating a more vegetarian (or at least factory farm-less) world, and thus people who would in theory be affected by the given argument, probably aren't the same people that would think "oh well this issue is being dealt with by others already, nothing to do here". Plus 1) still applies here, as people with such a level of commitment almost certainly also want to avoid personally causing harm to animals.

3) The exception to 1) and 2) may be a few effective altruists (or people with similar mindsets) here and there, who conclude that sticking to a vegetarian/vegan diet is not worth it for them personally given the apparent tractability and non-neglectedness of the problem. But we're probably talking about dozens or at most hundreds of people around the globe at this point, and if they actually exist this would even be a good sign: the reason these people would make this decision is that cultured meat solves the issue of factory farming so effectively that their personal contribution via ethical consumption would have a smaller marginal impact than whatever else they decide to do.

Admittedly a lot of speculation on my part, but what it comes down to is that the argument, while probably playing some non-zero role, just doesn't have enough weight to justify changing one's view on cultured meat.

Comment by markus_over on Descriptive Population Ethics and Its Relevance for Cause Prioritization · 2018-12-29T12:39:34.465Z · score: 1 (1 votes) · EA · GW

Hi Stijn. You mention that people tend to fall into these two categories mostly - totalist view and person-affecting view. Can you elaborate on how you obtained this impression? Did you already run a survey of some kind, or is the impression based on conversations with people, or from the comments on your blog? Does it reflect the intuitions of primarily EAs, or philosophy students, or the general population?

Comment by markus_over on On Becoming World-Class · 2018-11-10T15:49:17.378Z · score: 1 (1 votes) · EA · GW

Thanks for the post, really interesting read! I find your arguments quite intriguing.

Whether aiming to become world class is a valid strategy seems to vary quite a lot depending on which area we're talking about. For musicians it's probably very difficult to make such an argument - there are just too many highly talented people out there, plus there seems to be a lot of luck/randomness involved in achieving fame/recognition. So even if you're an extremely capable singer/guitar player/drummer/..., the chances may just be too slim. Things may look different for more exotic instruments, or sports that aren't very popular. If you have the right preconditions to be good at discus throwing, and you decide to give everything to become world class at it, the chances you'll succeed are probably much higher, simply due to the much smaller number of people sharing that goal. And while the recognition that comes with it is certainly reduced compared to actors/musicians/NBA stars etc., I'm pretty sure the "expected recognition" when taking such a path is much, much higher overall.

Similarly, there surely are many artists, but it's quite possible that certain niches with a lot of potential for motivated individuals exist.

Secondly, if we agree it generally seems like a good idea to have more people in the movement who are world class at something, there are of course two options: either developing those people from within the movement (which we're mostly discussing here), or recruiting people who are already world class at something into the movement. I'm not sure if any organized attempt at the latter already exists, but it might certainly be worthwhile as well, and for many of the more mainstream areas, I'd argue the chances of getting world-class people into the movement that way are much higher than via the "hold my beer, I can do this" approach.

Comment by markus_over on What Activities Do Local Groups Run · 2018-09-11T14:27:49.717Z · score: 2 (2 votes) · EA · GW

Coworking sessions sound interesting. The fact that few groups utilize them, but those that do do it apparently very frequently, seems to suggest that it may be underrated. Could people from groups that do this on a regular basis elaborate on the format? Is it about organizing the group itself, i.e. preparing events etc.? Actively working on research topics? Or just generally people from the group meeting to work on things they personally need to get done? Would you say this specific setup increases productivity substantially?

Comment by markus_over on How to have cost-effective fun · 2018-07-21T09:16:08.734Z · score: 0 (0 votes) · EA · GW

I don't think eating out ranks highly on the "fun per dollar" scale for me personally, simply due to the amounts of dollars involved, but I still find it really difficult to imagine a world in which I don't go out for dinner relatively regularly. It may be my most expensive "hobby", but it still seems to provide quite a lot of value. I'm not quite sure why exactly, or whether there are less expensive ways to obtain the same benefit.

Could you maybe expand a little on the details of why it ranks so highly for you? I'd be interested in a more detailed perspective.

Comment by markus_over on Open Thread #40 · 2018-07-19T10:06:13.316Z · score: 1 (1 votes) · EA · GW

Aren't there interventions that could be considered (with relatively high probability) robustly positive with regard to the long-term future? Somewhat abstract things such as "increasing empathy" or "improving human rationality" come to mind, though I guess one could argue they might plausibly have a negative impact on the future in some way. Another one is certainly "reduce existential risks" - unless you weigh suffering risks so heavily that it's unclear whether preventing existential risk is good or bad in the first place.

Regarding such causes - given we can identify robust ones - it then may still be valuable to analyze cost-effectiveness, as there would likely be a (high?) correlation between cost-effectiveness and positive impact on the future.

If you were to agree with that, then maybe we could reframe your argument from "cost-effectiveness may be of low value" to "cause areas outside of far future considerations are overrated (and hence their cost-effectiveness is measured in a way that is of little use)" or something like that.

Comment by markus_over on Accountability buddies: a proposed system · 2018-07-18T13:55:29.946Z · score: 1 (1 votes) · EA · GW

Can you give us more details on what's going to happen afterwards? Will you personally try to match up pairs of people? Will this end up as a semi-public list?

Comment by markus_over on EA Hotel with free accommodation and board for two years · 2018-06-21T08:24:18.806Z · score: 9 (9 votes) · EA · GW

Plus there's reason to believe that among the non-vegans/vegetarians, a substantial subset probably still agrees to some extent that veganism is generally a good idea, and simply doesn't commit to the diet due to lack of motivation or practicality in their situation, and thus would still welcome, or at least be open to, vegan food being provided in the hotel. So I guess even if 80% of EAs consider themselves omnivores, we can't assume that the whole 80% would personally perceive this policy of the hotel as negative.

Comment by markus_over on Want to be more productive? · 2018-06-20T12:00:54.632Z · score: 1 (1 votes) · EA · GW

I'm hearing of this for the first time now, and actually spent quite a bit of time throughout the last few months thinking about this exact concept and how it seems to be missing in the EA community, and whether this could be something I could possibly work on myself. The problem being that coaching of any kind really isn't my comparative advantage, and thus I'd probably be the wrong person to do it.

I find it rather difficult to decide whether or not scheduling a (series of) call(s) would make sense for me. In your testimonials, many people speak of productivity increases in concrete numbers, such as +15%. Are these their personal judgments, or did you provide a certain framework to measure productivity?

Can you elaborate a bit more on what kind of people would profit most from working with you?

Also +1 on richard_ngo's question about the comparison to CFAR.

Comment by markus_over on Visualising animal agriculture · 2018-06-20T11:39:23.774Z · score: 1 (1 votes) · EA · GW

A quite compelling reason for caring more about factory farmed animals is that we are inflicting a massive injustice against them, and that isn't the case for wild animals generally.

But couldn't you say that, for instance, the forces of evolution are inflicting an even more massive injustice against wild animals? Assuming injustices are more relevant because our species happens to inflict them doesn't seem 100% convincing to me. From the animal's point of view, it probably doesn't matter very much whether its situation is caused by some kind of injustice, what matters to the animal is whether and by what degree it's suffering.

I do of course share your intuition about injustice being bad generally, and "fixing your own mistakes before fixing those of others", so to speak, seems like a reasonable heuristic. It's hard to tell whether the hypothetical "ideal EA movement" would shift its focus more towards WAS than it currently does. My rather uninformed impression is that quite a few EAs know about the topic and like talking about it - just like we are now - so it often seems there's a huge focus on wild animals, while the actual work going into the area remains far smaller. https://was-research.org/about-us/team/ still only lists three employees, after all.

Also I, too, like the visualization. I wonder how it would look with ~2k animals/second, which seems to be the sad statistic of the planet.

Comment by markus_over on Visualising animal agriculture · 2018-06-20T08:33:43.091Z · score: 0 (0 votes) · EA · GW

Or maybe the area is unexplored and there are big potential benefits from spending some effort figuring out if there are high-impact interventions?

I think that's pretty much it. Right now, there aren't many known concrete promising interventions to my knowledge, but the value of information in this area seems extremely high.

Using the standard method of rating cause areas by scale, neglectedness and tractability, wild animal suffering seems to score a lot higher on scale, much higher on neglectedness (although farm animals are already pretty neglected), and seemingly much lower on tractability. There's quite a bit of uncertainty regarding the scale, but it still seems very clear it's orders of magnitude beyond farm animals. Neglectedness is apparent and not uncertain at all. Tractability, the one point that would count against investing in wild animal suffering, is on the other hand highly uncertain (i.e. has "low resilience", see https://www.effectivealtruism.org/articles/the-moral-value-of-information-amanda-askell/ ), so there's a chance that even a little research could yield highly effective interventions, making it a highly promising cause area in that regard.

I would feel a lot more hesitant about large-scale interventions on wild animals, since they are part of complex ecosystems where I've been led to believe we don't have a good enough understanding to anticipate long-term consequences accurately

You're right about this one, and we probably all agree on things being a bit tricky. So either research on our long term impact on ecosystems could be very helpful, or we could try focusing on interventions that have a very high likelihood of having predictable consequences.

(That all being said, there may be many reasons to still put a lot of our attention on farm animal suffering; e.g. going too public with the whole wild animal suffering topic before there's a more solid fundamental understanding of what the situation is and what, in principle, we could do to solve it while avoiding unforeseen negative effects, seems like a bad idea. Also finding ways to stop factory farming might be necessary for humanity's "moral circle" to expand far enough to even consider wild animals in the first place, thus making a solution to factory farming a precondition to successful large scale work on wild animal suffering. But I'm rambling now, and don't actually know enough about the whole topic to justify the amount of text I've just produced)

Comment by markus_over on A lesson from an EA weekend in London: pairing people up to talk 1 on 1 for 30 mins seems to be very useful · 2018-06-19T20:15:30.286Z · score: 1 (1 votes) · EA · GW

I guess this very much depends on how individual activities are executed. We had our 2.5 day retreat in Dortmund, Germany about a month ago, and while I didn't see the evaluation results, I got a strong impression that most people agreed on these points (still, take this with a grain of salt):

  • career discussion in small groups (~3-5) was quite useful; we had about 1 hour per group, and more would probably have been better.

  • double crux (I guess similar to productive disagreement?) was a cool concept, but a bit difficult to execute under the given circumstances (although it worked great for me), for similar reasons as mentioned by you

  • discussion about where to donate - this was, to some degree, what this weekend was primarily about for us, as we raised money on the first evening and then had to figure out where to send it. And while it started very slowly, we ended up spending many hours on Sunday on this (very open) discussion, and it was tremendously valuable. I really didn't expect this, but ultimately, judging from how engaged everybody was, how interesting our conversations were in the end, and how often each of us changed their mind over the course of the discussion, this was a great way to spend our time.