Comment by markus_over on A few really quick ideas about personal finance · 2020-09-14T10:54:52.888Z · score: 1 (1 votes) · EA · GW
But use with caution, as I think the time/$ trade-off might imply you should maximise your working hours, which I think isn’t a good assumption. See point earlier about knowledge workers.

I definitely agree with "use with caution". The value of my time question has been bothering me for years now, and I never really found a satisfying heuristic for it. There are just a lot of complications, and imho there's no way this can reasonably be treated as a constant value:

  • it depends not only on the amount of time you're thinking about spending in a certain way (or gaining), but on the difference in value between the thing you're considering doing and the thing you would be doing otherwise
  • even if you manage to pin it down to some value in a situation, that's only the marginal value of time given your circumstances, and it would differ if you had more/less time or money on your hands; thus, if you derive some great heuristic from that (e.g. "take a cab after parties"), then depending on how much that heuristic impacts your behavior, it changes the value of your time, so it's sort of self-defeating to a degree

So I basically came up with a range of per-hour values for the things I tend to do, which gives me a way to compare them. It does feel like I'm overthinking and under-utilizing the principle though... one conclusion it led me to, however, was that I should work less, since my time was worth more to me than what I was paid per hour (after taxes). This is definitely a case where point 2 from above is highly relevant though: reducing my time at the job rather drastically changes the value of my time, so I need to take that into account and try to find an equilibrium, working just as much (or little) that the estimated value of my time (maybe averaged over my typical daily habits) comes down to roughly what I'm being paid to work. Which is tough!
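
The equilibrium idea can be sketched with a toy calculation (all numbers here are made up for illustration: the after-tax wage and the diminishing-returns curve for free time are pure assumptions, not my actual values):

```python
# Toy sketch of the "work less until marginal value of time equals wage" idea.
# Assumption: each extra free hour per week is worth less than the last one.
def marginal_value_of_free_hour(weekly_free_hours):
    # Made-up diminishing-returns curve: starts at $60/h, falls with free time.
    return 60 / (1 + 0.1 * weekly_free_hours)

after_tax_wage = 20  # $/hour, assumed

# Cut working hours one at a time until an extra free hour is no longer
# worth more than the wage it replaces.
free_hours = 0
while marginal_value_of_free_hour(free_hours) > after_tax_wage:
    free_hours += 1

print(free_hours)  # prints 20: the equilibrium where marginal value ≈ wage
```

The point of the sketch is just that the answer is an equilibrium, not a fixed "value of my time": change the curve (i.e. your circumstances) and the optimal number of hours moves with it.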

But yeah, I'm getting a bit rambly. Just one more thing: Consumption smoothing is an interesting concept which I have to admit never occurred to me before. Thanks for the post! It also nudged me to once again look into investing, which I've been procrastinating on for years.

Comment by markus_over on Consider a wider range of jobs, paths and problems if you want to improve the long-term future · 2020-07-05T19:40:02.906Z · score: 2 (2 votes) · EA · GW
Designing recommender systems at top tech firms

Semi-related and somewhat off-topic, so forgive me for following that different track – but I recently thought about how one of the major benefits of EAGx Virtual for me was that it worked as a recommender system of sorts, in the form of "people reading my Grip profile (or public messages in Slack) and letting me know of other people and projects that I might be interested in". A lot of "oh you're interested in X? Have you heard of Y and Z?" which often enough led me to new interesting discoveries.

I'm curious if there may be a better approach to this, rather than "have a bunch of people get together and spontaneously connect each other with facts/ideas/people/projects based on mostly random interactions". This current way seems to work quite well, but it is also pretty non-systematic, luck-based, and doesn't scale that well (it kind of does, but only in the "more people participate and invest time -> more people benefit" kind of way).

(That all being said, conferences obviously have a lot of other benefits than this recommender system aspect; so I'm not really asking whether there are ways to improve conferences, but rather whether there are different/separate approaches to connecting people with the information most relevant to them)

Comment by markus_over on 2019 - Year in Review · 2020-06-19T17:14:40.392Z · score: 8 (3 votes) · EA · GW

Just a random experience report - I've been using the website for my monthly donations for about half a year now, and think it's great. The process is so easy and frictionless that donating is something I'm always looking forward to, as opposed to it feeling like an obligation I need to get through, which basically had been my feeling about it beforehand. The website's great UX really makes a huge difference.

Also it's a great and easy page to point people to that sympathize with the idea of effective giving but don't really know what the next steps would be. Which isn't surprising as I guess that's partly the reason the project exists in the first place.

Random thought - I've seen many EAs apply stickers (e.g. GiveWell) to their phones or laptops, which I guess is a very cheap tool that may lead to a couple of conversations (and an expected >0 "conversions") about EA or the respective org in particular. I'd assume that at least marginally such stickers are pretty effective little things, but it may depend on how difficult it is to set up and distribute such a product. So it certainly may not belong at the top of your backlog, but have you put any thought into such ideas? Maybe something like this would be a good opportunity for a volunteer, as I imagine it would require very little coordination initially.

Comment by markus_over on How to change minds · 2020-06-19T16:41:31.770Z · score: 3 (3 votes) · EA · GW

I wonder what heuristics people here follow regarding the question of when "how to change (other) minds" is a good mindset to have as opposed to "how to bring both conversation partners into a state of willingness to change one's mind", i.e. oneself being open to having one's mind changed as well, and then figuring out who has the right(er) idea about the topic at hand. The latter seems generally more sincere and useful, but I guess there are situations where you can be reasonably sure you really do know more about a topic than the other person and can be confident enough in your judgment that changing their mind is a reasonable goal to have.

Comment by markus_over on EA Forum feature suggestion thread · 2020-06-19T16:01:49.231Z · score: 5 (2 votes) · EA · GW

I'm not sure if such a feature would be worth the work it would involve, but: a very simple "editor" to very easily create probability distributions (or maybe more generally graphs that don't require mathematical formulas but just very rough manual sketching) and embed them into posts or comments could be useful. I'm not sure how often people would really use that though. Generally however, it would probably be a good thing to make probability estimates as explicit as possible, and being able to easily "draw" distributions in a few seconds and having a polished looking output could make that happen.

If this is something people would find useful, I'd be willing to spend the time to create such a component so it would theoretically just have to be "plugged in" afterwards.

Comment by markus_over on Why "animal welfare" is a thing? · 2020-06-19T12:33:15.218Z · score: 25 (17 votes) · EA · GW

Firstly, I think this may be helpful in understanding the downvotes: to me, your post isn't very clear, and it seems you're using a somewhat superficial excuse of a question in order to make a bunch of semi-related points (if this is not the case and you're sincerely just looking for an answer, then sorry for the assumption).

Linking to your book doesn't really add to the post and comes off as unnecessary self-promotion, independent of whatever the actual concrete point of this post may be.

"Playing Bach or Mozart" to animals is probably just an intended minor provocation and you're not seriously thinking that this is what EA is going for when it comes to animal welfare. Still, to attempt to answer your question:

  • "animal welfare" is a cause area in the sense that it's a big global problem (billions of animals experiencing pain and suffering) that is neglected (comparably few resources going into improving the situation) and potentially solvable
  • playing music to animals on the other hand would be one possible intervention (so an answer to the question of how we could approach that big problem), and certainly not the most effective one, and I don't think anybody here has claimed that. But correct me if I'm wrong.
  • if you disagree with how animal welfare is handled in EA currently, there are at least two possible constructive ways of attack:
    • you argue that animal welfare is not an important cause area, because either it is not as big a problem, because it is not neglected, or because it is not solvable; all of these things are pretty well established however, so unless you know of some very crucial consideration, even strong evidence in any of these areas would probably only lead to comparably small adjustments in how this cause area is prioritized in comparison to others
    • you argue that there are interventions to tackle that problem that are more effective than those currently favored by EA; this seems closer to what you're trying to do here. So your question should not be "Why is animal welfare a thing?", but "Why do you assume intervention X is more effective than intervention Y?" (e.g. X being research into clean meat, and Y being a carbon tax), followed by some research on the effectiveness of both; or alternatively, if you're relatively sure of that, writing a post on why intervention Y is underrated and why people should look more into it, as it's a very effective animal welfare intervention.

Building on the last point: when arguing against a position, you'll get more support and fewer downvotes if you a) follow the good faith principle (basically assuming the position you're arguing against originates from well-meaning people with a genuine interest in doing good) and b) try steelmanning the opposite view (i.e. trying to find the best possible available argument, as opposed to strawmanning, which "playing Bach or Mozart" basically is).

To get closer to the actual object level here, I'd be interested in what you think about these statements and to what extent you agree or disagree with them:

1. Animal suffering is a problem worth solving

2. We should prioritize approaches of solving the problem that do the most good per dollar/time (i.e. alleviate the most suffering or yield the most happiness, or following a similar metric depending on your values)

3. Which approach is the most effective one is an open question that should be answered primarily by gathering evidence

Comment by markus_over on Why and how the EA-Movement has to change · 2020-06-04T18:46:07.152Z · score: 2 (2 votes) · EA · GW

Thanks for sharing. Probably a bit too cynical for my taste (e.g. you mention many of them are vegan, which may not be the most effective thing one can do, but certainly is evidence of them going out of their way to live in line with their values; yet regarding donating 10% being "unpopular (I wonder why)", you seem to imply they wouldn't be open to any kind of sacrifice), but I do believe I've seen at least a few of these tendencies in others as well as myself, and it makes sense to look out for them.

Also, I found your remark on the 10% number rarely being questioned somewhat enlightening, as I myself haven't done so, I'm afraid. Maybe it's a bit similar to vegetarianism and veganism, which are two comparably crowded spots on a continuum of ways to eat. These are easy categories, and once you're in one of them, it's easy to communicate to others and has a clear effect on your self-image, i.e. thinking of yourself as "a vegetarian" instead of "<insert random complicated formula of how to evaluate which beings you eat and which you don't>". Plus it probably works better as a potential role model for others.

With donating 10% (esp. if in combination with the Giving What We Can pledge) you also end up in such a distinct category. For people who donate less, it's a nice (albeit arguably arbitrary) ideal to look up to. For people who've reached it, it certainly makes sense to grow beyond it. Although I can imagine people wanting to do good primarily via their career, with donating 10% simply being sort of their baseline, and maybe a way of signalling to the outside world that they're really living what they preach, and as such gaining more credibility. And for signalling purposes, which aren't inherently bad or anything, it makes sense to settle on a nice round number.

Comment by markus_over on Developing my inner self vs. doing external actions · 2020-05-31T10:32:14.102Z · score: 3 (2 votes) · EA · GW

I've thought about this question quite a bit as well (not very productively though), and these are basically my thoughts on it so far:

  • the two extremes are most likely highly suboptimal, so it must indeed be a question of finding the right balance
  • it "feels" like "doing a bit of both" is a sensible heuristic and trying to calculate this out more thoroughly may be overkill, as there are too many unknowns to get to any reliable solution
  • but the above may also just be my laziness talking, as on the other hand, it also seems clear that shifting the balance a bit towards the optimum could easily increase your whole life's output by a few %. Thus it would absolutely make sense to spend, say, a week or so, thinking deeply about this and at least trying to find a good balance
  • the answer likely isn't a ratio, but always depends on the concrete opportunities (especially as, as others have pointed out, few things fall strictly into one or the other category; often it's a bit of both), which arise very often on the lower levels (e.g. "have this conversation or not" on a small level, "read this book" on a higher level), so it definitely makes sense to follow some kind of heuristic for these cases
  • on even higher levels, with decisions such as "take this job where I can learn a lot vs this job where I have direct impact", it certainly makes sense to not follow heuristics but investigate the concrete option(s) and estimate their effects on our personal development and impact
  • delaying our own impact to the future always bears some risk of value drift changing our plans, nullifying our impact
  • it's possible that the best way to learn how to have much impact is to try to have much impact, so optimizing for impact is the dominant strategy, but that certainly depends on the concrete cases and may be more true for the more high level decisions than low level ones, and is also only true if you get enough and quick enough feedback to actually evaluate your impact and correct your approach

So in a nutshell, I haven't in any way answered this for myself yet. I also haven't come up with a useful heuristic yet and mostly just follow my gut, possibly erring on the self development side so far, which makes sense as that part is comparably easy/rewarding/forgiving, whereas outside facing impact considerations have some risk of failure and much increased uncertainty. So I guess at least for myself "focus more on impactful projects and less on reading books" would be a useful heuristic and very likely lead me closer to the optimum balance.

Comment by markus_over on How do you deal with FOMO / FOBO · 2020-05-08T11:06:45.262Z · score: 5 (3 votes) · EA · GW

Hi Smer, I admittedly find it a little difficult to answer your question as it seems you raise a few different ones. Is it correct that your main struggle right now is to decide where to move your career, as you have a lot of different options and don't know which one to take? Or is that just one example, and you're more interested in the very general issue of finding it difficult to make decisions in the first place?

If it's the former, then I believe a more in-depth (maybe coaching-style) conversation could be more fruitful here than a forum post, as I'm not sure broad answers to your post's title will be very applicable to your concrete situation. Also, you mention a lot of things which you're already trying, but I find it difficult to see them in context, as you didn't provide any notes on which of those work for you and in what ways they already contribute (or not) to the central problem.

Anyway, just to try to add my two cents to the "how to deal with FOMO"-question, which I can relate to rather well as I'm also a 30-ish web developer often struggling with making decisions:

  • I personally have the impression I'll just have to get used to living with the feeling of "what if this decision I'm making is not the best one?" - especially for big decisions that feeling will be there no matter what, so I might as well take it for what it is (a matter of subjective experience of uncertainty, and not actual evidence of the decision being suboptimal)
  • delays caused by postponed decisions usually come at a cost, so quick suboptimal decisions are often better than ideal decisions made (too) late
  • I sometimes use the book Decisive (or rather summaries thereof) to aid my decision-making, although I have a feeling it's often more about raising my confidence in a decision than about actually finding the best one
  • The author of Algorithms to Live By makes the point that real-world problems are often too complex to properly solve, and that it makes sense to artificially relax these problems into easier ones so we can find suboptimal but still pretty good solutions. That's mostly with regards to travelling-salesman kinds of problems, and may or may not apply to personal decision-making.
  • Career considerations may be a category where premature decisions come at a high cost, so here it really makes sense to spend a lot of time thinking them through thoroughly. Which probably includes discussing them with others. I'm not sure if 80,000 hours still offers 1-on-1 career consultation, but if they do, that may be a good thing to try, and if they don't I'm sure there are other people from the EA community who'd be willing to help out as well.

Comment by markus_over on Why I'm Not Vegan · 2020-04-12T11:20:16.805Z · score: 1 (1 votes) · EA · GW

While I certainly like that argument/thought experiment, I think it's very difficult to imagine the subjective experience of an (arguably) lower degree of consciousness. Depending on what animal's living conditions we're talking about, I'd probably take 1/10th even assuming human-level consciousness (so basically *me* living in a factory farm for 36.5 days to gain one additional year of life as a human), but I naturally have a hard time judging what a reasonable value for chicken-level consciousness would be.

Also, this framing likely brings a few biases into the mix and reframing it slightly could probably greatly change how people answer. E.g. if you had the choice to die today or live for another 50 years but every year would start with a month experienced as a pig in factory farming conditions, I'd most certainly pick the latter option.

Comment by markus_over on Why I'm Not Vegan · 2020-04-12T11:04:32.066Z · score: 4 (3 votes) · EA · GW

I think this is a very interesting point which I hadn't thought of before. To add to it, let's assume the "how much animals matter" values from the original post were chosen in a way more favorable to animals such that veganism seems to make economic moral sense, so we come to the conclusion "it's probably an effective intervention for an EA to go vegan".

Now assume some charity finds a super-effective intervention that cuts the cost of saving a human life to 10% of its previous best value. Following the original argument, that would basically mean that at this point going vegan is not recommended anymore, because it may now be much less effective than the one thing we're semi-arbitrarily comparing it to.

It seems rather counter-intuitive that thousands of hypothetical rational EAs would now start eating meat again, simply because a charity found a cheaper way to save humans.

But then again, I can't get rid of the feeling that this whole counter-argument too is arbitrary and constructed, and that it wouldn't convince me if I were of the opposite opinion, but rather seem like a kind of logic puzzle where you have to find the error of thought. Maybe despite being counter-intuitive, the absurd sounding conclusion would still be the correct one in some sense.

Comment by markus_over on A small observation about the value of having kids · 2020-01-24T16:35:38.422Z · score: 1 (1 votes) · EA · GW

Most EAs I know are not planning to have children, as far as I know (which I admit is not very far - with most of them I haven't explicitly spoken about the topic). Even if they did, it seems like a really slow and expensive way to build a movement. It may be one factor among others for EAs considering building a family, but I doubt it is decisive for a considerable number of individuals.

If we simplify the possible outcome to two scenarios, a) children raised by EAs will overwhelmingly become EAs themselves, or b) this effect is much weaker and very few children will share the same values, I'd argue the value of information appears to be low.

Firstly, it seems highly unlikely to me that having children is anywhere near the most effective thing an EA can do. It is of course fine to make that plan for other, personal reasons, but I doubt many EAs get to the conclusion "the best use of my time on this planet in my pursuit to make this world a better place is to raise my own altruistic children". Growing the movement can certainly be done quicker without first growing your own little humans.

So given that assumption, the a) scenario, i.e. the "positive" outcome, could actually turn out harmful in a sense as it might convince a few additional EAs to have children that otherwise wouldn't. Scenario b) on the other hand would be the opposite and possibly keep a few EAs from having children that without that evidence would have done so. In both cases it seems we're better off simply assuming the children we have will not turn into EAs, as opposed to spending decades and hundreds of thousands of dollars on an experiment conducted in order to gain some value of information.

This line of argumentation of course only works if you agree with my assumption that having children is a very ineffective way to grow a movement though.

Comment by markus_over on Physical Exercise for EAs – Why and How · 2020-01-17T18:52:29.286Z · score: 1 (1 votes) · EA · GW

Thanks for this! Very useful.

One tiny nitpick:

Marie commutes daily by bicycle to the chemistry lab where she works.

Sorry for taking things a little too literally here, but most people (that I know of) work 5 days a week, have 2-6 weeks off per year, and call in sick something like 5-15 days per year, plus there may be some nationwide holidays on top. That leaves us with a range of around 210-245 actual commuting days, or 57-67% of all days of the year. There are also likely days where rain/snow/wind cause Marie to get to work some other way, so effectively, even somebody who pretty much always takes the bike to work will still end up at something like 50% of all days, but would probably tend to describe it as "every day".
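
To make that back-of-the-envelope calculation explicit (a quick sketch using the assumed ranges above; nationwide holidays, which are left out here, would push the lower bound down a bit further, towards the ~210 mentioned):

```python
# Rough check of the commuting-days estimate. All inputs are the
# assumptions from the text, not measured data.
days_per_year = 365
workdays = 52 * 5          # ~260, ignoring exact calendar effects
vacation_range = (10, 30)  # 2-6 weeks off, in workdays
sick_range = (5, 15)       # sick days per year

# Best case: minimum vacation and sick days; worst case: maximum of both.
high = workdays - vacation_range[0] - sick_range[0]
low = workdays - vacation_range[1] - sick_range[1]

print(low, high)  # prints 215 245 (before subtracting holidays)
print(round(low / days_per_year * 100),
      round(high / days_per_year * 100))  # prints 59 67 (% of all days)
```

So even before weather or other exceptions, "daily" commuting covers at most about two thirds of the year.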

I'm not so much intending to criticize the example here, just to point out that such simplification makes it rather easy to delude oneself. I thought of myself as someone who takes the bike to work "almost always", yet when I actually tracked it, I only got to around 100 days per year, which was somewhat surprising.

Maybe the recommendations already take this into account however, and exceptions (even a lot of them, as naturally tend to happen) are tolerable as long as "the typical week" goes according to plan?

Comment by markus_over on Applied Rationality Workshop in Münster, Germany · 2019-09-24T12:54:34.399Z · score: 4 (4 votes) · EA · GW

Neat! The workshop in Cologne was quite good, and this one apparently will even include resolve cycles and Hamming circles, which I'm very much in favor of (and which as far as I remember weren't part of the Cologne workshop).

I'd probably recommend participating to anyone who lives even remotely close and feels like they could benefit from marginal improvements in their applied rationality, which realistically is probably almost everyone. Plus you'll surely get to know a lot of great people.

Thanks for organizing this!

Comment by markus_over on Alien colonization of Earth's impact the the relative importance of reducing different existential risks · 2019-09-06T17:43:45.346Z · score: 1 (1 votes) · EA · GW

I'm not that deep into AI safety myself, so keep that in mind. But that being said, I haven't heard that thought before and basically agree with the idea of "if we fall victim to AI, we should at least do our best to ensure it doesn't end all life in the universe" (which is basically how I took it - correct me if that's a bad summary). There certainly are a few ifs involved though, and the outlined scenario may very well be unlikely:

  • probability of AI managing to spread through the universe (I'd intuitively assume that from the set of possible AIs ending human civilization the subset of AIs also conquering space is notably smaller; I may certainly be wrong here, but it may be something to take into account)
  • probability of such an AI spreading far enough and in a way as to be able to effectively prevent the emergence of what would otherwise become a space colonizing alien civilization
  • probability of alien civilizations existing and ultimately colonizing space in the first place (or developing the potential and would live up to it if it were not for our ASI preventing it)
  • probability of aliens having values sufficiently similar to ours

I guess there's also a side to the Fermi paradox that's relevant here - it's not only that we don't see alien civilizations out there, we also don't see any signs of an ASI colonizing space. And while there may be many explanations for that, we're still here, and seemingly on the brink of becoming/creating just the kind of thing an ASI would instrumentally like to prevent, which is at least some evidence that such an ASI does not yet exist in our proximity, which again is minor evidence that we might not create such an ASI either.

In the end I don't really have any conclusive thoughts (yet). I'd be surprised though if this consideration were a surprise to Nick Bostrom.

Comment by markus_over on Local Community Building Funnel and Activities - EA Geneva · 2019-09-01T10:11:09.476Z · score: 1 (1 votes) · EA · GW

Hi Konrad,

Given your comment is now a year old, could you very briefly provide an update on whether anything significant has changed since then (maybe there are some updates to how you run EA Geneva that wouldn't justify an entire new post, but are still noteworthy)?

Also I'd be interested to know how close the growth assumptions were, and whether your member count and advanced workshop participation went up roughly as you expected.

This whole post seems very valuable by the way, so thank you!

Comment by markus_over on What is the effect of relationship status on EA impact? · 2019-06-28T20:51:12.530Z · score: 3 (3 votes) · EA · GW

While I don't have an actual answer of any kind, I'd argue that a relationship can have "positive externalities" on altruistic endeavours, e.g. by discussing EA ideas much more frequently than you otherwise would (depending on your circumstances), and, in case the other person is into EA as well, keeping each other motivated. I personally would assume that my long term engagement in EA would drop quite a bit were it not for my relationship. That's certainly different for other people however, so this isn't anything more than one random data point.

Comment by markus_over on Is this a valid argument against clean meat? · 2019-05-19T16:34:38.870Z · score: 2 (2 votes) · EA · GW

Even if there are minor negative short-term effects (and while there almost certainly are >0 people in the world following the cited logic, I'm sure they're responsible for far less than even 0.1% of global meat consumption), cultured meat still seems to me like the most likely solution to factory farming in the long term, and thus its expected benefits vastly outweigh the cost implied by that argument.

1) I believe most ethical vegetarians avoid meat in order to not actively cause any harm to animals, and not so much in order to solve factory farming. And for the former, the advent of cultured meat in the future doesn't make that much of a difference for their present behaviour.

2) People committed enough to actually think about how their actions contribute to creating a more vegetarian (or at least factory farm-less) world, and thus people who would in theory be affected by the given argument, probably aren't the same people that would think "oh well this issue is being dealt with by others already, nothing to do here". Plus 1) still applies here, as people with such a level of commitment almost certainly also want to avoid personally causing harm to animals.

3) The exception to 1) and 2) may be a few effective altruists (or people with similar mindsets) here and there who conclude that sticking to a vegetarian/vegan diet is not worth it for them personally, given the apparent tractability and non-neglectedness of the problem. But we're probably talking about dozens or at most hundreds of people around the globe at this point, and if they actually exist, this would even be a good sign: the reason these people would make this decision would be that cultured meat solves the issue of factory farming so effectively that their personal contribution via ethical consumption would have a smaller marginal impact than whatever else they decide to do.

Admittedly a lot of speculation on my part, but what it comes down to is that the argument, while probably playing some non-zero role, just doesn't have enough weight to justify changing one's view on cultured meat.

Comment by markus_over on Descriptive Population Ethics and Its Relevance for Cause Prioritization · 2018-12-29T12:39:34.465Z · score: 1 (1 votes) · EA · GW

Hi Stijn. You mention that people tend to fall into these two categories mostly - totalist view and person-affecting view. Can you elaborate on how you obtained this impression? Did you already run a survey of some kind, or is the impression based on conversations with people, or from the comments on your blog? Does it reflect the intuitions of primarily EAs, or philosophy students, or the general population?

Comment by markus_over on On Becoming World-Class · 2018-11-10T15:49:17.378Z · score: 1 (1 votes) · EA · GW

Thanks for the post, really interesting read! I find your arguments quite intriguing.

Whether aiming at becoming world class is a valid strategy or not seems to vary quite a lot depending on which area we're talking about. I guess for musicians it's very difficult to make such an argument - there are just too many highly talented people out there, plus there seems to be a lot of luck/randomness involved in achieving fame/recognition there. So even if you're an extremely capable singer/guitar player/drummer/..., the chances may just be too slim. Things may look different for more exotic instruments, or even sports that aren't very popular. If you have the right preconditions to be good at discus throwing, and you decide to give everything to become world class at it, the chances are probably much higher you'll succeed simply due to the much smaller base rate of people sharing that goal. And while the recognition that comes with it is certainly reduced when compared to actors/musicians/NBA stars etc., I'm pretty sure the "expected recognition" when taking such a path is much, much higher overall.

Similarly, there surely are many artists, but it's quite possible that certain niches with a lot of potential for motivated individuals exist.

Secondly, when stating that it generally seems like a good idea for the movement to have more people who are world class at something, there are of course two options: either developing those people from within the movement (which we're mostly discussing here), or persuading people who are already world class at something to join the movement. I'm not sure if any organized attempt at the latter already exists, but it might certainly be worthwhile as well, and for many of the more mainstream areas, I'd argue the chances of getting more world-class people into the movement via this strategy are much higher than via the "hold my beer, I can do this" approach.

Comment by markus_over on What Activities Do Local Groups Run · 2018-09-11T14:27:49.717Z · score: 2 (2 votes) · EA · GW

Coworking sessions sound interesting. The fact that few groups utilize them, but those that do apparently run them very frequently, seems to suggest that they may be underrated. Could people from groups that do this on a regular basis elaborate on the format? Is it about organizing the group itself, i.e. preparing events etc.? Actively working on research topics? Or just generally people from the group meeting to work on things they personally need to get done? Would you say this specific setup increases productivity substantially?

Comment by markus_over on How to have cost-effective fun · 2018-07-21T09:16:08.734Z · score: 0 (0 votes) · EA · GW

I don't think eating out ranks highly on the "fun per dollar" scale for me personally, simply due to the amount of dollars involved, but still I find it really difficult to imagine a world without me going out for dinner relatively regularly. It may be my most expensive "hobby", but it still seems to provide quite a lot of value. I'm not quite sure why exactly, or whether there are less expensive ways to obtain the same gain.

Could you maybe expand a little on the details of why it ranks so highly for you? I'd be interested in a more detailed perspective.

Comment by markus_over on Open Thread #40 · 2018-07-19T10:06:13.316Z · score: 1 (1 votes) · EA · GW

Aren't there interventions that could be considered (with relatively high probability) robustly positive with regards to the long term future? Somewhat more abstract things such as "increasing empathy" or "improving human rationality" come to mind, though I guess one could argue that even these might plausibly have a negative impact on the future. Another one certainly is "reduce existential risks" - unless you weigh suffering risks so heavily that it's unclear whether preventing existential risk is good or bad in the first place.

Regarding such causes - given we can identify robust ones - it then may still be valuable to analyze cost-effectiveness, as there would likely be a (high?) correlation between cost-effectiveness and positive impact on the future.

If you were to agree with that, then maybe we could reframe your argument from "cost-effectiveness may be of low value" to "cause areas outside of far future considerations are overrated (and hence their cost-effectiveness is measured in a way that is of little use)" or something like that.

Comment by markus_over on Accountability buddies: a proposed system · 2018-07-18T13:55:29.946Z · score: 1 (1 votes) · EA · GW

Can you give us more details on what's going to happen afterwards? Will you personally try to match up pairs of people? Will this end up as a semi-public list?

Comment by markus_over on EA Hotel with free accommodation and board for two years · 2018-06-21T08:24:18.806Z · score: 9 (9 votes) · EA · GW

Plus there's reason to believe that of the non-vegans/vegetarians, a substantial subset probably still agrees to some extent that it's generally a good idea, and simply doesn't commit to the diet due to lack of motivation, or practicality in their situation, and thus would still welcome or at least be open to vegan food being provided in the hotel. So I guess even if 80% of EAs consider themselves to be omnivores, we can't assume that the whole 80% would personally perceive this policy of the hotel as negative.

Comment by markus_over on Want to be more productive? · 2018-06-20T12:00:54.632Z · score: 1 (1 votes) · EA · GW

I'm hearing of this for the first time now, and actually spent quite a bit of time throughout the last few months thinking about this exact concept and how it seems to be missing in the EA community, and whether this could be something I could possibly work on myself. The problem being that coaching of any kind really isn't my comparative advantage, and thus I'd probably be the wrong person to do it.

I find it rather difficult to decide whether or not scheduling a (series of) call(s) would make sense for me. In your testimonials, many people speak of productivity increases in concrete numbers, such as +15%. Are these their personal judgments, or did you provide a certain framework to measure productivity?

Can you elaborate a bit more on what kind of people would profit most from working with you?

Also +1 on richard_ngo's question about the comparison to CFAR.

Comment by markus_over on Visualising animal agriculture · 2018-06-20T11:39:23.774Z · score: 1 (1 votes) · EA · GW

A quite compelling reason for caring more about factory farmed animals is that we are inflicting a massive injustice against them, and that isn't the case for wild animals generally.

But couldn't you say that, for instance, the forces of evolution are inflicting an even more massive injustice against wild animals? Assuming injustices are more relevant because our species happens to inflict them doesn't seem 100% convincing to me. From the animal's point of view, it probably doesn't matter very much whether its situation is caused by some kind of injustice, what matters to the animal is whether and by what degree it's suffering.

I do of course share your intuition about injustice being bad generally, and "fixing your own mistakes before fixing those of others" so to speak seems like a reasonable heuristic. It's hard to tell whether the hypothetical "ideal EA movement" would shift its focus more towards WAS than it currently does, or not. My rather uninformed impression is that quite a few EAs know about the topic and like talking about it - just like we are now - so it often seems there's a huge focus on wild animals, but the actual work going into the area is far smaller than that impression suggests. The main organization in the space still only lists three employees, after all.

Also I, too, like the visualization. I wonder how it would look with ~2k animals/second, which seems to be the sad global rate.

Comment by markus_over on Visualising animal agriculture · 2018-06-20T08:33:43.091Z · score: 0 (0 votes) · EA · GW

Or maybe the area is unexplored and there are big potential benefits from spending some effort figuring out if there are high-impact interventions?

I think that's pretty much it. Right now, there aren't many known concrete promising interventions to my knowledge, but the value of information in this area seems extremely high.

Using the standard method of rating cause areas by scale, neglectedness and tractability, it seems wild animal suffering scores a lot higher on scale, much higher on neglectedness (although farm animals are already pretty neglected), and seemingly much lower on tractability. There's quite a bit of uncertainty regarding the scale, but it still seems very clear it's orders of magnitude beyond farm animals. Neglectedness is apparent and not uncertain at all. The one point that would count against investing in wild animal suffering, tractability, is on the other hand highly uncertain (i.e. has "low resilience"), so there's a chance that even a little research could yield highly effective interventions, making it a highly promising cause area in that regard.

I would feel a lot more hesitant about large-scale interventions on wild animals, since they are part of complex ecosystems where I've been led to believe we don't have a good enough understanding to anticipate long-term consequences accurately

You're right about this one, and we probably all agree on things being a bit tricky. So either research on our long term impact on ecosystems could be very helpful, or we could try focusing on interventions that have a very high likelihood of having predictable consequences.

(That all being said, there may be many reasons to still put a lot of our attention on farm animal suffering; e.g. going too public with the whole wild animal suffering topic before there's a more solid fundamental understanding of what the situation is and what, in principle, we could do to solve it while avoiding unforeseen negative effects, seems like a bad idea. Also finding ways to stop factory farming might be necessary for humanity's "moral circle" to expand far enough to even consider wild animals in the first place, thus making a solution to factory farming a precondition to successful large scale work on wild animal suffering. But I'm rambling now, and don't actually know enough about the whole topic to justify the amount of text I've just produced)

Comment by markus_over on A lesson from an EA weekend in London: pairing people up to talk 1 on 1 for 30 mins seems to be very useful · 2018-06-19T20:15:30.286Z · score: 1 (1 votes) · EA · GW

I guess this very much depends on how individual activities are executed. We had our 2.5 day retreat in Dortmund, Germany about a month ago, and while I didn't see the evaluation results, I got a strong impression that most people agreed on these points (still, take this with a grain of salt):

  • career discussion in small groups (~3-5) was quite useful; we had about 1 hour per group, and more would probably have been better.

  • double crux (I guess similar to productive disagreement?) was a cool concept, but a bit difficult to execute under the given circumstances (although it worked great for me), for similar reasons as mentioned by you

  • discussion about where to donate - this was, to some degree, what this weekend was primarily about for us, as we raised money on the first evening and then had to figure out where to send it. And while it started very slowly, we ended up spending many hours on Sunday on this (very open) discussion, and it was tremendously valuable. I really didn't expect this, but ultimately, judging from how engaged everybody was, how interesting our conversations were in the end, and how often each of us changed their mind over the course of the discussion, this was a great way to spend our time