One’s Future Behavior as a Domain of Calibration 2020-12-31T15:48:33.921Z
CFAR Workshop in Hindsight 2020-12-13T16:18:32.356Z


Comment by markus_over on Learning, Knowledge, Intelligence, Mastery, Anki - TYHTL post 2 · 2021-09-14T19:44:50.584Z · EA · GW

Just a side note: While Obsidian is free (and great), I'm pretty sure it's not open source.

Comment by markus_over on Lessons from Running Stanford EA and SERI · 2021-09-11T12:38:29.234Z · EA · GW

Thank you Michael!

  • I personally am definitely more time- than funding-constrained. Or maybe even "energy-constrained"? But maybe applying for funding would be something to consider when/if we find a different person to run the local group, maybe a student who could do this for 10h a week or so.
  • regarding a fellowship: my bottlenecks here are probably "lack of detailed picture of how to run such a thing (or what it even is exactly)" and "what would be the necessary concrete steps to get it off the ground". Advertising is surely very relevant, but secondary to these other questions for now.
  • on a slightly more meta level, I think one of the issues is that I don't have a good overview of the "action space" (or "moves") in front of me as an organizer of an EA local group. Running a fellowship appears to be a very promising move, but I don't really know how to make it. Other actions may be intro talks, intro workshops, concepts workshops, discussions, watching EAG talks together, game nights, talks in general, creating a website, setting up a proper newsletter instead of having a manually maintained list of email addresses, looking for a more capable group organizer, facebook ads, flyers, posters, running giving games, icebreaker sessions, running a career club, coworking, 1on1s, meeting other local groups, reaching out to formerly-but-not-anymore-active members, and probably much more I'm not even thinking about. Maybe I'm suffering a bit from decision paralysis here and just doing any of these options would be better than my current state of "unproductive wondering what I should be doing"... :)
  • will message you regarding a call, thanks for the offer!
Comment by markus_over on If you could send an email to every student at your university (to maximize impact), what would you include in it? · 2021-09-04T20:27:33.543Z · EA · GW

Given I just received a link to this article in the 80,000 Hours newsletter: -- that article seems like something a lot of students might be interested in. So something like a brief description of the key idea plus a link to the article would be one option.

Comment by markus_over on A Case for Better Feeds · 2021-09-04T11:39:24.170Z · EA · GW

Recently I've been thinking a lot about the flow and distribution of information (as in facts/ideas/natural language) as a meta level problem. It seems to me that "ensuring the most valuable information finds its way to the people who need it" could make a huge difference to a lot of things, including productivity, well-being, and problem-solving of any kind, particularly for EAs. (if anybody reading this is knowledgeable in this broad area, please reach out!)

Your post appears to focus on a very related issue, which is how EAs source their EA information and some specific ways to improve it. I definitely agree that this is an issue worth looking into and worth improving (I personally think that either the EA forum or the EA Hub are in the best position to make such improvements, although I'm unsure what these improvements would look like).

The EA Forum Job Hunt idea admittedly doesn't seem very promising to me from how I understood it -- it sounds like by far the most work of all the suggestions, for a problem that, to me, seems as if it's solved to a pretty reasonable degree. 

I don't quite understand the EA Hub suggestion. What would be submitted and upvoted? Just the existence of (local) groups?

The remaining points regarding Twitter bots and feeds sound good to me, simply because they sound like very little work (unless I'm misjudging that), while potentially being helpful to many dozens of EAs.

By the way, I do wonder what ratio of EAs is actively using Twitter. I for one am not at all, and am not aware of many people I know personally doing so, but that might not mean much and not be very representative.

Comment by markus_over on Lessons from Running Stanford EA and SERI · 2021-08-29T09:29:01.581Z · EA · GW

Great post, thanks for sharing! Pretty much exactly the type of post I had been hoping for for a while. Just hearing that one success story of a local group that was in a more or less similar state as mine (albeit arguably in a higher potential environment), but made it into something so impressive, is very inspiring.

Given I only have ~10h per week available to spend on EA things (and not all of them go into community building), I was particularly happy to hear your 80/20 remark. I do wonder if it's possible to move a local group onto a kind of growth trajectory at only, say, 6h per week, or if that's just a lost cause. Maybe I should just spend the majority of these 6h looking for a person with more time and motivation to take over the role. :) 

Currently we're definitely leaving a lot of low hanging fruit on the table (or tree) though. And a lot of that may be due to relatively trivial issues and inconveniences. Some examples of such limiting factors (and I do wonder if similar things are true for other small local groups):

  • I've heard fellowships mentioned & recommended a lot in the last 1-2 years, but have a fairly limited understanding of the concrete details. Should we run our own one? Should we redirect people to other online fellowships? What would I even tell people in order to motivate them to do so? Timing also needs to be taken into account.
  • Fear of organizing things and (almost) nobody (new) showing up. We had quite a few talks and such that ended up only being heard by our core team, although we were hoping to attract some new faces. That being said, our marketing was often pretty shy rather than aggressive.
  • Lack of detailed knowledge about the European data protection regulation and its implications prevents us/me from systemizing our "funnel" (which hardly exists). I have no idea if it's even legal to have a database of names / email addresses / other personal information of people, whether we'd need to inform them beforehand, etc.
  • Most of our small number of members are busy with their own things / studies / careers and have hardly any capacity to engage with the group beyond one weekly social/discussion, so there's little room for organizing bigger things or spending more time on community building, and I find that situation somewhat demoralizing.
  • We have a whatsapp group and a Slack workspace. Whatsapp is great to get new people on board quickly, but it's surprisingly difficult to get them to sign up on Slack, and if they do we can never rely on them seeing new messages, or looking in there at all. Right now our Slack workspace is almost exclusively used by our few core members, and others hardly ever engage.
  • I feel quite averse to "pushing" people to do things, and wonder if that's a necessary skill for a community builder, or whether ideally people should be motivated enough that they only need to be "enabled"/supported instead.
Comment by markus_over on Philosophy Web - Project Proposal · 2021-08-28T12:29:09.845Z · EA · GW

It sounds interesting, albeit to be fair a bit gimmicky as well. To me at least, which may not mean much: I can imagine taking a few minutes to play around with such a tool if it existed, maybe find some contradiction in my beliefs (probably after realizing that many of my beliefs are pretty vague and that it's hard to put these hard labels on them), and get to the conclusion that really my beliefs weren't that strong anyway and so the contradiction probably doesn't matter all that much. I can imagine others would have a very different experience though (and maybe my expectation about myself is wrong as well of course).

I'd be interested in your thoughts on a few questions:

  1. Can you describe an example "user journey" for Philosophy Web? What beliefs would that imaginary user hold, how would they interact with the software, what would come out, just as one prototypical example?
  2. Would there be other, maybe simpler ways for that imaginary user to get to the same conclusion, not involving Philosophy Web? What bottleneck prevents people from making these conclusions?
  3. Who would be the primary target audience for this? What would make the tool "effective"? Are you primarily thinking about EAs getting to a more self-consistent belief set? Philosophy students? Everyone?
  4. What are the most likely ways in which such a project would fail, given you found the necessary support to build it?
  5. Does the project's success depend on some large number of users? What's the "threshold"? How likely is it to pass that threshold?
  6. What would be the smallest possible version (so MVP basically) of the project that achieves its primary purpose? Could something be prototyped within a day that allows people to test it?
  7. Assuming the project is built and completed and people can use it as intended - what are the most likely reasons for members of your target audience to not find it useful?

As an additional note, I'm quite a fan of putting complex information into more easily digestible forms, such as mind maps, and could imagine that "data structure" in itself being quite valuable to people merely to explore different areas of philosophy, even to a limited degree. I'm not quite sure though if the project entails such a web being presented visually, or if users would only see the implications of their personal beliefs.

Comment by markus_over on Open call for EAs with passion for meta-learning <3 · 2021-08-28T11:47:12.516Z · EA · GW

Just wanted to say I very much like the idea, although I'll probably not get involved myself. I was very happy about the anki deck of EA key numbers that was published two months ago, and would find it great if there were more ways to easily add important EA ideas to one's anki deck (e.g. you mention the 80,000 Hours key ideas in the google doc, great idea!).

Comment by markus_over on How much money donated and to where (in the space of animal wellbeing) would "cancel out" the harm from leasing a commercial building to a restaurant that presumably uses factory farmed animals? · 2021-08-25T18:49:35.316Z · EA · GW

It would be quite surprising to me if your idea did not work out, simply because doing good for animals via donations tends to be really low cost (but might depend on what "a lot more money" really means in your case). Imagining for instance that for each and every restaurant in the world some non-negligible cut of the rent (say 5%) would go into effective animal charity, my super rough 3 minute Fermi estimate says that would amount to something in the order of $10 billion per year. Given that about 80 billion land animals are slaughtered each year, that would mean that at a cost effectiveness of sparing 8 animal lives per dollar donated (which doesn't sound entirely unrealistic), your suggested approach to leasing to restaurants would, on a global scale, not only be net positive, but very theoretically end factory farming of land animals (obviously not in practice given diminishing marginal returns). It's a very hypothetical argument, but maybe it adds something.
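For what it's worth, the three-minute Fermi estimate can be written out explicitly. Every number below is the same rough guess from this comment, not a researched figure:

```python
# Back-of-the-envelope version of the estimate above.
# All inputs are rough guesses from the comment, not researched figures.

annual_donations = 10e9            # ~$10B/year if ~5% of global restaurant rent went to effective animal charities
animals_slaughtered_yearly = 80e9  # land animals slaughtered per year
animals_spared_per_dollar = 8      # optimistic cost-effectiveness assumption

animals_spared = annual_donations * animals_spared_per_dollar
# True -- hence "very theoretically end factory farming" (ignoring diminishing returns):
print(animals_spared >= animals_slaughtered_yearly)
```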

Apart from that, maybe there's a way to attract more vegetarian/vegan restaurants in particular? No idea about the concrete processes and legislation around that, but maybe you have some power in that regard.

Comment by markus_over on Teaching You How To Learn post 1 is live! · 2021-08-21T09:45:46.893Z · EA · GW

Some random thoughts from me as well:

  • I wonder if different people may have quite different bottlenecks with regards to how to learn most effectively, and it may be not so much about "do these things" but rather "from these typical bottlenecks, which one affects you the most?"
  • the framing of "The best way to learn" seems a bit dangerous to me; even if "scientifically proven", it still basically just means that it works well on average, but not necessarily for everybody. While active recall and spaced repetition probably are indeed very general, it might be good to add a few notes regarding how people might differ.
  • on a similar note, 80,000 Hours tends to incorporate "reasons why you might disagree" or "where we've been wrong in the past" kind of sections and articles, which too I feel would help a little. E.g. "things Anki isn't ideal for", which definitely exist.
  • maybe a relevant part of effective learning is to be more aware of one's true motives in doing things, be it getting a degree, reading non-fiction books, having an anki routine etc., and whether one's truly doing this to learn things, and if so for what exact purpose
  • related to this, there are different dimensions to learning, similar to productivity: what are you learning (and why), how are you learning, and how much time are you spending. So basically direction, quality, quantity of the learning process. It seems that many resources, maybe including your site, mostly focus on the quality part, whereas the direction part may be even more important and comparably neglected.
  • during one of the EAG Virtual conferences I talked to somebody who was involved in creating a free ebook on the most effective learning strategies for students during pandemic times; wasn't able to find it again so far, but if I do I'll add a link
  • I personally would find it very useful to get some better/clearer mental models of learning and knowledge. Maybe the kind of thing Spencer Greenberg tends to do, e.g. in his podcasts, where he frequently goes into "Well I think X can be broken down into 4 categories: ..." mode and suddenly X makes way more sense than it did before that breakdown.
  • for a long time I've been of the conviction that the way we tend to structure information is highly suboptimal. I'm mostly referring to linear texts about things. 1. Texts are good for some things, but by far not for everything, 2. we're not at all using our brain's immense capability for spatial and visual processing, 3. texts are static and non-interactive, 4. while you have things like table of contents, chapters/headlines and some formatting, it's not an ideal implementation of "different zoom levels", and there are certainly better ways of letting people learn things on a very high level first and then "zoom in" further. As a learner, you have to take what you've got of course. But the other side of the coin - how can you make learning for others easier as a content provider of any sort? - seems very important as well, and I think such a page would be in a great position to experiment with such ways, and not rely on classical linear text form.

About the concrete project:

  • I think providing anki cards at the bottom of your posts is a great idea
  • 80,000 Hours tends to have small summaries of their articles at the top, which I would find useful here as well
  • The Key Ideas Guide post is currently very text-heavy, which makes sense since it's in progress and you probably want to focus on the ideas themselves rather than the presentation. For the future though I think it would make it much more digestible if there was a bit more variety to it, be it pictures, graphs, or even just some formatting tweaks. E.g. one or two screenshots from actual anki cards would be a start, or a graph of the forgetting curve.
  • Style-wise, you're using parentheses a lot in your post, which I can totally relate to - I do it all the time e.g. when exchanging messages with people or writing forum posts and comments. But it does still seem suboptimal to me, as it hurts the reading flow, and may be a sign one's not focusing on what's actually essential.
  • The post to me feels quite a bit like it's trying to sell me something. I was almost expecting a "subscribe to my newsletter to get a FREE ebook!" while reading. :) This is something 80,000 Hours avoid pretty well by being very open and grayscale about things.
  • I find it great that you've just started doing it and putting it out there looking for feedback; I'm working on one or two vague similar-ish projects (not related to learning though) and didn't yet manage to get over my semi-perfectionist "I'll just make sure I have something good before showing it to anybody" attitude, although I know that's a bad approach
  • minor note, at one point you write "(god this bold is intense)" although there's nothing actually bold; maybe the formatting got lost somewhere on the way?

Some counter points on drawbacks/challenges of Anki:

  • you need to be rather conscientious to use it effectively; missing a week can easily break the habit of daily ankiing, because you're suddenly looking at potentially 100s of flashcards to review
  • it might push people to go for memorizing (often useless) facts rather than really learning and understanding deeper concepts
  • also, adding anki cards to your deck now feels like progress; e.g. after reading a book (or chapter), you might have a feeling that not creating new cards is bad. This might nudge you to add useless cards rather than nothing, degrading the quality of your deck over time. I find it really hard to prevent this personally. After reading a book and going through my notes, if I add nothing to my Anki deck, I feel like having read the book was a waste of time. So I'm motivated to add things simply to feel better about the sunk cost. But looking at my deck honestly, I'm almost sure 50% of the stuff in there doesn't really add anything to my life.
  • setting up such a system and getting into it takes a lot of work and willpower, and many people may just not be willing to go that far (even if it does indeed pay off in the long term)

That all being said, if I went back to university, I'd definitely use Anki and I'm sure it would improve my performance a lot compared to my time there in the past, when I didn't know what spaced repetition even was. I'd just say that it's maybe something like 40% of my personal ideal learning system, and there would be a lot beyond that (e.g. how to watch lectures, how to take notes, how to work on actual exercises, the fact that explaining things to others is very helpful, how to motivate yourself, how to plan and build a reliable system, ...).

Comment by markus_over on How to Train Better EAs? · 2021-08-06T16:06:31.822Z · EA · GW

I recently read Can't Hurt Me by David Goggins, as well as Living with a SEAL about him, and found both pretty appealing. Also wondered whether EA could learn anything from this approach, and am pretty sure that this is indeed the case, at least for a subset of people. There is surely also some risk of his "no bullshit / total honesty / never quit" attitude to be very detrimental to some, but I assume it can be quite helpful for others.

In a way, CFAR workshops seem to go in a similar-ish direction, don't they? Just much more compressed. So one hypothetical option to think about would be to consider scaling it up to a multi-month program, for highly ambitious & driven people who prioritize maximizing their impact to an unusually high degree. Thinking about it, this does indeed sound at least somewhat like what Charity Entrepreneurship is doing. Although it's a pretty particular and rather "object-level" approach, so I can imagine having some alternatives that require similarly high levels of commitment but have a different focus could indeed be very valuable.

Comment by markus_over on Building my Scout Mindset: #1 · 2021-07-17T12:25:38.590Z · EA · GW

Thanks for making this public, found it really interesting to follow your train of thought. Also, despite hearing about it in the past, I had completely forgotten about Julia's book. Added it to my reading list now. :)

Comment by markus_over on [deleted post] 2021-07-13T11:08:08.250Z

How much time should a participant roughly allocate for this? How much time are we supposed to spend on each of the questions? For how many days/weeks/months will this be running?

Is "start by finding someone to practice with" something one should do before signing up, i.e. should people sign up in groups of 2? Or does that matching of participants happen once you've got enough together? If the latter, do you have control over which of the two roles you get? I couldn't yet make that much sense of the descriptions of what backcaster and retriever are doing exactly, specifically the "pick a date" part and how the date influences things.

What degree of forecasting experience are you looking for? Or all types of people? Would it make sense for people to sign up when they've gone through a lot of calibration training in the past?

And a side note, the first paragraph on the linked page seems to have been pasted twice.

Comment by markus_over on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-07-04T13:16:49.855Z · EA · GW

+1 to that! Really cool, thanks for doing this. :)

Comment by markus_over on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-06-25T20:49:29.458Z · EA · GW

Thanks a lot for the thorough post Emily! I like the framing of staying up late as a high-interest loan a lot. And I agree that reading Why We Sleep may indeed be quite useful for certain people, despite its shortcomings. You make a lot of good points and provide several interesting ideas, plus the post is written in a very readable way, and the drawings are great.

Not that much else to add, except two tiny nitpicks regarding your estimation:

  • you equated "being 30% less productive" with "taking 30% more time to complete things", but actually being 30% less productive would mean you take 100/70 - 1 = ~42% longer. (a more obvious example of this would be that being 50% less productive means you require twice the time = 100% more, not 50% more)
  • concluding your estimation, your multiplication characters were interpreted as formatting, making the "0.254.9 + 0.502.1 + 0.10*0.4" part quite confusing to read. You could use × or • instead.
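The productivity arithmetic in the first bullet generalizes to a one-liner (plain arithmetic, nothing assumed beyond the comment's own numbers):

```python
def extra_time_fraction(productivity_loss):
    """Extra time needed when productivity drops by `productivity_loss` (a fraction of baseline)."""
    return 1 / (1 - productivity_loss) - 1

print(round(extra_time_fraction(0.30), 3))  # 0.429 -> ~42% longer, not 30%
print(extra_time_fraction(0.50))            # 1.0 -> twice the time, i.e. 100% more
```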
Comment by markus_over on How can I best use product management skills for digital services for good? · 2021-06-04T09:19:28.310Z · EA · GW

Same here. :)

Comment by markus_over on Statistics for Lazy People, Part 2 · 2021-04-16T18:01:24.305Z · EA · GW

Neat! Small mistake: "What is the probability that it will still be working after eight twenty years" should probably be "after twenty years". And multiple data points are exciting indeed!

Comment by markus_over on Announcing "Naming What We Can"! · 2021-04-02T16:32:09.414Z · EA · GW

Perfect! In the end the impact will of course be orders of magnitude higher, as a slightly better name of any particular organization will affect tens if not hundreds of thousands of people in the long run. And there may even be a tail chance of better names increasing the community's stability and thus preventing collapse scenarios. I think overall you really undersold your project with that guesstimate model only focusing on this post only, as if that was all there is to it.

Comment by markus_over on Announcing "Naming What We Can"! · 2021-04-02T12:42:40.356Z · EA · GW

I believe there are a few serious flaws in your guesstimate model:

  • a year has 365.2421905 days, not 365.25. That's not even rounded correctly!
  • Smiles per QALY should multiply days in a year with smiles in a good day, instead they are added. They don't even have the same unit, how can you add them! Insanity!
  • the post's karma is far outside of even your 99% interval

Everything else seems quite correct and I agree with your CIs and conclusions.

Also, please find a new name for guesstimate.

Comment by markus_over on Statistics for Lazy People, Part 1 · 2021-04-02T12:19:10.506Z · EA · GW

Nice post! Found it through the forum digest newsletter. Interestingly I knew Lindy's Law as the "Copernican principle" from Algorithms to Live By, IIRC. Searching for the term yields quite different results however, so I wonder what the connection is.

Also, I believe your webcomic example is missing a "1 -". You seem to have calculated p(no further webcomic will be released this year) rather than p(there will be another webcomic this year). Increasing the time frame should increase the probability, but given the formula in the example, the probability would in fact decrease over time.
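I don't have the original formula in front of me, so the exact form below is a hypothetical stand-in, but the sanity check behind the "1 -" point is easy to express: whatever the example computes, p(at least one more release within the horizon) should grow with the horizon, and a quantity that shrinks instead is probably the complement:

```python
# Hypothetical stand-in for the example's formula: some survival-style term
# that decreases as the horizon grows (as described in the comment above).
def p_no_further_release(age_years, horizon_years):
    return age_years / (age_years + horizon_years)

def p_another_release(age_years, horizon_years):
    # The "1 -" the comment says is missing.
    return 1 - p_no_further_release(age_years, horizon_years)

# Sanity check: a longer time frame should make another release MORE likely...
assert p_another_release(5, 2) > p_another_release(5, 1)
# ...while the un-complemented quantity moves the wrong way:
assert p_no_further_release(5, 2) < p_no_further_release(5, 1)
```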

Comment by markus_over on EA Münster Predictions 2021 · 2021-01-30T09:52:17.783Z · EA · GW

"Bei 80% der Treffen der EA Münster Lokalgruppe in 2021 waren mehr als 5 Personen anwesend" ("More than 5 people attended 80% of the EA Münster local group's meetups in 2021") - how will cancelled meetups (due to lack of attendees, if that ever happens) count into this? Not at all, or as <=5 attendees? (kind of reminds me of how the Deutsche Bahn decided to not count cancelled trains as delayed)

Also, coming from EA Bonn where our average attendance is ~4 people, I find the implications of this question impressive. :D

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2021-01-02T11:39:22.400Z · EA · GW

I see, so at the end of the day you're assigning a number representing how productive the day was, and you consider predicting that number the day before? I guess in case that rating is based on your feeling about the day as opposed to more objectively predefined criteria, the "predictions affect outcomes" issue might indeed be a bit larger here than described in the post, as in this case the prediction would potentially not only affect your behavior, but also the rating itself, so it could have an effect of decoupling the metric from reality to a degree.

If you end up doing this, I'd be very interested in how things go. May I message you in a month or so?

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2021-01-02T11:30:23.113Z · EA · GW

Good point, I also make predictions about quarterly goals (which I update twice a month) as well as my plans for the year. I find the latter especially difficult, as quite a lot can change within a year including my perspective on and priority of the goals. For short term goals you basically only need to predict to what degree you will act in accordance with your preferences, whereas for longer term goals you also need to take potential changes of your preferences into account.

It does appear to me that calibration can differ between the different time frames. I seem to be well calibrated regarding weekly plans, decently calibrated on the quarter level, and probably less so on the year level (I don't yet have any data for the latter). Admittedly that weakens the "calibration can be achieved quickly in this domain" to a degree, as calibrating on "behavior over the next year" might still take a year or two to significantly improve.

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2020-12-31T15:50:29.963Z · EA · GW

I personally tend to stick to the following system:

  • Every Monday morning I plan my week, usually collecting anything between 20 and 50 tasks I’d like to get done that week (this planning step usually takes me ~20 minutes)
    • Most such tasks are clear enough that I don’t need to specify any further definition of done; examples would be “publish a post in the EA forum”, “work 3 hours on project X”, “water the plants” or “attend my local group’s EA social” – very little “wiggle room” or risk of not knowing whether any of these evaluates to true or false in the end
    • In a few cases, I do need to specify in greater detail what it means for the task to be done; e.g. “tidy up bedroom” isn’t very concrete, and I thus either timebox it or add a less ambiguous evaluation criterion
  • Then I go through my predictions from the week before and evaluate them based on which items are crossed off my weekly to do list (~3 minutes)
    • “Evaluate” at first only means writing a 1 or a 0 in my spreadsheet next to the predicted probability
    • There are rare exceptions where I drop individual predictions entirely due to inability to evaluate them properly, e.g. because the criterion seemed clear during planning, but it later turned out I had failed to take some aspect or event into consideration[1], or because I deliberately decided to not do the task for unforeseeable reasons[2]. Of course I could invest more time into bulletproofing my predictions to prevent such cases altogether, but my impression is that it wouldn’t be worth the effort.
  • After that I check my performance of that week as well as of the most recent 250 predictions (~2 minutes)
    • For the week itself, I usually only compare the expected value (sum of probabilities) with actually resolved tasks, to check for general over- or underconfidence, as there aren’t enough predictions to evaluate individual percentage ranges
    • For the most recent 250 predictions I check my calibration by having the predictions sorted into probability ranges of 0..9%, 10..19%, … 90..99%.[3] and checking how much the average outcome ratio of each category deviates from the average of predictions in that range. This is just a quick visual check, which lets me know in which percentage range I tend to be far off.
    • I try to use both these results in order to adjust my predictions for the upcoming week in the next step
  • Finally I assign probabilities to all the tasks. I keep this list of predictions hidden from myself throughout the following week in order to minimize the undesired effect of my predictions affecting my behavior (~5 minutes)
    • These predictions are very much System 1 based and any single prediction usually takes no more than a few seconds.
    • I can’t remember how difficult this was when I started this system ~1.5 years ago, but by now coming up with probabilities feels highly natural and I differentiate between things being e.g. 81% likely or 83% likely without the distinction feeling arbitrary.
    • Depending on how striking the results from the evaluation steps were, I slightly adjust the intuitively generated numbers. This also happens intuitively as opposed to following some formal mathematical process.

While this may sound complex when explaining it, I added the time estimates to the list above in order to demonstrate that all of these steps are pretty quick and easy. Spending these 10 minutes[4] each week seems like a fair price for the benefits it brings.
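The bucketed calibration check from the evaluation step could be sketched roughly like this (function names and the toy data are mine for illustration, not from the actual spreadsheet):

```python
from collections import defaultdict

def calibration_by_bucket(predictions):
    """predictions: list of (predicted_probability, outcome) pairs, outcome 0 or 1.
    Returns {bucket_start: (mean_predicted, mean_outcome, n)} for 10%-wide buckets,
    i.e. the 0..9%, 10..19%, ..., 90..99% ranges described above."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[min(int(p * 10), 9) * 10].append((p, outcome))
    return {
        start: (sum(p for p, _ in pairs) / len(pairs),
                sum(o for _, o in pairs) / len(pairs),
                len(pairs))
        for start, pairs in sorted(buckets.items())
    }

# Toy data: three ~90% predictions (two resolved true), plus two others.
toy = [(0.90, 1), (0.92, 1), (0.95, 0), (0.15, 0), (0.60, 1)]
for start, (mean_p, mean_o, n) in calibration_by_bucket(toy).items():
    print(f"{start}..{start + 9}%: predicted {mean_p:.2f}, actual {mean_o:.2f} (n={n})")
```

The gap between mean predicted and mean actual per bucket is the "how far off am I in this range" number; comparing the overall sum of probabilities to the count of resolved tasks gives the week-level over/underconfidence check.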

  1. An example would be “make check up appointment with my dentist”, but when calling during the week realizing the dentist is on vacation and no appointment can be made; given there’s no time pressure and I prefer making an appointment there later to calling a different dentist, the task itself was not achieved, yet my behavior was as desired; as there are arguments to be made to evaluate this both as true or false, I often just drop such cases entirely from my evaluation ↩︎

  2. I once had the task “sign up for library membership” on my list, but then during the week realized that membership was more expensive than I had thought, and thus decided to drop that goal; here too, you could either argue “the goal is concluded” (no todo remains open at the end of the week) or “I failed the task” (as I didn’t do the formulated action), so I usually ignore those cases instead of evaluating them arbitrarily ↩︎

  3. One could argue that a 5% and a 95% prediction should really end up in the same bucket, as they entail the same level of certainty; my experience with this particular forecasting domain however is that the symmetry implied by this argument is not necessarily given here. The category of things you’re very likely to do seems highly different in nature from the category of things you’re very unlikely to do. This lack of symmetry can also be observed in the fact that 90% predictions are ~10x more frequent for me in this domain than 10% predictions. ↩︎

  4. It’s 30 minutes total, but the first 20 are just the planning process itself, whereas the 3+2+5 afterwards are the actual forecasting & calibration training. ↩︎
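
The weekly evaluation step can be sketched as a short script (the bucket width, data, and numbers below are made up for illustration, not taken from my actual logs): group resolved predictions into 10%-wide buckets and compare each bucket’s average stated probability with its observed hit rate.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated probability in percent, happened?).
    Groups predictions into 10%-wide buckets and returns, per bucket,
    the mean stated probability, the observed hit rate, and the count."""
    buckets = defaultdict(list)
    for percent, happened in predictions:
        buckets[min(percent // 10, 9)].append((percent, happened))
    report = {}
    for b in sorted(buckets):
        items = buckets[b]
        mean_p = sum(p for p, _ in items) / len(items)
        hit_rate = 100 * sum(h for _, h in items) / len(items)
        report[f"{b * 10}-{b * 10 + 9}%"] = (round(mean_p, 1), round(hit_rate, 1), len(items))
    return report

# hypothetical resolved predictions from a few weeks of planning
history = [(90, True), (85, True), (90, False), (60, True),
           (55, False), (95, True), (80, True), (30, False)]
print(calibration_report(history))
```

A well-calibrated forecaster would see the first two entries of each tuple converge as the counts grow; the asymmetry from footnote 3 shows up as the high buckets filling up much faster than the low ones.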

Comment by markus_over on Announcing the Forecasting Innovation Prize · 2020-12-30T12:19:12.024Z · EA · GW

"Before January 1st" in any particular time zone? I'll probably (85%) publish something within the next ~32h at the time of writing this comment. In case you're based in e.g. Australia or Asia that might then be January 1st already. Hope that still qualifies. :)

Comment by markus_over on Make a Public Commitment to Writing EA Forum Posts · 2020-12-20T16:10:11.845Z · EA · GW

Indeed, thank you. :) I haven't started the other, forecasting related one, but intend to spend some time on it next week and hopefully come up with something publishable before the end of the year.

Comment by markus_over on CFAR Workshop in Hindsight · 2020-12-14T08:48:55.281Z · EA · GW

My thoughts on how to best prepare for the workshop (as mentioned in the post):

  • Write down your expectations, i.e. what you personally hope to take away from the workshop (and if you’re fancy, maybe even add quantifications/probability estimates to each point)
  • Make sure you can go into the workshop with a clear head and without any distractions
  • Don’t make the same mistake I made, which was booking a flight home way too early on the day after the end of the workshop. I didn’t realize beforehand how difficult it was to get from the workshop venue to the airport, and figuring out a solution stressed me quite a bit during the week (but was in the end solved for me by the super kind ops people)
  • Do your best in the week(s) before to stay healthy
  • Sleep enough the nights before
  • Maybe prepare a bug list and take it with you; this will also be one of the first sessions, but the more the better
  • Don’t panic; if you don’t manage to prepare in any significant way, the workshop is still extremely well designed and you’ll do just fine.
Comment by markus_over on Announcing the Forecasting Innovation Prize · 2020-11-22T10:14:00.381Z · EA · GW

Sure. Those I can mention without providing too much context:

  • calibrating on one's future behavior by making a large amount of systematic predictions on a weekly basis
  • utilizing quantitative predictions in the process of setting goals and making plans
  • not prediction-related, but another thing your post triggered: applying the "game jam principle" (developing a complete video game in a very short amount of time, such as 48 hours) to EA forum posts and thus trying to get from idea to published post within a single day; because I realized writing a forum post is (for me, and a few others I've spoken to) often a multi-week-to-month endeavour, and it doesn't have to be that way, plus there are surely diminishing returns to the amount of polishing you put into it

If anybody actually ends up planning to write a post on any of these, feel free to let me know so I'll make sure to focus on something else.

Comment by markus_over on Make a Public Commitment to Writing EA Forum Posts · 2020-11-20T16:30:07.206Z · EA · GW

Good timing and great idea. Considering I've just read this: I'll gladly commit to submitting at least one forum post to the forecasting innovation prize (precise topic remains to be determined). Which entails writing and publishing a post here or on lesswrong before the end of the year.

I further commit to publishing a second post (which I'd already been writing on for a while) before the end of the year.

If anybody would like to hold me accountable, feel free to contact me around December 20th and be very disappointed if I haven't published a single post by then. 

Thanks for the prompt Neel!

Comment by markus_over on Announcing the Forecasting Innovation Prize · 2020-11-20T16:24:16.148Z · EA · GW

Nice! In the few minutes of reading this post I came up with five ideas for related things I could (and maybe should) write a post on. My only issue is that there's only 6 weeks of time for this, and I'm not sure if that'll be enough for me to finish even one given my current schedule. But I'll see what I can do. May even be the right kind of pressure, as otherwise I'd surely be following Parkinson's law and work on a post for way too long.

(The many examples you posted were very helpful by the way, as without them I would have assumed I don't have much to contribute here)

Comment by markus_over on Have you ever used a Fermi calculation to make a personal career decision? · 2020-11-13T16:37:28.597Z · EA · GW

Firstly, Monte Carlo simulations (such as on ) are likely more precise/useful than pure Fermi estimates, as they make uncertainty explicit, including your consideration of the value possibly being off by a factor of 10, and thus let you have greater confidence in the results. One advantage Fermi estimates definitely have is that they force you to think about the different components of the problem, or in this case how different careers contribute to your impact. But generally speaking they are primarily helpful for estimating orders of magnitude, and are thus not all that useful for comparing different options unless those lie very far apart.
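
As a toy illustration of the difference (every number, factor, and distribution below is made up, not a real model of any career): a Fermi estimate would multiply three best guesses into a single number, while the Monte Carlo version samples each factor from a distribution and yields a spread.

```python
import random

random.seed(0)  # reproducible toy run

def career_impact_sample():
    # each factor is drawn from a distribution instead of a point guess
    years = random.uniform(10, 30)                # years spent on the path
    value_per_year = random.lognormvariate(0, 1)  # relative impact per year
    fit_multiplier = random.uniform(0.5, 2.0)     # personal fit / luck
    return years * value_per_year * fit_multiplier

samples = sorted(career_impact_sample() for _ in range(10_000))
median = samples[len(samples) // 2]
low, high = samples[500], samples[9500]  # central 90% interval
print(f"median {median:.1f}, 90% interval {low:.1f} to {high:.1f}")
```

Comparing two careers then means comparing two such distributions; if their 90% intervals overlap heavily, the Fermi-style point estimates alone wouldn't have justified a confident choice.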

Secondly, I agree with shawzach that it makes a lot of sense to talk your career considerations through with other people. I used EAGx Virtual for that purpose and looked for all the people who might have something to contribute to my career considerations. But then again, it differed from your situation in that I didn't have several alternatives but rather one preexisting plan and wanted to figure out whether it was "good" or I should look further for alternatives. Still, people might be able to add a few crucial considerations or just arguments you hadn't thought of before that affect your estimates.

Comment by markus_over on The Vegan Value Asymmetry and its Consequences · 2020-10-27T11:47:32.750Z · EA · GW

But in order to have net positive lives, we need to do something more than follow consumer-choice based principles.

I agree. Veganism is (for most vegans, I believe) mostly about reducing the harm you inflict on the world. It's clear you can't ever get to 0. Even if your life is net positive, somewhere along the way you always harm somebody or some being. And while veganism itself certainly has this asymmetry you refer to, it seems a lot of vegans take steps beyond that in the more positive direction, such as

  • being effective altruists and in that way trying to do more good in the world
  • going into activism or animal rights advocacy
  • working at animal shelters or even just taking care of a stray animal

So I don't think the risk of neglecting the positive side of things is all that high. Certainly makes sense to take it into consideration though, and I appreciate your post!

Comment by markus_over on Prabhat Soni's Shortform · 2020-09-26T07:11:04.941Z · EA · GW

I don't have or know of any data (which doesn't mean much, to be fair), but my hunch would be that rationalist people who haven't heard of EA are, on average, probably more open to EA ideas than the average altruistic person who hasn't heard of EA. While altruistic people might generally agree with the core ideas, they may be less likely to actually apply them to their actions.

It's a vague claim though, and I make this assumption because, of the few dozen EAs I know personally, I'd very roughly estimate 2/3 come across as more rationalist than altruistic (if you had to choose which of the two they are); plus I'd further assume that in the general population more people appear altruistic than rationalist. If rationalists are more rare in the general population, yet more common among EAs, that would seem like evidence for them being a better match, so to speak. These are all just guesses however without much to back them up, so I too would be interested in what other people think (or know).

Comment by markus_over on A few really quick ideas about personal finance · 2020-09-14T10:54:52.888Z · EA · GW
But use with caution, as I think the time/$ trade-off might imply you should maximise your working hours, which I think isn’t a good assumption. See point earlier about knowledge workers.

I definitely agree with "use with caution". The value-of-my-time question has been bothering me for years now, and I never really found a satisfying heuristic for it. There are just a lot of complications, and there imho is no way this can reasonably be considered a constant value:

  • it depends not only on the amount of time you're thinking about spending in a certain way (or gaining), but on the difference in value between the thing you're considering doing and the thing you would otherwise be doing
  • even if you manage to pin it down to some value in a situation, that's only the marginal value of time given your circumstances, and it would differ if you had more/less time or money on your hands; thus, if you derive some great heuristic from that (e.g. "take a cab after parties"), depending on how much that impacts your behavior, this changes the value of your time, so the heuristic is sort of self-defeating to a degree

So I basically came up with a per-hour value range for the things I tend to do, which at least gives me a way to compare them. It does feel like I'm overthinking and under-utilizing the principle though... one conclusion it led me to however was that I should work less, since my time was worth more to me than what I was paid per hour (after taxes). This is definitely a case where point 2 from above is highly relevant though: reducing my time at the job rather drastically changes the value of my time, so I need to take that into account and try to find an equilibrium, working just as much (or little) such that the estimated value of my time (maybe averaged over my typical daily habits) reduces to roughly what I'm being paid to work. Which is tough!
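
That equilibrium idea can be sketched with toy numbers (the base value, decay rate, and hour budget below are made-up assumptions, not a real model): reduce work hours until the marginal value of an extra free hour drops to the after-tax wage.

```python
def marginal_value_of_free_hour(free_hours, base_value=40.0, decay=0.03):
    # toy assumption: each additional free hour is worth 3% less than the last
    return base_value * (1 - decay) ** free_hours

def equilibrium_work_hours(wage_after_tax, flexible_hours=60):
    # hand hours over to free time while an extra free hour beats the wage
    for free in range(flexible_hours):
        if marginal_value_of_free_hour(free) < wage_after_tax:
            return flexible_hours - free
    return 0

print(equilibrium_work_hours(wage_after_tax=20.0))  # → 37
```

This is of course only a static toy; as noted above, changing your work hours itself shifts the value of your remaining time, so any real answer would need iterating rather than solving once.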

But yeah, I'm getting a bit rambly. Just one more thing: Consumption smoothing is an interesting concept which I have to admit never occurred to me before. Thanks for the post! It also nudged me to once again look into investing which I've been procrastinating for years.

Comment by markus_over on Consider a wider range of jobs, paths and problems if you want to improve the long-term future · 2020-07-05T19:40:02.906Z · EA · GW
Designing recommender systems at top tech firms

Semi-related and somewhat off-topic, so forgive me for following that different track – but I recently thought about how one of the major benefits of EAGx Virtual for me was that it worked as a recommender system of sorts, in the form of "people reading my Grip profile (or public messages in Slack) and letting me know of other people and projects that I might be interested in". A lot of "oh you're interested in X? Have you heard of Y and Z?" which often enough led me to new interesting discoveries.

I'm curious if there may be a better approach to this, rather than "have a bunch of people get together and spontaneously connect each other with facts/ideas/people/projects based on mostly random interactions". This current way seems to work quite well, but it is also pretty non-systematic, luck-based, and doesn't scale that well (it kind of does, but only in the "more people participate and invest time -> more people benefit" kind of way).

(That all being said, conferences obviously have a lot of other benefits than this recommender system aspect; so I'm not really asking whether there are ways to improve conferences, but rather whether there are different/separate approaches to connecting people with the information most relevant to them)

Comment by markus_over on 2019 - Year in Review · 2020-06-19T17:14:40.392Z · EA · GW

Just a random experience report - I've been using the website for my monthly donations for about half a year now, and think it's great. The process is so easy and friction-less that donating is something I'm always looking forward to as opposed to it feeling like an obligation I need to get through with, which basically had been my feeling about it beforehand. The website's great UX really makes a huge difference.

Also it's a great and easy page to point people to that sympathize with the idea of effective giving but don't really know what the next steps would be. Which isn't surprising as I guess that's partly the reason the project exists in the first place.

Random thought - I've seen many EAs apply stickers (e.g. GiveWell) to their phones or laptops, which I guess is a very cheap tool that may lead to a couple of conversations (and an expected >0 "conversions") about EA or the respective org in particular. I'd assume that at least marginally such stickers are pretty effective little things, but it may depend on how difficult it is to set up and distribute such a product. So it may certainly not be on top of your backlog if anything, but have you put any thought into such ideas? Maybe something like this would be a good opportunity for a volunteer, as I imagine it would require very little coordination initially.

Comment by markus_over on How to change minds · 2020-06-19T16:41:31.770Z · EA · GW

I wonder what heuristics people here follow regarding the question of when "how to change (other) minds" is a good mindset to have as opposed to "how to bring both conversation partners into a state of willingness to change one's mind", i.e. oneself being open to having one's mind changed as well, and then figuring out who has the right(er) idea about the topic at hand. The latter seems generally more sincere and useful, but I guess there are situations where you can be reasonably sure you really do know more about a topic than the other person and can be confident enough in your judgment that changing their mind is a reasonable goal to have.

Comment by markus_over on EA Forum feature suggestion thread · 2020-06-19T16:01:49.231Z · EA · GW

I'm not sure if such a feature would be worth the work it would involve, but: a very simple "editor" to very easily create probability distributions (or maybe more generally graphs that don't require mathematical formulas but just very rough manual sketching) and embed them into posts or comments could be useful. I'm not sure how often people would really use that though. Generally however, it would probably be a good thing to make probability estimates as explicit as possible, and being able to easily "draw" distributions in a few seconds and having a polished looking output could make that happen.

If this is something people would find useful, I'd be willing to spend the time to create such a component so it would theoretically just have to be "plugged in" afterwards.

Comment by markus_over on Why "animal welfare" is a thing? · 2020-06-19T12:33:15.218Z · EA · GW

Firstly, I think this may be helpful in understanding the downvotes: to me, your post isn't very clear, and it seems you're using a somewhat superficial excuse of a question in order to make a bunch of semi-related points (if this is not the case and you're sincerely just looking for an answer, then sorry for the assumption).

Linking to your book doesn't really add to the post and comes off as unnecessary self promotion independent of whatever the actual concrete point of this post may be.

"Playing Bach or Mozart" to animals is probably just an intended minor provocation and you're not seriously thinking that this is what EA is going for when it comes to animal welfare. Still, to attempt to answer your question:

  • "animal welfare" is a cause area in the sense that it's a big global problem (billions of animals experiencing pain and suffering) that is neglected (comparably few resources going into improving the situation) and potentially solvable
  • playing music to animals on the other hand would be one possible intervention (so an answer to the question of how we could approach that big problem), and certainly not the most effective one, and I don't think anybody here has claimed that. But correct me if I'm wrong.
  • if you disagree with how animal welfare is handled in EA currently, there are at least two possible constructive ways of attack:
    • you argue that animal welfare is not an important cause area, because either it is not as big a problem, because it is not neglected, or because it is not solvable; all of these things are pretty well established however, so unless you know of some very crucial consideration, even strong evidence in any of these areas would probably only lead to comparably small adjustments in how this cause area is prioritized in comparison to others
    • you argue that there are interventions to tackle that problem that are more effective than those currently favored by EA; this seems closer to what you're trying to do here. So your question should not be "Why is animal welfare a thing", but "Why do you assume intervention X is more effective than intervention Y" (e.g. X being research into clean meat, and Y being carbon tax), and then doing some research on the effectiveness of both things; or alternatively if you're relatively sure of that, writing a post in favor of intervention Y being underrated and why people should look more into it as it's a very effective animal welfare intervention.

Building on the last point: when arguing against a position, you'll get more support and fewer downvotes if you follow a) the good faith principle (basically assuming the position you're arguing against originates from well meaning people with a genuine interest in doing good) and b) try steelmanning the opposite view (i.e. trying to find the best possible available argument, as opposed to strawmanning, which "playing Bach or Mozart" basically is).

To get closer to the actual object level here, I'd be interested in what you think about these statements and to what extent you agree or disagree with them:

1. Animal suffering is a problem worth solving

2. We should prioritize approaches of solving the problem that do the most good per dollar/time (i.e. alleviate the most suffering or yield the most happiness, or following a similar metric depending on your values)

3. Which approach is the most effective one is an open question that should be answered primarily by gathering evidence

Comment by markus_over on Why and how the EA-Movement has to change · 2020-06-04T18:46:07.152Z · EA · GW

Thanks for sharing. Probably a bit too cynical for my taste (e.g. you mention many of them are vegan, which may not be the most effective thing you can do, but certainly is evidence for them going out of their way to live in line with their values, yet regarding donating 10% being "unpopular (I wonder why)" you seem to imply they wouldn't be open to any kind of sacrifice), but I do believe I've at least seen a few of these tendencies in others as well as myself, and it makes sense to look out for them.

Also I found your remark on the 10% number rarely being questioned somewhat enlightening, as I'm afraid I haven't questioned it myself either. Maybe it's a bit similar to vegetarianism and veganism, which are two comparably crowded spots on a continuum of ways to eat. These are easy categories, and once you're in one of them, it's easy to communicate to others and has a clear effect on your self-image, i.e. thinking of yourself as "a vegetarian" instead of "<insert random complicated formula of how to evaluate which beings you eat and which you don't>". Plus it probably works better as a potential role model for others.

With donating 10% (esp. in combination with the Giving What We Can pledge) you also end up in such a distinct category. For people who donate less it's a nice (albeit arguably arbitrary) ideal to look up to. For people who've reached it, it certainly makes sense to grow beyond it. Although I can imagine people wanting to do good primarily via their career, with donating 10% simply being sort of their baseline, and maybe a way of signalling to the outside world that they're really living what they preach and as such gaining more credibility. And for signalling purposes, which aren't inherently bad or anything, it makes sense to settle on a nice round number.

Comment by markus_over on Developing my inner self vs. doing external actions · 2020-05-31T10:32:14.102Z · EA · GW

I've thought about this question quite a bit as well (not very productively though), and these are basically my thoughts on it so far:

  • the two extremes are most likely highly suboptimal, so it must indeed be a question of finding the right balance
  • it "feels" like "doing a bit of both" is a sensible heuristic and trying to calculate this out more thoroughly may be overkill, as there are too many unknowns to get to any reliable solution
  • but the above may also just be my laziness talking, as on the other hand, it also seems clear that shifting the balance a bit towards the optimum could easily increase your whole life's output by a few %. Thus it would absolutely make sense to spend, say, a week or so, thinking deeply about this and at least trying to find a good balance
  • the answer likely isn't a ratio, but always depends on the concrete opportunities (especially as, as others have pointed out, few things fall strictly into one or the other category, but often it's a bit of both), which arise very often on the lower levels (e.g. "have this conversation or not" on a small level, "read this book" on a higher level), so it definitely makes sense to follow some kind of heuristic for these cases
  • on even higher levels, with decisions such as "take this job where I can learn a lot vs this job where I have direct impact", it certainly makes sense to not follow heuristics but investigate the concrete option(s) and estimate their effects on our personal development and impact
  • delaying our own impact to the future always bears some risk of value drift changing our plans, nullifying our impact
  • it's possible that the best way to learn how to have much impact is to try to have much impact, so optimizing for impact is the dominant strategy, but that certainly depends on the concrete cases and may be more true for the more high level decisions than low level ones, and is also only true if you get enough and quick enough feedback to actually evaluate your impact and correct your approach

So in a nutshell, I haven't in any way answered this for myself yet. I also haven't come up with a useful heuristic yet and mostly just follow my gut, possibly erring on the self development side so far, which makes sense as that part is comparably easy/rewarding/forgiving, whereas outside facing impact considerations have some risk of failure and much increased uncertainty. So I guess at least for myself "focus more on impactful projects and less on reading books" would be a useful heuristic and very likely lead me closer to the optimum balance.

Comment by markus_over on How do you deal with FOMO / FOBO · 2020-05-08T11:06:45.262Z · EA · GW

Hi Smer, I admittedly find it a little difficult to answer your question as it seems you raise a few different ones. Is it correct that your main struggle right now is to decide where to move your career, as you have a lot of different options and don't know which one to take? Or is that just one example, and you're more interested in the very general issue of finding it difficult to make decisions in the first place?

If it's the former, then I believe a more in-depth (maybe coaching kind of) conversation could be more fruitful here than a forum post, as I'm not sure broad answers to your post's title will be very applicable to your concrete situation. Also you mention a lot of things which you're already trying, but I find it difficult to see them in context as you didn't provide any notes on which ones of those work for you, and in what ways they already contribute (or not) to the central problem.

Anyway, just to try to add my two cents to the "how to deal with FOMO"-question, which I can relate to rather well as I'm also a 30-ish web developer often struggling with making decisions:

  • I personally have the impression I'll just have to get used to living with the feeling of "what if this decision I'm making is not the best one?" - especially for big decisions that feeling will be there no matter what, so I might as well take it for what it is (a matter of subjective experience of uncertainty, and not actual evidence of the decision being suboptimal)
  • delays caused by postponed decisions usually come at a cost, so quick suboptimal decisions are often better than ideal decisions made (too) late
  • I sometimes use the book Decisive (or rather summaries thereof) to aid my decision-making, although I have a feeling it's often more about raising my confidence in a decision than about actually finding the best one
  • The author of Algorithms to Live By makes the point that real world problems are often too complex to properly solve, and that it makes sense to artificially relax these problems into easier ones so we can find suboptimal but still pretty good solutions. That's mostly with regards to travelling salesman kinds of problems, and may or may not apply to personal decision-making.
  • Career considerations may be a category where premature decisions come at a high cost, so here it really makes sense to spend a lot of time thinking them through thoroughly. Which probably includes discussing them with others. I'm not sure if 80,000 hours still offers 1-on-1 career consultation, but if they do, that may be a good thing to try, and if they don't I'm sure there are other people from the EA community who'd be willing to help out as well.
Comment by markus_over on Why I'm Not Vegan · 2020-04-12T11:20:16.805Z · EA · GW

While I certainly like that argument/thought experiment, I think it's very difficult to imagine the subjective experience of an (arguably) lower degree of consciousness. Depending on what animal's living conditions we're talking about, I'd arguably take 1/10th even assuming human-level consciousness (so basically *me* living in a factory farm for 36.5 days to gain one additional year of life as a human), but naturally have a hard time judging what a reasonable value for chicken-level consciousness would be.

Also, this framing likely brings a few biases into the mix and reframing it slightly could probably greatly change how people answer. E.g. if you had the choice to die today or live for another 50 years but every year would start with a month experienced as a pig in factory farming conditions, I'd most certainly pick the latter option.

Comment by markus_over on Why I'm Not Vegan · 2020-04-12T11:04:32.066Z · EA · GW

I think this is a very interesting point which I hadn't thought of before. To add to it, let's assume the "how much animals matter" values from the original post were chosen in a way more favorable to animals such that veganism seems to make economic moral sense, so we come to the conclusion "it's probably an effective intervention for an EA to go vegan".

Now assume some charity finds a super-effective intervention that cuts the cost of saving a human life to 10% its previous best value. Following the original argument, that would basically mean at this point going vegan is not recommended anymore because it may now be much less effective than the one thing we're semi-arbitrarily comparing it to.

It seems rather counter-intuitive that thousands of hypothetical rational EAs would now start eating meat again, simply because a charity found a cheaper way to save humans.

But then again, I can't get rid of the feeling that this whole counter-argument too is arbitrary and constructed, and that it wouldn't convince me if I were of the opposite opinion, but rather seem like a kind of logic puzzle where you have to find the error of thought. Maybe despite being counter-intuitive, the absurd sounding conclusion would still be the correct one in some sense.

Comment by markus_over on A small observation about the value of having kids · 2020-01-24T16:35:38.422Z · EA · GW

Most EAs I know are not planning to have children, as far as I know (which I admit is not very far; with most of them I haven't explicitly spoken about the topic). Even if they did, it seems like a really slow and expensive way to build a movement. It may be one factor among others for EAs considering building a family, but I doubt it is decisive for a considerable number of individuals.

If we simplify the possible outcome to two scenarios, a) children raised by EAs will overwhelmingly become EAs themselves, or b) this effect is much weaker and very few children will share the same values, I'd argue the value of information appears to be low.

Firstly, it seems highly unlikely to me that having children is anywhere near the most effective thing an EA can do. It is of course fine to make that plan for other, personal reasons, but I doubt many EAs get to the conclusion "the best use of my time on this planet in my pursuit to make this world a better place is to raise my own altruistic children". Growing the movement can certainly be done quicker without first growing your own little humans.

So given that assumption, the a) scenario, i.e. the "positive" outcome, could actually turn out harmful in a sense as it might convince a few additional EAs to have children that otherwise wouldn't. Scenario b) on the other hand would be the opposite and possibly keep a few EAs from having children that without that evidence would have done so. In both cases it seems we're better off simply assuming the children we have will not turn into EAs, as opposed to spending decades and hundreds of thousands of dollars on an experiment conducted in order to gain some value of information.

This line of argumentation of course only works if you agree with my assumption that having children is a very ineffective way to grow a movement though.

Comment by markus_over on Physical Exercise for EAs – Why and How · 2020-01-17T18:52:29.286Z · EA · GW

Thanks for this! Very useful.

One tiny nitpick:

Marie commutes daily by bicycle to the chemistry lab where she works.

Sorry for taking things a little too literally here, but most people (that I know of) work 5 days a week, have 2-6 weeks off per year, and call in sick something like 5-15 days per year, plus there may be some nationwide holidays on top. That leaves us with a range of around 210-245 actual commuting days, or 57-67% of all days of the year. There are also likely days where rain/snow/wind cause Marie to get to work some other way, so effectively even somebody who pretty much always takes the bike to work will still end up at something like 50% of all days, but would probably tend to describe it as "everyday".
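
Spelled out, the rough arithmetic looks like this (before subtracting public holidays, which push the lower bound toward the ~210 days / 57% quoted above):

```python
scheduled = 52 * 5          # 260 scheduled workdays per year
vacation_days = (10, 30)    # 2-6 weeks off
sick_days = (5, 15)         # sick days per year

low = scheduled - vacation_days[1] - sick_days[1]   # pessimistic case
high = scheduled - vacation_days[0] - sick_days[0]  # optimistic case
print(low, high)                                    # → 215 245
print(round(100 * low / 365), round(100 * high / 365))  # → 59 67 (% of all days)
```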

I'm not so much intending to criticize the example here, just point to the fact that such simplification makes it rather easy to delude oneself. I thought of myself as someone who takes the bike to work "almost always", yet when I actually tracked it, only got to around 100 days per year which was somewhat surprising.

Maybe the recommendations already take this into account however, and exceptions (even a lot of them, as naturally tend to happen) are tolerable as long as "the typical week" goes according to plan?

Comment by markus_over on Applied Rationality Workshop in Münster, Germany · 2019-09-24T12:54:34.399Z · EA · GW

Neat! The workshop in Cologne was quite good, and this one apparently will even include resolve cycles and hamming circles which I'm very much in favor of (and which as far as I remember weren't part of the Cologne workshop).

I'd probably recommend participating to anyone who lives even remotely close and feels like they could benefit from marginal improvements in their applied rationality, which realistically is probably almost everyone. Plus you'll surely get to know a lot of great people.

Thanks for organizing this!

Comment by markus_over on Alien colonization of Earth's impact the the relative importance of reducing different existential risks · 2019-09-06T17:43:45.346Z · EA · GW

I'm not that deep into AI safety myself, so keep that in mind. But that being said, I haven't heard that thought before and basically agree with the idea of "if we fall victim to AI, we should at least do our best to ensure it doesn't end all life in the universe" (which is basically how I took it - correct me if that's a bad summary). There certainly are a few ifs involved though, and the outlined scenario may very well be unlikely:

  • probability of AI managing to spread through the universe (I'd intuitively assume that, of the set of possible AIs ending human civilization, the subset also conquering space is notably smaller; I may certainly be wrong here, but it may be something to take into account)
  • probability of such an AI spreading far enough and in a way as to be able to effectively prevent the emergence of what would otherwise become a space colonizing alien civilization
  • probability of alien civilizations existing and ultimately colonizing space in the first place (or at least developing the potential to do so, which they would realize were it not for our ASI preventing it)
  • probability of aliens having values sufficiently similar to ours

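To illustrate how quickly these conjunctive "ifs" compound, here's a toy calculation. Every number below is a made-up placeholder for illustration, not an estimate from the comment:

```python
# Purely illustrative: the scenario requires all four conditions to hold,
# so its probability is roughly the product of the individual ones
# (assuming independence, which is itself a simplification).
p_ai_spreads = 0.3      # placeholder: civilization-ending AI also colonizes space
p_blocks_aliens = 0.2   # placeholder: ...and reaches would-be alien civilizations
p_aliens_exist = 0.5    # placeholder: such civilizations exist at all
p_similar_values = 0.3  # placeholder: their values resemble ours

p_scenario = p_ai_spreads * p_blocks_aliens * p_aliens_exist * p_similar_values
print(f"{p_scenario:.3f}")  # conjunctions shrink fast: ~0.009
```

Even with individually generous placeholder probabilities, the conjunction ends up below one percent, which is the sense in which "there certainly are a few ifs involved".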
I guess there's also a side to the Fermi paradox that's relevant here: it's not only that we don't see alien civilizations out there, we also don't see any signs of an ASI colonizing space. And while there may be many explanations for that, we're still here, seemingly on the brink of becoming/creating just the kind of thing an ASI would instrumentally want to prevent. That is at least some evidence that such an ASI does not yet exist in our proximity, which in turn is minor evidence that we might not create such an ASI either.

In the end I don't really have any conclusive thoughts (yet). I'd be surprised though if this consideration were a surprise to Nick Bostrom.

Comment by markus_over on Local Community Building Funnel and Activities - EA Geneva · 2019-09-01T10:11:09.476Z · EA · GW

Hi Konrad,

given your comment is now a year old, could you very briefly provide an update on whether anything has significantly changed since then (maybe there are some updates to how you run EA Geneva that wouldn't justify an entire new post, but are still noteworthy)?

Also, I'd be interested to know how accurate the growth assumptions turned out to be, and whether your member count and advanced workshop participation went up roughly as you expected.

This whole post seems very valuable by the way, so thank you!

Comment by markus_over on What is the effect of relationship status on EA impact? · 2019-06-28T20:51:12.530Z · EA · GW

While I don't have an actual answer of any kind, I'd argue that a relationship can have "positive externalities" on altruistic endeavours, e.g. by discussing EA ideas much more frequently than you otherwise would (depending on your circumstances), and, in case the other person is into EA as well, keeping each other motivated. I personally would assume that my long term engagement in EA would drop quite a bit were it not for my relationship. That's certainly different for other people however, so this isn't anything more than one random data point.

Comment by markus_over on Is this a valid argument against clean meat? · 2019-05-19T16:34:38.870Z · EA · GW

Even if there are minor negative short term effects (and while there almost certainly are >0 people in the world following the cited logic, I'm sure they're responsible for far less than even 0.1% of global meat consumption), cultured meat still seems to me like the most likely long-term solution to factory farming, and thus its expected benefits vastly outweigh the cost implied by that argument.

1) I believe most ethical vegetarians avoid meat in order not to actively cause harm to animals, rather than to solve factory farming. And for the former, the advent of cultured meat in the future doesn't make much of a difference to their present behaviour.

2) People committed enough to actually think about how their actions contribute to creating a more vegetarian (or at least factory-farm-free) world, and thus people who would in theory be affected by the given argument, probably aren't the same people who would think "oh well, this issue is being dealt with by others already, nothing to do here". Plus 1) still applies here, as people with such a level of commitment almost certainly also want to avoid personally causing harm to animals.

3) The exception to 1) and 2) may be a few effective altruists (or people with similar mindsets) here and there who conclude that sticking to a vegetarian/vegan diet is not worth it for them personally, given the apparent tractability and non-neglectedness of the problem. But we're probably talking about dozens, or at most hundreds, of people around the globe at this point. And if they actually exist, this would even be a good sign: the reason these people would make this decision is that cultured meat solves the issue of factory farming so effectively that their personal contribution via ethical consumption would have a smaller marginal impact than whatever else they decide to do.

Admittedly a lot of speculation on my part, but what it comes down to is that the argument, while probably playing some non-zero role, just doesn't have enough weight to justify changing one's view on cultured meat.