Start a Minimal Local Group for Passive Outreach 2022-09-27T11:44:53.126Z
Cause Exploration Prize: Distribution of Information Among Humans 2022-08-12T00:58:11.906Z
One’s Future Behavior as a Domain of Calibration 2020-12-31T15:48:33.921Z
CFAR Workshop in Hindsight 2020-12-13T16:18:32.356Z


Comment by markus_over on Cost-effectiveness of making a video game with EA concepts · 2022-09-21T07:34:30.290Z · EA · GW

This seems like a very cool project, thanks for sharing! I agree that this type of project can be considered a "moonshot", which implies that most of the potential impact lies in the tail end of possible outcomes. Consequently, the estimates become very tricky. If the EV is dominated by a few outlier scenarios, reality will most likely turn out to be underwhelming.

I'm not sure if one can really make a good case that working on such a game is worthwhile from an impact perspective. But looking at the state of things and the community as a whole, it does still seem preferable to me that somebody somewhere puts some significant effort into EA games (sorry for the pun).

Also, to add one possible path to impact this might enable: it might be yet another channel to deliberately nudge people towards in order to expose them to key EA ideas in an entertaining way (HPMOR being another such example). So your players might not all end up being "random people" - a portion of them might be preselected in a way.

Lastly, it seems like at least 5-10 people (and probably considerably more) in EA are interested or involved in game development. I'm not aware of any way in which this group is currently connected - it would probably be worth connecting them. Maybe something low on overhead, such as a Signal group, would work as a start?

Comment by markus_over on EA is about maximization, and maximization is perilous · 2022-09-03T12:23:06.245Z · EA · GW

Sometimes I think that this is the purpose of EA. To attempt to be the "few people" to believe consequentialism in a world where commonsense morality really does need to change due to a rapidly changing world. But we should help shift commonsense morality in a better direction, not spread utilitarianism.

Very interesting perspective and comment in general, thanks for sharing!

Comment by markus_over on Toby Ord’s The Scourge, Reviewed · 2022-08-31T06:24:56.766Z · EA · GW

Very good argument imo! It shows there's a different explanation than "people don't really care about dying embryos" that can be derived from this comparison. People tend to differentiate between what happens "naturally" (or accidentally) and deliberate human actions. When it comes to wild animal suffering, even if people believe it exists, many will think something along the lines of "it's not human-made suffering, so it's not our moral responsibility to do something about it" - which is weird to a consequentialist, but probably quite intuitive for most people.

It takes a few non-obvious steps in reasoning to get to the conclusion that we should care about wild animal suffering. And while fewer steps may be required in the embryo situation, it is still very conceivable that a person who actually cares a lot about embryos might not initially get to the conclusion that the scope of the problem exceeds abortion.

Comment by markus_over on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-08-30T06:45:35.538Z · EA · GW

This seems very useful! Thank you for the summaries. Some thoughts:

  • having this available as a podcast (read by a human) would be cool
  • at one point you hinted at happenings in the comments (regarding GiveWell); this generally seems like a good idea. Maybe in select cases it would make sense to also summarize on a very high level what discussions are going on beneath a post.
  • this sentence is confusing to me: "Due to this, he concludes the cause area is one of the most important LT problems and primarily advises focusing on other risks due to neglectedness." - is it missing a "not"?
  • given this post has >40 upvotes now, I'm looking forward to reading the summary of it next week :)
Comment by markus_over on What domains do you wish some EA was an expert in? · 2022-08-28T20:32:30.219Z · EA · GW
  • Flow and distribution of information (inside EA, and in general)
  • how to structure and present information to make it as easily digestible as possible (e.g. in blog posts or talks/presentations)

A bit less pressing maybe, but I'd also be interested in seeing some (empirical) research on polyamory and how it affects people. It appears to be rather prevalent in rationality & EA, and I know many people who like it, and also people who find it very difficult and complicated. 

Comment by markus_over on Are EAs interested in using or consuming more diagrams or data visualisations? · 2022-08-28T15:08:09.415Z · EA · GW

My personal answers are 1. yes and 2. yes.

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2022-07-10T14:28:58.965Z · EA · GW

Sort of. Firstly, I have a field next to each prediction that automatically computes its "bucket number" (which is just FLOOR(<prediction> * 10)). To then get the average probability of a certain bucket, I run the following: =AVERAGE(INDEX(FILTER(C$19:K, K$19:K=A14), , 1)) - note that this is Google Sheets, and I'm not sure to what degree it transfers to Excel. For context, column C contains my predicted probabilities, column K contains the computed bucket numbers, and A14 here is the bucket for which I'm computing this. Similarly, I count the number of predictions in a given bucket with =ROWS(FILTER(K$19:K, K$19:K<>"", K$19:K=A14)), and the ratio of predictions in that bucket that ended up true with =COUNTIF(FILTER(D$19:K, K$19:K=A14), "=1") / D14 (D19 onwards contains 1 and 0 values depending on whether the prediction happened or not; D14 is the aforementioned number of predictions in that bucket).
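For anyone who prefers code over spreadsheet formulas, here's a rough Python sketch of the same bucket logic (my own translation of the formulas described above, not the actual sheet):

```python
# Rough Python equivalent of the spreadsheet: bucket = FLOOR(p * 10), then per
# bucket compute the average predicted probability, the count, and the
# realized hit rate.
from math import floor
from collections import defaultdict

def calibration_buckets(predictions):
    """predictions: list of (probability, outcome) pairs, outcome in {0, 1}."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[floor(p * 10)].append((p, outcome))
    result = {}
    for b, items in sorted(buckets.items()):
        probs = [p for p, _ in items]
        hits = sum(o for _, o in items)
        result[b] = {
            "avg_prob": sum(probs) / len(probs),  # =AVERAGE(FILTER(...))
            "count": len(items),                  # =ROWS(FILTER(...))
            "hit_rate": hits / len(items),        # =COUNTIF(...) / count
        }
    return result

stats = calibration_buckets([(0.62, 1), (0.65, 0), (0.68, 1), (0.91, 1)])
print(stats[6])  # bucket 6 covers predictions in [0.6, 0.7)
```

Each bucket then gives you the average predicted probability, the number of predictions, and the realized hit rate, which is all you need to plot a simple calibration curve.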

If this doesn't help, let me know and I can clean up one such spreadsheet, see if I can export it as an xlsx file, and send it to you.

Comment by markus_over on Reimagining a social network for EA community · 2022-02-12T21:31:05.547Z · EA · GW

Thanks for sharing! I've had the feeling for a while that it would be great if EA managed to make goals/projects/activities of people (/organizations) more transparent to each other. E.g. when I'm working on some EA project, it would be great if other EAs who might be interested in that topic would know about it. Yet there are no good ways that I'm aware of to even share such information. So I certainly like the direction you're taking here.

I guess one risk would be that, however easy to use the system is, it is still overhead for people to have their projects and goals reflected there. Unless it happens to be their primary/only project management system (which however would be very hard to achieve).

Another risk could be that people use it at first, but don't stick to it very long, leading to a lot of stale information in the system, making it hard to rely on even for highly engaged people.

I guess you could ask two related questions. Firstly, let's call it "easy mode": assuming the network existed as imagined, and most people in EA were in fact using this system as intended - would an additional person who first learns of it start using it in the same productive way?

And secondly, in a more realistic situation where very few people are actively using it, would it then make sense for any single additional person to start using it, share their goals and projects, keep things up to date persistently, probably with quite a bit of overhead on their part because it would happen on top of their actual project management system?

I think it's great to come up with ideas about e.g. "the best possible version of the EA Hub" and just see what comes out, even though it's hard to come up with ideas that would answer both of the above questions positively. This is why improving the EA Hub generally seems more promising to me than building any new type of network: at least you'd be starting with a decent user base and would take away the hurdles of "signing up somewhere" and "being part of multiple EA related social networks".

So long story short, I quite like your approach and the depth of your mock-up/prototype, and think it could work as inspiration for the EA Hub to a degree. I have my doubts that it would be worthwhile actually building something new just to try the concept - except maybe creating a rough interactive prototype (e.g. a paper prototype or "click dummy") and playing it through with a few EAs, which might be worthwhile to learn more about the idea.

Comment by markus_over on Low-Commitment Less Wrong Book (EG Article) Club · 2022-02-10T20:45:59.302Z · EA · GW

I'd be up for the reading and comment writing part (will see if it works out time-wise), probably not so much for zoom. Nice idea and thanks for taking the initiative!

Comment by markus_over on Stop procrastinating on career planning · 2022-02-07T20:45:11.672Z · EA · GW

Is your post deliberately categorized as a question? The four questions included in it all seem to be of the rhetorical kind. :P

Thanks for the post though! I think I'm in a very similar situation and you basically convinced me. I didn't expect five minutes ago to be just one minute of reading away from being convinced to apply for an 80,000 Hours career advice call, yet here we are.

Comment by markus_over on Giving Multiplier after 14 months · 2022-01-29T21:46:35.874Z · EA · GW

Great write-up! The "many people are happy to donate to effective charities as long as they also donate to their favorite charity" point did indeed come as a surprise. Seems like a very valuable insight for certain types of outreach. 

Comment by markus_over on Introducing Effective Self-Help · 2022-01-09T10:12:40.832Z · EA · GW

I think that you should consider connecting and collaborating with key parties who have interdependent goals & similar incentives

A small addition to your list would be this post about a study on a depression related intervention that I believe originated from within the EA community. Might well be worth contacting the author.

Comment by markus_over on Introducing Effective Self-Help · 2022-01-09T10:06:01.601Z · EA · GW

Interesting project! It reminds me a bit of Huberman Lab, the existence and apparent popularity of which could be taken as an argument in favor of ESH being worthwhile (although format, target audience and focus might of course differ quite a bit).

One thing I personally find very interesting is the point you mentioned as a counter argument: "Individual differences in benefit significantly outweigh the general differences in value between interventions" - in my opinion, this could even be viewed as quite the opposite: my impression is that in most easily digestible sources (such as pop-sci books, podcasts, blogs), this point is mostly ignored, and getting reliable information about this facet of health and well-being interventions would be great.

People very often speak of effect sizes as if they were inherent to an intervention or substance, when actually they quite often seem to depend strongly on the person. An intervention with a strong positive effect on only 10% of people could be much more exciting than an intervention with a weak effect on everybody. Even a large positive effect on a small number of people, combined with negative effects on the rest, could make for a very useful intervention, given you find out early enough whether it works for you or not. Getting some insight into the nature of the variance of different interventions, if such data is available, could be really useful. It might of course be the case that most studies don't offer such insights, because it's impossible to tell whether the subset of participants that benefited from an intervention can be attributed to noise or not.
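To make the point concrete, here's a toy calculation with entirely made-up numbers (the effect sizes and responder share are illustrative assumptions, not from any study):

```python
# Made-up numbers: two interventions whose *reported average* effect sizes are
# identical, but whose practical value differs hugely once individuals can
# cheaply test which one works for them.
responder_share = 0.10    # assumed: only 10% respond to the targeted intervention
strong, weak = 10.0, 1.0  # assumed benefit (arbitrary units) per responder

avg_targeted = responder_share * strong  # average over the whole study population
avg_broad = 1.0 * weak                   # everyone responds weakly

print(avg_targeted, avg_broad)  # both 1.0 -- indistinguishable in a summary table

# But after a cheap trial period, the 10% who respond each get a benefit of
# 10.0, while non-responders simply stop. The variance matters, not just the mean.
value_for_responders = strong
```

The summary statistic hides exactly the information a reader of pop-sci sources would need: whether it's worth personally trying the intervention at all.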

Comment by markus_over on Longtermism in 1888: fermi estimate of heaven’s size. · 2021-12-25T06:47:55.926Z · EA · GW

This is great, thanks for sharing!

I found the "let's assume humanity remains at a constant population of 900 million" notion particularly interesting. On some level I still have this (obviously wrong) intuition that human knowledge about its history just grows continuously based on what happens at any given time. E.g. I would have implicitly assumed that a person living in 1888 must have known how population numbers had developed over the preceding centuries. This is of course not necessarily the case for a whole bunch of reasons, but seeing that he wasn't even aware that population growth is a thing was a serious surprise (unless he was aware, but thought it was close enough to the maximum to be negligible in the long term?).

It's funny how he assumes a generation would span 31.125 years without giving any explanation for that really specific number. Maybe he had 8 children at this point in time, and took e.g. his average age during the birth of all of them?

And lastly, he as well as any readers of this letter would have greatly benefited from scientific notation. Which makes me wonder what terrible inefficiencies in communication & encoding / expressing ideas we're suffering from today, without having any inkling that things could be better... :)

Comment by markus_over on Preprint is out! 100,000 lumens to treat seasonal affective disorder · 2021-11-15T13:34:50.881Z · EA · GW

Nice! :)

Also, I think a few links are missing here:

David Chapman for inspiring us with these two posts in the Meaningness blog, Raemon for inspiring us with this LessWrong post

Comment by markus_over on Could EA be ideas constrained? · 2021-11-08T16:11:49.327Z · EA · GW

Some thoughts (not to say ideas) regarding 3:

  • come up with more ideas 
    • just brainstorming in a very unconstrained way on relevant questions (e.g. "babble")
    • trying some systematic ways to identify implicit assumptions in our existing beliefs and ideas, and questioning them
    • looking at existing entities (orgs, fields, causes, tools...) and thinking about how they could be different
  • share ideas more effectively in the movement
    • encourage sharing in the first place (makes me sad to read of posts people started in the past but never finished)
    • good compression of ideas (e.g. short posts, descriptive titles, beginning with a summary)
    • make things easy to find via search
    • talk to other EAs about your ideas
    • get feedback early on
      • maybe twitter is good for this?
  • actual implementation
    • a lot of ideas may exist, e.g. in the dusty archives of this forum, that nobody has ever acted on and people have more or less forgotten about or never heard of in the first place
    • some (or many) people may generally be more interested in thinking - maybe EA is implementation constrained rather than idea constrained after all? (but I guess there are a lot of constraints anyway, and they vary substantially by who you ask; so idea constraints most certainly are a thing, affecting some more than others)
Comment by markus_over on Could EA be ideas constrained? · 2021-11-08T15:46:22.543Z · EA · GW

There are definitely many coincidence of wants related problems, where someone has a good idea that someone would do or fund but that person never hears of it.

Very much agree with your points, this one in particular. I think in a perfect world we would all have a way of knowing what others in the EA community are thinking about, working on, and need help with. I'd love to have a way to share more openly (but without wasting others' attention) what I'm focusing on, so that others who think about similar things could be made aware of this opportunity for collaboration. But I don't really know of any practical ways to achieve this. Write a forum post saying "Hey everyone I'm really interested in X recently and plan to spend the next 3 months diving into that topic"? Probably not.

EA G(X) could be helpful, because you can share your (current) interests in your profile on the networking app, and theoretically find others who mention the same keywords. But then Swapcard comes along and doesn't support proper searching, so I missed out on many potentially great relevant contacts. :( Plus of course it doesn't happen all that often, and always covers only a relatively small subset of the community.

Comment by markus_over on What high-level change would you make to EA strategy? · 2021-11-08T15:28:35.708Z · EA · GW

One thing I could imagine being very helpful is some kind of ongoing local group "mentoring". So instead of one or two single calls on strategy or bottlenecks, having some experienced person more deeply invested with any particular local group in need. Somebody who might (occasionally) participate at our virtual meetups, our planning/strategy calls, gets to know our core members, our situation, needs and problems, and can provide actionable insights on all of them.

The problem with the calls I've had in the past is that it's quite difficult to get across everything relevant, so we might just focus on one or two issues, obtain some pointers to other people or resources who might be helpful, and some relatively generic advice. Not to say it isn't useful - but it also doesn't seem like a complete solution. I've also read most of the EA Hub resources on running a group, but tend to come out of these articles thinking "yup this makes sense" and not actually turning it into anything concrete. Which, again, is probably entirely my responsibility. But I could imagine I'm not the only time/energy-constrained local group coordinator struggling to properly utilize the existing resources.

On the other hand, such more involved support over longer time of course also comes with significantly higher cost, and I can't tell whether that would be worth it.

Comment by markus_over on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-08T14:50:26.688Z · EA · GW

The distinction reminds me of the foxes vs hedgehogs model from Superforecasting / Tetlock. Hedgehogs are "great idea thinkers" who see everything in the light of that one great idea they're following, whereas foxes are more nuanced, taking in many viewpoints and trying to converge on the most accurate beliefs. I think he mentioned in the book that while foxes tend to make much better forecasters, hedgehogs are not only more entertaining but also good at coming up with good questions to forecast in the first place.

An entirely different thought: The Laws of Human Nature by Robert Greene was the first Audible book I returned without finishing. It was packed with endless "human archetypes" described in great detail, making some rather bold claims about what "this type" will do in some given situation. You mention in the footnotes already that people who dislike e.g. personality profiling tools might not like this post. And it did indeed somewhat remind me of that book, but maybe your "assessor" way of describing the model, as opposed to Greene's very overconfident-seeming way of writing, made this seem much more reasonable. There seems to be a fine line between actually useful models of this kind which have some predictive power (or at least allow thoughts to be a bit tidier), and those that are merely peculiarly entertaining, like Myers-Briggs. And I find it hard to tell from the outside on which side of that line any given model falls.

Comment by markus_over on [Discussion] Best intuition pumps for AI safety · 2021-11-08T14:28:02.659Z · EA · GW

One thing I could imagine happening in these situations is that people close themselves off to object level arguments to a degree, and maybe for (somewhat) good reason.

  • to the general public, the idea of AI being a serious (existential) risk is probably still very weird
  • people may have an impression that believing in such things correlates with being gullible
  • people may be hesitant towards "being convinced" of something they haven't fully thought through themselves

I remember once when I was younger talking to a Christian fanatic of sorts, who kept coming up with new arguments for why the bible must obviously be true due to the many correct predictions it has apparently made, plus some argument about irreducible complexity. In the moment, I couldn't really tell if/where/why his arguments failed. I found them somewhat hard to follow and just knew the conclusion would be something that is both weird and highly unlikely (for reasons other than his concrete arguments). So my impression then was "there surely is something wrong about his claims, but in this very moment I'm lacking the means to identify the weaknesses". 

I sometimes find myself in similar situations when some person tries to get me to sign something or to buy some product they're offering. They tend to make very convincing arguments about why I should definitely do it. I often have no good arguments against that. Still, I tend to resist many of these situations because I haven't yet heard or had a chance to find the best counter arguments.

When somebody who has thought a lot about AI safety and is very convinced of its importance talks to people to whom this whole area is new and strange, I can imagine similar defenses being present. If this is true, more/better/different arguments may not necessarily be helpful to begin with. Some things that could help:

  • social proof ("these well respected people and organizations think this is important")
  • slightly weaker claims that people have an easier time agreeing with
  • maybe some meta level argument about why the unintuitive-ness is misguided (although this could probably also be taken as an attack)
Comment by markus_over on Best practices for organizational effectiveness · 2021-11-08T07:50:20.200Z · EA · GW

The cost of this seems pretty low, but in a way the expected value too seems limited (to me at least from the context you provided): I'd assume that unless this turns out to be so good that it becomes a "standard" of sorts (that people always tend to mention whenever organizational ineffectiveness comes up), it would likely end up as a relatively short lived project that doesn't reach too many people and organizations. Although this could partially be mitigated if it's stored in a persistent, easy to search and find way, so that future people on the lookout for such a guide would stumble upon it and immediately see its value.

Comment by markus_over on Introducing TEAMWORK - an EA Coworking Space in Berlin · 2021-09-20T21:23:48.029Z · EA · GW

Sounds really cool. Time to visit Berlin! :)

Comment by markus_over on Learning, Knowledge, Intelligence, Mastery, Anki - TYHTL post 2 · 2021-09-14T19:44:50.584Z · EA · GW

Just a side note: While Obsidian is free (and great), I'm pretty sure it's not open source.

Comment by markus_over on Lessons from Running Stanford EA and SERI · 2021-09-11T12:38:29.234Z · EA · GW

Thank you Michael!

  • I personally am definitely more time- than funding-constrained. Or maybe even "energy-constrained"? But maybe applying for funding would be something to consider when/if we find a different person to run the local group, maybe a student who could do this for 10h a week or so.
  • regarding a fellowship: my bottlenecks here are probably "lack of detailed picture of how to run such a thing (or what it even is exactly)" and "what would be the necessary concrete steps to get it off the ground". Advertising is surely very relevant, but secondary to these other questions for now.
  • on a slightly more meta level, I think one of the issues is that I don't have a good overview of the "action space" (or "moves") in front of me as an organizer of an EA local group. Running a fellowship appears to be a very promising move, but I don't really know how to make it. Other actions may be intro talks, intro workshops, concepts workshops, discussions, watching EAG talks together, game nights, talks in general, creating a website, setting up a proper newsletter instead of having a manually maintained list of email addresses, looking for a more capable group organizer, facebook ads, flyers, posters, running giving games, icebreaker sessions, running a career club, coworking, 1on1s, meeting other local groups, reaching out to formerly-but-not-anymore-active members, and probably much more I'm not even thinking about. Maybe I'm suffering a bit from decision paralysis here and just doing any of these options would be better than my current state of "unproductive wondering what I should be doing"... :)
  • will message you regarding a call, thanks for the offer!
Comment by markus_over on If you could send an email to every student at your university (to maximize impact), what would you include in it? · 2021-09-04T20:27:33.543Z · EA · GW

Given I just received a link to this article in the 80,000 Hours newsletter - that article seems like something that a lot of students might potentially be interested in. So something like a brief description of the key idea plus a link to the article would be one option.

Comment by markus_over on A Case for Better Feeds · 2021-09-04T11:39:24.170Z · EA · GW

Recently I've been thinking a lot about the flow and distribution of information (as in facts/ideas/natural language) as a meta level problem. It seems to me that "ensuring the most valuable information finds its way to the people who need it" could make a huge difference to a lot of things, including productivity, well-being, and problem-solving of any kind, particularly for EAs. (if anybody reading this is knowledgeable in this broad area, please reach out!)

Your post appears to focus on a very related issue, which is how EAs source their EA information and some specific ways to improve it. I definitely agree that this is an issue worth looking into and worth improving (I personally think that either the EA forum or the EA Hub are in the best position to make such improvements, although I'm unsure what these improvements would look like).

The EA Forum Job Hunt idea admittedly doesn't seem very promising to me from how I understood it -- it sounds like by far the most work of all the suggestions, for a problem that, to me, seems as if it's solved to a pretty reasonable degree. 

I don't quite understand the EA Hub suggestion. What would be submitted and upvoted? Just the existence of (local) groups?

The remaining points regarding Twitter bots and feeds sound good to me, simply because they sound like very little work (unless I'm misjudging that), while potentially being helpful to probably many dozens of EAs.

By the way, I do wonder what ratio of EAs is actively using Twitter. I for one am not at all, and am not aware of many people I know personally doing so, but that might not mean much and may not be very representative.

Comment by markus_over on Lessons from Running Stanford EA and SERI · 2021-08-29T09:29:01.581Z · EA · GW

Great post, thanks for sharing! Pretty much exactly the type of post I had been hoping for for a while. Just hearing that one success story of a local group that was in a more or less similar state as mine (albeit arguably in a higher potential environment), but made it into something so impressive, is very inspiring.

Given I only have ~10h per week available to spend on EA things (and not all of them go into community building), I was particularly happy to hear your 80/20 remark. I do wonder if it's possible to move a local group onto a kind of growth trajectory at only, say, 6h per week, or if that's just a lost cause. Maybe I should just spend the majority of these 6h looking for a person with more time and motivation to take over the role. :) 

Currently we're definitely leaving a lot of low hanging fruit on the table (or tree) though. And a lot of that may be due to relatively trivial issues and inconveniences. Some examples of such limiting factors (and I do wonder if similar things are true for other small local groups):

  • I've heard fellowships mentioned & recommended a lot in the last 1-2 years, but have a fairly limited understanding of the concrete details. Should we run our own one? Should we redirect people to other online fellowships? What would I even tell people in order to motivate them to do so? Timing also needs to be taken into account.
  • Fear of organizing things and (almost) nobody (new) showing up. We had quite a few talks and such that ended up only being heard by our core team, although we were hoping to attract some new faces. That being said, our marketing was often pretty shy rather than aggressive.
  • Lack of detailed knowledge about the European data protection regulation and its implications prevents us/me from systemizing our "funnel" (which hardly exists). I have no idea if it's even legal to have a database of names / email addresses / other personal information of people, whether we'd need to inform them beforehand, etc.
  • Most of our small number of members are busy with their own things / studies / careers and have hardly any capacity to engage with the group beyond one weekly social/discussion, so there's little room for organizing bigger things or spending more time on community building, and I find that situation somewhat demoralizing
  • We have a WhatsApp group and a Slack workspace. WhatsApp is great to get new people on board quickly, but it's surprisingly difficult to get them to sign up on Slack, and if they do we can never rely on them seeing new messages, or looking in there at all. Right now our Slack workspace is almost exclusively used by our few core members, and others hardly ever engage.
  • I feel very averse to "pushing" people to do things, and wonder if that's a necessary skill for a community builder, or whether ideally people should be motivated enough that they only need to be "enabled"/supported instead.
Comment by markus_over on Philosophy Web - Project Proposal · 2021-08-28T12:29:09.845Z · EA · GW

It sounds interesting, albeit to be fair a bit gimmicky as well. To me at least, which may not mean much: I can imagine taking a few minutes to play around with such a tool if it existed, maybe find some contradiction in my beliefs (probably after realizing that many of my beliefs are pretty vague and that it's hard to put these hard labels on them), and get to the conclusion that really my beliefs weren't that strong anyway and so the contradiction probably doesn't matter all that much. I can imagine others would have a very different experience though (and maybe my expectation about myself is wrong as well of course).

I'd be interested in your thoughts on a few questions:

  1. Can you describe an example "user journey" for Philosophy Web? What beliefs would that imaginary user hold, how would they interact with the software, what would come out, just as one prototypical example?
  2. Would there be other, maybe simpler ways for that imaginary user to get to the same conclusion, not involving Philosophy Web? What bottleneck prevents people from making these conclusions?
  3. Who would be the primary target audience for this? What would make the tool "effective"? Are you primarily thinking about EAs getting to a more self-consistent belief set? Philosophy students? Everyone?
  4. What are the most likely ways in which such a project would fail, given you found the necessary support to build it?
  5. Does the project's success depend on some large number of users? What's the "threshold"? How likely is it to pass that threshold?
  6. What would be the smallest possible version (so MVP basically) of the project that achieves its primary purpose? Could something be prototyped within a day that allows people to test it?
  7. Assuming the project is built and completed and people can use it as intended - what are the most likely reasons for members of your target audience to not find it useful?

As an additional note, I'm quite a fan of putting complex information into more easily digestible forms, such as mind maps, and could imagine that "data structure" in itself being quite valuable to people merely to explore different areas of philosophy, even to a limited degree. I'm not quite sure though if the project entails such a web being presented visually, or if users would only see the implications of their personal beliefs.

Comment by markus_over on Open call for EAs with passion for meta-learning <3 · 2021-08-28T11:47:12.516Z · EA · GW

Just wanted to say I very much like the idea, although I'll probably not get involved myself. I was very happy about the anki deck of EA key numbers that was published two months ago, and would find it great if there were more ways to easily add important EA ideas to one's anki deck (e.g. you mention the 80,000 Hours key ideas in the google doc, great idea!).

Comment by markus_over on How much money donated and to where (in the space of animal wellbeing) would "cancel out" the harm from leasing a commercial building to a restaurant that presumably uses factory farmed animals? · 2021-08-25T18:49:35.316Z · EA · GW

It would be quite surprising to me if your idea did not work out, simply because doing good for animals via donations tends to be really low cost (but might depend on what "a lot more money" really means in your case). Imagining for instance that for each and every restaurant in the world some non-negligible cut of the rent (say 5%) would go into effective animal charity, my super rough 3 minute Fermi estimate says that would amount to something in the order of $10 billion per year. Given that about 80 billion land animals are slaughtered each year, that would mean that at a cost effectiveness of sparing 8 animal lives per dollar donated (which doesn't sound entirely unrealistic), your suggested approach to leasing to restaurants would, on a global scale, not only be net positive, but very theoretically end factory farming of land animals (obviously not in practice given diminishing marginal returns). It's a very hypothetical argument, but maybe it adds something.
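For reference, a sketch of the rough arithmetic behind that 3-minute estimate; all inputs are the commenter's own ballpark guesses, not researched figures.

```python
# Ballpark Fermi sketch; every number below is a rough guess from the comment.
global_restaurant_rent_cut = 10e9   # ~$10B/year from a hypothetical 5% rent cut
land_animals_slaughtered = 80e9     # ~80 billion land animals per year
animals_spared_per_dollar = 8       # "doesn't sound entirely unrealistic"

animals_spared = global_restaurant_rent_cut * animals_spared_per_dollar
print(animals_spared >= land_animals_slaughtered)
# True: the naive arithmetic "covers" all land animals slaughtered per year
```

Of course, as noted above, diminishing marginal returns mean the real-world effect would be far smaller; the point is only that the order of magnitude is surprisingly favorable.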

Apart from that, maybe there's a way to attract more vegetarian/vegan restaurants in particular? No idea about the concrete processes and legislation around that, but maybe you have some power in that regard.

Comment by markus_over on Teaching You How To Learn post 1 is live! · 2021-08-21T09:45:46.893Z · EA · GW

Some random thoughts from me as well:

  • I wonder if different people may have quite different bottlenecks with regards to how to learn most effectively, and it may be not so much about "do these things" but rather "from these typical bottlenecks, which one affects you the most?"
  • the framing of "The best way to learn" seems a bit dangerous to me; even if "scientifically proven", it still basically just means that it works well on average, but not necessarily for everybody. While active recall and spaced repetition probably are indeed very general, it might be good to add a few notes regarding how people might differ.
  • on a similar note, 80,000 Hours tends to incorporate "reasons why you might disagree" or "where we've been wrong in the past" kind of sections and articles, which too I feel would help a little. E.g. "things Anki isn't ideal for", which definitely exist.
  • maybe a relevant part of effective learning is to be more aware of one's true motives in doing things, be it getting a degree, reading non-fiction books, having an anki routine etc., and whether one's truly doing this to learn things, and if so for what exact purpose
  • related to this, there are different dimensions to learning, similar to productivity: what are you learning (and why), how are you learning, and how much time are you spending. So basically direction, quality, quantity of the learning process. It seems that many resources, maybe including your site, mostly focus on the quality part, whereas the direction part may be even more important and comparably neglected.
  • during one of the EAG Virtual conferences I talked to somebody who was involved in creating a free ebook on the most effective learning strategies for students during pandemic times; wasn't able to find it again so far, but if I do I'll add a link
  • I personally would find it very useful to get some better/clearer mental models of learning and knowledge. Maybe the kind of thing Spencer Greenberg tends to do, e.g. in his podcasts, where he frequently goes into "Well I think X can be broken down into 4 categories: ..." mode and suddenly X makes way more sense than it did before that breakdown.
  • for a long time I've been of the conviction that the way we tend to structure information is highly suboptimal. I'm mostly referring to linear texts about things. 1. Texts are good for some things, but by far not for everything, 2. we're not at all using our brain's immense capability for spatial and visual processing, 3. texts are static and non-interactive, 4. while you have things like table of contents, chapters/headlines and some formatting, it's not an ideal implementation of "different zoom levels", and there are certainly better ways of letting people learn things on a very high level first and then "zoom in" further. As a learner, you have to take what you've got of course. But the other side of the coin - how can you make learning for others easier as a content provider of any sort? - seems very important as well, and I think such a page would be in a great position to experiment with such ways, and not rely on classical linear text form.

About the concrete project:

  • I think providing anki cards at the bottom of your posts is a great idea
  • 80,000 Hours tends to have small summaries of their articles at the top, which I would find useful here as well
  • The Key Ideas Guide post is currently very text-heavy, which makes sense since it's in progress and you probably want to focus on the ideas themselves rather than the presentation. For the future though I think it would make it much more digestible if there was a bit more variety to it, be it pictures, graphs, or even just some formatting tweaks. E.g. one or two screenshots from actual anki cards would be a start, or a graph of the forgetting curve.
  • Style-wise, you're using parentheses a lot in your post, which I can totally relate to - I do it all the time e.g. when exchanging messages with people or writing forum posts and comments. But it does still seem suboptimal to me, as it hurts the reading flow, and may be a sign one's not focusing on what's actually essential.
  • The post to me feels quite a bit like it's trying to sell me something. I was almost expecting a "subscribe to my newsletter to get a FREE ebook!" while reading. :) This is something 80,000 Hours avoid pretty well by being very open and grayscale about things.
  • I find it great that you've just started doing it and putting it out there looking for feedback; I'm working on one or two vague similar-ish projects (not related to learning though) and didn't yet manage to get over my semi-perfectionist "I'll just make sure I have something good before showing it to anybody" attitude, although I know that's a bad approach
  • minor note, at one point you write "(god this bold is intense)" although there's nothing actually bold; maybe the formatting got lost somewhere on the way?

Some counter points on drawbacks/challenges of Anki:

  • you need to be rather conscientious to use it effectively; missing a week can easily break the habit of daily ankiing, because you're suddenly looking at potentially 100s of flashcards to review
  • it might push people to go for memorizing (often useless) facts rather than really learning and understanding deeper concepts
  • also, adding anki cards to your deck now feels like progress; e.g. after reading a book (or chapter), you might have a feeling that not creating new cards is bad. This might nudge you to add useless cards rather than nothing, degrading the quality of your deck over time. I find it really hard to prevent this personally. After reading a book and going through my notes, if I add nothing to my Anki deck, I feel like having read the book was a waste of time. So I'm motivated to add things simply to feel better about the sunk cost. But looking at my deck honestly, I'm almost sure 50% of the stuff in there doesn't really add anything to my life.
  • setting up such a system and getting into it takes a lot of work and willpower, and many people may just not be willing to go that far (even if it does indeed pay off in the long term)

That all being said, if I went back to university, I'd definitely use Anki and I'm sure it would improve my performance a lot compared to my time there in the past where I didn't know what spaced repetition even is. I'd just say that it's maybe something like 40% of my personal ideal learning system, and there would be a lot beyond that (e.g. how to watch lectures, how to take notes, how to work on actual exercises, the fact that explaining things to others is very helpful, how to motivate yourself, how to plan and build a reliable system, ...).

Comment by markus_over on How to Train Better EAs? · 2021-08-06T16:06:31.822Z · EA · GW

I recently read Can't Hurt Me by David Goggins, as well as Living with a SEAL about him, and found both pretty appealing. Also wondered whether EA could learn anything from this approach, and am pretty sure that this is indeed the case, at least for a subset of people. There is surely also some risk of his "no bullshit / total honesty / never quit" attitude to be very detrimental to some, but I assume it can be quite helpful for others.

In a way, CFAR workshops seem to go in a similar-ish direction, don't they? Just much more compressed. So one hypothetical option to think about would be to consider scaling it up to a multi-month program, for highly ambitious & driven people who prioritize maximizing their impact to an unusually high degree. Thinking about it, this does indeed sound at least somewhat like what Charity Entrepreneurship is doing. Although it's a pretty particular and rather "object-level" approach, so I can imagine having some alternatives that require similarly high levels of commitment but have a different focus could indeed be very valuable.

Comment by markus_over on Building my Scout Mindset: #1 · 2021-07-17T12:25:38.590Z · EA · GW

Thanks for making this public, found it really interesting to follow your train of thought. Also, despite hearing about it in the past, I had completely forgotten about Julia's book. Added it to my reading list now. :)

Comment by markus_over on [deleted post] 2021-07-13T11:08:08.250Z

How much time should a participant roughly allocate for this? How much time are we supposed to spend on each of the questions? For how many days/weeks/months will this be running?

Is "start by finding someone to practice with" something one should do before signing up, i.e. should people sign up in groups of 2? Or does that matching of participants happen once you've got enough together? If the latter, do you have control over which of the two roles you get? I couldn't yet make that much sense of the descriptions of what backcaster and retriever are doing exactly, specifically the "pick a date" part and how the date influences things.

What degree of forecasting experience are you looking for? Or all types of people? Would it make sense for people to sign up when they've gone through a lot of calibration training in the past?

And a side note, the first paragraph on the linked page seems to have been pasted twice.

Comment by markus_over on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-07-04T13:16:49.855Z · EA · GW

+1 to that! Really cool, thanks for doing this. :)

Comment by markus_over on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-06-25T20:49:29.458Z · EA · GW

Thanks a lot for the thorough post Emily! I like the framing of staying up late as a high-interest loan a lot. And I agree that reading Why We Sleep may indeed be quite useful for certain people, despite its shortcomings. You make a lot of good points and provide several interesting ideas, plus the post is written in a very readable way, and the drawings are great.

Not that much else to add, except two tiny nitpicks regarding your estimation:

  • you equated "being 30% less productive" with "taking 30% more time to complete things", but actually being 30% less productive would mean you take 100/70 - 1 = ~42% longer. (a more obvious example of this would be that being 50% less productive means you require twice the time = 100% more, not 50% more)
  • concluding your estimation, your multiplication characters were interpreted as formatting, making the "0.254.9 + 0.502.1 + 0.10*0.4" part quite confusing to read. You could use × or • instead.
Comment by markus_over on How can I best use product management skills for digital services for good? · 2021-06-04T09:19:28.310Z · EA · GW

Same here. :)

Comment by markus_over on Statistics for Lazy People, Part 2 · 2021-04-16T18:01:24.305Z · EA · GW

Neat! Small mistake: "What is the probability that it will still be working after eight twenty years" should probably be "after twenty years". And multiple data points are exciting indeed!

Comment by markus_over on Announcing "Naming What We Can"! · 2021-04-02T16:32:09.414Z · EA · GW

Perfect! In the end the impact will of course be orders of magnitude higher, as a slightly better name of any particular organization will affect tens if not hundreds of thousands of people in the long run. And there may even be a tail chance of better names increasing the community's stability and thus preventing collapse scenarios. I think overall you really undersold your project with that guesstimate model only focusing on this post only, as if that was all there is to it.

Comment by markus_over on Announcing "Naming What We Can"! · 2021-04-02T12:42:40.356Z · EA · GW

I believe there are a few serious flaws in your guesstimate model:

  • a year has 365.2421905 days, not 365.25. That's not even rounded correctly!
  • Smiles per QALY should multiply days in a year with smiles in a good day, instead they are added. They don't even have the same unit, how can you add them! Insanity!
  • the post's karma is far outside of even your 99% interval

Everything else seems quite correct and I agree with your CIs and conclusions.

Also, please find a new name for guesstimate.

Comment by markus_over on Statistics for Lazy People, Part 1 · 2021-04-02T12:19:10.506Z · EA · GW

Nice post! Found it through the forum digest newsletter. Interestingly I knew Lindy's Law as the "Copernican principle" from Algorithms to Live By, IIRC. Searching for the term yields quite different results however, so I wonder what the connection is.

Also, I believe your webcomic example is missing a "1 -". You seem to have calculated p(no further webcomic will be released this year) rather than p(there will be another webcomic this year). Increasing the time frame should increase the probability, but given the formula in the example, the probability would in fact decrease over time.
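To make the "1 -" point concrete, here is a hedged sketch of the Copernican/Lindy estimate with made-up numbers (`past` years the webcomic has been running, `future` years of the window being asked about):

```python
# Copernican / Lindy estimate: something that has existed for `past` units
# survives at least `future` more units with probability past / (past + future).
def p_survives(past, future):
    return past / (past + future)

def p_another_release(past, future):
    # The "1 -" that the original example apparently omitted.
    return 1 - p_survives(past, future)

# With the fix, widening the time window correctly raises the probability:
print(p_another_release(5, 1) < p_another_release(5, 2))  # True
# Without it, p_survives(5, 1) > p_survives(5, 2) — decreasing, as noted above.
```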

Comment by markus_over on EA Münster Predictions 2021 · 2021-01-30T09:52:17.783Z · EA · GW

"Bei 80% der Treffen der EA Münster Lokalgruppe in 2021 waren mehr als 5 Personen anwesend" (i.e. "more than 5 people attended 80% of the EA Münster local group's meetups in 2021") - how will cancelled meetups (due to lack of attendees, if that ever happens) count into this? Not at all, or as <=5 attendees? (kind of reminds me of how the Deutsche Bahn decided to not count cancelled trains as delayed)

Also, coming from EA Bonn where our average attendance is ~4 people, I find the implications of this question impressive. :D

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2021-01-02T11:39:22.400Z · EA · GW

I see, so at the end of the day you're assigning a number representing how productive the day was, and you consider predicting that number the day before? I guess in case that rating is based on your feeling about the day as opposed to more objectively predefined criteria, the "predictions affect outcomes" issue might indeed be a bit larger here than described in the post, as in this case the prediction would potentially not only affect your behavior, but also the rating itself, so it could have an effect of decoupling the metric from reality to a degree.

If you end up doing this, I'd be very interested in how things go. May I message you in a month or so?

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2021-01-02T11:30:23.113Z · EA · GW

Good point, I also make predictions about quarterly goals (which I update twice a month) as well as my plans for the year. I find the latter especially difficult, as quite a lot can change within a year including my perspective on and priority of the goals. For short term goals you basically only need to predict to what degree you will act in accordance with your preferences, whereas for longer term goals you also need to take potential changes of your preferences into account.

It does appear to me that calibration can differ between the different time frames. I seem to be well calibrated regarding weekly plans, decently calibrated on the quarter level, and probably less so on the year level (I don't yet have any data for the latter). Admittedly that weakens the "calibration can be achieved quickly in this domain" to a degree, as calibrating on "behavior over the next year" might still take a year or two to significantly improve.

Comment by markus_over on One’s Future Behavior as a Domain of Calibration · 2020-12-31T15:50:29.963Z · EA · GW

I personally tend to stick to the following system:

  • Every Monday morning I plan my week, usually collecting anything between 20 and 50 tasks I’d like to get done that week (this planning step usually takes me ~20 minutes)
    • Most such tasks are clear enough that I don’t need to specify any further definition of done; examples would be “publish a post in the EA forum”, “work 3 hours on project X”, “water the plants” or “attend my local group’s EA social” – very little “wiggle room” or risk of not knowing whether any of these evaluates to true or false in the end
    • In a few cases, I do need to specify in greater detail what it means for the task to be done; e.g. “tidy up bedroom” isn’t very concrete, and I thus either timebox it or add a less ambiguous evaluation criterion
  • Then I go through my predictions from the week before and evaluate them based on which items are crossed off my weekly to do list (~3 minutes)
    • “Evaluate” at first only means writing a 1 or a 0 in my spreadsheet next to the predicted probability
    • There are rare exceptions where I drop individual predictions entirely due to inability to evaluate them properly, e.g. because the criterion seemed clear during planning, but it later turned out I had failed to take some aspect or event into consideration[1], or because I deliberately decided to not do the task for unforeseeable reasons[2]. Of course I could invest more time into bulletproofing my predictions to prevent such cases altogether, but my impression is that it wouldn’t be worth the effort.
  • After that I check my performance of that week as well as of the most recent 250 predictions (~2 minutes)
    • For the week itself, I usually only compare the expected value (sum of probabilities) with actually resolved tasks, to check for general over- or underconfidence, as there aren’t enough predictions to evaluate individual percentage ranges
    • For the most recent 250 predictions I check my calibration by having the predictions sorted into probability ranges of 0..9%, 10..19%, … 90..99%.[3] and checking how much the average outcome ratio of each category deviates from the average of predictions in that range. This is just a quick visual check, which lets me know in which percentage range I tend to be far off.
    • I try to use both these results in order to adjust my predictions for the upcoming week in the next step
  • Finally I assign probabilities to all the tasks. I keep this list of predictions hidden from myself throughout the following week in order to minimize the undesired effect of my predictions affecting my behavior (~5 minutes)
    • These predictions are very much System 1 based and any single prediction usually takes no more than a few seconds.
    • I can’t remember how difficult this was when I started this system ~1.5 years ago, but by now coming up with probabilities feels highly natural and I differentiate between things being e.g. 81% likely or 83% likely without the distinction feeling arbitrary.
    • Depending on how striking the results from the evaluation steps were, I slightly adjust the intuitively generated numbers. This also happens intuitively as opposed to following some formal mathematical process.
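The evaluation steps above (expected value vs. resolved count, plus the decile buckets) can be sketched roughly like this, with made-up (probability, outcome) pairs standing in for the real spreadsheet:

```python
# Rough sketch of the weekly calibration check described above; the
# (prediction, outcome) pairs are invented placeholders, not real data.
from collections import defaultdict

predictions = [(0.95, 1), (0.92, 1), (0.88, 1), (0.85, 0), (0.62, 1),
               (0.58, 0), (0.35, 1), (0.32, 0), (0.15, 0), (0.05, 0)]

# Over-/underconfidence check: expected value vs. actually resolved tasks.
expected = sum(p for p, _ in predictions)
resolved = sum(o for _, o in predictions)
print(f"expected {expected:.2f}, resolved {resolved}")

# Calibration check: sort into 0..9%, 10..19%, ..., 90..99% buckets and
# compare the mean forecast in each bucket with the actual hit rate.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[int(p * 10)].append((p, outcome))

for decile in sorted(buckets):
    pairs = buckets[decile]
    mean_p = sum(p for p, _ in pairs) / len(pairs)
    hit_rate = sum(o for _, o in pairs) / len(pairs)
    print(f"{decile * 10:2d}-{decile * 10 + 9}%: forecast {mean_p:.2f}, actual {hit_rate:.2f}")
```

With only ten data points per week, the bucket comparison is noisy, which is presumably why the text above reserves it for the most recent 250 predictions and uses only the expected-value check week to week.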

While this may sound complex when explaining it, I added the time estimates to the list above in order to demonstrate that all of these steps are pretty quick and easy. Spending these 10 minutes[4] each week seems like a fair price for the benefits it brings.

  1. An example would be “make check up appointment with my dentist”, but when calling during the week realizing the dentist is on vacation and no appointment can be made; given there’s no time pressure and I prefer making an appointment there later to calling a different dentist, the task itself was not achieved, yet my behavior was as desired; as there are arguments to be made to evaluate this both as true or false, I often just drop such cases entirely from my evaluation ↩︎

  2. I once had the task “sign up for library membership” on my list, but then during the week realized that membership was more expensive than I had thought, and thus decided to drop that goal; here too, you could either argue “the goal is concluded” (no todo remains open at the end of the week) or “I failed the task” (as I didn’t do the formulated action), so I usually ignore those cases instead of evaluating them arbitrarily ↩︎

  3. One could argue that a 5% and a 95% prediction should really end up in the same bucket, as they entail the same level of certainty; my experience with this particular forecasting domain however is that the symmetry implied by this argument is not necessarily given here. The category of things you’re very likely to do seems highly different in nature from the category of things you’re very unlikely to do. This lack of symmetry can also be observed in the fact that 90% predictions are ~10x more frequent for me in this domain than 10% predictions. ↩︎

  4. It’s 30 minutes total, but the first 20 are just the planning process itself, whereas the 3+2+5 afterwards are the actual forecasting & calibration training. ↩︎

Comment by markus_over on Announcing the Forecasting Innovation Prize · 2020-12-30T12:19:12.024Z · EA · GW

"Before January 1st" in any particular time zone? I'll probably (85%) publish something within the next ~32h at the time of writing this comment. In case you're based in e.g. Australia or Asia that might then be January 1st already. Hope that still qualifies. :)

Comment by markus_over on Make a Public Commitment to Writing EA Forum Posts · 2020-12-20T16:10:11.845Z · EA · GW

Indeed, thank you. :) I haven't started the other, forecasting related one, but intend to spend some time on it next week and hopefully come up with something publishable before the end of the year.

Comment by markus_over on CFAR Workshop in Hindsight · 2020-12-14T08:48:55.281Z · EA · GW

My thoughts on how to best prepare for the workshop (as mentioned in the post):

  • Write down your expectations, i.e. what you personally hope to take away from the workshop (and if you’re fancy, maybe even add quantifications/probability estimates to each point)
  • Make sure you can go into the workshop with a clear head and without any distractions
  • Don’t make the same mistake I made, which was booking a flight home way too early on the day after the end of the workshop. I didn’t realize beforehand how difficult it was to get from the workshop venue to the airport, and figuring out a solution stressed me quite a bit during the week (but was in the end solved for me by the super kind ops people)
  • Do your best in the week(s) before to stay healthy
  • Sleep enough the nights before
  • Maybe prepare a bug list and take it with you; this will also be one of the first sessions, but the more the better
  • Don’t panic; if you don’t manage to prepare in any significant way, the workshop is still extremely well designed and you’ll do just fine.
Comment by markus_over on Announcing the Forecasting Innovation Prize · 2020-11-22T10:14:00.381Z · EA · GW

Sure. Those I can mention without providing too much context:

  • calibrating on one's future behavior by making a large amount of systematic predictions on a weekly basis
  • utilizing quantitative predictions in the process of setting goals and making plans
  • not prediction-related, but another thing your post triggered: applying the "game jam principle" (developing a complete video game in a very short amount of time, such as 48 hours) to EA forum posts and thus trying to get from idea to published post within a single day; because I realized writing a forum post is (for me, and a few others I've spoken to) often a multi-week-to-month endeavour, and it doesn't have to be that way, plus there are surely diminishing returns to the amount of polishing you put into it

If anybody actually ends up planning to write a post on any of these, feel free to let me know so I'll make sure to focus on something else.

Comment by markus_over on Make a Public Commitment to Writing EA Forum Posts · 2020-11-20T16:30:07.206Z · EA · GW

Good timing and great idea. Considering I've just read this: I'll gladly commit to submitting at least one forum post to the forecasting innovation prize (precise topic remains to be determined). Which entails writing and publishing a post here or on lesswrong before the end of the year.

I further commit to publishing a second post (which I'd already been writing on for a while) before the end of the year.

If anybody would like to hold me accountable, feel free to contact me around December 20th and be very disappointed if I haven't published a single post by then. 

Thanks for the prompt Neel!