Posts

[Brief] Simple Method for EA Community Building Post-College 2022-06-17T16:50:06.438Z
Modeling humanity's robustness to GCRs? 2022-06-09T17:20:32.534Z
rodeo_flagellum's Shortform 2021-09-30T22:11:48.664Z
How to measure (human) impact? 2021-09-27T20:13:38.111Z
Survey-Proposal: Cause-Area Switching 2021-08-06T02:23:44.415Z

Comments

Comment by rodeo_flagellum on 300+ Flashcards to Tackle Pressing World Problems · 2022-07-11T19:47:51.369Z · EA · GW

This is great. I think coordinated sharing of Anki / other flashcards should be a norm / done more frequently. 

Any chance you can share the sources for these notes, if the sources are easily accessible to you? I'd be interested in examining them, but it's okay if you don't have them, given that most people usually do not have all the sources for their cards immediately accessible. 

Just saw the sources in Anki! Thank you. 

Comment by rodeo_flagellum on My EA Failure Story · 2022-07-11T18:40:24.236Z · EA · GW

Stories of this nature are sobering to hear; thank you for posting this - each post like this gets people in the community mentally closer to seeing the base rate of success in the EA community for what it is. 

Your writing is enjoyable to read as well - I would read more of it. 

Controlling for overconfidence, I'm sorry that your expectations weren't met with the last EA job you applied for. My brain doesn't usually like to accept such things. 

The expected value of letting go and building a mental foundation that is simple, peaceful, want-free, etc... is positive. Generally speaking, most of the time, my life is actually pretty good. The baseline is good. When negative thoughts enter, I usually just repeat in my internal monologue "your mind is producing negative thoughts, all is actually well", and this usually calms me and makes me more content, probably by distracting me from the negative thoughts and images and by increasing my levels of gratitude. It seems to me that your situation would benefit from something like this. Take it one step at a time. 

Have a nice day.

Comment by rodeo_flagellum on Contest: 250€ for translation of "longtermism" to German · 2022-06-17T19:14:08.326Z · EA · GW

Any updates on this? I'm interested to see your thoughts on all these good responses.

Comment by rodeo_flagellum on [Linkpost] World Mental Health Report: Transforming Mental Health for All · 2022-06-17T16:59:51.349Z · EA · GW

Thank you for sharing this. For some reason, many of the WHO's reports escape my radar.

Comment by rodeo_flagellum on What Is Most Important For Your Productivity? · 2022-06-14T01:46:25.741Z · EA · GW

Thank you for posting this.

I want to direct more attention to Decreasing Anxiety. If these observations and pieces of advice were weighted, I would expect reducing one's anxiety to be near or at the top.  

Many environments and activities within the EA-sphere (e.g., research or grant-making) are quite stressful, and operating continually in these environments can lead to burnout or other consequences of anxiety. 

It's worth a simple reminder that certain activities are fundamental for flourishing (and for reducing anxiety) as a human.

Though seemingly obvious, many people in Western civilization (especially the USA) routinely fail at these things. 

Comment by rodeo_flagellum on Unflattering reasons why I'm attracted to EA · 2022-06-03T14:57:42.049Z · EA · GW

I admit, some of these apply to me as well. I would be interested in reading further on the phenomenon, which I can't seem to find a term for, of "ugly intentions (such as philanthropy purely for status) that produce a variety of good outcomes for self and others, where the actor knows that this variety of good outcomes for others is being produced but is in it for other reasons".

Your post reminds me of some passages from the chapter on charity in the book The Elephant in the Brain (I'm rereading it now to illustrate some points), and could probably be grouped under some of the categories in the final list. Generally speaking, I would recommend reading this book. 

Intro.

What Singer has highlighted with this argument is nothing more than simple, everyday human hypocrisy—the gap between our stated ideals (wanting to help those who need it most) and our actual behavior (spending money on ourselves). By doing this, he’s hoping to change his readers’ minds about what’s considered “ethical” behavior. In other words, he’s trying to moralize.

Our goal, in contrast, is simply to investigate what makes human beings tick. But we will still find it useful to document this kind of hypocrisy, if only to call attention to the elephant. In particular, what we’ll see in this chapter is that even when we’re trying to be charitable, we betray some of our uglier, less altruistic motives.

Warm Glow

Instead of acting strictly to improve the well-being of others, Andreoni theorized, we do charity in part because of a selfish psychological motive: it makes us happy. Part of the reason we give to homeless people on the street, for example, is because the act of donating makes us feel good, regardless of the results.


Andreoni calls this the “warm glow” theory. It helps explain why so few of us behave like effective altruists. Consider these two strategies for giving to charity: (1) setting up an automatic monthly payment to the Against Malaria Foundation, or (2) giving a small amount to every panhandler, collection plate, and Girl Scout. Making automatic payments to a single charity may be more efficient at improving the lives of others, but the other strategy—giving more widely, opportunistically, and in smaller amounts—is more efficient at generating those warm fuzzy feelings. When we “diversify” our donations, we get more opportunities to feel good.

...

  • Visibility. We give more when we’re being watched.
  • Peer pressure. Our giving responds strongly to social influences.
  • Proximity. We prefer to help people locally rather than globally.
  • Relatability. We give more when the people we help are identifiable (via faces and/or stories) and give less in response to numbers and facts.
  • Mating motive. We’re more generous when primed with a mating motive.

This list is far from comprehensive, but taken together, these factors help explain why we donate so inefficiently, and also why we feel that warm glow when we donate. Let’s briefly look at each factor in turn.

Simler and Hanson then cover each of these factors in greater depth.

Comment by rodeo_flagellum on Little (& effective) altruism · 2022-06-03T12:49:53.263Z · EA · GW

Thank you, Parmest, for writing this post. Shared reflections and experiences such as this one appear somewhat infrequently on the EAF, and I appreciate your perspective. 

Some things came to my mind when reading this. 

A post that you may find enjoyable and insightful is Keeping Absolutes in Mind. Here, Michelle Hutchinson writes about altruistic baselines: 

In cases like those above, it might help to think more about the absolute benefit our actions produce. That might mean simply trying to make the value more salient by thinking about it. The 10% of my income that I donate is far less than that of some of my friends. But thinking through the fact that over my life I’ll be able to do the equivalent of save more than one person from dying of malaria is still absolutely incredible to me. Calculating the effects in more detail can be even more powerful – in this case thinking through specifically how many lives saved equivalent my career donations might amount to. Similarly, when you’re being asked to pay a fee, thinking about how many malaria nets that fee could buy really makes the value lost due to the fee clear. That might be useful if you need to motivate yourself to resist paying unnecessary overheads (though in other cases doing the calculation may be unhelpfully stressful!).

which I believe is in line with your idea that local altruism, or the baseline altruism most people unfamiliar with EA think of when they imagine "altruism", is still absolutely good even if it's less good relative to other actions, and might support or drive other, more "macro-scale" altruistic action.

After days of reflection, I understood what the problem was with me. The big talks on the forum had overshadowed my modesty. This was a profound and important realization for me. I recognized that a sudden jump to the big things was not making me an altruistic human being. Even if I would have managed to make contributions, I would never have become a part of EA. 

In most instances, I suspect that lowering the bar for noticing, recognizing, or being cognizant of altruistic deeds will not detract significantly from the expected effectiveness of the most effective altruistic deeds, so at minimum it wouldn't hurt to care more about and help those around you in whatever ways possible, and doing so would likely improve their lives and your own.

Again, thank you for sharing these thoughts.

Comment by rodeo_flagellum on Contest: 250€ for translation of "longtermism" to German · 2022-06-02T21:26:57.399Z · EA · GW

Entering "longtermism" into Google Translate produces Langfristigkeit, which has already been stated below. 

To add additional weight to this definition, my grandmother, a native German speaker, believes that "Langfristigkeit" is probably the best or near-best translation for longtermism, after thinking about it for around 10 minutes and reading the other responses, although she is not terribly familiar with the idea of longtermism.

For additional context, the following means "long-term future" in German:

  • langzeitige Zukunft

One problem is properly capturing the "ism" in the word, and also capturing the idea within longtermism that actions with high (positive) expected value for the long-term future should be prioritized. 

One final phrase for consideration is:

  • Maximierung des zukünftigen Wohlwollens

which means roughly "maximizing future good for mankind". Despite not being a single word, this phrase is also endorsed by my grandmother. 

Comment by rodeo_flagellum on Global health is important for the epistemic foundations of EA, even for longtermists · 2022-06-02T15:26:23.064Z · EA · GW

Thank you for contributing this. I enjoyed reading it and thought that it made some people’s tendency in EA (which I might be imagining) to "look at other cause-areas with Global Health goggles" more explicit.

Here are some notes I’ve taken to try to put everything you’ve said together. Please update me if what I’ve written here omits certain things, or presents things inadequately. I’ve also included additional remarks to some of these things.

  • Central Idea: [EA’s claim that some pathways to good are much better than others] is not obvious, but widely believed (why?).
    • Idea Support 1: The expected goodness of available actions in the altruistic market differs (across several orders of magnitude) based on the state of the world, which changes over time.
      • If the altruistic market were made efficient (which EA might achieve), then the available actions with the highest expected goodness, which change with the changing state of the world, would routinely be located in any world state. Some things don't generalize.
    • Idea Support 2: Hindsight bias routinely warps our understanding of which investments, decisions, or beliefs were best made at the time, by having us believe that the best actions were more predictable than they were in actuality. It is plausible that this generalizes to altruism. As such, we run the risk of being overconfident that, despite the changing state of the world, the actions with the highest expected goodness presently will still be the actions with the highest expected goodness in the future, be that the long-term one or the near-term one.
    • (why?): The cause-area of global health has well defined metrics of goodness, i.e. the subset of the altruistic market that deals with altruism in global health is likely close to being efficient.
      • Idea Support 3: There is little cause to suspect that, since altruism within global health is likely close to being efficient, altruism within other cause-areas is close to efficient or can even be made efficient, given their domain-specific uncertainties.
    • Idea Support 4: How well “it’s possible to do a lot of good with a relatively small expenditure of resources” generalizes beyond global health is unclear, and should likely not be a standard belief for other cause-areas. The expected goodness of actions in global health is contingent upon the present world state, which will change (as altruism in global health progresses and becomes more efficient, there will be diminishing returns in the expected goodness of the actions we take today to further global health).
    • Action Update 1: Given the altruistic efficiency and clarity within global health, and given people’s support for it, it makes sense to introduce EA’s altruist market in global health to newcomers; however, we should not “trick” them into thinking EA is solely or mostly about altruism in global health - rather, we should frame EA’s altruist market in global health as an example of what a market likely close to being efficient can look like.

Comment by rodeo_flagellum on What YouTube channels do you watch? · 2022-06-01T13:38:23.780Z · EA · GW

Thank you for doing this. 

Even though aggregating what media forum members learn from and interact with seems obviously useful, I am surprised this hasn't been done more frequently (I have not seen a form of this nature, but only have a fractional sample of what's out there). 

I am very interested to see what you find (partially to find some new content to absorb) and hope that many people fill out this form. 

Comment by rodeo_flagellum on Why You Should Earn to Give in Tulsa, OK, USA · 2022-05-29T19:56:29.293Z · EA · GW

Thank you for sharing this experience. It upweights the idea of me moving to another state, partially on the basis of relocation grant programs.

I remember seeing, in the past, that Vermont would pay remote workers 10k USD to relocate (here). I can't find much on this now, but did find that Vermont has a New Relocating Worker Grant (here).

QUALIFYING RELOCATION EXPENSES

Upon successful relocation to Vermont and review of your application, the following qualifying relocation expenses may be reimbursed:  

  • Closing costs for a primary residence or lease deposit and one month rent,
  • Hiring a moving company,
  • Renting moving equipment,
  • Shipping,
  • The cost of moving supplies

Incentives are paid out as a reimbursement grant after you have relocated to Vermont. Grants are limited and available on a first-come, first-served basis. 

There are probably states other than OK or VT that do such a thing.  

Comment by rodeo_flagellum on Revisiting the karma system · 2022-05-29T16:25:16.693Z · EA · GW

This post has a fair number of downvotes but is also generating, in my mind, a valuable discussion on karma, which heavily guides how content on EAF is disseminated. 

I think it would be good if more people who've downvoted shared their contentions (though it may well be the case that those who've already commented have voiced those contentions). 

Comment by rodeo_flagellum on Who wants to be hired? (May-September 2022) · 2022-05-28T17:22:56.279Z · EA · GW

Location: New Jersey, USA

Remote: Yes

Willing to relocate: Likely Yes

Skills:

  - Machine Learning (Python, TensorFlow, Sklearn): Familiar with creating custom NNs in Keras, properly using packaged ML algorithms, and (mostly) knowing what to use and when. I haven’t reproduced an ML paper in full, but probably could after a decent amount of time. I am in the process of submitting a paper on ensemble learning for splice site prediction to IEEE Access (late submission). 

  - Python, R, HTML, CSS: I am competent in Python (5 years' experience), and am familiar with R, HTML, and CSS. My website: https://rodeoflagellum.github.io

  - Forecasting: Top 75 on Metaculus (rank 54). I believe I am slightly above average at making and updating forecasts. Look through my comments for some applications of time series models. 1st place in the FluSight Challenge (influenza forecasting).

  - Writing: Examples (many incomplete) can be found on my website. One is the essay I wrote on Forecasting Designer Babies, which placed among the top submissions in the Impactful Forecasting Prize (on EAF). I am currently writing about people's attitudes towards human gene-editing. 

  - Education: BA Math and Neuroscience 

Resume: Available upon request

Email: rodeoflagellum AT gmail DOT com

Further Notes:

   - Cause Areas: (presently) Longtermism / Global Priorities / X-Risk Reduction > AI Alignment > Biosecurity > Global Health and Wellbeing. I am interested in the notion of "civilizational stability and resilience", but am too unfamiliar with work in this area to comment further.

  - Availability: Likely available 1 month post-offer (July). Very likely available 2 months post-offer. 

  - Roles I'm Looking For: Full time remote research work that involves some mix of forecasting, modeling, math/statistics, tool building, and data analysis, but predominantly involves literature review and synthesis. 

  - Experience with EA: Lurker on EAF since early 2018. Helped create a student org. and lead X-risk discussions. Giving What We Can Pledge since 2021. 

Comment by rodeo_flagellum on Introducing Asterisk · 2022-05-27T14:12:19.383Z · EA · GW

Thank you for commenting this! I have not previously heard of Unjournal, but believe it's very likely that I will try to use this for feedback (decided after taking a ~5 min look at the link). 

Comment by rodeo_flagellum on On being ambitious: failing successfully & less unnecessarily · 2022-05-27T14:09:12.574Z · EA · GW

Thank you for writing this!

Here are some of my notes / ideas I wrote while reading.

This “celebrating failures” notion is a celebration of both the audacity to try and the humility to learn and change course. It’s a great ideal. I wholeheartedly support it

Something I remembered when reading this was the idea, which most people here might have been exposed to at one point or another but might have forgotten, that “Adding is favored over subtracting in problem solving” (https://www.nature.com/articles/d41586-021-00592-0).

I believe making it easier for people, organizations, etc… to remember subtractive solutions exist and to implement them would probably be a GOOD thing in EA. I think collectively reinterpreting failures, in some select instances, as subtractive solutions could further this.

However, I fear that without taking meaningful steps towards it we’ll fall far short of this ideal, resulting in people burning out, increased reputational risks for the community, and ultimately, significantly reduced impact.

For one thing, private sector entrepreneurship can be a very toxic environment (which is partly why I left).

While I am not familiar with private sector entrepreneurship and the types of burnout it engenders in people, I am grateful for the occasional (availability heuristic working here, I am only recollecting the past year or so of posts) mental-health post on EAF. These posts upweight the importance of taking breaks and doing other, less taxing activities in fighting the urge to ruthlessly optimize your behavior to get as many things done as possible.

I also find that after a break or walk, I have an easier time with the work I was doing. For me, and probably for many others, long-term effectiveness at thought-work requires some periods of low-intensity activity.

Interesting link https://www.fuckupnights.com/; I didn’t know such a thing existed.

I feel as though this list can be expanded somewhat.

So how can we avoid these pitfalls? A few thoughts:

  • We can’t expect too many people to be entrepreneurs.
  • We need to be careful that we don’t put people in situations where they’re set up to fail unnecessarily.
  • We can’t just celebrate the failures of those who eventually succeed.

To be more clear, are the pitfalls in “high-risk, high-reward approach[es] to achieving great successes” the following?

  • Rapid turnover, due to abundant failures and burnout, among other things like stress and demoralizing work
  • If a failure is celebrated, it is most likely a failure that was followed by a noticeable success, which is likely not characteristic of most failures

Here are two additional points that might be helpful in avoiding these pitfalls and that might also be instrumental towards “help[ing] us fail more gracefully, fail less grotesquely… or even better, fail less (while still achieving more).”

  • We shouldn’t expect non-entrepreneurs to maintain stellar performance in entrepreneurial positions or tasks for too long
  • We should keep track records of our failures and performance inadequacies, rather than shying away from them, so that they can be routinely prevented

In my experience, one of the most valuable things from pitches and meetings with investors is that even if they don’t fund you, they often give constructive feedback — you’ll know why they didn’t.

I’ve been coming across the idea of “garnering more feedback” recently within the EAF community:

  • https://forum.effectivealtruism.org/posts/GskGj9wCzLdP8WgmT/an-easy-win-for-hard-decisions
  • https://forum.effectivealtruism.org/posts/iPqHdRYGCNj5bTn52/help-with-the-forum-wiki-editing-giving-feedback-moderation
  • https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection

Overall, I think increasing the amount and quality of feedback on posts, job rejections, grant applications, etc… is something very much worth moving towards.

I agree there are not enough “Appropriate systems to match people up with the right support and advice”, but I also believe that there are outlets that do exist but which might not be very visible or easily findable (perhaps 80000 Hours 1 on 1 advice calls could fall into this category).

Quotas and “explicitly funding such public goods” sound like decent solutions, especially if they’re combined (a quota would limit your 10-20% community-feedback public-good funded time, ensuring it doesn’t turn into more than you can handle). I wonder how many EA staff would have to offer 10-20% of their time to make a real difference in the current paucity of feedback.

Another solution might be to have more collaborations between organizations or individuals. I often find that when I co-author or work with another person on a project, I’m typically exposed to new and useful resources and sometimes refine my skills in ways I hadn’t considered prior.

Subjectively evaluating two enterprises in terms of their “funded to the tune of $X” measure seems like a good example of Goodhart’s law creeping in. I think more detailed and extensive transparency between funders and fundees would control overselling, but this is easier stated than implemented.

With respect to “Some examples I’ve encountered of pretty understandable risk avoidance:”, I think accumulating the outcomes (again, including the failed ones) of risky scenarios that people have faced in the EA community might enable others to make better decisions. For example, making people’s stories of a career change or transition into EA (a lot of the AMA career posts on EAF fulfill this purpose) more visible and then accumulating these stories in a “collection of risky decisions for EAs” page might do well in this regard.

Paid training strikes me as something that is neglected and, if scaled, might really help with the earlier point “We need to be careful that we don’t put people in situations where they’re set up to fail unnecessarily.”

I would be happy to see the “Making enemies instead of friends” section made into its own post.

Again, thank you for writing this, Luke!

Comment by rodeo_flagellum on Getting GPT-3 to predict Metaculus questions · 2022-05-06T13:10:15.706Z · EA · GW

What do you think would occur if you added the 1st or 2nd most upvoted recent comments into the GPT-3 prompt, following the question description? 

I think it might make a difference on some questions with high forecaster volume, but might detract from the accuracy on questions with lower forecaster volume. 
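To sketch what that might look like mechanically - a minimal, hypothetical example, not the post author's actual pipeline; the comment fields ("text", "upvotes") and the prompt template are assumptions, and the completion call itself is omitted:

```python
# Hypothetical sketch: prepend a question's top-voted comments to the GPT-3 prompt.
# Data shapes and prompt wording are assumptions for illustration only.

def build_prompt(question_text, comments, n_comments=2):
    """Build a prompt containing the question plus its most upvoted comments."""
    top = sorted(comments, key=lambda c: c["upvotes"], reverse=True)[:n_comments]
    comment_block = "\n".join(
        f"Comment ({c['upvotes']} upvotes): {c['text']}" for c in top
    )
    return (
        f"Question: {question_text}\n"
        f"{comment_block}\n"
        "What is the probability that this question resolves Yes?\n"
        "Probability:"
    )

prompt = build_prompt(
    "Will X happen by 2025?",
    [
        {"text": "Base rates suggest this is unlikely.", "upvotes": 12},
        {"text": "Recent announcements make this more plausible.", "upvotes": 7},
    ],
)
# `prompt` would then be sent to GPT-3 via whatever completion interface is in use.
```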

Comment by rodeo_flagellum on Has anyone actually talked to conservatives* about EA? · 2022-05-05T23:20:26.760Z · EA · GW

I believe my father qualifies as "conservative" (I don't have a clear definition for "conservative", and age is a confounding factor in this case, but that he was a Trump voter in 2020, generally opposes immigration, and loves meat suggests he is conservative), and I have discussed EA ideals and concepts with him at length over the span of several years. 

He supports altruism, and in general believes that altruistic practices (he mainly discusses global health and development) could be more effective. On this note, he believes EA is "good". However, when considering longtermist causes / x-risk, he differs from what I believe to be the community consensus in that he believes nuclear risk and natural risks pose a greater threat than bioterrorism (nothing concerning lab leaks has been brought up) and risk from AI. 

I asked him if he believes he would have been part of EA had it existed when he was 15-25, and he replied that he might have in the context of global health and development. 

I would be interested in an extension question for this post: Have EAs asked their conservative family members or parents for their thoughts on EA or adjacent concepts?

Comment by rodeo_flagellum on What do we want the world to look like in 10 years? · 2022-04-21T14:21:37.704Z · EA · GW

Thank you for writing this post and for including those examples. 

To address the first part of your "Meta" comment at the bottom of the post: I think that, were I to do this exercise with my peers, it would not cost much time or energy, but could potentially generate ideas for desirable states of humanity's future that might result in some of my or my peers' attention temporarily being reallocated to a different cause. This reallocation might take the form of some additional querying on the Internet that might not have otherwise occurred, or of a full week's or month's work being redirected towards learning about some new topic, perhaps leading to some dissemination of the findings in writing. So, doing the exercise you've described above, or some similar version of it, seems valuable enough to give it a try, and perhaps even to experiment with.

In terms of exercise formats, if I were to implement "sessions to generate desirable states of humanity for the 5yr, 10yr, 25yr, etc... future", I would probably get together with my peers each month, have everyone generate ~10 ideas, then pool the ideas in a Google Doc, and then together, prune duplicates, combine similar ideas, come up with ways to make the ideas more concrete, and resolve any conflicting ideas. If I am not able to get my peers together on a monthly basis, I would probably do something similar to what I have described, and then perhaps post the ideas in a shortform. 

In my own work, I already do this to a degree; usually I have a list of things to write or learn about, and I add to each project idea a subjective (x% | y% | z%) rating, where x means how motivated I am to do it, y means how valuable I think work on this topic is, and z means how difficult it would be for me to work on it. To supplement exercises in generating descriptions of desirable states for humanity in the coming years, it would probably be easy enough to add some quick subjective estimate of importance to each idea when it's generated. Also, one mechanism for generating desirable states for humanity could be looking at macroscopic issues for humanity (off the top of my head, not ordered in terms of importance) - {Aging, That humans war, Aligning AI, Human coordination failures, Injury and disease, Wellbeing, Earth Isn't Safe (natural risks, including anthr. climate change), Resource distribution, Energy and resource supplies, Biorisks} - and then coming up with ideas for "what civilization would look like if this issue were addressed", or something similar. 
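For illustration, a rating like that could be collapsed into a single priority score. This is a minimal sketch under my own assumptions - the project names, the numbers, and the multiplicative combining rule are all made up for the example, not taken from the comment:

```python
# Illustrative only: turn (motivation | value | difficulty) ratings into a rough
# priority ordering. The multiplicative rule below is an assumed choice.

project_ideas = {
    "write about gene-editing attitudes": {"motivation": 0.8, "value": 0.6, "difficulty": 0.4},
    "learn time series modeling": {"motivation": 0.5, "value": 0.7, "difficulty": 0.6},
}

def priority(r):
    # Favor motivating, valuable projects; difficulty counts against.
    return r["motivation"] * r["value"] * (1 - r["difficulty"])

for name, rating in sorted(project_ideas.items(), key=lambda kv: -priority(kv[1])):
    print(f"{priority(rating):.2f}  {name}")
```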

Comment by rodeo_flagellum on A quick poll of 105 people at EAG on Friday evening · 2022-04-16T15:36:49.762Z · EA · GW

Not sure if you are in fact seeing this, but presently I see 3 posts with a similar title. The two previous ones had "105" in the title. Just making sure you know this. Also, thank you for posting this. Quick survey results are usually nice to see. 

Comment by rodeo_flagellum on $100 bounty for the best ideas to red team · 2022-03-18T19:59:53.592Z · EA · GW

Red-team - "Examine the magnitude of impact that can be made from contributing to EA-relevant Wikipedia pages. Might it be a good idea for many more EA members to be making edits on Wikipedia?" 

Rough answer - "I suspect it's the case that only a small proportion of total EA members make edits to the EAF-wiki. Perhaps the intersection of EA members and people who edit Wikipedia is about the same size, but my intuition is that this group is even smaller than the EAF wiki editors. Given that Wikipedia seems to receive a decent amount of Internet traffic (I assume this probably also applies to EA-relevant pages on Wikipedia), it is very likely the case that contributing to Wikipedia is a neglected activity within EA; should an effort be made by EA members to contribute more to Wikipedia, the EA community might grow or might become more epistemically accessible, which seem like good outcomes."

Comment by rodeo_flagellum on Preprint is out! 100,000 lumens to treat seasonal affective disorder · 2021-11-14T13:58:54.895Z · EA · GW

I was thinking more about price in terms of carbon cost, but this should follow from the USD calculation, assuming that cost is roughly proportional to some quantity of CO2 released. My prior knowledge of wattage was lacking, so I guessed that 100k lumens for ~8-12 hours per day would consume more electricity than it actually does. 

Comment by rodeo_flagellum on Preprint is out! 100,000 lumens to treat seasonal affective disorder · 2021-11-14T03:42:49.212Z · EA · GW

I haven't read the paper, but this sounds interesting. There was a time when I purchased this 10,000 lumen lamp to ease my poor mood during my sophomore year of college. The power in my room blew out once I purchased a second lamp, and I was left without electricity in my dorm for 2 weeks. Imagine treating everyone with SAD - 100,000 lumens each. I wonder how much electricity this would consume collectively. Each person would need around this amount of light for some duration of some interval of the year. What would be the net positive impact of this intervention? Would the increased productivity and mood conferred outweigh the costs of the massive electrical usage? I can imagine it might be the case that simply living somewhere else during the worst parts of the year is better for people with SAD than a 100,000 lumen room, which I assume uses much more electrical power than most rooms. 
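A rough back-of-envelope for the per-person electricity question - all the constants below are my own approximate assumptions (LED efficacy around 100 lm/W, ~10 hours of use per day, rough US residential electricity price and grid emissions), not figures from the paper:

```python
# Back-of-envelope estimate of per-person electricity for 100,000 lumens of light.
# Assumptions: LED efficacy ~100 lm/W (incandescents would be ~10x worse),
# ~10 h/day of use, ~$0.14/kWh (rough US residential rate), ~0.4 kg CO2/kWh (rough US grid).

lumens = 100_000
efficacy_lm_per_w = 100
hours_per_day = 10

power_kw = lumens / efficacy_lm_per_w / 1000   # ~1 kW of lighting
kwh_per_day = power_kw * hours_per_day         # ~10 kWh per person per day

usd_per_kwh = 0.14
kg_co2_per_kwh = 0.4

print(f"{kwh_per_day:.0f} kWh/day, ~${kwh_per_day * usd_per_kwh:.2f}/day, "
      f"~{kwh_per_day * kg_co2_per_kwh:.1f} kg CO2/day per person")
```

Under those assumptions this comes to roughly a space heater's worth of electricity per person - noticeable at population scale, but probably not individually prohibitive.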

Comment by rodeo_flagellum on rodeo_flagellum's Shortform · 2021-09-30T22:11:48.861Z · EA · GW

As of September 30th, 2021, 80000 Hours lists ageing under Other longtermist issues, which means that, at the moment, it is not one of their Highest priority areas. 

Despite this, I am interested in learning more about research on longevity and ageing. The sequence Gears of Ageing, Laura Deming's Longevity FAQ, and the paper Hallmarks of Aging, are all on my reading list. 

Relatedly, my friends have sometimes inquired how long I would like to live, if I could hypothetically live invincibly for however long I wanted, and I have routinely defaulted to the answer: "10,000 years". I have not expended much thought as to why this number comes to mind, but it may have to do with the fact that the first known permanent settlements occurred roughly 10,000 years ago (assuming I recall this accurately), and that I thought it'd be interesting to see where human civilization is in this amount of time (starting from when I was born). 

Several of Aubrey de Grey's talks on Gerontology and ageing have also resonated with me. From Wikipedia:

In 2008 Aubrey de Grey said that in case of suitable funding and involvement of specialists there is a 50% chance, that in 25-30 years humans will have technology saving people from dying of old age, regardless of the age at which they will be at that time.[26] His idea is to repair inside cells and between them all that can be repaired using modern technology, allowing people to live until time when technology progress will allow to cure deeper damage. This concept got the name "longevity escape velocity".

In one TED talk, he made the case that ageing research was highly neglected, but I can't recall just how neglected. Given that I do not want to die, I really would like to see a cultural shift towards prioritizing anti-ageing research. 

There may be a strong negative impact on humanity's long-term and/or short-term potential as a result of extending people's lifespans, but I think that whether the magnitude of positive impact (reduction of existential risk, improvements to collective well-being) of this intervention/technology/research outweighs the negative impact is still highly uncertain in my mind. 

Maybe writing a future-history (a story that traces the societal changes engendered by hypothetical sequences of scientific/cultural advancements) on different scenarios for anti-ageing research breakthroughs and implementations could stir the community into thinking more about its potential (for existential risk increase or reduction, among other things). 

Comment by rodeo_flagellum on What's something that every EA community builder should have thought about? · 2021-09-30T20:51:59.837Z · EA · GW

I think that the poor outcomes you listed - causing reputational damage, spreading a wrong version of EA, over-focusing or under-focusing on certain cause areas, giving bad career advice, etc... - are on the mark, but might not entirely stem from EA community builders not taking the time to understand EA principles. 

For example, I can imagine a scenario where an EA community builder is not adept at presenting cause-areas, but understands the cause-area landscape very well. Perhaps as a result of their poor communication skills (and maybe also from a lack of being self-assured), some individuals on the edge of adopting EA in this particular org. begin to doubt the EA community builder and eventually leave. 

Back to the question. I think that group leaders, including EA community builders, often don't take the time to empathize with their members or comprehend what the topic of the group means to each of them. 

The question of how this organization, movement, cause, etc... (in this case EA, and EA cause-areas) fits into group member X's life is useful in that it can be predictive of how committed they are, or of how long they'll stick around. 

In my personal experience coming to understand EA, and in my experience helping others at my undergraduate institution understand EA principles, I have noticed that there are a few close (highly involved) individuals, and many other, less involved individuals in the periphery. 

Many times, a lot of the work expended by the highly involved individuals in trying to get the less involved individuals more involved could have been avoided by communicating the content of the group's activities more clearly. Regularly making sure that everyone is on the same page (literally just bring it up in conversation) can help reduce the damage an EA community builder might otherwise cause. 

Practically speaking, exercises that would likely achieve this outcome of better communication could be: asking each member of the group what EA means to them, having each member present their case or analysis for why their cause-area is more pressing than other cause-areas, and having anonymous surveys to make sure there is a consensus among the group on their understanding of EA principles and making an impact.

Comment by rodeo_flagellum on Open Thread: August 2021 · 2021-08-05T01:37:21.123Z · EA · GW

I posted this on LW as well, but I have been reading content from the EA Forum + LW sporadically for the last several years; only recently, though, did I find myself visiting here several times per day, and I have made an account given my heightened presence. 

I am graduating with a B.A. in Neuroscience and Mathematics this January. My current desire is to find remote work (this is important to me) that involves one or more of: [machine learning, mathematics, statistics, global priorities research]. 

In the spirit of the post The topic is not the content, I would like to spend my time (the order is arbitrary) doing at least some of the following: discussing research with highly motivated individuals, conducting research on machine learning theory (specifically relating to NN efficiency and learnability), writing literature reviews on cause areas, developing computational models and creating web-scraped datasets to measure the extent of a problem or the efficacy of a potential solution, and recommending courses of action based on the assessments generated from the previous activities. 

Generally, my skill set and current desires lead me to believe that I will find advancing the capabilities of machine learning systems, quantifying and defining problems afflicting humans, and synthesizing research literature to inform action all fulfilling, and that I will be effective in doing this work as well. 

It is my plan to attend graduate school for one of [machine learning, optimization, computer science] at some point in life (my estimate is around the age of 27-30), but I would first like to experiment with working at an EA affiliated organization (global priorities research) or in industry doing machine learning research. I am aware that it is difficult to get a decent research position without a Master's or PhD, but I believe it is still worth trying for. I have worked on research projects in computational neuroscience/chemistry for one company and three different professors at my school, but none of these projects turned into publications. This summer, I am at a research internship and am about to submit my research on ensemble learning for splice site prediction for review in the journal Bioinformatics - I am 70% confident that this work will get published, with me as the first author. Additionally, my advisor said he'd be willing to work with me to publish a dataset of 5000 images I've taken of various fossils from my collection. While this work is not in machine learning theory, it increases my capacity for being hired and is helping me refine my competence as a researcher / scientist.

I am planning on attending EA Global in October, but this is dependent upon how significant an issue COVID is at that point.

Comment by rodeo_flagellum on Research into people's willingness to change cause *areas*? · 2021-07-29T17:49:19.483Z · EA · GW

Thank you for your kind words. I will ping you midday-evening Eastern time on Friday if I see no reply. I am going to make a full post (probably by this evening), so please reply to that instead of in this comment thread, if possible. Hope you have a nice day.

Comment by rodeo_flagellum on Research into people's willingness to change cause *areas*? · 2021-07-29T17:47:20.499Z · EA · GW

After some more thought, I've decided that I am going to create a separate post for this. 

I was hesitant because (1) I wasn't sure whether something of this nature already existed, and I just hadn't seen it (there doesn't seem to be any work on the particular question of cause-switching) and (2) I wasn't sure how related the Rethink Priorities 2019 Cause-prioritization survey was to this idea, but it seems to me now that this line of work could be distinct enough to continue pursuing.

Given that the forum's reasoning and creative abilities can more easily be accessed by making a full post, I will go ahead and do so. The post will consist of a considerably expanded version of my previous comment. 

edit: I added Aaron Gertler / Davis_Kingsley to a draft; I am looking to publish the post by tomorrow morning.

Comment by rodeo_flagellum on Open Thread: July 2021 · 2021-07-24T23:24:48.041Z · EA · GW

Thank you for writing this: "I'm interested by how design and marketing can influence people into thinking something has authority, weight, believability." 

I am also interested in this, but it was not quite that clear in my mind until I read your phrasing of it. 

Beyond this, I think the mapping between the (structure & content of a question) and (the answers to the question that people give) is very interesting. Studying this idea could be useful if one wants to know how survey questions can be optimized to reduce the disparity between subjective evaluations of a behavior or action and the answerer's actual behavior. Or it could be useful if one wants to know which question phrasings or sequence of questions can most accurately evaluate how depressed or fulfilled a person is, across their mood distribution and across general human mood distributions.  

Comment by rodeo_flagellum on Research into people's willingness to change cause *areas*? · 2021-07-24T22:36:24.070Z · EA · GW

Thank you for posting this question. It has spurred me to consider taking some action; I am now interested in creating a survey on this topic, and then submitting it to my college.  

I am interested in advice on this idea in general (once each section has been read) and on each of the individual sections listed below (execution details, hanging variables, proposals for survey questions, narrowing / expanding the scope, redefining the objectives, etc...).

Should you find that this has the potential to be effective or informative, I am interested in receiving help and discussing the content/execution of the survey. Should you think this is a waste of effort, or should you have any criticisms, please notify me, I would greatly appreciate the feedback. 

Objective and Measurements

The objectives (in order) of this survey are to evaluate (1) which issues people believe are worth donating to, coming into the survey, (2) what people believe it would take for them to change what they believe is worth donating to, (3) how people evaluate EA affiliated organizations' cause-area selections, and (4) how people change their evaluations of which issues are worth donating to once they are exposed to EA affiliated organizations' measurements / reasoning of the importance of the cause-areas (e.g. scale, neglectedness, solvability ratings for a cause, or metrics attached to a problem area).

Questions by Objective

These are initial conceptualizations of questions for each of the objectives listed above. Some questions depend on previously asked questions, so there are multiple sequences of questions that can be used. There are many other ways this survey can be created, so please do not think that I am set on these questions, or even on this particular ordering or set of objectives. 

(1) which issues people believe are worth donating to, coming into the survey

  • What issues are worth donating money to?
  • Please list the top five issues you think are worth donating money to.
  • If you had money to donate, which issues would you donate to?
  • Please list the top five issues you think are worth donating money to. Specify why each is important to you personally and to humanity as a whole.
  • Please list the top five issues you think are worth donating money to. Specify why these are the top five issues humanity should focus on solving.
  • (follow up): Please list some other issues that are maybe slightly less pressing, but are still important.
  • You have $[quantity of funds] you must donate to some issue/s: What causes would you donate the money to? How much would you donate to each cause?

(2) what people believe it would take for them to change what they believe is worth donating to

  • What would it take for you to consider [iterate through EA affiliated organizations' cause-areas or a subset of cause-areas, where the subset could differ between surveys] to be in your (top five) or (selection) of causes worth donating to?
  • Why isn't [iterate through EA affiliated organizations' cause-areas or a subset of cause-areas, where the subset could differ between surveys] in your list of causes worth donating to?

(3) how people evaluate EA affiliated organizations' cause-area selections

[iterate through EA affiliated organizations' cause-areas or a subset of cause-areas, where the subset could differ between surveys]

  • Which of these causes shouldn't be considered causes? Why?
  • What would you change about this list? Why?
  • Please select the elements of this list that should be removed, and add the ones that you think are missing. Please provide some reasoning behind your decisions.

(4) how people change their evaluations of which issues are worth donating to once they are exposed to EA affiliated organizations' measurements / reasoning of the importance of the cause-areas

x = [iterate through EA affiliated organizations' cause-areas or a subset of cause-areas, where the subset could differ between surveys]

y = [a quantitative negative or positive outcome]

  • What if you knew working on [x] would/might result in [y]; does this change your cause-area selection?
  • Quantitative statement about the pressingness of [x]; would you now consider donating to this? 
  • Quantitative statement about the pressingness of [x];  do you think this is pressing enough to replace one of your original cause-areas to donate to?

Implementation

Given my circumstances, I am 75% confident that I will be able to submit a survey on this topic to at least two of the following departments [Economics, Neuroscience, Computer Science, Mathematics, Sociology, Anthropology, Philosophy] in my college. Additionally, I am somewhat confident that I will be able to convince EA groups at two other colleges in the area to host this survey in at least one department. Finally, there is a small chance that my non-EA affiliated friends will be able to host this survey and, should the survey be formulated well, a small chance that some members of this forum will take the survey or consider spreading it. 

Across all of these, I estimate that I can get at least 50 survey responses. 

Some Final Remarks

I can envision this survey beginning with a scenario where people divide 1,000,000 hypothetical dollars between cause-areas they personally find important. Then they'd choose how to reallocate funds in subsequent questions that pertain to different objectives (e.g. after reading about the reasoning EA affiliated organizations use to select cause-areas, they'd have the option to reallocate money to another cause). A situation where the participant reallocates hypothetical money to donate upon learning new information could be a short survey-game, similar to Explorable Explanations, but perhaps of a slightly shorter duration.
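As a minimal sketch of how the reallocation could be recorded and measured - the cause names and dollar figures below are made up for illustration, and the "dollars moved" metric is just one assumed way to quantify objective (4):

```python
# Record a participant's allocations before and after the EA-reasoning questions,
# then measure how many hypothetical dollars the new information moved.

def allocation_shift(before, after):
    """Total dollars reallocated between the two survey stages."""
    causes = set(before) | set(after)
    return sum(abs(after.get(c, 0) - before.get(c, 0)) for c in causes) / 2

before = {"local homelessness": 600_000, "cancer research": 400_000}
after = {"local homelessness": 400_000, "cancer research": 300_000, "malaria nets": 300_000}

print(f"${allocation_shift(before, after):,.0f} reallocated")  # -> $300,000 reallocated
```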

Thank you for taking a look at this; I will wait for some feedback before taking more serious steps to conduct this survey (I will still be thinking about question phrasing, objectives, and implementation details).