Thanks a lot, I feel like this post could prove to be really useful for me, especially by giving this pattern a nice handle. I very much relate to stressing myself about having impact with my research. This led to me feeling averse to trying to think about new "useful" research projects, which plausibly decreases my research productivity quite a bit.
Relatedly, I'm currently reading "Why Greatness Cannot Be Planned: The Myth of the Objective" by Ken Stanley and Joel Lehman, where they argue that, among other things, innovation and research are best achieved by aiming for what's interesting rather than for progress on a more concrete objective. I haven't yet formed an opinion on whether I should avoid having impact at the forefront of my day-to-day thinking about research, but I found it refreshing that I might just follow my interests and apply the impact filter much more sparingly.
A nice interview about the book can be found here: https://braininspired.co/podcast/86/

jason-schukraft on Differences in the Intensity of Valenced Experience across Species
There are lots of potential points of contact. The most obvious is that to determine an individual's possible intensity range of valenced experience, we have to think about the most intense (in the sense of most positive and most negative) experiences available to that individual. I don't have a view about how long-tailed the distribution of pleasures and pains is in humans, but I agree that it's a question worth investigating. And if there are differences in how long-tailed the distribution of valenced experiences is across species, that would entail differences in possible (though not necessarily characteristic) intensity range across species.
Happy to speak to something more specific if you have a particular question in mind.

jason-schukraft on Differences in the Intensity of Valenced Experience across Species
Thanks for the clarification, Brian!

asaf-ifergan on Excited altruism
Great article!

jason-schukraft on Differences in the Intensity of Valenced Experience across Species
It’s plausible to assign split-brain patients 2x moral weight because it’s plausible that split-brain patients contain two independent morally relevant seats of consciousness. (To be clear, I’m just claiming this is a plausible view; I’m not prepared to give an all-things-considered defense of the view.) I take it to be an empirical question how much of the corpus callosum needs to be severed to generate such a split. Exploring the answer to this empirical question might help us think about the phenomenal unity of creatures with less centralized brains than humans, such as cephalopods.

jason-schukraft on Differences in the Intensity of Valenced Experience across Species
This seems like a pretty good reason to reject a simple proportion account.
To be clear, I also reject the simple proportion account. For that matter, I reject any simple account. If there’s one thing I’ve learned from thinking about differences in the intensity of valenced experience, it’s that brains are really, really complicated and messy. Perhaps that’s the reason I’m less moved by the type of thought experiments you’ve been offering in this thread. Thought experiments, by their nature, abstract away a lot of detail. But because the neurological mechanisms that govern valenced experience are so complex and so poorly understood, it’s hardly ever clear to me which details can be safely ignored. Fortunately, our tools for studying the brain are improving every year. I’m tentatively confident that the next couple of decades will bring a fairly dramatic improvement in our neuroscientific understanding of conscious experience.

asaf-ifergan on Excitement, hope, and fulfillment
I fully agree with you on that, and in my humble experience, it's rare for people in EA to be interested in doing good purely from a cold, calculated point of view. Many of us probably had the will to do good much earlier in life, long before we found EA, and for us Effective Altruism is just the way in which we follow our long-standing passion to do good.
I also think we should make sure that people who stumble upon us don't get the impression that we're doing this for any reason other than passion. That can and does alienate a substantial number of people who discover EA, judging from my own anecdotal experience with friends and newer community members.
Highlighting content that talks about motivation and excitement, and presenting it to people who are new to EA, might help us to:
1. Prevent people from feeling disconnected from our mission.
2. Be more appealing to people who have a strong desire to do good but are not very analytical or are uncomfortable with the type of content we usually highlight. After we appeal to their emotions and establish common ground - we're all hopeful and excited to do good - then we can start talking about the HOW.
Just want to say I find this topic really exciting and think your report is a great contribution to the discussion.
I hope Founders Pledge takes the plunge and creates a long-term investment fund. I’m sure the experience of doing so will be valuable, and I think there will be significant informational value in having such a fund exist. I’m excited to donate to it!

nathan on Nathan Young's Shortform
It would be good to be able to easily export jobs from the EA job board.

nathan on Nathan Young's Shortform
I suggest that, at some stage, adding upvoting and downvoting of jobs would be useful.