Posts

Fine-Grained Karma Voting 2022-09-26T18:58:22.050Z
Why Wasting EA Money is Bad 2022-09-22T01:45:02.133Z
How To Actually Succeed 2022-09-12T22:33:27.895Z
How have nuclear winter models evolved? 2022-09-11T22:40:58.367Z
Is there a “What We Owe The Future” fellowship study guide? 2022-09-01T01:40:48.584Z
Is there any research or forecasts of how likely AI Alignment is going to be a hard vs. easy problem relative to capabilities? 2022-08-14T15:58:17.589Z
How I Came To Longtermism On My Own & An Outsider Perspective On EA Longtermism 2022-08-07T02:42:28.518Z
How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles? 2022-07-27T16:58:24.773Z
Is there an EA Discord Group? 2022-07-14T01:42:17.806Z
Is Our Universe A Newcomb’s Paradox Simulation? 2022-05-15T07:28:17.426Z
Abortion For Effective Altruists 2022-05-06T13:25:18.966Z
Help Me Choose A High Impact Career!!! 2022-05-06T06:14:38.961Z
Who are the leading advocates, and what are the top publications on broad longtermism? 2022-04-28T02:57:30.647Z
Which Post Idea Is Most Effective? 2022-04-25T04:47:52.284Z

Comments

Comment by Jordan Arel on Cause area: Short-sleeper genes · 2022-09-26T15:37:18.860Z · EA · GW

Thank you for writing this. This is an incredibly good idea for a cost-effective cause in my opinion. I need about 9 1/2 hours of sleep every night, and it is possibly the single greatest obstacle to achievement I face. I think for long-sleeper people like myself, further investigation could have an especially high upside if successful.

Comment by Jordan Arel on Orexin and the quest for more waking hours · 2022-09-26T15:28:07.612Z · EA · GW

Thanks Christian! This was a well-written initial foray. I need about 9 1/2 hours of sleep every night and it is possibly the single greatest obstacle to achievement I face. I think for long-sleeper people like myself this could have an especially high upside if effective. I will definitely look into it more.

Comment by Jordan Arel on 7 Learnings and a Detailed Description of an AI Safety Reading Group · 2022-09-26T14:51:02.293Z · EA · GW

Thanks for sharing these learnings and for running this group, Ninell! I would have been moderately less likely to start studying AIS on my own if it were not for this group, and I appreciate how thoughtful you were and how much work you put into this group and this post.

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-25T18:24:17.174Z · EA · GW

Very good point. Considering the last dollar spent or marginal dollar spent lowers these numbers by quite a lot, though I think even an order of magnitude still gives you quite high numbers.

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-25T18:07:04.961Z · EA · GW

Yes, I think it is a very difficult and perhaps necessarily uneasy balancing act, at least for those whose main or sole priority is to maximize impact. Minimum viable self-care is quite problematic, but it is not plausible that we can maximize impact without any sacrifice whatsoever either.

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-23T20:28:17.894Z · EA · GW

In the GiveWell article I quoted, they estimate an under-5-year-old life saved costs $7000 for 37 DALYs, which equals about $189 per life year saved. But if it were actually $4500 per life as you suggest, that would be closer to $121 per life year saved, or about 5 months of life for $50 instead of the 3 months I cited, but I would rather err on the conservative side.
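
A minimal back-of-envelope sketch of that arithmetic (the $7000 and $4500 cost-per-life figures and the 37-DALY estimate are the ones quoted above; the rest is just division and rounding):

```python
# Rough check of the cost-per-life-year figures discussed above.
# Inputs are taken from the comment: cost per under-5 life saved, and ~37 DALYs per life.
dalys_per_life = 37

for cost_per_life in (7000, 4500):
    cost_per_life_year = cost_per_life / dalys_per_life
    months_per_50_dollars = 50 / cost_per_life_year * 12
    print(f"${cost_per_life} per life -> ~${cost_per_life_year:.0f} per life year, "
          f"~{months_per_50_dollars:.0f} months of life per $50")

# $7000 per life -> ~$189 per life year, ~3 months of life per $50
# $4500 per life -> ~$122 per life year, ~5 months of life per $50
```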

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-23T03:41:16.947Z · EA · GW

Damn. Yeah, I guess I implicitly think like this a lot. I feel very torn between telling you it’s okay, don’t worry about it, each person has their own comfort level, versus: yeah, it’s real, those are real people, and we’re really sacrificing their lives for petty pleasures.

I think a few things that help me:

  1. Personally I feel I have much higher leverage with direct work rather than donations, so while money is a consideration, it isn’t as important as time and focus on what’s highest leverage. Also, with direct work you can sometimes get sharply increasing returns; an effective entrepreneur or content creator may be many orders of magnitude better than an unsuccessful one. This may or may not apply to you.

  2. I don’t feel the things I can spend money on are a primary determinant of my happiness. Most luxuries on the hedonic treadmill don’t actually make me significantly happier long-term; what makes me happy is doing healthy things like diet & exercise (which also improve my productivity), spending time with people I love, and most of all, living by my values and knowing I am doing my best to help those in need (and so being able to help them a massive amount is a positive).

  3. I don’t believe other people are fully separate or different from myself. In some profound and deep sense, helping them feels like helping myself: firstly, enlightened self-interest, where it feels good and makes me happier to help others; but in another sense, maybe we fundamentally are the same universal consciousness behind each mask of individuality, a position called “open individualism.” Basically, my consciousness is literally the same consciousness in each conscious being. Sorry if it sounds a little new-agey, but it really does help me not feel like I’m sacrificing so much; even if there’s only a small chance it’s true, since I have such absurd leverage, the selfish expected value could still be extremely high.

Hope this helps, let me know your thoughts!

Comment by Jordan Arel on Announcing “Effective Dropouts” · 2022-09-22T06:09:53.740Z · EA · GW

This was amazing. As a professional dropout, I would like to join your organization so that I can immediately quit it.

I have dropped out of college 3 1/2 times now, the 1/2 time was during Covid when I didn’t quite start the school year before dropping out and deciding to become a homeless vagabond.

I always wanted to become an Ivy League dropout; USC isn’t Ivy League but close enough, it’s way more expensive than most Ivy League schools at least, and now I feel much more confident in whatever I do next. Lots of great entrepreneurs were dropouts from fancy-pants schools. I think accomplishment is very closely correlated with dropping out, so just imagine how promising I must be, having dropped out such an unusually high number of times.

In all seriousness, I am extremely proud of having dropped out so many times. Each time it was the right decision, and I believe dropping out of things in general when there is a higher expected value option available shows a willingness to kick the sunk-cost fallacy in the pants and follow your dreams. Especially if those dreams include being a homeless Covid vagabond living in a tent in your friend’s backyard.

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-22T05:53:27.036Z · EA · GW

Wow. This is a really great concrete story of the benefit of signaling. Yeah, I find it so fascinating how Effective Altruism has evolved, and I really love all parts of it and think it is a very natural progression which somewhat mirrors my own. It is really unfortunate that not everyone sees this whole context, and I agree it is worth putting some effort into managing impressions, even if it is in a sense a sort of marketing “pacing” which gradually introduces more advanced concepts rather than throwing out some of the crazier-sounding bits of EA right off the bat.

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-22T04:11:14.338Z · EA · GW

Great point. I thought about going into more detail, but I originally intended this post to be a fraction of the length it already ended up being. That could be a great companion post, though: maybe a list of specific ways we could be frugal, and a detailed analysis of when it would make sense to spend money in order to save more valuable time and resources. And I appreciate those links, will definitely check them out!

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-22T04:07:19.655Z · EA · GW

I like this! It’s a very clean solution that saves a lot of time and hassle. Maybe the downside is that it takes away some autonomy and feels a little paternalistic and onerous to have a list of rules, but I think it could be simple enough, and is not an unreasonable ask, such that the benefits may outweigh the downsides.

Comment by Jordan Arel on Why Wasting EA Money is Bad · 2022-09-22T04:03:37.193Z · EA · GW

Hm, yeah, I thought about that, but was thinking that the way the grammar worked out, it wouldn’t really make sense to interpret it as the EA Funds project. But after getting this feedback I think there is a low enough cost that it makes sense to change the name, so I did!

Comment by Jordan Arel on How have nuclear winter models evolved? · 2022-09-14T15:16:12.322Z · EA · GW

Ah yes, I think this was what was referred to in the book. Thank you!

Comment by Jordan Arel on Is there a “What We Owe The Future” fellowship study guide? · 2022-09-02T18:30:51.136Z · EA · GW

This is awesome, thanks Aris!!

Comment by Jordan Arel on How and when should we incentivize people to leave EA bubbles and explore? · 2022-08-22T18:15:58.045Z · EA · GW

I agree this seems like a huge problem. I have noticed that even though I am extremely committed to longtermism and have been for many years, the fact that I am skeptical of AI risk seems to significantly decrease my status in longtermist circles; there is significant insularity and resistance to criticism of EA gospel, and little support for those seeking to offer it.

Comment by Jordan Arel on How I Came To Longtermism On My Own & An Outsider Perspective On EA Longtermism · 2022-08-07T16:23:34.259Z · EA · GW

Thank you!

Yes, I agree teaching meditation in schools could be a very good idea; I think the tools are very powerful. Apparently Robert Wright, who wrote the excellent book “Why Buddhism Is True” among other books, has started a project called the Apocalypse Aversion Project, which he discussed with Rob Wiblin on an episode of the 80,000 Hours podcast. One of the main ideas is that if we systematically encourage mindfulness practice, we could broadly reduce existential risk.

I think you’re right, EA can be a bit inscrutable, and there are definitely some benefits to appealing to a wider popular audience, though there may also be downsides to not focusing on the EA audience.

Comment by Jordan Arel on How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles? · 2022-08-04T01:22:28.093Z · EA · GW

Thank you, this is very much what I was looking for.

Comment by Jordan Arel on How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles? · 2022-08-04T01:17:20.105Z · EA · GW

Thanks! Already taking this course.

Comment by Jordan Arel on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-07-11T05:25:05.375Z · EA · GW

Thank you Dony, Denis, and Matt! I really enjoyed reading this post, and am excited about the idea. Looking forward to seeing what posts are submitted!

Comment by Jordan Arel on Impact markets may incentivize predictably net-negative projects · 2022-06-28T21:11:36.529Z · EA · GW

I think that retroactive Impact Markets may be a net negative for many x-risk projects, however, I also think that in general Impact Markets may significantly reduce x-risk.

I think you have to bear in mind that if this project is highly successful, it has the potential to create a revolution in the funding of public goods. If humanity achieves much better funding and incentive mechanisms for public goods, this could create a massive increase in the efficiency of philanthropy.

It is hard to know how beneficial such a system would be, but it is not hard to see how multiplying the effectiveness of philanthropy and public goods provision could make society function much better by improving education, coordination, mental health, moral development, health, etc., and increases in these public goods could broadly improve humanity’s ability to confront many x-risks.

I think it may make sense for Impact Markets to find ways of limiting or excluding x-risk projects, but I think abandoning Impact Markets altogether would be a mistake, and considering their massive upsides I cannot say I agree that they are net negative in expectation, even without excluding x-risk projects.

Comment by Jordan Arel on Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter · 2022-05-17T01:35:02.885Z · EA · GW

I love this post! Solarpunk was my first intuition as well, I think there is a lot of good evidence that green and natural environments support happiness and productivity, and so I don’t think it is actually out of alignment with utilitarianism or EA at all.

I have a theory of reality which makes aesthetics the fundamental force of the universe. To illustrate: if effective altruism is successful in colonizing space and ends up determining the shape of the future of the universe, then this “shape” will be whatever aesthetic shape we have determined creates maximum utility.

I think aesthetics is a much better fundamental for utilitarianism than pleasure, which intuitively seems quite base and basic. Therefore, I agree that aesthetics are exceedingly important in figuring out what future we want to create.

Comment by Jordan Arel on Is Our Universe A Newcomb’s Paradox Simulation? · 2022-05-15T19:10:23.529Z · EA · GW

Thank you! Yes, I’m pretty new here, and now that you say that I think you’re right, anthropics makes more sense.

I am inclined to think the main thing required to be an observer would be enough intelligence to ask whether one is likely to be the entity one is by pure chance, and this doesn’t necessarily require consciousness, just the ability to factor the likelihood that one is in a simulation into one’s decision calculus.

I had not thought about the possibility that future beings are mostly conscious, but very few are intelligent enough to ask the question. This is definitely a possibility. Though if the vast majority of future beings are unintelligent, you might expect there to be far fewer simulations of intelligent beings like ourselves, somewhat cancelling this possibility out.

So yeah, since I think most future beings (or at least a very large number) will most likely be intelligent, I think the selection effects do likely apply.

Comment by Jordan Arel on Is Our Universe A Newcomb’s Paradox Simulation? · 2022-05-15T18:48:57.746Z · EA · GW

Thank you for this reply!

Yes, the resolution of other moral patients is something I left out. I appreciate you pointing this out because I think it is important. I was maybe assuming something like: longtermists are simulated accurately, and everything else has much lower resolution, such as only being philosophical zombies, though as I articulate this I’m not sure that would work. We would have to know more about the physics of the simulation, though we could probably make some good guesses.

And yes, it becomes much stronger if I am the only being in the universe, simulated or otherwise. There are some other reasons I sometimes think the case for solipsism is very strong, but I never bother to argue for them, because if I’m right then there’s no one else to hear what I’m saying anyways! Plus the problem with solipsism is that to some degree everyone must evaluate it for themselves, since the case for it may vary quite a bit for different individuals depending on who in the universe you find yourself as.

Perhaps you are right about AI creating simulations. I’m not sure they would be as likely to create as many, but they may still create a lot. This is something I would have to think about more.

I think the argument with aliens is that perhaps there is a very strong filter such that any set of beings who evaluate the decision will come to the conclusion that they are in a simulation, and so anything that has the level of intelligence required to become spacefaring would also be intelligent enough to realize it is probably in a simulation, and so it’s not worth it. Perhaps this could even apply to AI.

It is, I admit, quite an extreme statement that no set of beings would ever come to the conclusion that they might not be in a simulation, or would not pursue longtermism on the off-chance that they are not in a simulation. But on the other hand, it would be equally extreme not to allow the possibility that we are in a simulation to affect our decision calculus at all, since it does seem quite possible - though perhaps the expected value of the simulation is too small to have much of an effect, except in the case where the universe is tiled with meaning-maximizing hedonium of the most important time in history and we are it.

I really appreciate your comment on CDT and EDT as well. I felt like they might give the same answer, even though it also “feels” somewhat similar to Newcomb’s Paradox. I think I will have to study decision theory quite a bit more to really get a handle on this.

Comment by Jordan Arel on Help Me Choose A High Impact Career!!! · 2022-05-06T15:54:46.883Z · EA · GW

Thank you so much Michelle, this reflection is really useful. It feels like a reflection of what I already know, and yet having it reflected back from the outside is very helpful; it makes it feel more real and clear somehow. Much appreciated!!

Comment by Jordan Arel on Help Me Choose A High Impact Career!!! · 2022-05-06T15:52:19.617Z · EA · GW

Thanks, I don’t think I fully appreciated the importance of that. Just updated it above, and will share that version with others!

Comment by Jordan Arel on An easy win for hard decisions. · 2022-05-06T06:33:32.288Z · EA · GW

Thank you so much for this!! Incredibly helpful, and it inspired This Post - feedback appreciated!!

Comment by Jordan Arel on Which Post Idea Is Most Effective? · 2022-04-27T10:18:33.852Z · EA · GW

Dang, yeah, I did a quick search on creatine and the IQ number right before writing this post, but now it’s looking like that source was not credible. I would have to research more to see if I can find an accurate, reliable measure of creatine’s cognitive benefits; it seems it at least has a significant impact on memory. Anecdotally, I noticed quite a difference when I took a number of supplements while vegan, and I know there’s some research on how various nutrients which vegans tend to lack relate to cognitive function. Will do a short post on it sometime!

I think human alignment is incredibly difficult, but too important to ignore. I have thought about it for a very long time, so I do have some very ambitious ideas that could feasibly start small and scale up.

Yes! I have been very surprised since joining by how narrowly longtermism is focused. I think if the community is right about AGI arriving within a few decades with a fast takeoff, then broad longtermism may be less appealing, but if there is any doubt about this, then we are massively underinvested in broad longtermism and putting all our eggs in one basket, so to speak. Will definitely write more about this!

Right, definitely wouldn’t be exactly analogous to GiveWell, but I think nonetheless it is important to have SOME way of comparing all the longtermist projects to know what a good investment looks like.

Thanks again for all the feedback Aman! Really appreciate it (and everything else you do for the USC group!!) and really excited to write more on some of these topics :)

Comment by Jordan Arel on Which Post Idea Is Most Effective? · 2022-04-27T10:03:42.498Z · EA · GW

Yes! I think the main threats are hard to predict, but mostly involve terrorism with advanced technology, for example weaponized black holes, intentional grey goo, super-coordinated nuclear attacks, and probably many, many other hyper-advanced technologies we can’t even conceive of yet. I think if technology continues to accelerate it could get pretty bad pretty fast, and even if we’re wrong about AI somehow, human malevolence will be a massive challenge.

Comment by Jordan Arel on Which Post Idea Is Most Effective? · 2022-04-27T09:59:07.221Z · EA · GW

Thanks William! This feedback is super valuable. Yes I think the massive scalable community building project would be novel and it actually ties in with the donor contest as well. Glad to know this would be useful! And good thought, I think writing about my own story will be easiest as well. And I will definitely write about broad longtermism, it is one of my main areas of interest.

Comment by Jordan Arel on This innovative finance concept might go a long way to solving the world's biggest problems · 2022-04-13T21:26:58.697Z · EA · GW

Thanks for writing up this idea! I think the risk management aspect of ESG is important, and this could definitely be a step in the right direction.

My main concern is that I am not sure how likely it is that there is a clear path to getting investors to adopt Universal Ownership; it is not something I had heard of before. It seems to me the amount of risk reduction in a single investor’s portfolio, caused by their individual marginal divestment from (or shareholder activism toward) a company with negative externalities, would be quite small, so it would really only work if at least a majority of investors adopted a Universal Ownership model. Are there many investors who are adopting this or taking it seriously already?

Also, to get a truly accurate pricing of externalities that maximizes public/social good, each investor would ideally model and internalize the effects of externalities on ALL of society, not just their own portfolio; considering only their own portfolio would incentivize them to account for just a small fraction of the actual value investors and companies could provide to society. I realize this would be an even bigger ask of investors, but my hope is that there is an alternative social stock market or public goods market that systemically, financially rewards positive externalities and taxes negative externalities by design.

That said, I could be wrong, and I would definitely be excited to see something like this gain more traction, as it would be much better than what we currently have. I think it is possible something like this could gradually become more popular, especially if better accounting for risk gave investors at least a small but reliable increase in value that outweighs the costs of modeling and is not countered by displacement effects.

Comment by Jordan Arel on Open Thread: Spring 2022 · 2022-04-08T05:15:08.062Z · EA · GW

Hey everyone! Just joined EA a few months ago and was very fortunate to attend EAGx Boston recently! I could not be more excited about discovering this community!!!

I’m doing two fellowships and working on a marketing project team in my university EA USC group.

I feel very strongly about utilitarianism, am interested in physics, and as a result came to longtermism several years ago on my own. I actually wrote a book called “Ways to Save The World,” essentially about innovative broad strategies to sustainably, systemically reduce existential risk. Really excited to share it with the EA community and have my ideas challenged and improved by fellow highly intelligent, rational do-gooders!

Comment by Jordan Arel on Questions That Lead to Impactful Conversations · 2022-03-30T07:49:37.360Z · EA · GW

This was awesome!

Here’s a question I love, Peter Thiel’s favorite interview question for job candidates:

“What important truth do very few people agree with you on?”