Posts

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" 2021-01-05T02:18:27.901Z
80,000 Hours user survey closes this Sunday 2020-09-08T17:37:20.525Z
Some promising career ideas beyond 80,000 Hours' priority paths 2020-06-26T10:34:11.912Z
Problem areas beyond 80,000 Hours' current priorities 2020-06-22T12:49:48.166Z
Essential facts and figures -- COVID-19 2020-04-20T18:33:50.565Z
Thoughts on 80,000 Hours’ research that might help with job-search frustrations 2019-04-16T18:51:04.319Z

Comments

Comment by ardenlk on What is going on in the world? · 2021-01-19T02:40:25.936Z · EA · GW

Maybe: the smartest species the planet and maybe the universe has produced is in the early stages of realising it's responsible for making things go well for everyone.

Comment by ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-13T17:51:01.131Z · EA · GW

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.

Comment by ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-13T02:50:33.134Z · EA · GW

Well hello thanks for commenting, and for the paper!

Seems right that you'll get the same objection if you adopt cross-world identity. Is that a popular alternative for person-affecting views? I don't actually know a lot about the literature. I figured the most salient alternative was to not match the people up across worlds at all, which is why people say that e.g. it's not good for a(3) that W1 was brought about.

Comment by ardenlk on What does it mean to become an expert in AI Hardware? · 2021-01-12T03:34:18.557Z · EA · GW

So cool to see such a thoughtful and clear writeup of your investigation! Also nice for me, since I was involved in creating them, to see that 80k's post and podcast seemed to be helpful.

I think [advising on hardware] would involve working at one of the industries like those listed above and maintaining involvement in the EA community.

What I know about this topic is mostly exhausted by the resources you've seen, but for what it's worth I think this could also be directed at making sure that AI companies that are really heavily prioritising safety are able to meet their hardware needs. In other words, depending on the companies it could make sense to advise industry in addition to the EA community.

University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. Industry: See section 4 for a list of possible companies to work at.

For these two career ideas I'd just add -- what is implicit here I think but maybe worth making explicit -- that it'd be important to be highly selective and pretty canny about what research topics/companies you work with in order to specifically help AI be safer and more beneficial.

These experiences will probably update my thoughts on my career significantly.

Seems right - and if you were to write an update at that point I'd be interested to read it!

Comment by ardenlk on Literature Review: Why Do People Give Money To Charity? · 2021-01-11T20:14:39.241Z · EA · GW

Thanks!

Comment by ardenlk on Literature Review: Why Do People Give Money To Charity? · 2021-01-09T16:46:49.834Z · EA · GW

Hey Aaron, I know this is from a while ago and your head probably isn't in it, but I'm curious if you have any intuitions on whether analogues of the successful techniques you list do/don't apply to making career changes or other actions besides giving to charity.

Also really appreciating the forum tags lately -- really nice to be able to search by topic!

Comment by ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T23:56:28.304Z · EA · GW

Yeah, I mean you're probably right, though I have a bit more hope in the 'does this thing spit out the conclusions I independently think are right' methodology than you do. Partly that's because I think some of the intuitions that are, jointly, impossible to satisfy a la impossibility theorems are more important than others -- so I'm ok trying to hang on to a few of them at the expense of others. Partly it's because I feel unsure of how else to proceed -- that's part of why I got out of the game!

I also think there's something attractive in the idea that what moral theories are are webs of implications, and the things to hold on to are the things you're most sure are right for whatever reason, and that might be the implications rather than the underlying rationales. I think whether that's right might depend on your metaethics -- if you think the moral truth is determined by your moral commitments, then being very committed to a set of outputs could make it the case that the theories that imply them are true. I don't really think that's right as a matter of metaethics, though I'm not sure.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2021-01-05T23:44:53.394Z · EA · GW

Hey, thanks for this comment -- I think you're right there's a plausibly more high-impact thing that could be described as 'research management' which is more about setting strategic directions for research. I'll clarify that in the writeup!

Comment by ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T23:29:37.856Z · EA · GW

You're right that radical implications are par for the course in population ethics, and that this isn't that surprising. However, I guess this is even more radical than was obvious to me from the spirit of the theory, since the premature deaths of the presently existing people can be so easily outweighed. I also agree, although a bit begrudgingly in this case, that "I strongly dislike the implications!" isn't a valid argument against something.

I did also think the counterpart relations were fishy, and I like your explanation as to why! The de dicto/de re distinction isn't something I'd thought about in this context.

Comment by ardenlk on Can I have impact if I’m average? · 2021-01-04T21:54:31.068Z · EA · GW

Thanks for posting this -- I think this might be a pretty big issue and I'm glad you've had success helping reduce this misconception by talking to people!

As for explanations as to why it is happening, I wonder if in addition to what you said, it could be that because EA emphasises comparing impact between different interventions/careers etc. so heavily, people just get in a really compare-y mindset, and end up accidentally thinking that comparing well to other interventions is itself what matters, instead of just having more impact. I think improved messaging could help.

Comment by ardenlk on Kelsey Piper on "The Life You Can Save" · 2021-01-04T21:30:34.530Z · EA · GW

Thanks Aaron, I wouldn't read this if you hadn't posted it, and I think it contains good lessons on messaging.

Comment by ardenlk on Careers Questions Open Thread · 2020-12-13T00:12:11.134Z · EA · GW

Hi Anafromthesouth,

This is just an idea, but I wonder if you could use your data science and statistics skills to help nonprofits or foundations working on important issues (including outside the EA community) better evaluate their impact or otherwise make more informed choices. (If those skills need sharpening, taking courses seems sensible.) From the name it sounds like this could dovetail with your work in your master's, but I don't actually know anything about that kind of programme.

I guess it sounds to me like going back to academic stuff isn't what you want to do, and it would probably be a bit tough with the 5 year publication gap (though I don't know if that's as much of a thing in neuroscience as in other disciplines), and doesn't work as well with your master's -- so if it were me I think I'd try to double down on the stats and data science stuff.

Comment by ardenlk on Careers Questions Open Thread · 2020-12-13T00:03:58.367Z · EA · GW

I agree with what the others below have written, but wanted to just add:

If you aim for entrepreneurship, which it sounds like you might want to, I think it makes sense to stay open to the possibility that, in addition to building companies, that could also mean things like running big projects within existing companies, starting a nonprofit, running a big project in a nonprofit, or even running a project in a government agency if you can find one with enough flexibility.

Comment by ardenlk on Where are you donating in 2020 and why? · 2020-12-12T20:50:19.520Z · EA · GW

Yes, I do think they had room for more funding, but could be wrong. My view was based on (1) a recommendation from someone whose judgement on these things I think is informed and probably better than most people's, including mine, who thought the Biden Victory Fund was the highest impact thing to donate to this year, (2) an intuition that the DNC/etc. wouldn't put so much work into fundraising if more money didn't benefit their chances of success, and (3) the way the Biden Victory Fund in particular structured the funds it received: it said it would distribute them among the Biden campaign, the DNC, and state parties (in that order of priority), and spelled out how more precisely, except that it would change the distribution if it would otherwise result in "excessive" amounts going to certain orgs.

Comment by ardenlk on What are you grateful for? · 2020-11-28T05:34:14.520Z · EA · GW

I'm grateful for all the people in the EA community who write digests, newsletters, updates, highlights, research summaries, abstracts, and other vehicles that help me keep abreast of all the various developments.

I'm also grateful for there being so much buzzing activity in EA that such vehicles are so useful/essential!

Comment by ardenlk on Where are you donating in 2020 and why? · 2020-11-26T20:28:56.271Z · EA · GW

I am not that confident this was the right decision (and will be curious about people's views, though I can't do anything about it now), but I already gave most of 10% of my income this year (as per my GWWC pledge) to the 'Biden Victory Fund.' (The rest went to the Meta Fund earlier in the year.) I know Biden's campaign was the opposite of neglected, but I thought the importance and urgency of replacing Trump as the US president swamped that consideration in the end (I think having Republicans in the White House, and especially Trump, is very bad for the provision of global public and coordination-reliant goods). I expect to go back to giving to non-political causes next year.

I am still considering giving to the Georgia senate race with some of my budget for next year, because it seems so high 'leverage' on US electoral reform, which would (I think) make it easier for Democrats to get elected in the future and (I hope) make the US's democracy function better long-term. For example, there's an electoral reform bill that seems much more likely to pass if Democrats control the senate.

The quality of these choices depends on substantive judgements that in US politics Democrats make better choices for the world than Republicans, and that continued US global leadership would be better than the alternative with regard to things like climate change, AI, and biorisks. I think both of these things are true, but could be wrong!

Comment by ardenlk on What actually is the argument for effective altruism? · 2020-10-07T11:12:44.941Z · EA · GW

I think adding a maximizing premise like the one you mention could work to assuage these worries.

Comment by ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-25T19:18:30.540Z · EA · GW

Thanks this is super helpful -- context is I wanted to get a rough sense of how doable this level of "getting up to speed" is for people.

Comment by ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T13:30:27.484Z · EA · GW

Hey Michael, thanks for detailing this. Do you have a sense of how long this process took you approximately?

Comment by ardenlk on 80,000 Hours user survey closes this Sunday · 2020-09-12T14:35:32.540Z · EA · GW

Thanks for filling out the survey and for the kind words!

Comment by ardenlk on Asking for advice · 2020-09-05T20:18:15.943Z · EA · GW

I wonder whether other people also like having deadlines attached to requests for their feedback, or specific dates suggested for meetings? Sometimes I prefer to have someone ask for feedback within a week rather than within 6 months (or as soon as is convenient), because it forces me to get it off my to-do list. Though it's best of both worlds if they also indicate that it's ok if I can't do it in that time.

Comment by ardenlk on Improving disaster shelters to increase the chances of recovery from a global catastrophe · 2020-09-04T17:13:34.274Z · EA · GW

Cheers!

Comment by ardenlk on EA reading list: Scott Alexander · 2020-08-09T07:48:33.635Z · EA · GW

Thanks! This post caused me to read 'beware systemic change', which I hadn't before and am glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few different distinctions with the titular one, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.

These are related but different. I think the thing that actually produces the danger Scott is worried about is (3). (Of course you could worry that movement on (2) will turn EA into an ineffectual, wishy-washy movement, but that doesn't seem to be as much Scott's concern.)

I asked myself: to what extent has EA (as it promised to in 2015) moved toward systemic change? Toward change that's not necessarily easy to measure? Toward controversial change?

80K's top priority problem areas (causes) are:

  • AI safety (split into technical safety and policy)
  • Biorisk
  • Building EA
  • Global priorities research
  • Improving institutional decision-making
  • Preventing extreme climate change
  • Preventing nuclear war

These are all longtermist causes. Then there's the other two very popular EA causes:

  • ending factory farming
  • global health

Of the issues on this list, only the AI policy bit of AI safety and building EA seem to be particularly controversial change. I say AI policy is controversial because, as practiced by EA, it favors the US over China, and presumably people in China would think that's bad; building EA seems controversial because some people think EA is deeply confused/bad (though it's not as controversial as the stuff Scott mentions in the post, I think). But 'building EA' was always a cause within EA, so only the investment in AI policy represents a move toward the controversial since Scott's post.

(Though maybe I'm underestimating the controversialness of things like ending factory farming -- obviously some people think that'd be positively bad...but I guess I'd guess that's more often of the 'this isn't the best use of resources' variety of positive badness.)

Of the problems listed above, only ending factory farming and improving global health are particularly measurable. So it does seem like we've moved toward the less-easily-measured (with the popularization of longtermism probably).

Are any of the above 'systemic'? Maybe Scott associated this concept with the left halves of distinctions (2) and (3) because it's harder to tell what's systemic vs. not. But I guess I'd say again the AI policy half of AI safety, building EA, and improving institutional decisionmaking are systemic issues. (Though maybe systemic interventions will be needed to address some of the others, e.g., nuclear security.)

So it's kind of interesting that even though EA promised to care about systemic issues, it mostly didn't expand into them, and only really expanded into the less easily measurable. Hopefully Scott would also be heartened that the only substantial expansion into the realm of the controversial seems to be AI policy.

If that's right as a picture of EA, why would that be? Maybe because although EA has tried to tackle a wider range of kinds of issues, it's still pretty mainstream within EA that working on politically controversial causes is not particularly fruitful. Maybe because people are just better than Scott seems to think they are at taking into account the possibility of being on the wrong side of stuff when directly estimating the EV of working on causes, which has resulted in shying away from controversial issues.

In part 2 of Scott's post there's the idea that if we pursue systemic change we might turn into something like the Brookings Institution, and that that would be bad because we'd lose our special moral message. I feel a little unsure of what the special moral message is that Scott is referring to in the post that is necessarily different between Brookings-EA and bednet-EA, but I think it has something to do with stopping periodically and saying "Wait, are we getting distracted? Do we really think that this thing is the most good we can do with $2,000 when we could with high confidence save someone's life if we gave it to AMF instead?" At least, that's the version of the special moral message that I agree is really important and distinctive.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:49.565Z · EA · GW

Great! Linked.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:21.949Z · EA · GW

Just to let you know I've revised the blurb in light of this. Thanks again!

Comment by ardenlk on Some history topics it might be very valuable to investigate · 2020-07-08T12:46:45.520Z · EA · GW

We also had this choice with our other problems and other paths posts, and decided against the listicle style, basically for the reasons you say. I think there is a nascent/weak norm, and think it makes sense to uphold it. The main argument against is that it's actually kind of helpful to know if something is a long list or a short list -- esp if I have a small bit of time and won't want to start something long.

Comment by ardenlk on Some history topics it might be very valuable to investigate · 2020-07-08T12:41:23.976Z · EA · GW

Thank you for writing this up!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:14:16.804Z · EA · GW

Hey Michael,

Thanks (as often) for this list! I'm wondering, might you be up for putting it into a slightly more formal standalone post or google doc that we could potentially link to from the blurb?

Really love how you're collecting resources on so many different important topics!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:12:41.976Z · EA · GW

Thanks for these points! Very encouraging that you can do this work from such a variety of disciplines. I'll revise the blurb in light of this.

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:11:23.972Z · EA · GW

Interesting! I think this might fall under global priorities research, which we have as a 'priority path' -- but it's not really talked about in our profile on that, and I agree it seems like it could be a good strategy. I'll take a look at the priority path and consider adding something about it. Thanks!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:08:11.575Z · EA · GW

Thanks so much Rohin for this explanation. It sounds somewhat persuasive to me, but I don't feel in a position to have a good judgement on the matter. I'll pass this on to our AI specialists to see what they think!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:07:36.077Z · EA · GW

Thanks Max -- I'll pass this on!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T14:55:54.813Z · EA · GW

Hi Brian,

In general, we have a heuristic according to which issues that primarily affect people in countries like the US are less likely to be high impact for more people to focus on at the margin than issues that primarily affect others or affect all people equally. While criminal justice does affect people in other countries as well, it seems like most of the most promising interventions are country-, and especially US-, specific -- including the interventions Open Phil recommends, like those discussed here and here. The main reason for this heuristic is that these issues are likely to be less neglected (even if they're still neglected relative to how much attention they should receive in general), and likely to affect a smaller number of people. Does that make sense?

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T14:48:23.122Z · EA · GW

Hi Tobias, we've added "governance of outer space" on your recommendation. Thanks!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T15:18:32.948Z · EA · GW

Hi Rohin,

Thanks for this comment. I don't know a lot about this area, so I'm not confident here. But I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

I guess I think this first because bugs seem capable of being big deals in this context (maybe I'm wrong there?), and because it seems like there could be some instances where it's more feasible to use proof assistants than math to prove that a system has a property.

Curious to hear if/why you disagree!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T11:39:00.129Z · EA · GW

Thanks for this feedback (and for the links)!

Comment by ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T11:35:19.368Z · EA · GW

Hm - interesting suggestion! The basic case here seems pretty compelling to me. One question I don't know the answer to is how predictable countries' trajectories are -- like how much would a naive extrapolation have predicted the current balance of power 50 years ago? If very unpredictable it might not be worth it in terms of EV to bet on the extrapolation.

But I feel more intuitively excited about trying to foster home-grown EA communities in a range of such countries, since many of the people working on it would probably have reasons to be in and focus on those countries anyway because they're from there.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-28T10:23:33.239Z · EA · GW

Thanks! I'm seeing that I sometimes only used links that worked on the 80k site. Fixing the issue now.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-28T10:20:02.794Z · EA · GW

Hi Will,

To be honest, I'm not that confident in wild animal welfare being on the 'other longtermist' list rather than the 'other global' list -- we had some internal discussion on the matter and opinions differed.

Basically it's on 'other longtermist' because the case for it contributing to spreading positive values seems stronger to me than in the case of the other global problems. In some sense working on any issue spreads positive values, but wild animal welfare is sufficiently 'weird' that its success as a cause area seems more likely to disrupt people's intuitive views than successes of other areas, which might be particularly useful for spreading positive values/moral philosophy progress. In particular, the rejection of "natural = good" seems like it could be a unique and useful contribution. I also find the analogy between wild animals and other forms of consciousness that we might find ourselves influencing (alien life? artificial consciousnesses?) somewhat compelling, such that getting our heads straight on wild animal welfare might help prepare us for that.

Comment by ardenlk on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-06-25T08:02:20.110Z · EA · GW

Thank you for pointing out ea.greaterwrong.org! I've had the problem of not being able to wayback forum posts before.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:11:01.844Z · EA · GW

Hey jackmalde, interesting idea -- though I think I'd lean against writing it. I guess the main reason is something like: there are quite a few issues to explore on the above list, so if someone is searching around for something (rather than having something in mind already), they might be able to find an idea there. I guess despite what I said to Michael above, I do want people to see it as some positive signal if something's on the list. Having a list of things not on the list would probably not add a lot, because the reasons would just be pretty weak things like "a brief investigation + asking around didn't make this seem compelling according to our assumptions". Insofar as someone was already thinking of working on something and they saw that, they probably wouldn't take it as much reason to change course. Does that make sense?

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:01:02.854Z · EA · GW

Thanks! Helpful pointers.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:00:11.241Z · EA · GW

Hey atlasunshrugged,

I'm afraid I don't know the answers to your specific questions. I agree that there are things worse than great power conflict, and perhaps China becoming the dominant world power could be one of those things. FWIW, although war between the US and China does seem like one of the more worrying scenarios at the moment, I meant the problem description to be broader than that and to include any great power war.

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T10:00:34.453Z · EA · GW

Hey Michael,

Glad you've found it helpful, and thanks for these resource lists! I'm adding them to our internal list of resources. Anything you've read from them you think it'd be particularly good to add to the above blurbs?

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T09:45:50.244Z · EA · GW

This is great, thanks!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T09:44:59.967Z · EA · GW

Thanks Pablo -- I agree we should discuss risks to EA more. It seems like it should be a natural part of 'building effective altruism' to me. I wonder why we don't discuss it more in that area. Maybe people are afraid it will seem self-indulgent?

I think I'd worry about how to frame it in 80k content because our stuff is very outward-facing and people who aren't already part of the community might not respond well to it. But that's less of an issue with forum posts, etc.

I'd also guess most people's estimates for EA going away or becoming much less valuable in the next 10 years are lower than yours. Want to expand a bit on why you think it's as high as you do?

Thanks for bringing this up and also for the list of places this has been discussed!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T09:27:31.854Z · EA · GW

Fixed, thanks!

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T21:05:01.653Z · EA · GW

Hey Michael -- there isn't such a list, though we did consider and decide not to include a number of problems in the process of putting this together. I definitely think that "X and Y are on the list so Z, which wasn't mentioned explicitly, is also likely a good area" would be a bad inference! But there are also probably lots of issues that we didn't even consider, so something not being on the list is probably at best a weak negative signal. [Edit: I shouldn't have said "at best" -- it's a weak negative signal.]

Comment by ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T19:12:19.924Z · EA · GW

fixed, thanks!

Comment by ardenlk on How hot will it get? · 2020-04-29T14:38:56.342Z · EA · GW

Thanks, this is helpful!