Posts

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" 2021-01-05T02:18:27.901Z
80,000 Hours user survey closes this Sunday 2020-09-08T17:37:20.525Z
Some promising career ideas beyond 80,000 Hours' priority paths 2020-06-26T10:34:11.912Z
Problem areas beyond 80,000 Hours' current priorities 2020-06-22T12:49:48.166Z
Essential facts and figures -- COVID-19 2020-04-20T18:33:50.565Z
Thoughts on 80,000 Hours’ research that might help with job-search frustrations 2019-04-16T18:51:04.319Z

Comments

Comment by Ardenlk on Disentangling "Improving Institutional Decision-Making" · 2021-09-26T16:00:25.948Z · EA · GW

Nice post : )

I mostly agree with your points, though I'm a bit more optimistic than you seem to be about untargeted, value-neutral IIDM having a positive impact.

Your skepticism about this seems to be expressed here:

And yet, it seems possible that there are some institutions that cause an overwhelming amount of harm (e.g. the farming industry or some x-risk-increasing endeavors like gain-of-function research), and that the value-neutral version of IIDM fails to take that into account.

I think this is true, but it still seems like the aims of institutions are pro-social as a general matter -- in your examples, x-risk and animal suffering are side effects rather than means to the institutions' ends, which are 'increase biosecurity' and 'make money'. If improving decision-making helps orgs get at their ends more efficiently, then we should expect them to have fewer bad side effects if they have better decision-making. Also, orgs' aims (e.g. "make money") will generally presuppose the firm's, and therefore humanity's, survival, so it still seems good to me as a general matter for orgs to be able to pursue their aims more effectively.

Comment by Ardenlk on All Possible Views About Humanity's Future Are Wild · 2021-07-16T16:17:05.222Z · EA · GW

Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential.

Basically you're saying that if we already know things are pretty wild (in Buck's version: that we're early humans), it's a much less fishy step from there to very wild ('we're at HoH') than it would be if we didn't know things were pretty wild already.

Comment by Ardenlk on All Possible Views About Humanity's Future Are Wild · 2021-07-13T21:33:51.947Z · EA · GW

This is fantastic.

This doesn't take away from your main point, but it would be some definite amount less wild if we don't start exploring space for 100k years, right? Depending on how much less wild that would be, I could imagine it being enough to convince someone of a conservative view.

Comment by Ardenlk on [3-hour podcast] Michael Huemer on epistemology, metaethics, EA, utilitarianism and infinite ethics · 2021-03-28T08:36:08.250Z · EA · GW

Thanks for posting this - I actually haven't listened to this ep yet, but I just listened to the science of pleasure episode and thought it was fantastic, and wouldn't have found it without this post. My only wish was that you'd asked him to say specifically what he meant by 'conscious'. I'll def listen to other episodes now.

Comment by Ardenlk on Some quick notes on "effective altruism" · 2021-03-25T09:28:28.549Z · EA · GW

I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.

However, I want to defend it a little:

  • It seems like it's been pretty successful? EA has grown a lot under the term, including attracting some great people, and despite having some very controversial ideas hasn't faced that big of a backlash yet. Hard to know what the counterfactual would be, but it seems non-obvious it would be better.
  • It actually sounds non-'ideological' to me, if what that means is being committed to certain ideas of what we should do and how we should think -- it sounds like it's saying 'hey, we want to do the effective and altruistic thing. We're not saying what that is.' It sounds more open, more like 'a question' than many -isms.

Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as both a moral commitment, a set of ideas, and a community.

I feel less sure this is true more of EA than of other terms, at least with respect to the community aspect. I think the reason some terms don't seem to imply a community is that there isn't [much of] one. But insofar as we want to keep the EA community (and I think it's very valuable and that we should), changing the term won't shrink the identity associated with it along that dimension. I guess what I'm saying is: I'd guess the largeness of the identity associated with EA is not that related to the term.

Comment by Ardenlk on Clarifying the core of Effective Altruism · 2021-01-30T20:48:29.059Z · EA · GW

I really like this post! I'm sympathetic to the point about normativity. I particularly think the point that movements may be able to suffer from not being demanding enough is a potentially really good one and not something I've thought about before. I wonder if there are examples?

For what it's worth, since the antecedent "if you want to contribute to the common good" is so minimal, Ben's definition feels kind of near-normative to me -- like it gets someone on the normative hook with "mistake" unless they say "well I just don't care about the common good", and then common sense morality tells them they're doing something wrong... so it's kind of like we don't have to make the normativity explicit?

Also, I think I disagree about the maximising point. Basically I read your proposed definition as near-maximising, because when you iterate on 'contributing much more' over and over again you get a maximum or a near-maximum. And then it's like... does that really get you out of the cited worries with maximising? Like it still means that "doing a lot of good" won't be good enough a lot of the time (as long as there's still something else you could do that would do much more good), which I think could still run into at least the 2nd and 3rd worries you cite with having maximising in there?

Comment by Ardenlk on My Career Decision-Making Process · 2021-01-29T10:38:25.131Z · EA · GW

Thanks for this quick and detailed feedback shaybenmoshe, and also for your kind words!

I think that two important aspects of the old career guide are much less emphasized in the key ideas page: the first is general advice on how to have a successful career, and the second is how to make a plan and get a job. Generally speaking, I felt like the old career guide gave more tools to the reader, rather than only information.

Yes. We decided to go "ideas/information-first" for various reasons, which has upsides but also downsides. We are hoping to mitigate the downsides by having practical, career-planning resources more emphasised alongside Key Ideas. So in the future the plan is to have better resources on both kinds of things, but they'll likely be separated somewhat -- like here are the ideas [set of articles], and here are the ways to use them in your career [set of articles]. We do plan to introduce the ideas first though, which we think are important for helping people make the most of their careers. That said, none of this is set in stone.

Another important point is that I don't like, and disagree with the choice of, the emphasis on longtermism and AI safety. Personally, I am not completely persuaded by the arguments for choosing a career by a longtermist view, and even less by the arguments for AI safety. More importantly, I had several conversations with people in the Israeli EA community and with people I gave career consultation to, who were alienated by this emphasis. A minority of them felt like me, and the majority understood it as "all you can meaningfully do in EA is AI safety", which was very discouraging for them. I understand that this is not your only focus, but people whose first exposure to your website is the key ideas page might get that feeling, if they are not explicitly told otherwise.

We became aware of this problem with our emphasis on AI safety last year -- we've since tried to de-emphasise AI safety relative to other work, to make it clearer that, although it's our top choice for most pressing problem and therefore what we'd recommend people work on if they could work on anything equally successfully, that doesn't mean it's the only or best choice for everyone (by a long shot!). I'm hoping Key Ideas no longer gives this impression, and that our lists of other problems and paths might help show that we're excited about people working on a variety of things.

Re: Longtermism, I think our focus on that is just a product of most people at 80k being more convinced of longtermism's truth/importance, so a longer conversation!

Another point is that the "Global priorities" section takes a completely top-to-bottom approach. I do agree that it is sometimes a good approach, but I think that many times it is not. One reason is the tension between opportunities and cause areas which I already wrote about. The other is that some people might already have their career going, or are particularly interested in a specific path. In these situations, while it is true that they can change their careers or realize that they can enjoy a broader collection of careers, it is somewhat irrelevant and discouraging to read about rethinking all of your basic choices. Instead, in these situations it would be much better to help people to optimize their current path towards more important goals.

I totally agree with this and think it's a problem with Key Ideas. We are hoping the new career planning process we've released can help with this, but also know that it's not the most accessible right now. Other things we might do: improve our 'advice by expertise' article, and try to make clear in the problems section (similar to the point about AI safety above) that we're talking about what is most pressing, and therefore best to work on for the person who could do anything equally successfully. Career capital and personal fit mean that's not going to be true of the reader, so while we think the problems are important for them to be aware of and an important input to their personal prioritisation, it's not the end of it.

I disagree with the principle of maximizing expected value, and definitely don't think that this is the way it should be phrased as part of the "the big picture".

Similar to longtermism (and likely related) - it's just our honest best guess at what is at least a good decision rule, if not the decision rule.

I really liked the structure of the previous career guide. It was very straightforward to know what you are about to read and where you can find something, since it was so clearly separated into different pages with clear titles and summaries. Furthermore, its modularity made it very easy to read the parts you are interested in. The key ideas page is much more convoluted, it is very hard to navigate and all of the expandable boxes are not making it easier.

Mostly agree with this. We're planning to split key ideas into several articles that are much easier to navigate, but we're having trouble making that happen as quickly as we would like. One thing is that lots of people skipped around the career guide, so we think many readers prefer a more 'shopping'-like experience (like a newspaper) to the linear one the career guide had anyway. We're hoping to go for a hybrid in the future.

Comment by Ardenlk on My Career Decision-Making Process · 2021-01-24T09:35:52.043Z · EA · GW

Hey shaybenmoshe, thanks for this post! I work at 80,000 Hours, so I'm especially interested in it from a feedback perspective. Michelle has already asked for your expanded thoughts on cybersecurity and formal verification, so I'll skip those -- would you also be up for expanding on why the Key Ideas page seems less helpful to you vs. the older career guide?

Comment by Ardenlk on What is going on in the world? · 2021-01-19T02:40:25.936Z · EA · GW

Maybe: the smartest species the planet and maybe the universe has produced is in the early stages of realising it's responsible for making things go well for everyone.

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-13T17:51:01.131Z · EA · GW

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-13T02:50:33.134Z · EA · GW

Well hello thanks for commenting, and for the paper!

Seems right that you'll get the same objection if you adopt cross-world identity. Is that a popular alternative for person-affecting views? I don't actually know a lot about the literature. I figured the most salient alternative was to not match the people up across worlds at all, which was why people say that e.g. it's not good for a(3) that W1 was brought about.

Comment by Ardenlk on What does it mean to become an expert in AI Hardware? · 2021-01-12T03:34:18.557Z · EA · GW

So cool to see such a thoughtful and clear writeup of your investigation! Also nice for me, since I was involved in creating them, to see that 80k's post and podcast seemed to be helpful.

I think [advising on hardware] would involve working at one of the industries like those listed above and maintaining involvement in the EA community.

What I know about this topic is mostly exhausted by the resources you've seen, but for what it's worth I think this could also be directed at making sure that AI companies that are really heavily prioritising safety are able to meet their hardware needs. In other words, depending on the companies it could make sense to advise industry in addition to the EA community.

University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. Industry: See section 4 for a list of possible companies to work at.

For these two career ideas I'd just add -- what is implicit here I think but maybe worth making explicit -- that it'd be important to be highly selective and pretty canny about what research topics/companies you work with in order to specifically help AI be safer and more beneficial.

These experiences will probably update my thoughts on my career significantly.

Seems right - and if you were to write an update at that point I'd be interested to read it!

Comment by Ardenlk on Literature Review: Why Do People Give Money To Charity? · 2021-01-11T20:14:39.241Z · EA · GW

Thanks!

Comment by Ardenlk on Literature Review: Why Do People Give Money To Charity? · 2021-01-09T16:46:49.834Z · EA · GW

Hey Aaron, I know this is from a while ago and your head probably isn't in it, but I'm curious if you have any intuitions on whether analogues of the successful techniques you list do/don't apply to making career changes or other actions besides giving to charity.

Also really appreciating the forum tags lately -- really nice to be able to search by topic!

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T23:56:28.304Z · EA · GW

Yeah, I mean you're probably right, though I have a bit more hope in the 'does this thing spit out the conclusions I independently think are right' methodology than you do. Partly that's because I think some of the intuitions that are, jointly, impossible to satisfy a la impossibility theorems are more important than others -- so I'm ok trying to hang on to a few of them at the expense of others. Partly it's because I feel unsure of how else to proceed -- that's part of why I got out of the game!

I also think there's something attractive in the idea that moral theories are webs of implications, and the things to hold on to are the things you're most sure are right for whatever reason, and those might be the implications rather than the underlying rationales. I think whether that's right might depend on your metaethics -- if you think the moral truth is determined by your moral commitments, then being very committed to a set of outputs could make it the case that the theories that imply them are true. I don't really think that's right as a matter of metaethics, though I'm not sure.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2021-01-05T23:44:53.394Z · EA · GW

Hey, thanks for this comment -- I think you're right that there's plausibly a more high-impact thing that could be described as 'research management', which is more about setting strategic directions for research. I'll clarify that in the writeup!

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T23:29:37.856Z · EA · GW

You're right that radical implications are par for the course in population ethics, and that this isn't that surprising. However, I guess this is even more radical than was obvious to me from the spirit of the theory, since the premature deaths of presently existing people can be so easily outweighed. I also agree, albeit a bit begrudgingly in this case, that "I strongly dislike the implications!" isn't a valid argument against something.

I did also think the counterpart relations were fishy, and I like your explanation as to why! The de dicto/de re distinction isn't something I'd thought about in this context.

Comment by Ardenlk on Can I have impact if I’m average? · 2021-01-04T21:54:31.068Z · EA · GW

Thanks for posting this -- I think this might be a pretty big issue and I'm glad you've had success helping reduce this misconception by talking to people!

As for explanations as to why it is happening, I wonder if in addition to what you said, it could be that because EA emphasises comparing impact between different interventions/careers etc. so heavily, people just get in a really compare-y mindset, and end up accidentally thinking that comparing well to other interventions is itself what matters, instead of just having more impact. I think improved messaging could help.

Comment by Ardenlk on Kelsey Piper on "The Life You Can Save" · 2021-01-04T21:30:34.530Z · EA · GW

Thanks Aaron, I wouldn't read this if you hadn't posted it, and I think it contains good lessons on messaging.

Comment by Ardenlk on Careers Questions Open Thread · 2020-12-13T00:12:11.134Z · EA · GW

Hi Anafromthesouth,

This is just an idea, but I wonder if you could use your data science and statistics skills to help nonprofits or foundations working on important issues (including outside the EA community) better evaluate their impact or otherwise make more informed choices. (If those skills need sharpening, taking courses seems sensible.) From the name it sounds like this could dovetail with your work in your master's, but I don't actually know anything about that kind of programme.

I guess it sounds to me like going back to academic stuff isn't what you want to do, and it would probably be a bit tough with the 5-year publication gap (though I don't know if that's as much of a thing in neuroscience as in other disciplines), and it doesn't work as well with your master's -- so if it were me I think I'd try to double down on the stats and data science stuff.

Comment by Ardenlk on Careers Questions Open Thread · 2020-12-13T00:03:58.367Z · EA · GW

I agree with what the others below have written, but wanted to just add:

If you aim for entrepreneurship, which it sounds like you might want to, I think it makes sense to stay open to the possibility that in addition to building companies, it could also mean things like running big projects within existing companies, starting a nonprofit, running a big project in a nonprofit, or even running a project in a government agency if you can find one with enough flexibility.

Comment by Ardenlk on Where are you donating in 2020 and why? · 2020-12-12T20:50:19.520Z · EA · GW

Yes, I do think they had room for more funding, but could be wrong. My view was based on (1) a recommendation from someone whose judgement on these things I think is informed and probably better than most people's, including mine, who thought the Biden Victory Fund was the highest-impact thing to donate to this year, (2) an intuition that the DNC etc. wouldn't put so much work into fundraising if more money didn't benefit their chances of success, and (3) the way the Biden Victory Fund in particular structured the funds it received, which was to distribute them among the Biden campaign, the DNC, and state parties (in order of priority) -- it said more precisely how it would do this, except that it would change the distribution if the formula would have resulted in "excessive" amounts going to certain orgs.

Comment by Ardenlk on What are you grateful for? · 2020-11-28T05:34:14.520Z · EA · GW

I'm grateful for all the people in the EA community who write digests, newsletters, updates, highlights, research summaries, abstracts, and other vehicles that help me keep abreast of all the various developments.

I'm also grateful for there being so much buzzing activity in EA that such vehicles are so useful/essential!

Comment by Ardenlk on Where are you donating in 2020 and why? · 2020-11-26T20:28:56.271Z · EA · GW

I am not that confident this was the right decision (and will be curious about people's views, though I can't do anything about it now), but I already gave most of 10% of my income this year (as per my GWWC pledge) to the 'Biden Victory Fund.' (The rest went to the Meta Fund earlier in the year.) I know Biden's campaign was the opposite of neglected, but I thought the importance and urgency of replacing Trump as the US president swamped that consideration in the end (I think having Republicans in the White House, and especially Trump, is very bad for the provision of global public and coordination-reliant goods). I expect to go back to giving to non-political causes next year.

I am still considering giving to the Georgia senate race with some of my budget for next year, because it seems so high 'leverage' on US electoral reform, which would (I think) make it easier for Democrats to get elected in the future and (I hope) make the US's democracy function better long-term. For example, there's an electoral reform bill that seems much more likely to pass if Democrats control the senate.

The quality of these choices depends on substantive judgements that in US politics Democrats make better choices for the world than Republicans, and that continued US global leadership would be better than the alternative with regard to things like climate change, AI, and biorisks. I think both of these things are true, but could be wrong!

Comment by Ardenlk on What actually is the argument for effective altruism? · 2020-10-07T11:12:44.941Z · EA · GW

I think adding a maximizing premise like the one you mention could work to assuage these worries.

Comment by Ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-25T19:18:30.540Z · EA · GW

Thanks, this is super helpful -- context is I wanted to get a rough sense of how doable this level of "getting up to speed" is for people.

Comment by Ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T13:30:27.484Z · EA · GW

Hey Michael, thanks for detailing this. Do you have a sense of how long this process took you approximately?

Comment by Ardenlk on 80,000 Hours user survey closes this Sunday · 2020-09-12T14:35:32.540Z · EA · GW

Thanks for filling out the survey and for the kind words!

Comment by Ardenlk on Asking for advice · 2020-09-05T20:18:15.943Z · EA · GW

I wonder whether other people also like to be given deadlines when asked for feedback, or to have specific dates suggested for meeting? Sometimes I prefer that someone ask for feedback within a week rather than within 6 months (or 'as soon as is convenient'), because it forces me to get it off my to-do list. Though it's best of both worlds if they also indicate that it's ok if I can't do it in that time.

Comment by Ardenlk on Improving disaster shelters to increase the chances of recovery from a global catastrophe · 2020-09-04T17:13:34.274Z · EA · GW

Cheers!

Comment by Ardenlk on EA reading list: Scott Alexander · 2020-08-09T07:48:33.635Z · EA · GW

Thanks! This post caused me to read 'beware systemic change', which I hadn't read before and am glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few different distinctions with the titular one, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.

These are related but different. I think the thing that actually produces the danger Scott is worried about is (3). (Of course you could worry that movement on (2) will turn EA into an ineffectual, wishy-washy movement, but that doesn't seem to be as much Scott's concern.)

I asked myself: to what extent has EA (as it promised to in 2015) moved toward systemic change? Toward change that's not necessarily easy to measure? Toward controversial change?

80K's top priority problem areas (causes) are:

  • AI safety (split into tech safety and policy)
  • Biorisk
  • Building EA
  • global priorities research
  • improving institutional decision-making
  • preventing extreme climate change
  • preventing nuclear war

These are all longtermist causes. Then there are the other two very popular EA causes:

  • ending factory farming
  • global health

Of the issues on this list, only the AI policy bit of AI safety and building EA seem to be particularly controversial change. I say AI policy is controversial because, as practiced by EA, it favors the US over China, and presumably people in China would think that's bad; and building EA seems controversial because some people think EA is deeply confused/bad (though it's not as controversial as the stuff Scott mentions in the post, I think). But 'building EA' was always a cause within EA, so only the investment in AI policy represents a move toward the controversial since Scott's post.

(Though maybe I'm underestimating the controversialness of things like ending factory farming -- obviously some people think that'd be positively bad... but I'd guess that kind of 'positive badness' is more often of the 'this isn't the best use of resources' variety.)

Of the problems listed above, only ending factory farming and improving global health are particularly measurable. So it does seem like we've moved toward the less-easily-measured (with the popularization of longtermism probably).

Are any of the above 'systemic'? Maybe Scott associated this concept with the left halves of distinctions (2) and (3) because it's harder to tell what's systemic vs. not. But I guess I'd say again that the AI policy half of AI safety, building EA, and improving institutional decision-making are systemic issues. (Though maybe systemic interventions will be needed to address some of the others, e.g., nuclear security.)

So it's kind of interesting that even though EA promised to care about systemic issues, it mostly didn't expand into them, and only really expanded into the less easily measurable. Hopefully Scott would also be heartened that the only substantial expansion into the realm of the controversial seems to be AI policy.

If that's right as a picture of EA, why would that be? Maybe because although EA has tried to tackle a wider range of kinds of issues, it's still pretty mainstream within EA that working on politically controversial causes is not particularly fruitful. Maybe because people are better than Scott seems to think they are at taking into account the possibility of being on the wrong side of stuff when directly estimating the EV of working on causes, which has resulted in shying away from controversial issues.

In part 2 of Scott's post there's the idea that if we pursue systemic change we might turn into something like the Brookings Institution, and that that would be bad because we'd lose our special moral message. I feel a little unsure of what the special moral message is that Scott is referring to in the post that is necessarily different between Brookings-EA and bednet-EA, but I think it has something to do with stopping periodically and saying "Wait, are we getting distracted? Do we really think that this thing is the most good we can do with $2,000, when we could with high confidence save someone's life if we gave it to AMF instead?" At least, that's the version of the special moral message that I agree is really important and distinctive.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:49.565Z · EA · GW

Great! Linked.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:21.949Z · EA · GW

Just to let you know I've revised the blurb in light of this. Thanks again!

Comment by Ardenlk on Some history topics it might be very valuable to investigate · 2020-07-08T12:46:45.520Z · EA · GW

We also had this choice with our other problems and other paths posts, and decided against the listicle style, basically for the reasons you say. I think there is a nascent/weak norm, and think it makes sense to uphold it. The main argument against is that it's actually kind of helpful to know if something is a long list or a short list -- especially if I have a small bit of time and don't want to start something long.

Comment by Ardenlk on Some history topics it might be very valuable to investigate · 2020-07-08T12:41:23.976Z · EA · GW

Thank you for writing this up!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:14:16.804Z · EA · GW

Hey Michael,

Thanks (as often) for this list! I'm wondering, might you be up for putting it into a slightly more formal standalone post or google doc that we could potentially link to from the blurb?

Really love how you're collecting resources on so many different important topics!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:12:41.976Z · EA · GW

Thanks for these points! Very encouraging that you can do this work from such a variety of disciplines. I'll revise the blurb in light of this.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:11:23.972Z · EA · GW

Interesting! I think this might fall under global priorities research, which we have as a 'priority path' -- but it's not really talked about in our profile on that, and I agree it seems like it could be a good strategy. I'll take a look at the priority path and consider adding something about it. Thanks!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:08:11.575Z · EA · GW

Thanks so much Rohin for this explanation. It sounds somewhat persuasive to me, but I don't feel in a position to have a good judgement on the matter. I'll pass this on to our AI specialists to see what they think!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-05T09:07:36.077Z · EA · GW

Thanks Max -- I'll pass this on!

Comment by Ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T14:55:54.813Z · EA · GW

Hi Brian,

In general, we have a heuristic according to which issues that primarily affect people in countries like the US are less likely to be high impact for more people to focus on at the margin than issues that primarily affect others or affect all people equally. While criminal justice does affect people in other countries as well, it seems like most of the most promising interventions are country-, and especially US-, specific -- including the interventions Open Phil recommends, like those discussed here and here. The main reason for this heuristic is that these issues are likely to be less neglected (even if they're still neglected relative to how much attention they should receive in general), and likely to affect a smaller number of people. Does that make sense?

Comment by Ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T14:48:23.122Z · EA · GW

Hi Tobias, we've added "governance of outer space" on your recommendation. Thanks!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T15:18:32.948Z · EA · GW

Hi Rohin,

Thanks for this comment. I don't know a lot about this area, so I'm not confident here. But I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

I guess I think this first because bugs seem capable of being big deals in this context (maybe I'm wrong there?), and because it seems like there could be some instances where it's more feasible to use proof assistants than math to prove that a system has a property.

Curious to hear if/why you disagree!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T11:39:00.129Z · EA · GW

Thanks for this feedback (and for the links)!

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-06-28T11:35:19.368Z · EA · GW

Hm - interesting suggestion! The basic case here seems pretty compelling to me. One question I don't know the answer to is how predictable countries' trajectories are -- like how much would a naive extrapolation have predicted the current balance of power 50 years ago? If very unpredictable, it might not be worth it in terms of EV to bet on the extrapolation.

But I feel more intuitively excited about trying to foster home-grown EA communities in a range of such countries, since many of the people working on it would probably have reasons to be in and focus on those countries anyway, because they're from there.

Comment by Ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-28T10:23:33.239Z · EA · GW

Thanks! I'm seeing that I sometimes only used links that worked on the 80k site. Fixing the issue now.

Comment by Ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-28T10:20:02.794Z · EA · GW

Hi Will,

To be honest, I'm not that confident in wild animal welfare being on the 'other longtermist' list rather than the 'other global' list -- we had some internal discussion on the matter and opinions differed.

Basically it's on 'other longtermist' because the case for it contributing to spreading positive values seems stronger to me than in the case of the other global problems. In some sense working on any issue spreads positive values, but wild animal welfare is sufficiently 'weird' that its success as a cause area seems more likely to disrupt people's intuitive views than successes in other areas, which might be particularly useful for spreading positive values / making moral philosophy progress. In particular, the rejection of "natural = good" seems like it could be a unique and useful contribution. I also find the analogy between wild animals and other forms of consciousness that we might find ourselves influencing (alien life? artificial consciousnesses?) somewhat compelling, such that getting our heads straight on wild animal welfare might help prepare us for that.

Comment by Ardenlk on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-06-25T08:02:20.110Z · EA · GW

Thank you for pointing out ea.greaterwrong.org! I've had the problem of not being able to wayback forum posts before.

Comment by Ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:11:01.844Z · EA · GW

Hey jackmalde, interesting idea -- though I think I'd lean against writing it. I guess the main reason is something like: there are quite a few issues to explore on the above list, so if someone is searching around for something (rather than already having something in mind), they might be able to find an idea there. I guess despite what I said to Michael above, I do want people to see it as some positive signal if something's on the list. Having a list of things not on the list would probably not add a lot, because the reasons would just be pretty weak things like "brief investigation + asking around didn't make this seem compelling according to our assumptions". Insofar as someone was already thinking of working on something and saw that, they probably wouldn't take it as much reason to change course. Does that make sense?

Comment by Ardenlk on Problem areas beyond 80,000 Hours' current priorities · 2020-06-24T17:01:02.854Z · EA · GW

Thanks! Helpful pointers.