Posts

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" 2021-01-05T02:18:27.901Z
80,000 Hours user survey closes this Sunday 2020-09-08T17:37:20.525Z
Some promising career ideas beyond 80,000 Hours' priority paths 2020-06-26T10:34:11.912Z
Problem areas beyond 80,000 Hours' current priorities 2020-06-22T12:49:48.166Z
Essential facts and figures -- COVID-19 2020-04-20T18:33:50.565Z
Thoughts on 80,000 Hours’ research that might help with job-search frustrations 2019-04-16T18:51:04.319Z

Comments

Comment by Ardenlk on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T21:24:07.732Z · EA · GW

You might already be planning on doing this, but it seems like you increase the chance of getting a winning entry if you advertise this competition in a lot of non-EA spaces. I guess especially technical AI spaces, e.g. labs and universities. Maybe also try advertising outside the US/UK. Given the size of the prize, it might be easy to get people to pass the advertisement on among their groups. (Maybe there's a worry about getting flak somehow for this, though. And it also increases overhead, since you'd need to read more entries, though it sounds like you have some systems set up for that, which is great.)

In the same vein, I think trying to lower the barriers to entry having to do with EA culture could be useful -- e.g. +1 to someone else here talking about allowing posting in places besides EAF/LW/AF, but also maybe trying to have some consulting researchers/judges who find it easier/more natural to engage with non-analytic-philosophy-style arguments.

Comment by Ardenlk on "Doing Good Best" isn't the EA ideal · 2022-09-24T21:01:07.121Z · EA · GW

This isn't the main point of this post, but it's a common criticism of EA, so it might be useful to voice my disagreement with this part:

It's also the case that individual maximization is rarely optimal for groups. Capitalism harnesses maximization to provide benefits for everyone, but when it works, that leads to diversity in specializations, not crowding into the single best thing. To the extent that people ask "how can I be maximally impactful," I think they are asking the wrong question - they are part of a larger group, and part of the world as a whole, and they can't view their impact as independent from that reality.

I think viewing yourself as an individual is not in tension with viewing yourself as part of a whole. Your individual actions constitute a part of the whole's actions, and they can influence other parts of that whole. If everyone in the whole did maximize the impact of their actions, the whole's total impact would also be maximized.

diversity in specializations, not crowding into the single best thing.

100% agree. But again, I don't think that's in tension with thinking in terms of where one, as an individual, can do the most good -- it's just that for different people, that's different places.

Comment by Ardenlk on Is there more guidance/resources on what to do if one feels like they are part of a sugnificant minority that should focus on potentially pressing global issues beyond current 80 000h priorities? · 2022-09-07T17:32:55.666Z · EA · GW

Hi Elskivi,

Arden from 80,000 Hours here.

I think I’m part of that significant minority but cannot really find any further help or enough material regarding those topics from an EA angle, for example safeguarding democracy, risks of stable totalitarianism, risks from malevolent actors, global public goods etc.

Unfortunately there aren’t many materials on those issues -- they are mostly even more neglected (at least from a longtermist perspective) than issues like AI safety.

The resources I do know about are linked from the mini profiles on the page -- e.g. https://forum.effectivealtruism.org/posts/aSzxoj7irC5jNHceB/how-likely-is-world-war-iii for great power conflict, and https://www.sentienceinstitute.org/blog/the-importance-of-artificial-sentience for artificial sentience. I think there should be something for each of the listed problems, and the readings often have ‘further resources’ of their own.

We’re also working on filling out the mini profiles more, but the truth is not much work has been done on these areas from a longtermist or broader EA perspective (that I know of, at least), so I’d guess there won’t be a ton more resources like you’re looking for soon.

Thus getting started on issues like these probably means doing research to figure out what the best interventions seem to be -- e.g. by looking into what people outside EA are doing on them and where the most promising gaps seem to be -- and then trying to start filling those gaps, either by working at an existing org that works on the issue, doing further research (e.g. as an academic or in a think tank), or starting a project of your own (it’ll depend a lot on the issue). So it may take considerable entrepreneurial spirit (and willingness to try things that don’t end up working) to make headway on some of these issues.

Comment by Ardenlk on EA is about maximization, and maximization is perilous · 2022-09-06T09:59:36.295Z · EA · GW

I strongly agree with some parts of this post, in particular:

  • I think integrity is extremely important, and I like that this post reinforces that.
  • I think it’s a great point that EA could easily have become very bitterly divided, and appreciating that it hasn’t (despite our various different beliefs), as well as thinking about why, seems like a great exercise. It does seem like we should try to maintain those features.

On the other hand, I disagree with some of it -- and thought I'd push back especially given that there isn't much pushback in the comments here:

I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have - a constant current that we need to swim against.

I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:

  1. As individuals, the desire to be liked by and get along with others, including people inside and outside of EA

  2. As individuals who have been raised in a mainstream ethical environment (most of us), a natural pluralism and strong attraction to common-sense morality

  3. The desire to live a normal life full of the normal recreational, familial, and cultural stuff

  4. As a community, wanting to seem less weird to the rest of the world in order to be able to attract and/or work with people who are (currently) unfamiliar with the EA community.

  5. Implicit and explicit pressure from one another against weirdness, so that we don’t embarrass one another/hurt EA’s reputation

  6. Fear of being badly wrong in a way that feels less excusable because it’s not the case that everyone else is also badly wrong in the same way

  7. Whatever else is involved in the apparent phenomenon whereby, as a community gets bigger, it often becomes less unique

We do face some sources of pressure away from pluralism and moderation, but they seem fewer and weaker to me:

  1. The desire to seem hardcore that you mentioned

  2. Something about a desire for interestingness/feeling interesting/specialness (possible overlap with the above)

  3. Selection effects -- EA tends to attract people who are really into consistency and following arguments wherever they lead (though I'd guess this is getting weaker over time because of the above effects).

  4. Maybe other things?

I do agree that we should try hard to guard against bad maximising - but I think we also need to make sure we remember what is really important about maximising in the face of pressure not to.

Also, moral and empirical uncertainty strongly favour moderation and pluralism -- so I agree that it’s good to have reservations about EA ideas (though primarily in the same way it’s good to have reservations about a lot of ideas). But I don't want to think of moderation and pluralism as separate from or in tension with the core ideas of EA. I think it would be better to think of them as an important part of the ideas of EA.


Somewhat speculating: I also wonder if the two problems you cite at the top are actually sort of a problem and a solution:

If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X - a terrible idea unless you’re really sure you have the right X. (This idea vaguely echoes some concerns about AI alignment, e.g., powerfully maximizing not-exactly-the-right-thing is something of a worst-case event.)

EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.

Maybe EA is avoiding the dangers of maximisation (insofar as we are) exactly because we are trying to maximize something we’re confused about. Since we’re confused about what ‘the good’ is, we’re constantly hedging our bets; since we’re unsure how to achieve the good, we go for robust options, try a variety of approaches, and try not to alienate people who can help us figure out what the good is and how to make it happen. This uncertainty greatly reduces the risks of maximisation. Analogy: Stuart Russell’s strategy for making AI safe by making it unsure about its goals.

Comment by Ardenlk on Preventing an AI-related catastrophe - Problem profile · 2022-08-30T11:12:11.625Z · EA · GW

(Responding on Benjamin's behalf, as he's away right now):

Agree that it's hard to know what works in AI safety, and that it's easy to do things that make things worse rather than better. My personal view is that we should expect the field of AI safety to be net positive, because people trying to optimise for a goal will, in expectation, move things in its direction, even if they sometimes move away from it by mistake. It seems unlikely that the best thing to do is nothing, given that AI capabilities are racing forward regardless.

I do think that the difficulty of telling what will work is a strike against pursuing a career in this area, because it makes the problem less tractable, but it doesn't seem decisive to me.

Agree that a section on this could be good!

Comment by Ardenlk on What 80000 Hours gets wrong about solar geoengineering · 2022-08-29T23:21:31.922Z · EA · GW

No problem of course - in-depth is great, and thanks for the offer to chat! Agree this is important to get right. I'll pass this on to the author and he'll get back in touch if it seems helpful to talk through : )

Comment by Ardenlk on What 80000 Hours gets wrong about solar geoengineering · 2022-08-29T21:07:15.623Z · EA · GW

Hey! Arden here, from 80,000 Hours. Thanks for this in-depth feedback! The author of our climate change profile is away right now, so we'll take a look at this in a couple of weeks.

Comment by Ardenlk on Announcing a contest: EA Criticism and Red Teaming · 2022-06-07T12:51:54.977Z · EA · GW

just an appreciation comment. I think this post was very well written and handled tricky questions well, especially the Q&A section.

And this seems great to highlight:

We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions.

Comment by Ardenlk on On funding, trust relationships, and scaling our community [PalmCone memo] · 2022-06-02T17:06:41.143Z · EA · GW

Why would the community average dropping mean we go bust? I'd think our success is more related to the community total. Yes, there are some costs to having more people around who don't know as much, but it's a further claim that these would outweigh the benefits.

Comment by Ardenlk on Bad Omens in Current Community Building · 2022-05-14T20:24:56.452Z · EA · GW

I found this post really useful (and persuasive), thank you!

One thing I feel unconvinced about:

"Another red flag is the general attitude of persuading rather than explaining."

For what it's worth, I'm not sure naturally curious/thoughtful/critical people are particularly more put off by someone trying to persuade them (well/by answering their objections/etc.) than by them explaining an idea, especially if the idea is a normative thesis. It's weird for someone to be like "just saying, the idea is that X could have horrific side effects and little upside because [argument]. Yes, I believe that's right. No need to adopt any beliefs or change your actions though!" That just makes them seem like they don't take their own beliefs seriously. I'd much rather have someone say "I want to persuade you that X is bad, because I think it's important people know that so they can avoid X. OK, here goes: [argument]."

If that's right, does it mean that maybe the issue is more "persuade better"? e.g. by actually having answers when people raise objections to the assumptions being made?

At the opening session [Alice] disputes some of the assumptions, and the facilitators thank her for raising the concerns, but don’t really address them. They then plough on, building on those assumptions. She is unimpressed.

Seems like the issue here is more being unpersuasive, rather than too zealous or not focused enough on explaining.

Comment by Ardenlk on How I torched my biggest career opportunity so far · 2022-05-11T19:10:31.352Z · EA · GW

This post is great - for the reasons AndreaM wrote, and additionally for reflecting on specific things you did right to put yourself in a position to have the opportunity (which have hopefully also put you in a better position than you would be otherwise even ex-post).

We need more stories like this, as well as stories of people going for the highest EV thing, which still seems highest-EV to them in hindsight, when it didn't work out. : )

Comment by Ardenlk on Effective altruism’s odd attitude to mental health · 2022-04-29T16:43:12.266Z · EA · GW

Sorry, I should have been more mindful of how the brevity of my comment might come off. I didn't mean to suggest the question doesn't come down to what's most cost-effective, which I agree it does. I was trying to point to the explanation for my differing attitudes to the priority of mental health when thinking about the cause area of making the EA community more effective vs. the cause area of present people's wellbeing more generally -- which I'd guess is also the primary explanation for other people's differing attitudes. That explanation is: debilitating and easily treatable physical illnesses are not that common among EAs, which is why they aren't a high priority for helping the EA community be more effective.

Comment by Ardenlk on Effective altruism’s odd attitude to mental health · 2022-04-29T14:07:31.921Z · EA · GW

If malaria and other easily preventable/treatable debilitating physical issues were common among EAs, I'd guess that should be a much higher priority to address than poor mental health among EAs.

Comment by Ardenlk on Pre-announcing a contest for critiques and red teaming · 2022-03-28T09:27:53.155Z · EA · GW

makes sense! yeah as long as this is explicit in the final announcement it seems fine. I also think "what's the best argument against X (and then separately do you buy it?)" could be a good format.

Comment by Ardenlk on Pre-announcing a contest for critiques and red teaming · 2022-03-27T13:08:55.069Z · EA · GW

Cool! Glad to see this happening.

One issue I could imagine is around this criterion (which also seems like the central one!):

Critical — the piece takes a critical or questioning stance towards some aspect of EA, theory or practice

Will the author need to end up disagreeing with the piece of theory or practice for the piece to qualify? If so, you're incentivizing people to end up more negative than they might if they were to just try to figure out the truth about something that they were at first unsure of the truth/prudence of.

E.g. if I start out by thinking "I'm not sure that neglectedness should be a big consideration in EA, I think I'll write a post about it" and then I think/learn more about it in the course of writing my post (which seems common since people often learn by writing), I'll be incentivized to end up at "yep we should get rid of it" vs. "actually it does seem important after all".

Maybe you want that effect (maybe that's what it means to red team?) but it seems worth being explicit about so that people know how to interpret people's conclusions!

Comment by Ardenlk on Grantmaking is more like a skill than a path · 2022-03-08T23:50:06.111Z · EA · GW

Arden here from 80,000 Hours -- just an update: Ollie showed me this draft before posting and I thought he was right about a bunch of it, so we adjusted the write-up to put more emphasis on the ideal being to become skilled in an area before becoming a grantmaker in it, plus added his 4-bullet-point list to our section on assessing your fit.

We didn't want to move away from calling it a "path" because we use that term to describe jobs/sets of jobs that one can do for many years and that we think could be among the highest-impact phases of one's career, which this seems to fit.

Comment by Ardenlk on In current EA, scalability matters · 2022-03-04T16:14:19.977Z · EA · GW

FWIW, the way I conceptualise this situation is that cost-effectiveness is still king, but: spending a dollar is a lot less expensive in terms of 'true cost' than it used to be, because the extent to which it implies being unable to fund something else (which is the real cost of spending money) is greatly reduced.

This in turn means that spending time/labour to find new opportunities is relatively more expensive than it used to be compared to the true cost of spending a dollar, which is why we want to take opportunities that have a much larger ratio of dollars spent to labour/time than we used to.

If an opportunity is not scalable, it has a lot of hidden labour/time costs, because once you use up the opportunity you have to find another one before you can keep having impact, which costs labour/time, whereas scalable opportunities don't have that. Therefore they're cheaper in true cost, and therefore more cost-effective at the same level of effectiveness.

I don't think I'm disagreeing with you -- but this feels like the conceptually cleaner way of thinking about it for my brain.
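
To make that 'true cost' arithmetic concrete, here's a toy sketch in Python (all numbers, including the per-search labour cost, are made-up assumptions purely for illustration):

```python
# Toy sketch (hypothetical numbers): compare the "true cost" of deploying the
# same amount of money via a scalable vs. a non-scalable opportunity, where
# true cost = dollars spent + hidden labour cost of finding replacement
# opportunities once each one is used up.

LABOUR_COST_PER_SEARCH = 50_000   # hypothetical cost of researching one new opportunity
TOTAL_TO_DEPLOY = 10_000_000      # hypothetical total we want to spend


def true_cost(total_to_deploy: int, capacity_per_opportunity: int) -> int:
    """Dollars spent plus the labour cost of re-finding opportunities."""
    searches_needed = -(-total_to_deploy // capacity_per_opportunity)  # ceiling division
    return total_to_deploy + searches_needed * LABOUR_COST_PER_SEARCH


scalable = true_cost(TOTAL_TO_DEPLOY, capacity_per_opportunity=10_000_000)
non_scalable = true_cost(TOTAL_TO_DEPLOY, capacity_per_opportunity=500_000)

print(f"Scalable opportunity, true cost:     ${scalable:,}")
print(f"Non-scalable opportunity, true cost: ${non_scalable:,}")
# At the same effectiveness per dollar spent, the scalable opportunity buys the
# same impact for a lower true cost, i.e. it's more cost-effective overall.
```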

Comment by Ardenlk on Disentangling "Improving Institutional Decision-Making" · 2021-09-26T16:00:25.948Z · EA · GW

Nice post : )

I mostly agree with your points, though am a bit more optimistic than it seems like you are about untargeted, value-neutral IIDM having a positive impact.

Your skepticism about this seems to be expressed here:

And yet, it seems possible that there are some institutions that cause an overwhelming amount of harm (e.g. the farming industry or some x-risk-increasing endeavors like gain-of-function research), and that the value-neutral version of IIDM fails to take that into account.

I think this is true, but it still seems like the aims of institutions are pro-social as a general matter -- x-risk and animal suffering in your examples are side effects that aren't means to the institutions' ends, which are 'increase biosecurity' and 'make money'. If improving decision-making helps orgs pursue their ends more efficiently, we should expect them to have fewer bad side effects with better decision-making. Also, orgs' aims (e.g. "make money") will generally presuppose the firm's, and therefore humanity's, survival, so it still seems good to me as a general matter for orgs to be able to pursue their aims more effectively.

Comment by Ardenlk on All Possible Views About Humanity's Future Are Wild · 2021-07-16T16:17:05.222Z · EA · GW

Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential.

Basically you're saying that if we already know things are pretty wild (In Buck's version: that we're early humans) it's a much less fishy step from there to very wild ('we're at HoH') than it would be if we didn't know things were pretty wild already.

Comment by Ardenlk on All Possible Views About Humanity's Future Are Wild · 2021-07-13T21:33:51.947Z · EA · GW

This is fantastic.

This doesn't take away from your main point, but it would be some definite amount less wild if we won't start exploring space for 100k years, right? Depending on how much less wild that would be, I could imagine it being enough to convince someone of a conservative view.

Comment by Ardenlk on [3-hour podcast] Michael Huemer on epistemology, metaethics, EA, utilitarianism and infinite ethics · 2021-03-28T08:36:08.250Z · EA · GW

Thanks for posting this - I actually haven't listened to this ep but I just listened to the science of pleasure episode and thought it was fantastic, and wouldn't have found it without this post. My only wish was that you'd asked him to say specifically what he meant by conscious. I'll def listen to other episodes now.

Comment by Ardenlk on Some quick notes on "effective altruism" · 2021-03-25T09:28:28.549Z · EA · GW

I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.

However, I want to defend it a little:

  • It seems like it's been pretty successful? EA has grown a lot under the term, including attracting some great people, and despite having some very controversial ideas hasn't faced that big of a backlash yet. Hard to know what the counterfactual would be, but it seems non-obvious it would be better.
  • It actually sounds non-'ideological' to me, if what that means is being committed to certain ideas of what we should do and how we should think -- it sounds like it's saying 'hey, we want to do the effective and altruistic thing. We're not saying what that is.' It sounds more open, more like 'a question' than many -isms.

Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as both a moral commitment, a set of ideas, and a community.

I feel less sure this is true more of EA than of other terms, at least with respect to the community aspect. I think the reason some terms don't seem to imply a community is that there isn't [much of] one. But insofar as we want to keep the EA community -- and I think it's very valuable and that we should -- changing the term won't shrink the identity associated with it along that dimension. I guess what I'm saying is: I'd guess the largeness of the identity associated with EA is not that related to the term.

Comment by Ardenlk on Clarifying the core of Effective Altruism · 2021-01-30T20:48:29.059Z · EA · GW

I really like this post! I'm sympathetic to the point about normativity. I particularly think the point that movements may be able to suffer from not being demanding enough is a potentially really good one and not something I've thought about before. I wonder if there are examples?

For what it's worth, since the antecedent "if you want to contribute to the common good" is so minimal, Ben's definition feels kind of near-normative to me -- like it gets someone on the normative hook with "mistake" unless they say "well, I just don't care about the common good", and then common sense morality tells them they're doing something wrong... so it's kind of like we don't have to make it explicitly normative?

Also, I think I disagree about the maximising point. Basically I read your proposed definition as near-maximising, because when you iterate on 'contributing much more' over and over again you get a maximum or a near-maximum (see the toy sketch below). And then it's like... does that really get you out of the cited worries with maximising? It still means that "doing a lot of good" will not be good enough a lot of the time (as long as there's still something else you could do that would do much more good), which I think could still run into at least the 2nd and 3rd worries you cite with having maximising in there?
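
To illustrate the 'iterating gets you to a near-maximum' point, here's a toy simulation (the 2x 'much more' threshold and the option values are arbitrary assumptions, just for illustration):

```python
import random

# Toy simulation: if you keep switching to any option that does "much more"
# good (here, at least 2x your current option), you quickly end up within a
# factor of 2 of the best available option -- i.e. near-maximising.

random.seed(0)
options = [random.uniform(1, 1000) for _ in range(10_000)]  # hypothetical "good done" per option
current = min(options)  # start from the least impactful option

switches = 0
while True:
    much_better = [v for v in options if v >= 2 * current]
    if not much_better:
        break  # nothing does "much more" good than the current option
    current = random.choice(much_better)
    switches += 1

print(f"Stopped after {switches} switches: current = {current:.0f}, max = {max(options):.0f}")
# When the loop ends, no option is 2x better, so current >= max(options) / 2.
```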

Comment by Ardenlk on My Career Decision-Making Process · 2021-01-29T10:38:25.131Z · EA · GW

Thanks for this quick and detailed feedback shaybenmoshe, and also for your kind words!

I think that two important aspects of the old career guide are much less emphasized in the key ideas page: the first is general advice on how to have a successful career, and the second is how to make a plan and get a job. Generally speaking, I felt like the old career guide gave more tools to the reader, rather than only information.

Yes. We decided to go "ideas/information-first" for various reasons, which has upsides but also downsides. We are hoping to mitigate the downsides by emphasising practical career-planning resources more heavily alongside Key Ideas. So in the future the plan is to have better resources on both kinds of things, but they'll likely be separated somewhat -- like, here are the ideas [set of articles], and here are the ways to use them in your career [set of articles]. We do plan to introduce the ideas first though, since we think they're important for helping people make the most of their careers. That said, none of this is set in stone.

Another important point is that I don't like, and disagree with the choice of, the emphasis on longtermism and AI safety. Personally, I am not completely persuaded by the arguments for choosing a career by a longtermist view, and even less by the arguments for AI safety. More importantly, I had several conversations with people in the Israeli EA community and with people I gave career consultation to, who were alienated by this emphasis. A minority of them felt like me, and the majority understood it as "all you can meaningfully do in EA is AI safety", which was very discouraging for them. I understand that this is not your only focus, but people whose first exposure to your website is the key ideas page might get that feeling, if they are not explicitly told otherwise.

We became aware of this problem around AI safety last year -- we've since tried to de-emphasise AI safety relative to other work, to make it clearer that, although it's our top choice for most pressing problem and therefore what we'd recommend people work on if they could work on anything equally successfully, that doesn't mean it's the only or best choice for everyone (by a long shot!). I'm hoping Key Ideas no longer gives this impression, and that our lists of other problems and paths might help show that we're excited about people working on a variety of things.

Re: longtermism, I think our focus on that is just a product of most people at 80k being more convinced of longtermism's truth/importance, so that's a longer conversation!

Another point is that the "Global priorities" section takes a completely top-to-bottom approach. I do agree that it is sometimes a good approach, but I think that many times it is not. One reason is the tension between opportunities and cause areas which I already wrote about. The other is that some people might already have their career going, or are particularly interested in a specific path. In these situations, while it is true that they can change their careers or realize that they can enjoy a broader collection of careers, it is somewhat irrelevant and discouraging to read about rethinking all of your basic choices. Instead, in these situations it would be much better to help people to optimize their current path towards more important goals.

I totally agree with this and think it's a problem with Key Ideas. We are hoping the new career planning process we've released can help with this, but we also know that it's not the most accessible right now. Other things we might do: improve our 'advice by expertise' article, and try to make clear in the problems section (similar to the point about AI safety above) that we're talking about what is most pressing, and therefore best to work on, for the person who could do anything equally successfully -- but career capital and personal fit mean that's not going to be true of the reader. So while we think the problems are important for them to be aware of and an important input to their personal prioritisation, it's not the end of it.

I disagree with the principle of maximizing expected value, and definitely don't think that this is the way it should be phrased as part of the "the big picture".

Similar to longtermism (and likely related) - it's just our honest best guess at what is at least a good decision rule, if not the decision rule.

I really liked the structure of the previous career guide. It was very straightforward to know what you are about to read and where you can find something, since it was so clearly separated into different pages with clear titles and summaries. Furthermore, its modularity made it very easy to read the parts you are interested in. The key ideas page is much more convoluted, it is very hard to navigate and all of the expandable boxes are not making it easier.

Mostly agree with this. We're planning to split Key Ideas into several articles that are much easier to navigate, but we're having trouble making that happen as quickly as we would like. One thing is that lots of people skipped around the career guide, so we think many readers prefer a more 'shopping'-like experience (like a newspaper) than the career guide offered anyway. We're hoping to go for a hybrid in the future.

Comment by Ardenlk on My Career Decision-Making Process · 2021-01-24T09:35:52.043Z · EA · GW

Hey shaybenmoshe, thanks for this post! I work at 80,000 Hours, so I'm especially interested in it from a feedback perspective. Michelle has already asked for your expanded thoughts on cybersecurity and formal verification, so I'll skip those -- would you also be up for expanding on why the Key Ideas page seems less helpful to you vs. the older career guide?

Comment by Ardenlk on What is going on in the world? · 2021-01-19T02:40:25.936Z · EA · GW

Maybe: the smartest species the planet and maybe the universe has produced is in the early stages of realising it's responsible for making things go well for everyone.

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-13T17:51:01.131Z · EA · GW

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-13T02:50:33.134Z · EA · GW

Well hello thanks for commenting, and for the paper!

Seems right that you'll get the same objection if you adopt cross-world identity. Is that a popular alternative for person-affecting views? I don't actually know a lot about the literature. I figured the most salient alternative was to not match the people up across worlds at all, which was why people say that e.g. it's not good for a(3) that W1 was brought about.

Comment by Ardenlk on What does it mean to become an expert in AI Hardware? · 2021-01-12T03:34:18.557Z · EA · GW

So cool to see such a thoughtful and clear writeup of your investigation! It's also nice for me, since I was involved in creating them, to see that 80k's post and podcast seemed to be helpful.

I think [advising on hardware] would involve working at one of the industries like those listed above and maintaining involvement in the EA community.

What I know about this topic is mostly exhausted by the resources you've seen, but for what it's worth I think this could also be directed at making sure that AI companies that are really heavily prioritising safety are able to meet their hardware needs. In other words, depending on the companies it could make sense to advise industry in addition to the EA community.

University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. Industry: See section 4 for a list of possible companies to work at.

For these two career ideas I'd just add -- what is implicit here I think but maybe worth making explicit -- that it'd be important to be highly selective and pretty canny about what research topics/companies you work with in order to specifically help AI be safer and more beneficial.

These experiences will probably update my thoughts on my career significantly.

Seems right - and if you were to write an update at that point I'd be interested to read it!

Comment by Ardenlk on Literature Review: Why Do People Give Money To Charity? · 2021-01-11T20:14:39.241Z · EA · GW

Thanks!

Comment by Ardenlk on Literature Review: Why Do People Give Money To Charity? · 2021-01-09T16:46:49.834Z · EA · GW

Hey Aaron, I know this is from a while ago and your head probably isn't in it, but I'm curious if you have any intuitions on whether analogues of the successful techniques you list do/don't apply to making career changes or other actions besides giving to charity.

Also really appreciating the forum tags lately -- really nice to be able to search by topic!

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T23:56:28.304Z · EA · GW

Yeah, I mean you're probably right, though I have a bit more hope in the 'does this thing spit out the conclusions I independently think are right' methodology than you do. Partly that's because I think some of the intuitions that are, jointly, impossible to satisfy a la impossibility theorems are more important than others -- so I'm OK trying to hang on to a few of them at the expense of others. Partly it's because I feel unsure of how else to proceed -- that's part of why I got out of the game!

I also think there's something attractive in the idea that what moral theories are, are webs of implications, and the things to hold on to are the things you're most sure are right for whatever reason -- and those might be the implications rather than the underlying rationales. I think whether that's right might depend on your metaethics: if you think the moral truth is determined by your moral commitments, then being very committed to a set of outputs could make it the case that the theories that imply them are true. I don't really think that's right as a matter of metaethics, though I'm not sure.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2021-01-05T23:44:53.394Z · EA · GW

Hey, thanks for this comment -- I think you're right there's a plausibly more high-impact thing that could be described as 'research management' which is more about setting strategic directions for research. I'll clarify that in the writeup!

Comment by Ardenlk on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T23:29:37.856Z · EA · GW

You're right that radical implications are par for the course in population ethics, and that this isn't that surprising. However, I guess this is even more radical than was obvious to me from the spirit of the theory, since the premature deaths of presently existing people can be so easily outweighed. I also agree, although a bit begrudgingly in this case, that "I strongly dislike the implications!" isn't a valid argument against something.

I did also think the counterpart relations were fishy, and I like your explanation as to why! The de dicto/de re distinction isn't something I'd thought about in this context.

Comment by Ardenlk on Can I have impact if I’m average? · 2021-01-04T21:54:31.068Z · EA · GW

Thanks for posting this -- I think this might be a pretty big issue and I'm glad you've had success helping reduce this misconception by talking to people!

As for explanations as to why it is happening, I wonder if in addition to what you said, it could be that because EA emphasises comparing impact between different interventions/careers etc. so heavily, people just get in a really compare-y mindset, and end up accidentally thinking that comparing well to other interventions is itself what matters, instead of just having more impact. I think improved messaging could help.

Comment by Ardenlk on Kelsey Piper on "The Life You Can Save" · 2021-01-04T21:30:34.530Z · EA · GW

Thanks Aaron, I wouldn't read this if you hadn't posted it, and I think it contains good lessons on messaging.

Comment by Ardenlk on Careers Questions Open Thread · 2020-12-13T00:12:11.134Z · EA · GW

Hi Anafromthesouth,

This is just an idea, but I wonder if you could use your data science and statistics skills to help nonprofits or foundations working on important issues (including outside the EA community) better evaluate their impact or otherwise make more informed choices. (If those skills need sharpening, taking courses seems sensible.) From the name, it sounds like this could dovetail with your work in your master's, but I don't actually know anything about that kind of programme.

I guess it sounds to me like going back to academic stuff isn't what you want to do, and it would probably be a bit tough with the 5-year publication gap (though I don't know if that's as much of a thing in neuroscience as in other disciplines), and it doesn't work as well with your master's -- so if it were me, I think I'd try to double down on the stats and data science stuff.

Comment by Ardenlk on Careers Questions Open Thread · 2020-12-13T00:03:58.367Z · EA · GW

I agree with what the others below have written, but wanted to just add:

If you aim for entrepreneurship, which it sounds like you might want to, I think it makes sense to stay open to the possibility that, in addition to building companies, that could also mean things like running big projects within existing companies, starting a nonprofit, running a big project in a nonprofit, or even running a project in a government agency if you can find one with enough flexibility.

Comment by Ardenlk on Where are you donating in 2020 and why? · 2020-12-12T20:50:19.520Z · EA · GW

Yes, I do think they had room for more funding, but could be wrong. My view was based on (1) a recommendation from someone whose judgement on these things I think is informed and probably better than most people's, including mine, who thought the Biden Victory Fund was the highest-impact thing to donate to this year, (2) an intuition that the DNC etc. wouldn't put so much work into fundraising if more money didn't benefit their chances of success, and (3) the way the Biden Victory Fund in particular structured the funds it received, which was to distribute them among the Biden campaign, the DNC, and state parties (in order of priority) -- it specified more precisely how it would do this, except that it would change the distribution if that would have resulted in "excessive" amounts going to certain orgs.

Comment by Ardenlk on What are you grateful for? · 2020-11-28T05:34:14.520Z · EA · GW

I'm grateful for all the people in the EA community who write digests, newsletters, updates, highlights, research summaries, abstracts, and other vehicles that help me keep abreast of all the various developments.

I'm also grateful for there being so much buzzing activity in EA that such vehicles are so useful/essential!

Comment by Ardenlk on Where are you donating in 2020 and why? · 2020-11-26T20:28:56.271Z · EA · GW

I am not that confident this was the right decision (and will be curious about people's views, though I can't do anything about it now), but I already gave most of 10% of my income this year (as per my GWWC pledge) to the 'Biden Victory Fund.' (The rest went to the Meta Fund earlier in the year.) I know Biden's campaign was the opposite of neglected, but I thought the importance and urgency of replacing Trump as the US president swamped that consideration in the end (I think having Republicans in the White House, and especially Trump, is very bad for the provision of global public and coordination-reliant goods). I expect to go back to giving to non-political causes next year.

I am still considering giving to the Georgia Senate race with some of my budget for next year, because it seems so high-'leverage' on US electoral reform, which would (I think) make it easier for Democrats to get elected in the future and (I hope) make the US's democracy function better long-term. For example, there's an electoral reform bill that seems much more likely to pass if Democrats control the Senate.

The quality of these choices depends on substantive judgements that in US politics Democrats make better choices for the world than Republicans, and that continued US global leadership would be better than the alternative with regard to things like climate change, AI, and biorisks. I think both of these things are true, but could be wrong!

Comment by Ardenlk on What actually is the argument for effective altruism? · 2020-10-07T11:12:44.941Z · EA · GW

I think adding a maximizing premise like the one you mention could work to assuage these worries.

Comment by Ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-25T19:18:30.540Z · EA · GW

Thanks this is super helpful -- context is I wanted to get a rough sense of how doable this level of "getting up to speed" is for people.

Comment by Ardenlk on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T13:30:27.484Z · EA · GW

Hey Michael, thanks for detailing this. Do you have a sense of how long this process took you approximately?

Comment by Ardenlk on 80,000 Hours user survey closes this Sunday · 2020-09-12T14:35:32.540Z · EA · GW

Thanks for filling out the survey and for the kind words!

Comment by Ardenlk on Asking for advice · 2020-09-05T20:18:15.943Z · EA · GW

I wonder whether other people also like to be given deadlines for their feedback, or to have specific dates suggested for meeting? Sometimes I prefer to have someone ask for feedback within a week rather than within 6 months (or as soon as is convenient), because it forces me to get it off my to-do list. Though it's the best of both worlds if they also indicate that it's OK if I can't do it in that time.

Comment by Ardenlk on Improving disaster shelters to increase the chances of recovery from a global catastrophe · 2020-09-04T17:13:34.274Z · EA · GW

Cheers!

Comment by Ardenlk on EA reading list: Scott Alexander · 2020-08-09T07:48:33.635Z · EA · GW

Thanks! This post caused me to read 'beware systemic change', which I hadn't before and am glad I did.

I know this post isn't about that piece specifically, but I had a reaction and I figured 'why not comment here? It's mostly to record my own thoughts anyway.'

It seems like Scott is associating a few other distinctions with the titular one, (1) 'systemic vs. non-systemic'.

These are: (2) not necessarily easy to measure vs. easy to measure, and (3) controversial ('man vs. man') vs. universally thought of as good or neutral.

These are related but different. I think the thing that actually produces the danger Scott is worried about is (3). (Of course, you could worry that movement on (2) will turn EA into an ineffectual, wishy-washy movement, but that doesn't seem as much Scott's concern.)

I asked myself: to what extent has EA (as it promised to in 2015) moved toward systemic change? Toward change that's not necessarily easy to measure? Toward controversial change?

80K's top priority problem areas (causes) are:

  • AI safety (split into tech safety and policy)
  • Biorisk
  • Building EA
  • Global priorities research
  • Improving institutional decision-making
  • Preventing extreme climate change
  • Preventing nuclear war

These are all longtermist causes. Then there are the other two very popular EA causes:

  • Ending factory farming
  • Global health

Of the issues on this list, only the AI policy bit of AI safety and building EA seem to be particularly controversial change. I say AI policy is controversial because, as practiced by EA, it favors the US over China, and presumably people in China would think that's bad; building EA seems controversial because some people think EA is deeply confused/bad (though it's not as controversial as the stuff Scott mentions in the post, I think). But 'building EA' was always a cause within EA, so only the investment in AI policy represents a move toward the controversial since Scott's post.

(Though maybe I'm underestimating the controversialness of things like ending factory farming -- obviously some people think that'd be positively bad...but I guess I'd guess that's more often of the 'this isn't the best use of resources' variety of positive badness.)

Of the problems listed above, only ending factory farming and improving global health are particularly measurable. So it does seem like we've moved toward the less-easily-measured (with the popularization of longtermism probably).

Are any of the above 'systemic'? Maybe Scott associated this concept with the left halves of distinctions (2) and (3) because it's harder to tell what's systemic vs. not. But I guess I'd say again that the AI policy half of AI safety, building EA, and improving institutional decision-making are systemic issues. (Though maybe systemic interventions will be needed to address some of the others, e.g. nuclear security.)

So it's kind of interesting that even though EA promised to care about systemic issues, it mostly didn't expand into them, and only really expanded into the less easily measurable. Hopefully Scott would also be heartened that the only substantial expansion into the realm of the controversial seems to be AI policy.

If that's right as a picture of EA, why would that be? Maybe because, although EA has tried to tackle a wider range of kinds of issues, it's still pretty mainstream within EA that working on politically controversial causes is not particularly fruitful. Or maybe because people are just better than Scott seems to think they are at taking into account the possibility of being on the wrong side of stuff when directly estimating the EV of working on causes, which has resulted in shying away from controversial issues.

In part 2 of Scott's post there's the idea that if we pursue systemic change we might turn into something like the Brookings Institution, and that that would be bad because we'd lose our special moral message. I feel a little unsure of what the special moral message is that Scott is referring to in the post that is necessarily different between Brookings-EA and bednet-EA, but I think it has something to do with stopping periodically and saying "Wait, are we getting distracted? Do we really think that this thing is the most good we can do with $2,000, when we could with high confidence save someone's life if we gave it to AMF instead?" At least, that's the version of the special moral message that I agree is really important and distinctive.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:49.565Z · EA · GW

Great! Linked.

Comment by Ardenlk on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-07-09T21:04:21.949Z · EA · GW

Just to let you know I've revised the blurb in light of this. Thanks again!