Posts

A huge opportunity for impact: movement building at top universities 2021-12-14T14:37:13.448Z
'Existential Risk and Growth' Deep Dive #3 - Extensions and Variations 2020-12-20T12:39:11.984Z
Urgency vs. Patience - a Toy Model 2020-08-19T14:13:32.802Z
Expected Value 2020-07-31T13:59:54.861Z
Poor meat eater problem 2020-07-10T08:13:11.628Z
Are there superforecasts for existential risk? 2020-07-07T07:39:24.271Z
AI Governance Reading Group Guide 2020-06-25T10:16:25.029Z
'Existential Risk and Growth' Deep Dive #1 - Summary of the Paper 2020-06-21T09:22:06.735Z
If you value future people, why do you consider near term effects? 2020-04-08T15:21:13.500Z

Comments

Comment by Alex HT on 7 traps that (we think) new alignment researchers often fall into · 2022-09-28T14:42:10.809Z · EA · GW

I claim that you can get near the frontier of alignment knowledge in ~6 months to a year. 

How do you think people should do this?

Comment by Alex HT on Reasons I’ve been hesitant about high levels of near-ish AI risk · 2022-07-22T10:09:17.850Z · EA · GW

I really appreciate you writing this. Getting clear on one's own reasoning about AI seems really valuable, but for many people, myself included, it's too daunting to actually do. 

If you think it's relevant to your overall point, I would suggest moving the first two footnotes (clarifying what you mean by short timelines and high risk) into the main text. 'Short timelines' sometimes means <10 years, and 'high risk' sometimes means >95%.

I think you're expressing your attitude to the general cluster of EA/rationalist views around AI risk typified by eg. Holden and Ajeya's views (and maybe Paul Christiano's, I don't know) rather than a subset of those views typified by eg. Eliezer (and maybe other MIRI people and Daniel Kokotajlo, I don't know).  To me, the main text implies you're thinking about the second kind of view, but the footnotes are about the first. 

And different arguments in the post apply more strongly to different views. Eg:

  • Fewer 'smart people disagree' about the numbers in your footnote than about the more extreme view. 
  • I'm not sure that Eliezer having occasionally been overconfident, but having got the general shape of things right, is any evidence at all against >50% AGI in 30 years or a >15% chance of catastrophe this century (though it could be evidence against Eliezer's very high risk view).
  • The Carlsmith post you say you roughly endorse seems to have 65% on AGI in 50 years, with a 10% chance of existential catastrophe overall. So I'm not sure if that means your conclusion is 
    • 'I agree with this view I've been critically examining'  
    •  'I'm still skeptical of 30 year timelines with >15% risk, but I roughly endorse 50 year timelines with 10% risk'
    • 'I'm skeptical of 10 year timelines with >50% risk, but I roughly endorse 30-50 year timelines with 5-20% risk'
    • Or something else 
Comment by Alex HT on Does "calibrated probability assessment" training work? · 2022-07-06T23:56:17.932Z · EA · GW

This seems like a good place to look for studies:

The research I’ve reviewed broadly supports this impression. For example:

  • Rieber (2004) lists “training for calibration feedback” as his first recommendation for improving calibration, and summarizes a number of studies indicating both short- and long-term improvements on calibration. In particular, decades ago, Royal Dutch Shell began to provide calibration for their geologists, who are now (reportedly) quite well-calibrated when forecasting which sites will produce oil.
  • Since 2001, Hubbard Decision Research has trained over 1,000 people across a variety of industries. Analyzing the data from these participants, Doug Hubbard reports that 80% of people achieve perfect calibration (on trivia questions) after just a few hours of training. He also claims that, according to his data and at least one controlled (but not randomized) trial, this training predicts subsequent real-world forecasting success.
Comment by Alex HT on Introducing the Fund for Alignment Research (We're Hiring!) · 2022-07-06T13:45:53.817Z · EA · GW

Are these roles visa eligible, or do candidates need a right to work in the US already? (Or can you pay contractors outside of the US?)

Comment by Alex HT on What is the new EA question? · 2022-03-03T00:13:20.714Z · EA · GW

[A quick babble based on your premise]

What are the best bets to take to fill the galaxies with meaningful value?

How can I personally contribute to the project of filling the universe with value, given other actors’ expected work and funding on the project?

What are the best expected-value strategies for influencing highly pivotal (eg galaxy-affecting) lock-in events?

What are the tractable ways of affecting the longterm trajectory of civilisation? Of those, which are the most labour-efficient?

How can we use our life’s work to guide the galaxies to better trajectories?

Themes I notice

  • Thinking in bets feels helpful epistemically, though the lack of feedback loops is annoying
  • The object of attention is something like ‘civilisation’, ‘our lightcone’, or ‘our local galaxies’
  • The key constraint isn’t money, but it’s less obvious what it is (just ‘labour’ or ‘careers’ doesn’t feel quite right)
Comment by Alex HT on Concrete Biosecurity Projects (some of which could be big) · 2022-02-01T14:39:29.579Z · EA · GW

We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in relative[1] terms).

Imagine all six of these projects were implemented to a high standard. How robust do you think the world would be to catastrophic biorisk? Ie. how sufficient do you think this list of projects is? 

Comment by Alex HT on A huge opportunity for impact: movement building at top universities · 2022-01-17T18:20:39.477Z · EA · GW

The job application for the Campus Specialist programme has been published. Apologies for the delay.

Comment by Alex HT on A huge opportunity for impact: movement building at top universities · 2021-12-21T12:02:20.971Z · EA · GW

Hi Elliot, thanks for your questions.

Is this indicative of your wider plans? / Is CEA planning on keeping a narrow focus re: universities?

I’m on the Campus Specialist Manager team at CEA, which is a sub-team of the CEA Groups team, so this post does give a good overview of my plans, but it’s not necessarily indicative of CEA’s wider plans. 

As well as the Campus Specialist programme, the Groups team runs a Broad University Group programme staffed by Jessica McCurdy with support from Jesse Rothman. This team provides support for all university groups regardless of ranking through general group funding and the EA Groups Resource Centre. The team is also launching UGAP (University Groups Accelerator Program) where they will be offering extra support to ~20 universities this semester. They plan to continue scaling the programme each semester.

Outside of university groups, Rob Gledhill joined the Groups team last year to work specifically on the city and national Community Building Grants programme, which was funding 10 total full-time equivalent staff (FTE) as of September (I think the number now is slightly higher). 

Additionally, both university groups and city/national groups can apply to the EA Infrastructure Fund.

Besides the Groups team, CEA also has:

  • The Events team, which runs EAG(x)
  • The Online team, which runs this forum, EA.org, and EA virtual programmes
  • The Operations team, which enables the whole of CEA (and other organisations under the legal entity) to run smoothly
  • The Community Health team, which aims to reduce risks that could cause the EA community to lose out on a lot of value, and to preserve the community’s ability to grow and produce value in the future

Basically, I see two options 1) A tiered approach whereby "Focus" universities get the majority of attention 2) "Focus" universities get all of CEA's attention at the exclusion of all other universities. 

Across the Groups team, Focus universities currently get around half of the team's attention, and less than half of funding from grants. We’re planning to scale up most areas of the Groups team, so it’s hard to say exactly how the balance will change. Our guiding star is figuring out how to create the most “highly-engaged EAs” per FTE of staff capacity. However, we don’t anticipate Focus universities getting all of the Groups team’s attention at the exclusion of all other universities, and it’s not the status quo trajectory. 

Do you plan on head hunting for these roles? 

Off the top of my head there are a few incredibly successful university groups that have flourished of their own volition (e.g. NTNU, PISE). There are likely people in these groups who would be exceptionally good at community growth if given the resources you've described above, but I suspect that they may not think to apply for these roles. 

Some quick notes here:

  • We are planning to do active outreach for these roles.
  • I agree that someone who has independently done excellent university group organising could be a great fit for this role.
  • CEA supports EA NTNU via a Community Building Grant (CBG) to EA Norway.
  • Also, quite a few group organisers have reached out to me since posting this, which makes me think people in this category might be quite likely to apply anyway.
  • But I think it's still worth encouraging people to apply, and clarifying that you don't need to have attended a Focus university to be a Campus Specialist.

Do you plan on comparing the success of the project, against similar organisations?

There are many organisations that aim to facilitate and build communities on University campuses. There are even EA adjacent organisations, i.e. GFI. It makes sense to me to measure the success of your project against these (especially GFI), as they essentially provide a free counterfactual regarding a change of tactics. 

I ask this because I strongly suspect GFI will show stronger community building growth metrics than CEA. They provide comprehensive and beautifully designed resources for students. They are public and personable (i.e. they have dedicated speakers who speak for any audience size (at least that's how it appears to me)). And they seem to have a broader global perspective (so perhaps I am a bit biased). But in general they seem to have "the full package" which CEA is currently missing.

I agree having clear benchmarks to compare our work to is important. I’m not familiar with GFI’s community building activities. It seems fairly likely to me that the Campus Specialist team at CEA has moderately different goals to GFI, such that our community growth metrics might be hard to compare directly. 

To track the impact of our programmes, the Campus Specialist team looks at how many people at our Focus universities are becoming “highly-engaged EAs” - individuals that have a good understanding of EA principles, show high quality reasoning, and are taking significant actions, like career plans, based on these principles. As mentioned in the post, our current benchmark is that Campus Specialists can help at least eight people per year to become highly engaged. 

One interesting component to point out is that while I think our end goal is clear - creating highly-engaged EAs - we believe we’re still pretty strongly in the ‘exploration mode’ of finding the most effective tactics to achieve this. As a result, we want to spend less of our time in the Campus Specialist Programme standardising resources, and more time encouraging innovation and comparing these innovations against the core model. 

By contrast, our University Group Accelerator Programme is a bit more like GFI’s programme as it has more structured tactics and resources for group leaders to implement. Jessica, who is running the programme, has been in touch with GFI to exchange lessons learned and additional resources.

Can you expand on how much money you plan on spending on each campus? 

I noticed you say "managing a multi-million dollar budget within three years of starting" - can you explain what exactly this money is going to be spent on? Currently this appears to me (perhaps naively) to be an order of magnitude larger than the budget for the largest national organisations. How confident are you that you will follow through on this? And how confident are you that spending millions of dollars on one campus is more efficient than community building across 10 countries? 

How confident are you that you will follow through on this?

  • This depends on what Campus Specialists do. It’s an entrepreneurial role and we’re looking for people to initiate ambitious projects. CEA would enthusiastically support a Campus Specialist in this scaling if it seemed like a good use of resources.
  • I’m pretty confident that if a Campus Specialist had a good use of $3mil/year in 2025 CEA would fund it.
  • Will a Campus Specialist have a good use of $3mil/year in 2025? Probably. One group is looking to spend about $1m/year already (with programmes that benefit both their campus and the global community, via online options).

Can you explain what exactly this money is going to be spent on? 

I can’t tell you exactly what this money will be spent on, as this depends on what projects Campus Specialists identify as high priority. Some possible examples:

  • Prestigious fellowships or scholarships
  • Lots of large, high-quality retreats e.g. using an external events company to save organiser time
  • Renting a space for students to co-work
  • Running a mini-conference every week (one group has done this already - they have coworking, seminar programmes, a talk, and a social every week of term, and it seems to have been very good for engagement, with attendance regularly around 70 people). I could imagine this being even bigger if there were even more concurrent ‘tracks’
  • Seed funding for students to start projects
  • Salaries for a team of ten
  • Travel expenses for speakers
  • Bootcamps for in-demand skills
  • Running an EAGx at the university
  • Research fellowships over the summer for students (like SERI or CERI, though they need not be in the -ERI format)

The ultimate goal across all of these programs is to find effective ways to create “highly-engaged EAs.” 

And how confident are you that spending millions of dollars on one campus is more efficient than community building across 10 countries? 

I’m not sure this is the right hypothetical to be comparing - CEA is supporting community building across 10 countries*. We are also looking to support 200+ universities. I think both of those things are great. 

I think the relevant comparison is something like ‘how confident are you that spending millions of dollars on one campus is more efficient than the EA community’s last (interest-weighted) dollar?’

My answer depends on exactly what the millions of dollars would be spent on, but I feel pretty confident that some Campus Specialists will find ways of spending millions of dollars on one campus per year which are more efficient (in expectation) than the EA community's last (interest-weighted) dollar. 
 

*I listed out the first ten countries that came to mind where I know CEA supports groups: USA, Canada, Germany, Switzerland, UK, Malaysia, Hong Kong (via partnership), Netherlands, Israel, Czech Republic. (This is not an exhaustive list.)


 

Comment by Alex HT on A huge opportunity for impact: movement building at top universities · 2021-12-21T11:54:29.903Z · EA · GW

Thanks for this comment and the discussion it's generated! I'm afraid I don't have time to give as detailed a response as I would like, but here are some key considerations:

  • In terms of selecting focus universities, we mentioned our methodology here (which includes more than just university rankings, such as looking at alumni outcomes like number of politicians, high net worth individuals, and prize winners).
  • We are supporting other university groups - see my response to Elliot below for more detail on CEA’s work outside Focus universities.
  • You can view our two programmes as a ‘high touch’ programme and a ‘medium touch’ programme. We’re currently analysing which programme creates the most highly-engaged EAs per full-time equivalent staff member (FTE) (our org-wide metric).
  • In the medium term, this is the main model that will likely inform strategic decisions, such as whether to expand the focus university list.


However, we don’t think this is particularly decision-relevant for us in the short term. This is because:  

  • At the moment, most of our Focus universities don’t have Campus Specialists.
  • You don’t need to have gone to a Focus university to be a Campus Specialist.
  • So we think qualified Campus Specialists won’t be limited by the number of opportunities available.
Comment by Alex HT on A huge opportunity for impact: movement building at top universities · 2021-12-20T10:00:15.846Z · EA · GW

Thanks Vaidehi!

One set of caveats is that you might not be a good fit for this type of work (see what might make you a good fit above). For instance: 

  • This is a role with a lot of autonomy, so if you prefer more externally set structure, this role probably isn’t a good fit for you
  • If you find talking to people about EA ideas difficult or uncomfortable, this may be a bad fit
  • You might be a good fit for doing field building, but prefer doing so with another age range (e.g. mid-career, high school)

Some other things people considering this path might want to take into consideration:

  • If you would like to enter a non-EA career that looks for traditional markers of prestige and is extremely competitive, and you have a current opportunity that won't come around later, then being a Campus Specialist might be less good than directly entering that career or doing more signalling (although we think that the career capital from this route is better than most people think). This might be true for some specific post-undergrad awards in policy or unusual entrepreneurial opportunities - like having a co-founder with seed funding.
  • If you think it's likely we're in a particularly pivotal moment in the next 5-10 years - for example, if you have extremely short AI timelines (with a median of <5-10 years) - then you might think that the benefits of doing outreach to talented individuals might not come to fruition. (But we think that this option can be good even for people with relatively short timelines, i.e. 15-20 years.)
  • You might not feel compelled by the data in multiplier arguments, or you might think you’ll crowd out someone who would be better at generating multipliers compared to you.


 

Comment by Alex HT on We're Redwood Research, we do applied alignment research, AMA · 2021-10-05T06:31:55.494Z · EA · GW

What factors do you think would have to be in place for some other people to set up a similar but different organisation in 5 years' time?

I imagine this is mainly about the skills and experience of the team, but I'm also interested in other things if you think they're relevant.

Comment by Alex HT on We're Redwood Research, we do applied alignment research, AMA · 2021-10-05T06:30:23.254Z · EA · GW

This looks brilliant, and I want to strong-strong upvote!

What do you foresee as your biggest bottlenecks or obstacles in the next 5 years? Eg. finding people with a certain skillset, or just not being able to hire quickly while preserving good culture.

Comment by Alex HT on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T15:57:00.502Z · EA · GW

What if LessWrong is taken down for another reason? Eg. the organisers of this game/exercise want to imitate the situation Petrov was in, so they create some kind of false alarm.

Comment by Alex HT on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T10:12:08.415Z · EA · GW

An obvious question which I'm keen to hear people's thoughts on - does MAD work here? Specifically, does it make sense for the EA forum users with launch codes to commit to a retaliatory attack? The obvious case for it is deterrence. The obvious counterarguments are that the Forum could go down for a reason other than a strike from LessWrong, and that once the Forum is down, it doesn't help us to take down LW (though this type of situation might be regular enough that future credibility makes it worth it)

 

Though of course it would be really bad for us to have to take down LW, and we really don't want to. And I imagine most of us trust the 100 LW users with codes not to use them :)

Comment by Alex HT on The importance of optimizing the first few weeks of uni for EA groups · 2021-09-22T07:15:43.220Z · EA · GW

This is great! I'm tentatively interested in groups trying outreach slightly before the start of term. It seems like there's a discontinuous increase in people's opportunity cost when they arrive at university - suddenly there are loads more cool clubs and people vying for their attention. Currently, EA groups are mixed in with this crowd of stuff. 

One way this could look is running a 1-2 week residential course for offer holders the summer before they start at university (a bit like SPARC or Uncommon Sense).  

To see if this is something a few groups should be doing, it might be good for one group to try this and then see how many core members of the group come out of the project, compared to other things like running intro fellowships. You could roughly track how much time each project took to get a rough sense of the time-effectiveness. 

This might have some of the benefits you list for outreach at the start of term, but the additional benefit of having less competition. This kind of thing also has some of the benefits of high school outreach talked about here, but avoids some of the downsides - attendees won't be minors, and we already know their university destination. There might be a couple of extra obstacles, like advertising the course to all the offer-holders, and some kind of framing issue to make sure it didn't feel weird, but I think these are surmountable. 

I'm not sure whether 'EA' would necessarily be the best framing here - there are four camps that I know of (SPARC, ESPR, Uncommon Sense, and Building a Better Future) and none of them use a direct EA framing, but all seem to be intended to create really impactful people long-term. (But maybe that means it's time to try an EA camp!)

Pretty unsure about all of this though - and I'm really keen to hear things I might be missing!

Comment by Alex HT on [PR FAQ] Sharing readership data with Forum authors · 2021-08-09T11:48:50.598Z · EA · GW

I think I'd find this really useful

Comment by Alex HT on Towards a Weaker Longtermism · 2021-08-09T11:22:48.127Z · EA · GW

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, and it makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is that I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> 'more economic growth' --> 'more innovation')

For example non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. Charlotte Siegmann incoming at GPI comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops so it might improve the community's skills (eg. Charity Entrepreneurship incubatees probably are highly skilled 2-5 years after the program. Though I'm not sure that actually translates to more skill-hours going towards longtermist causes).

But none of these reasons are that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. Eg. I think Charity Entrepreneurship is good because it's creating a community and culture of founding impact-oriented nonprofits, not because [it's better for shrimp/there's less lead in paint/fewer children smoke tobacco products]. Basically I think the only reason the near-term interventions might be good is that they might make the long-term future go better.

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective, I might agree with (ii). It's hard to come up with a good thought experiment here to test this intuition. 

One hypothetical is 'would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 mil gets donated to GiveWell's Maximum Impact Fund'. This is confusing though, because I'm not sure how important extra funding is in these areas. Another hypothetical is 'would you rather 10 fairly smart people devote their careers to longtermist causes (eg. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (eg. following AAC advice)'. This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby long-termist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.

If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
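To spell out the arithmetic behind the donation hypothetical, as a rough sketch (treating expected per-dollar value as the only thing that differs between the two options, and writing $v$ for it):

\[
10{,}000 \times v_{\text{LTFF}} > 10{,}000{,}000 \times v_{\text{GiveWell}} \iff v_{\text{LTFF}} > 1000 \times v_{\text{GiveWell}}
\]

So preferring the $10,000 donation to the Long-Term Future Fund over the $10 mil donation to the Maximum Impact Fund is roughly equivalent to judging the longtermist option to be at least ~1000x more cost-effective per dollar in expectation.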

I wonder if part of the reason people don't hold the view I do is some combination of (1) 'this feels weird so maybe it's wrong' and (2) 'I don't want to be unkind to people working on neartermist causes'. 

I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I'm not sure how much longtermism actually falls into this category. 

  • The idea is not that new, and there's been quite a lot of energy devoted to criticising the ideas. I don't know what others in this thread think, but I haven't found much of this criticism very convincing.
  • Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede strong longtermism is initially very unintuitive though.
  • Strong longtermism doesn't imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, take seriously the facts that we can't get current AI systems to do what we want but AI systems are quickly becoming really impressive, and some/most kinds of trend-extrapolation or forecasts imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn't prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action, most people don't spend any resources on that at all. (This is similar to Eliezer's point above).

I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes and my desire to get along with those people. I really think longtermists shouldn't make people who work on other causes feel bad. However, I think it's possible to commit to strong longtermism without making other people feel attacked, or too unappreciated. And I don't think these kinds of social considerations have any bearing on which cause to prioritise working on. 

I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it's weird, or it feels difficult, or we're not completely sure. We make tradeoffs even when it feels really hard - like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.

I feel like I also need to clarify some things:

  • I don't try to get everyone I talk to to work on longtermist things. I don't think that would be good for the people I talk to, the EA community, or the longterm future
  • I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea
  • My all-things-considered view is a bit more moderate than this comment suggests, and I'm eager to hear Darius', Ben's, and others' views on this
Comment by Alex HT on How Do AI Timelines Affect Giving Now vs. Later? · 2021-08-04T16:45:15.004Z · EA · GW

Nice, thanks for these thoughts.

But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. 

Ah sorry, I think I was unclear. I meant 'capacity-building' in the narrow sense of 'getting more people to work on AI' eg. by building the EA community, rather than building civilisation's capacity eg. by improving institutional decision-making. Did you think I meant the second one? I think the first one is more analogous to capital, as building the EA community looks a bit more like investing (you use some of the resource to make more later).

Comment by Alex HT on How Do AI Timelines Affect Giving Now vs. Later? · 2021-08-03T09:12:10.890Z · EA · GW

This is cool, thanks for posting :) How do you think this generalises to a situation where labor is the key resource rather than money?

I'm a bit more interested in the question 'how much longtermist labor should be directed towards capacity-building vs. 'direct' work (eg. technical AIS research)?' than the question 'how much longtermist money should be directed towards spending now vs. investing to save later?'

I think this is mainly because longtermism, x-risk, and AIS seem to be bumping up against the labor constraint much more than the money constraint. (Or put another way, I think OpenPhil doesn't pick their savings rate based on their timelines, but based on whether they can find good projects. As individuals, our resource allocation problem is to either try to give OpenPhil marginally better direct projects to fund or marginally better capacity-building projects to fund.)

[Also aware that you were just building this model to test whether the claim about AI timelines affecting the savings rate makes sense, and you weren't trying to capture labor-related dynamics.]

Comment by Alex HT on How large can the solar system's economy get? · 2021-07-01T09:47:18.536Z · EA · GW

Also this: https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/

Comment by Alex HT on How large can the solar system's economy get? · 2021-07-01T09:44:37.534Z · EA · GW

This seems relevant: https://www.overcomingbias.com/2009/09/limits-to-growth.html

Comment by Alex HT on Non-consequentialist longtermism · 2021-06-05T10:19:52.228Z · EA · GW

https://globalprioritiesinstitute.org/andreas-mogensen-staking-our-future-deontic-long-termism-and-the-non-identity-problem/ 

Comment by Alex HT on Non-consequentialist longtermism · 2021-06-05T10:08:06.161Z · EA · GW

I haven't read it, but the name of this paper from Andreas at GPI at least fits what you're asking - "Staking our future: deontic long-termism and the non-identity problem"

Comment by Alex HT on Is there evidence that recommender systems are changing users' preferences? · 2021-04-13T10:21:07.835Z · EA · GW

 Is The YouTube Algorithm Radicalizing You? It’s Complicated.

Recently, there's been significant interest among the EA community in investigating short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.

Comment by Alex HT on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-11T18:28:32.635Z · EA · GW

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

Comment by Alex HT on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-12T11:16:29.383Z · EA · GW

I'm not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is "substantially more important", it is not objectionable to merely present an argument in favour of this conclusion?


Yep that is what I'm saying. I think I don't agree but thanks for explaining :)

Comment by Alex HT on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-10T11:00:13.526Z · EA · GW

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

Comment by Alex HT on Should I transition from economics to AI research? · 2021-02-28T19:42:59.334Z · EA · GW

There are also more applied AI/tech-focused economics questions that seem important for longtermists (eg if GPI stuff seems too abstract for you)

Comment by Alex HT on Running an AMA on the EA Forum · 2021-02-18T22:01:35.604Z · EA · GW

Agree with Marisa that you'd be well suited to do an AMA

Comment by Alex HT on How can non-biologists contribute to wild animal welfare? · 2021-02-18T08:32:03.896Z · EA · GW

Also not CS and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.

Comment by Alex HT on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-12T19:56:48.288Z · EA · GW

Thanks for your comment, it makes a good point. My comment was hastily written and I think my argument that you're referring to is weak, but not as weak as you suggest.

At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear. 

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists). 

I'm also not sure that lots of longtermists (even of the Bostrom/hinge of history type) would agree that the quoted claim accurately represents their views:

 our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.”

But I do agree that some longtermists do think:

  • there are likely to be very transformative events soon eg. within 50 years
  • in the long run, if they go well, these events will massively improve the human condition 

And there are some criticisms you can make of that kind of ideology that are similar to the criticisms the author makes. 

Comment by Alex HT on Ecosystems vs Projects in EA Movement Building · 2021-02-10T15:33:54.794Z · EA · GW

From the 'Things CEA is not doing' forum post (https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing):

We are not actively focusing on:

...

  • Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)
Comment by Alex HT on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-09T13:36:10.536Z · EA · GW

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted.  Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas to not put off potential sympathisers.

Clarity

In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism. 

Sometimes I found that there were some claims being implied but they were not explicit. So please point out any incorrect inferences I’ve made below!

I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else. 

The thesis of the book (for people reading this comment, and to check my understanding)

“Longtermism is a radical ideology that could have disastrous consequences if the wrong people—powerful politicians or even lone actors—were to take its central claims seriously.”

“As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe.”

Utilitarianism (Edit: I think Tyle has added a better reading of this section below)

  • This section seems to caution against naive utilitarianism, which seems to form a large fraction of the criticism of longtermim. I felt a bit like this section was throwing intuitions at me, and I just disagreed with the intuitions being thrown at me. Also, doing longtermism better obviously means better accounting for all the effects of our actions, which naturally pushes away from naive utilitarianism
  • In particular, there seems to be a sense of derision at any philosophy where the 'ends justify the means'. I didn't really feel like this was argued for (please correct me if I'm wrong!)
  • I don’t know whether that meant the book was arguing against consequentialism in general, or arguing that longtermism overweights consequences in the longterm future compared to other consequences, but is right to focus on consequences generally
  • I would have preferred if these parts of the book were clear about exactly what the argument was
  • I would have preferred if these parts of the book did less intuition-fighting (there’s a word for this but I can’t remember it)

Millennialism

  • “A movement is millennialist if it holds that our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.” (pg.24 of the book)
  • Longtermism does not say our current world is replete with suffering and death
  • Longtermism does not say the world will be transformed soon
  • Longtermism does not say that if the world is transformed it will be into a world of justice, peace, abundance, and mutual love.
  • Therefore, longtermism does not meet the stated definition of a millennialist movement
  • Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don't know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism

Mere Ripples

  • Some things are bigger than other things
  • That doesn’t mean that the smaller things aren’t bad or good or important- they are just smaller than the bigger things
  • If you can make a good big thing happen or make a good small thing happen, you can make more good by making the big thing happen
  • That doesn't mean the small thing is not important, but it is smaller than the big thing
  • I feel confused

White Supremacy

  • The book quotes this section from Beckstead’s Thesis:

Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

The book goes on to say:

In a phrase, they support white supremacist ideology. To be clear, I am using this term in a technical scholarly sense. It denotes actions or policies that reinforce “racial subordination and maintaining a normalized White privilege.” As the legal scholar Frances Lee Ansley wrote in 1997, the concept encompasses “a political, economic and cultural system in which whites overwhelmingly control power and material resources,” in which “conscious and unconscious ideas of white superiority and entitlement are widespread, and relations of white dominance and non-white subordination are daily reenacted across a broad array of institutions and social settings.”

On this definition, the claims of Mogensen and Beckstead are clearly white supremacist: African nations, for example, are poorer than Sweden, so according to the reasoning above we should transfer resources from the former to the latter. You can fill in the blanks. Furthermore, since these claims derive from the central tenets of Bostromian longtermism itself, the very same accusation applies to longtermism as well. Once again, our top four global priorities, according to Bostrom, must be to reduce existential risk, with the fifth being to minimize “astronomical waste” by colonizing space as soon as possible. Since poor people are the least well-positioned to achieve these aims, it makes perfect sense that longtermists should ignore them. Hence, the more longtermists there are, the worse we might expect the plight of the poor to become.

  • I'm pretty sure the book isn't using 'white supremacist' in the normal sense of the phrase. For that reason, I'm confused about this, and would appreciate answers to these questions:
    • The Beckstead quote ends ‘other things being equal’. Doesn't that imply that the claim is not 'overall, it's better to save lives in rich countries than poor countries' but 'here is an argument that pushes in favour of saving lives in rich countries over poor countries'?
    • Imagine longtermism did imply helping rich people instead of helping poor people, and that that made it white supremacist. Does that mean that anything that helps rich people is white supremacist (because the resources could have been used to help poor people)?
      • What if the poor people are white and the rich people are not white?
      • Why do rich-nation government health services not meet this definition of white supremacy?
  • I'd also have preferred if it was clear how this version of white supremacy interfaces with the normal usage of the phrase

Genocide (Edit: I think Tyle and Lowry have added good explanations of this below)

  • The book argues that a longtermist would support a huge nuclear attack to destroy everyone in Germany if there was a less than one-in-a-million chance of someone in Germany building a nuclear weapon. (Ch.5)
  • The book says that maybe a longtermist could avoid saying that they would do this if they thought that the nuclear attack would decrease existential risk
  • The book says that this does not avoid the issue though and implies that because the longtermist would even consider this action, longtermism is dangerous (please correct me if I’m misreading this)
  • It seems to me that this argument is basically saying that because a consequentialist weighs up the consequences of each potential action against other potential actions, they at least consider many actions, some of which would be terrible (or at least would be terrible from a common-sense perspective). Therefore, consequentialism is dangerous. I think I must be misunderstanding this argument as it seems obviously wrong as stated here. I would have preferred if the argument here was clearer
Comment by Alex HT on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T18:51:55.214Z · EA · GW

I’d be keen to hear your thoughts about the (small) field of AI forecasting and its trajectory. Feel free to say whatever’s easiest or most interesting. Here are some optional prompts:

  • Do you think the field is progressing ‘well’, however you define ‘well’? 
  • What skills/types of people do you think AI forecasting needs?
  • What does progress look like in the field? Eg. does it mean producing a more detailed report, getting a narrower credible interval, getting better at making near-term AI predictions...(relatedly, how do we know if we're making progress?)
  • Can you make any super rough predictions like ‘by this date I expect we’ll be this good at AI forecasting’? 
Comment by Alex HT on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T18:42:59.640Z · EA · GW

I'd be keen to hear your thoughts on AI forecasting-forecasting. It seems like progress is being made on forecasting AI timelines. Can you say a bit about how quick that progress is and what progress looks like?

Comment by Alex HT on Lessons from my time in Effective Altruism · 2021-01-18T16:40:16.715Z · EA · GW

Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits-based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing the hinge of history hypothesis is irrelevant to your thinking?)

Comment by Alex HT on Can people be persuaded by anything other than an appeal to emotion? · 2021-01-02T19:58:30.024Z · EA · GW

My guess is that few EAs care emotionally about cost effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn't care emotionally about rationing food, but she'll pay a lot of attention to how best to do rationing.

I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs' identities. I think those can be developed naturally to some extent, and don't seem like complete prerequisites to being an EA.

Comment by Alex HT on Should Effective Altruists Focus More on Movement Building? · 2020-12-30T13:13:35.019Z · EA · GW

Thanks for writing this and contributing to the conversation :)

Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.

I do think the salience of movement building has been raised elsewhere eg:

Having said that, I share the feeling that movement building seems underrated. Given how impactful it seems, I would expect more EAs to want to use their careers to work on movement building.

One resolution to this apparent conflict is that the fraction of people who can be good at movement building long-term might be smaller than it first seems. For lots of the interventions that you suggest, strong social skills and a strong understanding of EA concepts seem important, as well as some general executional or project management ability. Though movement builders don’t necessarily have to be excellent in any of these domains, they have to be at least pretty good at all of them. They also have to be interested enough in all of them to do movement building. This narrows down the pool of people who can work in movement building. 

Another possible reason is that within the EA community, movement building careers are generally seen as less prestigious than more 'direct' kinds of work, and social incentives play a large role in career choice. For example, some people would be more impressed by someone doing technical AI safety research than by someone building talent pipelines into AI safety, even if the second one has more impact.

Also, as Aaron says, a lot of direct work has helpful movement building effects. 

I also agree with Aaron that looking at funding is a bit complicated with movement building, partly because movement building is probably cheaper than other things, but also that it can be hard to tease apart what's movement building and what's not. 

Comment by Alex HT on A case against strong longtermism · 2020-12-18T12:08:34.413Z · EA · GW

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

Comment by Alex HT on Introducing High Impact Athletes · 2020-12-01T21:25:43.786Z · EA · GW

Thanks! I appreciate it :)

It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.

Just to clarify: when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network", I think I agree, but that this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end up with a fairly homogeneous network eg. because your profession or university is homogenous. Sounds like Marcus is in this category himself (if tennis is mainly white, and his network is mainly tennis players).

Comment by Alex HT on Introducing High Impact Athletes · 2020-12-01T09:28:37.757Z · EA · GW

Was this meant as a reply to my comment or a reply to Ben's comment?

I was just asking what the position was and made explicit I wasn't suggesting Marcus change the website.

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:55:50.920Z · EA · GW

Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:55:02.081Z · EA · GW

I don't find anything wrong at all with 'saintly' personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I'd see what others on the forum think

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:09:39.307Z · EA · GW

It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you're already aware of this as something to consider, but it seems worth flagging (particularly given the use of 'Saintly' for those donating 10% :/).

Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in

Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as 'Saintly' are both acceptable PR risks, having the combination of them both is pretty worrying and I'd personally be in favour of changing it.

Edited to address downvotes: Obviously, it is not bad in itself if the team is all white, and I'm not implying that any deliberate filtering for white people has gone on. I just think it's something to be aware of - both for PR reasons (avoiding looking like white saviours) and for more substantive reasons (eg. building a movement and sub-movements that can draw on a range of experiences)

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:08:55.385Z · EA · GW

Some of the wording on the 'Take the Pledge' section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will 'likely have zero noticeable impact on your standard of living' seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I'm also not sure about the 'Saintly' categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I'm not sure about the tradeoffs here though and obviously you have much more context than me.

Maybe you've done this already, but it could be good to ask Luke from GWWC for advice on tone here.

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:08:35.190Z · EA · GW

I see you mention that HIA's recommendations are based on a suffering-focused perspective. It's great that you're clear about where you're coming from/what you're optimising for. To explore the ethical perspective of HIA further - what is HIA's position on longtermism?

(I'm not saying you should mention your take on longtermism on the website.)

Comment by Alex HT on Introducing High Impact Athletes · 2020-11-30T14:08:18.146Z · EA · GW

This is really cool! Thanks for doing this :)

Is there a particular reason the charity areas are 'Global Health and Poverty' and 'Environmental Impact' rather than including any more explicit mention of animal welfare? (For people reading this - the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)

Comment by Alex HT on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-18T13:46:09.095Z · EA · GW

Welcome to the forum!

Have you read Bostrom's Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html

I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.

Comment by Alex HT on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T13:38:19.293Z · EA · GW

"Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists."

Comment by Alex HT on [deleted post] 2020-11-04T14:18:07.547Z

Thanks for writing this! I and an EA community builder I know found it interesting and helpful.

I'm pleased you have a 'counterarguments' section, though I think there are some counterarguments missing:

  • OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there's also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)

  • OFTW groups may crowd out EA groups. If there's an OFTW group at a university, the EA group may have to compete, even if the groups are officially collaborating. In any case, the groups will be competing for the attention of the altruistically motivated people at the university

  • Because OFTW isn't cause neutral, it might not be a great introduction to EA. For some people, having lots of exposure to OFTW might even make them less receptive to EA, because of anchoring on a specific cause. As you say "Since it is a cause-specific organization working to alleviate extreme global poverty, that essentially erases EA’s central work of evaluating which causes are the most important." I agree with you that trying to impartially work out which cause is best to work on is core to EA

  • OFTW's direct effects (donations to end extreme poverty) may not be as uncontroversially good as they seem. See this talk by Hilary Greaves from the Student Summit: https://www.youtube.com/watch?v=fySZIYi2goY&ab_channel=CentreforEffectiveAltruism

  • OFTW outreach could be so broad and shallow that it doesn't actually select that strongly for future dedicated EAs. In a comment below, Jack says "OFTW on average engages a donor for ~10-60 mins before they pledge (and pre-COVID this was sometimes as little as 2 mins when our volunteers were tabling)". Of course, people who take that pledge will be more likely to become dedicated EAs than the average student, but there are many other ways to select at that level