The History of AI Rights Research 2022-08-27T08:14:55.165Z
7 essays on Building a Better Future 2022-06-24T14:28:07.508Z
New study on whether animal welfare reforms affect wider attitudes towards animals 2022-05-02T10:00:21.870Z
Effective strategies for changing public opinion: A literature review 2021-11-09T14:09:28.893Z
Prioritization Questions for Artificial Sentience 2021-10-18T14:07:10.197Z
EA movement building: Should you run an experiment? 2021-10-05T15:42:34.852Z
Evidence from two studies of EA careers advice interventions 2021-09-29T15:47:38.162Z
Key Lessons From Social Movement History 2021-06-30T17:05:38.684Z
New skilled volunteering board for effective animal advocacy 2021-06-18T12:27:50.976Z
Social Movement Lessons from the Fair Trade Movement 2021-04-02T10:51:43.982Z
The Importance of Artificial Sentience 2021-03-03T17:17:48.921Z
Effective animal advocacy bottlenecks surveys 2021-01-13T13:38:52.491Z
Technical research for animal product alternatives skills profile 2020-11-30T15:56:08.880Z
Animal product alternatives for-profit roles spot-check 2020-11-19T07:31:35.113Z
Jamie_Harris's Shortform 2020-10-17T07:00:08.848Z
A Brief Overview of Recruitment and Retention Research 2020-10-06T14:21:17.332Z
Careers advising calls and an online course about impact-focused animal advocacy 2020-09-18T13:37:20.832Z
Careers (to help animals) in politics, policy, and lobbying 2020-08-30T10:37:13.870Z
Health Behavior Interventions Literature Review 2020-07-24T16:21:08.754Z
Social Movement Lessons from the US Prisoners' Rights Movement 2020-07-22T12:10:39.884Z
What Interventions Can Animal Advocates Use To Build Community In Their Country? 2020-07-17T17:42:29.040Z
Animal Advocacy Careers advice 2020-07-06T12:56:05.867Z
The Effects of Animal-Free Food Technology Awareness on Animal Farming Opposition 2020-05-16T07:30:35.987Z
Which institutional tactics can animal advocates use? 2020-04-29T14:11:22.174Z
Effective Animal Advocacy Nonprofit Roles Spot-Check 2020-03-31T15:22:36.283Z
Research on developing management and leadership expertise 2020-03-05T16:57:42.422Z
Introducing the Sentience Institute Podcast 2019-12-05T18:12:44.012Z
Survey data on the moral value of sentient individuals compared to non-sentient environmental systems 2019-10-27T07:00:00.000Z
A short survey on bottlenecks in effective animal advocacy from nine attendees of Effective Altruism Global London 2019-10-24T07:00:00.000Z
Effective animal advocacy movement building: a neglected opportunity? 2019-06-11T20:33:50.415Z
How tractable is changing the course of history? 2019-05-22T15:29:49.195Z
A case study for animal-focused local EA movement building: Effective Animal Altruism London 2019-01-23T22:09:32.308Z
Event Review: EA Global: London (2018) 2018-12-17T22:29:35.324Z
Book Review: The End of Animal Farming (Jacy Reese, 2018) 2018-12-17T22:26:34.669Z


Comment by Jamie_Harris on Software Engineer: what to do with 3 days of volunteering? · 2022-10-02T14:29:14.116Z · EA · GW

Animal Advocacy Careers' "skilled volunteering" board has a few things that might be relevant in the "other technical" section.

Comment by Jamie_Harris on Quantified Intuitions: An epistemics training website including a new EA-themed calibration app · 2022-10-02T14:09:36.310Z · EA · GW

This seems cool!

When I saw the word "app" I assumed 'oh cool I can download this on my phone and maybe I'll be tempted to fiddle with it in spare moments similarly to how I get tempted to scroll social media.' Seems it's just on a website for now? I'm less optimistic that I'll remember / get tempted to use it in this format.

(Not a criticism, just a reflection.)

Comment by Jamie_Harris on EA Survey 2020: Geography · 2022-09-17T11:24:10.356Z · EA · GW

Hello! I'm wondering if it's possible to share (an anonymised version of) the full dataset by country? I'm trying to use number of EAs in any given country as an input into thinking how easy it would be to hire someone for an EA talent search programme in a given country. But at the moment I can't easily compare between, say, India, South Africa, Saudi Arabia, and Pakistan.


Comment by Jamie_Harris on Celebrations and gratitude thread · 2022-09-15T21:23:58.790Z · EA · GW

I'm not sure if this is helpful or annoying to hear at this stage, but I found a hack to search resolved comments:

Open the full comments list in the top right --> manually open the search function via the three-dots menu at the top of the page. (I.e. don't just ctrl+F: that opens the search function you get in your browser ordinarily, rather than the search function within Google Docs specifically.)

I'm not sure if this makes sense. Can record a screen grab or take multiple screenshots to show this if it's helpful. Presumably this is browser dependent, too.

Comment by Jamie_Harris on Red teaming introductory EA courses · 2022-09-13T20:19:46.919Z · EA · GW

You might be right that we lose some people due to the form, but I expect the cost is worth it to us. The info we gather is helpful for other purposes and services we offer.

Regarding more info about the course on the sign up page: of course there's plenty of info we could add to that page, but I worry about an intimidatingly large wall of text deterring people as well.

Comment by Jamie_Harris on Red teaming introductory EA courses · 2022-09-12T22:02:44.211Z · EA · GW

Thanks! I'm keen to stay in the loop with any coordination efforts.

Although I'll note that AAC's course structure is quite different from EAVP's. It's content + questions/activities, not a reading group with facilitated meetings. (The Slack and surrounding community engagement is essentially an optional, additional support group.) I would hazard a guess that the course would score more highly on your system than most or all of the other reviewed items here, but I haven't gone through the checklist carefully yet.

Comment by Jamie_Harris on Red teaming introductory EA courses · 2022-09-12T12:37:29.196Z · EA · GW

This seems really helpful, and I look forward to reviewing your comments when we next decide how to modify/update Animal Advocacy Careers' online course, which may be in a week or so's time.

(A shame we weren't reviewed, as I would have loved to see your ranking + review! But I appreciate that our course is less explicitly/primarily focused on effective altruism.)

Comment by Jamie_Harris on Preventing an AI-related catastrophe - Problem profile · 2022-09-11T22:17:17.674Z · EA · GW

I'm a fan of the profile, especially the section on "What do we think are the best arguments we're wrong?". I thought this was well done and clearly explained.

One important category that I don't remember seeing is wider arguments against existential risk being a priority. E.g. in my experience with 16-18 year olds in the UK, a very common response to Will MacAskill's TED talk (which they saw in the application process) was disagreement that the future was actually on track to be positive (and hence worth saving).

More anecdotally, something that I've experienced in numerous conversations, with these people and others, is that they don't expect/believe they could be motivated to work on this problem. (e.g. due to it feeling more abstract, less visceral than other plausible priorities.)

Maybe you didn't cover these because they're relevant to much work on x-risks, rather than AI safety specifically?

Comment by Jamie_Harris on The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay) · 2022-08-24T21:55:37.971Z · EA · GW

It's interesting to see these lists! It does seem like there are many examples here, and I wasn't aware of many of them.

Many of the given examples relate to setbacks and restraints in one or two countries at a time. But my impression is that people don't doubt that various policy decisions or other interruptions could slow AGI development in a particular country; it's just that this might not substantially slow development overall (just handicap some actors relative to others).

So I think the harder and more useful next research step would be to do more detailed case studies of individual technologies, to try to get a sense of whether restraint meaningfully delayed development at the international level.

Comment by Jamie_Harris on New study on whether animal welfare reforms affect wider attitudes towards animals · 2022-08-15T21:01:18.519Z · EA · GW

Faunalytics' summary:

Comment by Jamie_Harris on Rethink Priorities 2022 Mid-Year Update: Progress, Plans, Funding · 2022-07-28T19:17:07.186Z · EA · GW

I'm a bit confused by this bit:

"We presently have to turn down some large commissions due to lack of staff capacity, and lack of funds in place to expand our team (or to maintain the team at its current size)."

Do you charge for your commissions? I'm struggling to get my head around why the ability to take commissions could be constrained by both lack of funding and staff capacity.

Thoughts I have about what might explain it / what you might mean:

  • you don't actually charge, and so more commissions just means more work for free. (Or you accept low-paid commissions.)
  • commissions don't always come at convenient times so sometimes there are bursts of too much work to do / too many requests, compared to some quieter periods where researchers have to focus more on their own independently generated projects.
  • you have both the research talent and the funding, it's just that there's a time delay for hiring, onboarding etc before you can convert both components into increased capacity.

Clarification on which of these, if any, seems closest to RP's situation would be welcome. Thanks!

Comment by Jamie_Harris on The Future Might Not Be So Great · 2022-07-16T09:02:21.743Z · EA · GW

I also put my intuitive scores into a copy of your spreadsheet.

In my head, I've tended to simplify the picture into essentially the "Value Through Intent" argument vs the "Historical Harms" argument, since these seem like the strongest arguments in either direction to me. In that framing, I lean towards the future being weakly positive.

But this post is a helpful reminder that there are various other arguments pointing in either direction (which, in my case, overall push me towards a less optimistic view). My overall view still seems pretty close to zero at the moment though. 

Also interesting how wildly different each of our scores are. Partly I think this might be because I was quite confused/worried about double-counting. Also maybe just not fully grasping some of the points listed in the post.

Comment by Jamie_Harris on Why I'm skeptical of moral circle expansion as a cause area · 2022-07-16T06:59:09.188Z · EA · GW

Ah okay, thanks for explaining. Sounds like by "pushing for moral circle expansion as a cause area", you meant "pushing for moral circle expansion via direct advocacy" or something more specific like that. When I and others have talked about "moral circle expansion" as something that we should aim for, we're usually including all sorts of more or less direct approaches to achieving those goals.

(For what it's worth, I do think that the direct moral advocacy is an important component, but it doesn't have to be the only or even main one for you to think moral circle expansion is a promising cause area.)

Comment by Jamie_Harris on Why I'm skeptical of moral circle expansion as a cause area · 2022-07-14T22:26:55.628Z · EA · GW

"I think there are a lot of problems with the idea of directly pushing for moral circle expansion as a cause area -- for starters, moral philosophy might not play a large role in actually driving moral progress. "

Could you please explain to me why, in your view, this is a "problem" with moral circle expansion as a cause area? Thanks!

Comment by Jamie_Harris on Why I'm skeptical of moral circle expansion as a cause area · 2022-07-14T22:24:49.315Z · EA · GW

The middle point is pretty interesting! (I think questions of sustainability/inevitability have mixed implications for aspiring effective altruists, because they might update your views on size/importance and tractability in opposite directions.)

Comment by Jamie_Harris on Megaprojects for animals · 2022-06-18T09:50:16.955Z · EA · GW

I enjoyed this post. And I appreciated some of the explanation in the intro. E.g. I can imagine this list being inspiring for donors (and hadn't thought about it like that before).

But is it much different from a list of (non-mega) project ideas?

E.g. see this comment:

"Rethink Priorities’ first incubated charity, Insect Welfare Project (provisional name) might be an example of launching something that eventually could absorb $100M when it finds an effective intervention and scales it. The Shrimp Welfare Project might be another example."

You could apply this logic to almost any animal charity that's trying to find interventions that are both cost-effective and scalable.

Once you adopt this perspective, the question could be switched from "which megaproject ideas can we think of?" to "how rapidly will we get diminishing returns on further investment in various plausibly cost-effective project ideas?"

Comment by Jamie_Harris on Emphasize Vegetarian Retention · 2022-06-11T20:36:04.979Z · EA · GW

This is cool. What makes these polls "a better collection" in your view?

Comment by Jamie_Harris on Animal welfare orgs: what are your software and data analysis needs? · 2022-06-10T08:31:29.255Z · EA · GW

I agree it's "weird" in the sense of "unusual", but if you mean "weird" in the sense of "unhelpful", I'd be very grateful if you're happy to elaborate on why you'd prefer it the other way around!

Comment by Jamie_Harris on "Big tent" effective altruism is very important (particularly right now) · 2022-06-04T16:00:48.434Z · EA · GW

I initially found myself nodding along with this post, but I then realised I didn't really understand what point you were trying to make. Here are some things I think you argue for:

  • Theoretically, EA could be either big tent or small tent.
  • To the extent there is a meaningful distinction, it seems better in general for EA to aim to be big tent.
  • Now is a particularly important time to aim for EA to be big tent.
  • Here are some things that we could do to help make EA more big tent.

Am I right in thinking these are the core arguments?

A more important concern of mine with this post is that I don't really see any evidence or arguments presented for any of these four things. I think your writing style is nice, but I'm not sure why (apart from something to do with social norms or deference) community builders should update their views in the directions you're advocating for?

Comment by Jamie_Harris on Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter · 2022-06-02T15:44:57.122Z · EA · GW

I have a similar sense. Very interesting post and food for thought.

But how would better aesthetics lead to positive impact? The mechanism I'm seeing is essentially "compliance" with views commonly held within the effective altruism community, or some other form of persuasion that doesn't require understanding or agreement. There are exceptions where this would be helpful, but I expect this sort of persuasion to be net negative for effective altruism overall. (Low confidence.) As well as the post JasperGeh links to, here's a recent one making some relevant points:

Additionally, when I tried the OP's exercise of closing my eyes and imagining aesthetics for liberalism, I couldn't think of any. I asked my friend (not involved in EA but very intelligent, well-read, politically involved) to do the same and they couldn't think of anything either. The movements/ideologies that do have strong aesthetics that jump to mind seem to rely heavily on compliance rather than truth seeking, e.g. religions, communism, fascism.

Comment by Jamie_Harris on Issues with centralised grantmaking · 2022-04-30T14:01:52.541Z · EA · GW

I agree that centralised grant-making might mean that some promising projects are missed. But we're not solely interested in this? We're overall interested in:

Average cost-effectiveness per $ granted * Number of $ we're able to grant

My intuition would be that the more decentralised the grant-making process, the more $ we're able to grant.

But this also requires us to invest more talent in grant-making, which means, in practice, fewer promising people applying for grants themselves, which might non-negligibly reduce average cost-effectiveness per $ granted.

Beyond the above consideration, it seems unclear whether decentralised grant-making would overall increase or decrease the average cost-effectiveness. Sure, fewer projects above the current average cost-effectiveness would slip through the net, but so too fewer projects below the current average cost-effectiveness would slip through the net. So I'd expect these things to roughly balance each other out UNLESS we're making a separate claim that the current grantmakers are making poor / miscalibrated decisions. But at that point, this is not an argument in favour of decentralising grant-making, but an argument in favour of replacing (or competing with) the current grantmakers.

So maybe overall, decentralising grant-making would trade an increase in $ we're able to grant for a small decrease in average cost-effectiveness of granted $.

(I felt pretty confused writing these comments and suspect I've missed many relevant considerations, but thought I'd flesh out and share my intuitive concerns with the central argument of this post, rather than just sit on them.)

Comment by Jamie_Harris on Shortening & enlightening dark ages as a sub-area of catastrophic risk reduction · 2022-04-02T09:09:49.885Z · EA · GW

This seems like a cool initial exploration of a potentially important area. Thank you + well done!

The device idea seems intuitively promising.

In a talk at EAGx Oxford, someone (I forget who -- maybe Anders Sandberg) mentioned that the Internet Archive is a tiny org without much investment. If that whole infrastructure could be backed up and stored in a way that would survive the loss of electricity/other major systems above ground, and its location were publicly known / easily accessible somehow, would that achieve the same purpose?

(I have no technical knowledge about these things and for all I know, this already exists. I'm just spitballing.)

Comment by Jamie_Harris on Bibliography of EA writings about fields and movements of interest to EA · 2022-03-26T08:18:28.681Z · EA · GW

Cool list! I wrote a few other social movement case studies not on this list when I was working at Sentience Institute.

(I think they're more relevant to the farmed animal movement than to the effective altruism community but if this was an intentional rather than accidental exclusion, I would be interested to hear reasons why SI's anti-slavery and Fair Trade case studies merit inclusion here but the others don't.)

Anti abortion movement:

Anti death penalty movement:

Prisoners' rights movement (less relevant, IMO):

Additionally, you highlighted the anti-nuclear movement as worthy of further study. The focus was on the proliferation of nuclear power, but J (another former SI researcher) wrote a cool case study report which includes some interesting info about the anti-nuclear movement:

Comment by Jamie_Harris on We Ran a "Next Steps" Retreat for Intro Fellows · 2022-03-12T08:49:38.464Z · EA · GW

Do you have plans for follow-up on this retreat? Actually, since I'm reading this a month after it was posted: have you done any follow-up with these people already?

(Context for the question: I'm running an in person retreat and think that the question of what follow-up to do afterwards / whether we can encourage longer-term engagement is one of my biggest uncertainties.)

Comment by Jamie_Harris on Which EA orgs provide feedback on test tasks? · 2022-02-12T11:29:37.565Z · EA · GW

Animal Advocacy Careers: in each of our hiring rounds so far (three), when we email people who were invited to submit test tasks or interviews but whose applications we're not moving forward with, we say that we can provide personalised feedback if they'd like.

A surprisingly low proportion of people seem to ask for it. (Maybe half?)

When we send out these emails, we also try to give a quick sense of the common strengths of our top candidates, plus the number of people who applied or made it to the stage they made it to, in case that context helps.

Comment by Jamie_Harris on Rowing and Steering the Effective Altruism Movement · 2022-01-11T23:48:24.542Z · EA · GW

<<I also know about a handful of people who have 'jumped ship'; who, after spending many hundreds of hours working on effective altruism causes, have concluded that they are too disillusioned with the ship's direction to stay on board and no longer wish to associate with the movement. These are not people who were never going to become highly engaged in the community in the first place. They are people who have contributed enormously to the effective altruism project and could have continued if they had been more confident in its ability to move in the right direction.>>

I'd love to hear more about this. What were these people's concerns? What sorts of things are they doing now that seem better? These questions seem relevant both to rowing and steering.

(For context, I don't know people like this. Maybe they know about important arguments I don't. Should I jump ship too?)

Comment by Jamie_Harris on Jamie_Harris's Shortform · 2021-12-29T10:21:25.187Z · EA · GW

I haven't read Moravec's book very thoroughly, but I ctrl+f'd for "simulation" and couldn't see anything very explicitly discussing the idea that we might be living in a simulation. There are a number of instances where Moravec talks about running very detailed simulations (and implying that these would be functionally similar to humans). It's possible (quite likely?) Bostrom didn't ever see the 1995 article where Moravec "shrugs and waves his hand as if the idea is too obvious." 

Either way, it seems true that (1) the idea itself predates Bostrom's discussion in his 2003 article, (2) Bostrom's discussion of this specific idea is more detailed than Moravec's.

Bostrom (2003) cited Moravec (1988), but not for this specific idea -- it's only for the idea that "One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain."

But yeah, his answer to the question "How did you come up with this?" in the 2008 article I linked to in the original post seems misleading, because he doesn't mention Moravec at all and implies that he came up with the idea himself.

Comment by Jamie_Harris on World's First Octopus Farm - Linkpost · 2021-12-26T21:22:53.993Z · EA · GW

Some of your thinking and estimates here seem reasonable and useful! I just want to pick up on one small subsection that surprised me:

"As in, say a country like Spain outlaws the practice of farming octopus (which in itself may be pretty unlikely), then I think a big multinational company like Nueva Pescanova (the company claiming to start the first commercial octopus farm) perhaps just goes to some other country they work in (and they are present in 20ish). "

Why did this surprise me / why are our intuitions different?

I think there might be some difference in optimism about the value of legislation.

I expect that preventative action is much more tractable than retrospective action to abolish an industry that has already been developed. E.g. see "It is probably easier to abolish a practice through legislation if that practice is not in regular use" here: . So challenging the first example of an unusually negative (and/or unusually unpopular) practice seems especially important. If fighting this specific company encourages a battle that spirals across several countries and results in legislation in several places, but fails to stop this specific company farming octopus somewhere, then I imagine I would consider that a major victory.

Relatedly, I have in my head a model where anti-octopus-farming legislation in one country makes anti-octopus-farming legislation in another country more likely. E.g. see "Once influential international bodies adopt a value, they may exert pressure on institutions in other parts of the world to adopt the same value" at the same link as above.

Alternatively, maybe it's because you're focusing on helping the octopuses in question in this specific farm. Whereas my concern is not: "(how) can we prevent Nueva Pescanova farming and selling 3,000 tons of farmed octopus per year," but "(how) can we prevent (octopus) farming?"

Comment by Jamie_Harris on EA outreach to high school competitors · 2021-12-26T18:07:04.865Z · EA · GW

Glad to see discussion and suggestions for ways to reach out to people currently still at school! Thanks for the contribution.

"I want to discuss a possible modification to the strategy of high school outreach - specifically targeting high-level STEM (+logic, philosophy, and debate) competitors. It seems that this narrowing down would select for people who would be more likely to act on EA ideas."

My sense is that this slightly misrepresents the current landscape. I think that, when it comes to school outreach, there are many possible combinations of the following variables:

(1) Age of target audience, e.g. 11-15 years old, 16-18 years old, 18 years old only, etc.
(2) Outreach methods and proxies that you use as indicators of promisingness, e.g. high-performing schools, olympiad participants, performance on an application process, recommendation from teachers, etc.
(3) Format, e.g. written online advice, an online course, a summer camp, an after-school club, integrated into assemblies, etc.
(4) Focus cause/intervention area, e.g. all of EA, longtermism, extinction risk reduction, AI safety, rationality, etc.
(5) "Ask" and key metrics you use, e.g. changing degree programmes, signing up for a newsletter, reading X resources, joining an EA group once they reach university, etc.
(6) Marketing strategy, e.g. career benefits, help you land a place at uni, impactful in itself, intrinsically interesting, etc.

I think that very few of the possible permutations have been tried. So your post proposes something specific within the second variable category I offered. That seems good, and I'd be keen to see more exploration. But I don't think that there's a very extensive current "strategy of high school outreach." Given that the EA movement currently has quite a lot of funding and there are a decent number of people interested in EA movement building, I think the focus should be more on adding to the current portfolio of efforts than redirecting it.

It's possible we already agree here and I was just reading too much into your exact phrasing.

One even more nitpicky comment:

"It seems that university outreach is more effective than high school outreach according to current metrics, and that one of the main factors making high school outreach ineffective is a lack of selection."

I think I've read all the posts in the Forum's "effective altruism outreach in schools" tag, and neither of the two clauses in this summary sentence fitted well with my memory/impression. I'd be interested in elaboration, clarification, or supporting links/evidence if you're happy to provide it!

Thanks for your engagement with this important topic.

Comment by Jamie_Harris on Biosecurity needs engineers and materials scientists · 2021-12-26T17:12:39.659Z · EA · GW

An interesting post! It seems like the post is doing several things:

(1) Positing some potential problems and gaps in current efforts in biosecurity.
(2) Suggesting some possible steps that could be taken to address them.
(3) Suggesting or arguing that engineers and materials scientists would be well-placed to undertake or contribute to these steps.

Your comments on all three seem plausible to me (a non-expert). But you seem to provide more links and evidence for (2) than (1) or (3).

Since (2) and (3) are dependent upon (1) being correct, I'd be interested in what sorts of evidence you have for it. E.g. what has led you to make the following claims?

  • The whole of "the problem" section, especially "Unfortunately, people with these backgrounds are currently severely lacking in biosecurity."
  • "PPE that was highly effective, easy to use, and cheap to distribute... is currently laughably neglected."
  • "relatively little time and money have gone into either implementing these technologies or identifying promising alternatives."

Regarding (3), I have similar but lower priority questions. That case seems more intuitive to me.

Comment by Jamie_Harris on My Overview of the AI Alignment Landscape: A Bird’s Eye View · 2021-12-26T12:43:57.914Z · EA · GW

This seems very useful to me. I've read books by Russell, Christian, and Bostrom, plus a load of other misc EA content (EA Forum, EAG, 80k, etc.) about AI alignment, but wouldn't have been able to distinguish these separate strands. So for me at least, this seems like very helpful de-confusion.

A couple of questions, if you've got time:

(1) In your ~30 conversations with and feedback from others, did you get much of a sense that others disagreed with your general categorisations here? That is, I'm sure that there are various ways that one could conceptually carve up the space, but did you get much feedback suggesting that yours might be wrong in some substantial way? I'm trying to get a sense of whether this post represents a reasonable but controversial interpretation of the landscape or whether it would be widely accepted.

(2) You helpfully list some existing resources for each approach. Do you have a sense of roughly how resources (e.g. number of researchers / research hours; philanthropic $s) are currently divided between these different approaches?

(3) (I'd also be interested in how you or others would see the ideal distribution of resources, but I infer from your post that there might be a lot of disagreement about that.)

Comment by Jamie_Harris on Aiming for the minimum of self-care is dangerous · 2021-12-15T23:24:27.713Z · EA · GW

I agree with some of the key ideas of your post, such as that working more than what is sustainable would be counterproductive. I also think that this is a message that some people need to be reminded of.

However, regarding your stated goal:

"My goal in this post is to convince you that trying to spend as little time as possible on fun socializing, frivolous hobbies, or other leisure is a dangerous impulse."

As I was reading the post, I thought that the lesson from some of the ideas and examples you give is not that you shouldn't aim to minimize these leisure activities, but rather that you should build in some leeway in case you have overestimated the level of work you can sustain and underestimated the amount of leisure you need.

We sometimes talk about making sure you have enough "financial runway", where being over a certain threshold enables you to take a much lower-paying role than you might ordinarily take (if you weren't motivated to maximise your impact).

Maybe there's some comparable metaphor, something like "motivational runway", where being over a certain threshold enables you to work much longer/harder than you ordinarily would (if you weren't motivated to maximise your impact).

Comment by Jamie_Harris on Aiming for the minimum of self-care is dangerous · 2021-12-15T23:10:07.896Z · EA · GW

I also have the impression that some of the most productive people I know (within the EA community specifically) work very long hours.

Comment by Jamie_Harris on Retrospective on the Summer 2021 AGI Safety Fundamentals · 2021-12-15T22:08:53.056Z · EA · GW

I'm very grateful you wrote up this detailed report, complete with various numbers and with lists of uncertainties.

As you know, I've been thinking through some partly overlapping uncertainties for some programmes I'm involved in, so feel free to reach out if you want to talk anything over.

Comment by Jamie_Harris on Coaching: An under-appreciated strategy among effective altruists · 2021-12-14T22:44:03.159Z · EA · GW

No major change in my thinking, I don't think. I reached out to Lynette and it didn't seem like a great fit. I've also reached out to another coaching service mentioned on the Forum and have an introductory call soon. But I haven't been pursuing this very proactively, and haven't actually had any coaching yet, so nothing has happened that would lead me to change my views much.

Comment by Jamie_Harris on Creating Individual Connections via the Forum · 2021-12-08T23:05:46.713Z · EA · GW

Somewhat related to the above comments so I'm putting it here:

I wonder if there's a way to automatically show users' interest in specific areas via the tagging system, e.g. by showing how many (or how much karma) they have for (1) posts with particular tags, (2) comments on posts with particular tags.

This would mean people don't have to manually update some sort of interests list.

Downsides include that

  • Maybe it would make people more self-conscious about where they comment or post, in case it affects their stats and forum identity.
  • I would guess that lots of features like this sound good in theory, but then few people end up using them.

Comment by Jamie_Harris on A case for the effectiveness of protest · 2021-12-07T23:54:44.393Z · EA · GW

I think I agree with every point Rose made here.

I'll also emphasise though that I think that the post has lots of (1) cool ideas and possibilities worth digging into and (2) snippets of useful empirical evidence.

Comment by Jamie_Harris on Summary of history (empowerment and well-being lens) · 2021-12-04T13:04:12.796Z · EA · GW

History through this lens seems very different from the history presented in e.g. textbooks. For example: Many wars and power struggles barely matter. My summary doesn't so much as mention Charlemagne or William of Orange, and the fall of the Roman Empire doesn't seem like a clearly watershed event.

I think this is a useful way of thinking about history: what are the key outcomes within particular domains, and when did they happen? I think that doing this ends up highlighting certain outcomes as being especially important, and leads to some surprising reflections on how things developed at different times and in different places. You've highlighted a lot of cool stuff in this post, and I like the summary table a lot.

But I think that the approach tends to strip out much sense of causation, especially but not solely indirect causes of important outcomes.

For instance, it seems unclear to me what the counterfactual impact of various Enlightenment ideas (highlighted in bold green on your timeline) would have been were it not for the French Revolution and Napoleonic Wars (included in the "other" category), which, I am under the impression, did quite a lot to forcibly spread Enlightenment ideals around Europe. Perhaps, under less favourable socio-political conditions, some of the thinkers and ideals we now think of as very important and influential would never have attained the reach that they did, and would have ended up more as curious asides along the lines of...

Porphyry, the Greek vegetarian. Did you know that there was an ancient Greek who was (according to Wikipedia) "an advocate of vegetarianism on spiritual and ethical grounds ... He wrote the On Abstinence from Animal Food (Περὶ ἀποχῆς ἐμψύχων; De Abstinentia ab Esu Animalium), advocating against the consumption of animals, and he is cited with approval in vegetarian literature up to the present day."

Political events, changes in social and economic organisation, etc can all affect the domains and outcomes you/we care about, sometimes in difficult to perceive ways.

As another example, among the various social movements focused on some form of moral circle expansion that have been driven by allies rather than the intended beneficiaries themselves, many seem to have picked up substantially (in terms of resources and attention dedicated to them, if not also success) from the ~1960s onwards:

  • The environmental movement
  • The anti-death penalty movement
  • The anti-abortion movement
  • The children's rights movement
  • The animal rights movement
  • The Fair Trade movement (and probably other efforts to help people in the Global South)

I haven't fully got my head around what's going on there, but I suspect there are some important underlying social/economic forces that don't get picked up by studying each "Moral progress and human/civil rights" outcome as a discrete category.


(Some overlap with SammyDMartin's point, but they phrased it in terms of "ideas on empowerment and well-being down the line", whereas my point is about causation more broadly.)

Comment by Jamie_Harris on Jamie_Harris's Shortform · 2021-12-01T14:38:44.919Z · EA · GW

Oh, nice, thanks very much for sharing that. I've cited Moravec in the same research report that led me to the Bostrom link I just shared, but hadn't seen that article and didn't read Mind Children fully enough to catch that particular idea.

Comment by Jamie_Harris on TobiasH's Shortform · 2021-11-29T10:55:11.043Z · EA · GW

Yep, you can. 

(I thought you could do it on the unpaid version too, but I just checked and can't see it. I specifically remember being able to use specific search filters restricted to people within certain groups when I had Recruiter Lite, though.)

Comment by Jamie_Harris on Jamie_Harris's Shortform · 2021-11-29T10:51:35.092Z · EA · GW

How did Nick Bostrom come up with the "Simulation argument"*? 

Below is an answer Bostrom gave in 2008. (Though note, Pablo shares a comment below that Bostrom might be misremembering this, and he may have taken the idea from Hans Moravec.)

"In my doctoral work, I had studied so-called self-locating beliefs and developed the first mathematical theory of observation selection effects, which affects such beliefs. I had also for many years been thinking a lot about future technological capabilities and their possible impacts on humanity. If one combines these two areas – observation selection theory and the study of future technological capacities – then the simulation argument is only one small inferential step away.

Before the idea was developed in its final form, I had for a couple of years been running a rudimentary version of it past colleagues at coffee breaks during conferences. Typically, the response would be “yeah, that is kind of interesting” and then the conversation would drift to other topics without anything having been resolved.

I was on my way to the gym one evening and was again pondering the argument when it dawned on me that it was more than just coffee-break material and that it could be developed in a more rigorous form. By the time I had finished the physical workout, I had also worked out the essential structure of the argument (which is actually very simple). I went to my office and wrote it up.

(Are there any lessons in this? That new ideas often spring from the combining of two different areas or cognitive structures, which one has previously mastered at sufficiently a deep level, is a commonplace. But an additional possible moral, which may not be as widely appreciated, is that even when we do vaguely realize something, the breakthrough often eludes us because we fail to take the idea seriously enough.)"


Context for this post:

  • I'm doing some research on "A History of Robot Rights Research," which includes digging into some early transhumanist / proto-EA type content. I stumbled across this.
  • I tend to think of researchers as contributing either through being detail-oriented -- digging into sources or generating new empirical data -- or through being really inventive and creative. I definitely fall into the former camp, and am often amazed/confused by how people in the latter camp do what they do. Having found this example, it seemed worth sharing quickly.


*Definition of the simulation argument: "The simulation argument was set forth in a paper published in 2003. A draft of that paper had previously been circulated for a couple of years. The argument shows that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. The argument has attracted a considerable amount of attention, among scientists and philosophers as well as in the media."

Comment by Jamie_Harris on Risks from Atomically Precise Manufacturing · 2021-11-29T09:29:32.674Z · EA · GW

That's interesting. As far as I can tell, Eric Drexler was basically the person who kicked off interest + concern about this tech in the 1980s onwards.* His publications on the topic have accrued tens of thousands of citations. But Drexler's work at FHI now focuses on AI.

(I came to this year-old post because some of the early transhumanist / proto-EA content, e.g. by Bostrom and Kurzweil, seems to mention nanotech very prominently, sometimes preceding discussion of superintelligent AI, and I wanted to see if any aspiring EAs were still talking about it.)


*General impression from some of the transhumanist stuff I've been reading. The Wikipedia page on nanotechnology says:

The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications. The emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter.

Comment by Jamie_Harris on What would you do if you had half a million dollars? · 2021-11-26T08:51:09.027Z · EA · GW

Would you mind linking some posts or articles assessing the expected value of the long-term future?

The most direct (positive) answer to this question I remember reading is here.

Toby Ord discusses it briefly in chapter 2 of The Precipice

Some brief podcast discussion here.

I suspect that many of the writings by people associated with the Future of Humanity Institute address this in some form or other. One reading of anything and everything by transhumanists / Humanity+ people (Bostrom included) is that the value of the future seems pretty likely to be positive. Similarly, I expect that many of the interviewees other than Christiano on the 80k (and Future of Life?) podcast express this view in some form and defend it at least a little bit, but I don't remember specific references other than the Christiano one.

And there's suffering-focused stuff too, but it seemed like you were looking for arguments pointing in the opposite direction.

Comment by Jamie_Harris on Effective strategies for changing public opinion: A literature review · 2021-11-19T07:48:37.358Z · EA · GW

Yeah, I'm also a little confused about why they're using r, without having dug back into it in detail. But if I read it correctly, their correlation coefficient somehow pools together pretty weak proxies for behaviour ("attitude toward the product, attitude toward the brand," potentially also "purchase intention") with actual behaviour ("product choice").

I definitely don't think that we should pay too much attention to the findings of that particular meta-analysis when thinking about how to change attitudes or behaviour in the context of the farmed animal movement or other EA-adjacent cause areas. But it is still weakly relevant evidence and it would have been disingenuous of me not to include it, I think. (My prior was that using humour and sex appeals are both usually pretty bad ideas for serious social movements, especially the latter.)

Comment by Jamie_Harris on How to decide which productivity coach to try? · 2021-11-18T20:03:27.051Z · EA · GW

<<I lack survey design and data evaluation skills. I’d be interested in talking to people who have those skills and would be excited about applying them in a coaching context.>>

I don't have formal/academic experience in this, but I have some experience thinking about it in an applied M&E (monitoring and evaluation) sense for EA meta work. Feel free to message or email me if you'd like feedback on drafts or would like to discuss sometime!

Comment by Jamie_Harris on Persistence - A critical review [ABRIDGED] · 2021-11-17T22:26:50.188Z · EA · GW

I think it's really cool that you did this. It's been on my to-do list to look into some persistence studies, but I've not got round to it, and this seems like a really helpful analysis.

How did you select the papers that you chose to review? E.g. was it due to their focus, their methodology, how well-cited they were, something else, or nothing in particular? (For context, I have no sense of how many papers use roughly similar methodology, so for all I know this could be all of them! I skimmed the preprint and didn't see a mention of this, but could have just missed it.)

Comment by Jamie_Harris on Business Coaching/Mentoring For EA Organisations · 2021-11-17T15:07:07.916Z · EA · GW

A number of effective animal advocacy nonprofits have listed "Pro bono management or leadership individual coaching/mentoring" as one of their top 3 most urgent needs in terms of pro bono support from skilled professionals. See the "leadership or senior management" section of our (Animal Advocacy Careers') skilled volunteering board.

(5 out of about 20 participating orgs selected it as one of their top 3 priorities out of the more than 100 options they were given, and 5 others selected it as important and useful but not one of their top 3 priorities.)

So it's fantastic that you're offering this service! Please do use the above linked skilled volunteering board and the instructions there to connect with orgs if you're still looking for additional partners to work with. Thank you!

Comment by Jamie_Harris on EA movement building: Should you run an experiment? · 2021-10-06T15:20:24.856Z · EA · GW

Thanks Peter! 

We're actually planning to do some online ads around the re-launch of the course, and literally just received our 501(c)(3) status, so will have some Google Ad Grant money available soon. But I assume this is all too short notice to be put into effect before the launch of the course next week :P

Something to bear in mind for later cohorts though, perhaps!

Comment by Jamie_Harris on Evidence from two studies of EA careers advice interventions · 2021-10-01T16:13:57.477Z · EA · GW

Thanks Peter!

"I'd like to see a more rigorous study exploring how these interventions affect career choice."

I'd love to know more detail, if you're happy to share.

"However, I am not aware of any research on this"

Likewise. I did do some digging for this; see the intro of the full paper for the vaguely relevant research I did find.

Comment by Jamie_Harris on Evidence from two studies of EA careers advice interventions · 2021-10-01T16:10:51.744Z · EA · GW

Thanks David! And thanks again for all your help. I agree with lots of this, e.g. differential attrition being a substantial problem and follow-ups being very desirable. More on some of that in the next forum post that I'll share next week.

(Oh, and thanks for recording!)