Posts

The EA Behavioral Science Newsletter #6 (September 2022) 2022-09-03T06:06:23.666Z
Research Summary: What works to promote charitable donations: A meta-review with meta-meta-analysis 2022-06-24T01:33:02.003Z
The EA Behavioral Science Newsletter #5 (June 2022) 2022-06-01T02:47:25.916Z
Creating a newsletter: a very quick guide with templates 2022-05-26T23:58:19.388Z
Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? 2022-04-11T03:49:02.270Z
The EA Behavioral Science Newsletter #4 2022-03-11T08:16:22.010Z
Please complete a survey to influence EU animal protection policies 2022-01-07T04:09:47.914Z
A spreadsheet/template for doing an annual review 2021-12-22T23:26:53.466Z
EA Behavioral Science Newsletter #3 released 2021-12-01T01:02:36.158Z
Announcing and seeking feedback on the READI philanthropy database project 2021-10-06T03:32:40.531Z
Announcing the EA Behavioral Science Newsletter 2021-09-01T03:56:55.677Z
Research summary: A Meta-Review of Interventions that Influence Animal-Product Consumption 2021-08-13T06:46:43.608Z
A spreadsheet with titles and links for over 1900 YouTube EA videos 2021-07-26T02:05:00.418Z
Review: What works to promote charitable donations? 2020-08-24T01:42:05.967Z
Opportunity to support a Covid19 related survey collaboration 2020-03-28T01:02:15.612Z
Requesting support for the StandAgainstCorona pledge 2020-03-17T00:47:40.298Z

Comments

Comment by PeterSlattery (Peterslattery) on High-Impact Psychology (HIPsy): Piloting a Global Network · 2022-09-29T22:26:33.686Z · EA · GW

Thanks for your work everyone! I am excited to see this develop!

Comment by PeterSlattery (Peterslattery) on The EA Behavioral Science Newsletter #6 (September 2022) · 2022-09-25T18:58:20.227Z · EA · GW

Thanks!

Comment by PeterSlattery (Peterslattery) on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T18:49:32.138Z · EA · GW

Hey Constance! Thank you for writing this. I am sorry to hear that this has been stressful. I have had several similar experiences where I felt that I was rejected or treated poorly by CEA or EA funders. Sometimes it really upset me and reduced my motivation for a period.

However, I also believe that the people at such orgs are generally very competent and good-natured, and that there are things they account for which I don't know or consider. They have to do difficult work when filtering applications, and they leave themselves open to public criticism that is hard to defend against. I know that I would find that last part very difficult.

Overall, I feel that mistakes are inevitable in these sorts of application processes - especially in assessments of more unusual or novel people/projects. However, I also feel that if you work hard and keep having good impacts, you will usually get the deserved resources and opportunities.

For what it is worth, I scanned your application, and I admire and appreciate your work. I hope you persevere, and that I see you at a future conference!

Comment by PeterSlattery (Peterslattery) on Quantified Intuitions: An epistemics training website including a new EA-themed calibration app · 2022-09-21T13:59:52.089Z · EA · GW

Thanks for this! I am excited to try it!

Comment by PeterSlattery (Peterslattery) on Announcing EA Pulse, large monthly US surveys on EA · 2022-09-21T13:49:17.346Z · EA · GW

Some quick thoughts:

Thanks for all your work on this, it's really great to see it finally happening! I would love it if the survey could identify and compare 'social movement' subgroups such as EA, social justice, socialism, animal welfare, etc. These could be assessed in terms of activism/participation in the subgroups and/or awareness of and attitudes towards them.

This would be helpful in several ways. As an example, I think that it will be very helpful to better understand the relative differences in values and receptiveness to messages etc that exist between such groups and how this changes over time.

It could be interesting to explore how attitudes within such groups change when new books and articles are widely publicized, etc.

From a movement building and impact perspective, it seems important to really understand our adjacent social movements. Where are the overlaps and disconnects in shared values? What are each group's major gripes/misconceptions, etc.?

I'd welcome any attempt to eventually grow this service to the point where it allows EA orgs and researchers to easily and affordably survey large samples of key audiences (e.g., AI professionals, policymakers, etc.). I think that the absence of this is an upstream barrier to lots of important research and message testing.

Comment by PeterSlattery (Peterslattery) on Cause Exploration Prizes: Announcing our prizes · 2022-09-10T06:05:10.799Z · EA · GW

Thank you for working on this, and congratulations to all the winners! Just wanted to mention that I think that it could be good to have a running competition for suggesting new cause areas on the forum, with an annual awards process. Suggesting new causes seems like a valuable activity to prompt and incentivise. 

Comment by PeterSlattery (Peterslattery) on The EA Behavioral Science Newsletter #6 (September 2022) · 2022-09-04T06:49:53.899Z · EA · GW

Yeah, it's really great to see all this progress!

Comment by PeterSlattery (Peterslattery) on Psychological Obstacles to Doing Good (Better) · 2022-08-24T00:34:27.516Z · EA · GW

Nice work, this is really cool!

Comment by PeterSlattery (Peterslattery) on Resources I send to AI researchers about AI safety · 2022-07-27T15:48:03.423Z · EA · GW

Thanks for taking the time to write this up, Vael. It's going to be very useful for me for learning about, and sharing information about, AI safety in the future. 

Comment by PeterSlattery (Peterslattery) on Wanting to dye my hair a little more because Buck dyes his hair · 2022-07-24T02:34:37.943Z · EA · GW

I agree with you that we should reward impact more, and I like your suggestions. I think that having better incentives for searching for and praising/rewarding 'doers' is one model to consider. I can imagine a person at CEA being responsible for noticing people who are having underreported impact, offering them conditional grants (e.g., financial support to transition to more study/full-time work), and providing them with recognition by posting about and praising their work on the forum.

Comment by PeterSlattery (Peterslattery) on How EA is perceived is crucial to its future trajectory · 2022-07-24T00:50:02.806Z · EA · GW

Thank you for this great write-up. I completely agree with nearly everything that you have said. I'd love to see more of the recent work from Rethink and Lucius examining public awareness of and receptivity to EA. I'd also like to see more audience research to understand which audiences are more or less receptive to EA, why, where they hear about us, what they think we do, etc. Ready Research are also exploring opportunities in this context.

Comment by PeterSlattery (Peterslattery) on Social scientists interested in AI safety should consider doing direct technical AI safety research, (possibly meta-research), or governance, support roles, or community building instead · 2022-07-21T00:36:37.741Z · EA · GW

Thanks for this, Vael! As I said previously, here are some areas of agreement and potential disagreement.

Agreement

I generally agree that most people with social science PhDs should look outside of AI safety research, and I like your suggestions for where to look.

I think that the top 10% or so of social science researchers should probably try to do AI safety related research, particularly people who thrive in academic settings or lack movement building skills.

Overall, I'd encourage anyone who was equally good at AI safety SS research and AI safety movement building to choose the latter option. AI safety movement building feels higher in expected impact to me than AI safety SS research, because I think that movement building is the highest impact 'instrumental' cause area, at least until vastly more people know about, understand, and work on the key concepts, arguments, and needs of EA.

Potential disagreement (shared as a total non-expert, to be clear)

My intuition is that AI safety research is still relatively undersupplied with social science researchers compared to the ideal. I think the area could, and ideally should, absorb a lot of social science researchers over the next 30 years if funding and interest scale as I expect. Maybe 5,000+, if I consider all the organisations and geographies involved. Ideally, as many as possible of these people would be EA aware and aligned.

What might this look like? For instance, in academia, I see work to i) understand if/where creating evidence and interventions might be useful (e.g., interviewing/surveying technical researchers, policymakers, and organisational leaders, and/or mapping their key behaviours to influences), and ii) prioritise what is important (e.g., ranking the malleability of different interventions/behaviours), and then do the related research. I also foresee a range of theoretical work around how we can port over concepts and theory from areas such as communication, psychology, and sociology to describe, understand, and optimise how humans and machines interact. I also expect that there will be a lot of value in coordination to support that work. I expect to see a global distribution of research labs, researchers, and projects.

In government and private settings, I foresee social science researchers hired to do more 'context bound' work with clear connections to immediate policy decisions: government-embedded research teams who need to understand technical, political, and social factors in order to craft different types of effective national policy; organisationally embedded teams engaged to create organisational policies that get value from internal AI, or from engagement with other organisations' platforms; and lots of work in the military and defence sector.

In support of these areas, I see a lot of people with social science backgrounds being useful as knowledge brokers who effectively translate and communicate ideas between researchers and different types of practitioners (e.g., as marketers, community builders, user researchers, or educators). They could also provide various research support structures: training and curating potential students and research assistants, setting up support infrastructure like panels of potential technical/policy research participants, starting/organising conferences and journals, etc.

I also suspect that there's going to be a lot needed that I have left out.

Overall, from a behavioural science research perspective (e.g., who needs to do what differently/what is the most important behaviour here/how do we ensure that behaviour happens), there are a lot of different behaviours, audiences, contexts, and interactions, and little to no understanding of the key behaviours, actors, or ideal interventions. If this is life-on-earth-threatening-in-the-near-future stuff, then there is a lot of work to be done across a huge range of areas!

Not sure if this is an actual disagreement, so let me know. It’s useful for me to write up and share regardless, as it underpins some of my movement building plans. Feedback is welcome.

Comment by PeterSlattery (Peterslattery) on Reducing nightmares as a cause area · 2022-07-19T05:51:39.453Z · EA · GW

Thank you for joining the forum to share this.  I am sorry to hear about your nightmares. I also suffer from terrible nightmares at times. It's very dispiriting. I hope that someone gets around to working on this eventually, even if it never makes the cut as a top cause area.

Comment by PeterSlattery (Peterslattery) on Estimating the cost-effectiveness of scientific research · 2022-07-17T09:33:22.314Z · EA · GW

Thanks for this work, Falk! I am excited to test this model when I have time and to see further related developments. 

At the moment, I lack a clear sense of how this model is useful in practice. I'd like to see the model applied to justify a new project, or evaluate the returns on a previous one.

BTW, I discussed whether we could use value of information for research funding/evaluation with Sam Nolan just last week. I encouraged him to speak with you about this work. It might be worth reaching out if he hasn't already.

Comment by PeterSlattery (Peterslattery) on Criticism of EA Criticism Contest · 2022-07-17T09:25:14.134Z · EA · GW

Agree. I'd like to see you attack the assumptions in more detail! You already changed my mind a little, even with the limited arguments you made.

Comment by PeterSlattery (Peterslattery) on EA, Psychology & AI Safety Research · 2022-07-05T00:41:06.008Z · EA · GW

Thanks for this, very helpful!

Comment by PeterSlattery (Peterslattery) on Product Managers: the EA Forum Needs You · 2022-06-24T05:54:11.319Z · EA · GW

Great work with the growth! I like the new ideas, and I am excited to see more improvements on the forum.  Good luck with the hire.

Comment by PeterSlattery (Peterslattery) on User-Friendly Intro Post · 2022-06-24T05:41:40.845Z · EA · GW

I am curious why I got a downvote. Please share why if you are comfortable. You can leave (anonymous) feedback here: https://forms.gle/c2N8PvNZfTPtUEom7

Comment by PeterSlattery (Peterslattery) on Why EAs should normalize using Glassdoor · 2022-06-24T05:40:20.260Z · EA · GW

Thanks for sharing! This strikes me as a good idea. 

I am considering applying to a few organisations. I hadn't realised that there is a better way to get information about working at them than by talking to former or current employees. 

Reading reviews would be much more efficient. I wonder if a link directory of all reviews of working at EA organisations would be a worthwhile small project for someone - maybe it could live in the wiki.

It might also be worth listing people at each organisation who are happy to chat about their role (related to Holly's point below).

Comment by PeterSlattery (Peterslattery) on User-Friendly Intro Post · 2022-06-24T01:27:54.418Z · EA · GW

Great to see.  I am excited to see more projects/organisations which provide support to EA organisations. It's an important and neglected ecosystem. I'd also like to see an incubator set up so that it's easier to get things like this off the ground.

Comment by PeterSlattery (Peterslattery) on Vael Gates: Risks from Advanced AI (June 2022) · 2022-06-14T02:21:53.377Z · EA · GW

Thanks Vael! I will share with some people.

Comment by PeterSlattery (Peterslattery) on How effective is your Altruism? · 2022-06-06T00:37:19.479Z · EA · GW

Thanks for this, Mitra.  I only have time for a quick response, sorry.

Sorry to hear that you have had some bad initial experiences with EAs. It's a pretty big and diverse community, so please don't assume that what a few people think is widely representative of what everyone else thinks.

There is general agreement that we're trying to answer the question: How can we do the most good? There is very little agreement about how best to do that - some people are more action orientated, while others prefer to be more cautious. Those who act, or write about EA, also disagree a lot. You can check out the forum to see ample evidence. It is one of the things that attracts me to the movement, actually!

Note that we are running a contest for criticism and someone has entered your submission. You have done a lot of impressive stuff, and I am definitely invested in hearing your opinions based on that experience.

If I understand you, you are making the point that an obsession with "measurement" can rule out innovative and scalable solutions to problems by overburdening them. I tend to agree that this can happen, and I think that most EAs would agree. However, more than most, EAs tend to value measurement as a way to reduce the risk that our intuitions lead us astray and to ensure that we are allocating resources efficiently. Individual EAs' perspectives on the right amount of measurement effort will vary greatly depending on the specific case.

I recommend that you check the main EA website if you would like to get a better sense of what EA is about - you can also get sent some free books if you'd prefer. There has been a lot of thought behind some perspectives that may initially seem wrong/confusing (or at least seemed wrong/confusing to me when I first encountered them).

Comment by PeterSlattery (Peterslattery) on Creating a newsletter: a very quick guide with templates · 2022-05-31T06:36:38.208Z · EA · GW

Thanks for sharing, Alex!

Comment by PeterSlattery (Peterslattery) on Introducing EAecon: Community-Building Project · 2022-05-31T06:28:47.340Z · EA · GW

Agree that a newsletter would be good - see my recent post related to that. I would also be really interested to see more topic-based communities emerge. I think it makes sense to organise communities by cause, specialisation, and locality (with more granularity at scale), so I'm keen to see more of this. Thanks for setting it up!

Comment by PeterSlattery (Peterslattery) on A spreadsheet with titles and links for over 1900 YouTube EA videos · 2022-05-27T00:34:12.228Z · EA · GW

Thanks!

Comment by PeterSlattery (Peterslattery) on How Could AI Governance Go Wrong? · 2022-05-27T00:24:56.752Z · EA · GW

Thanks for taking the time to share this, Hayden. It was very useful.

To what extent do behavioural science and systems thinking/change matter for AI governance?

To give you my view: I think that nearly all outcomes that EA cares about are mediated by individual and group behaviours and decisions - who thinks what and does what (e.g., with respect to careers, donations, and advocacy). All of this occurs in a broader context of social norms and laws.

Based on all this, I think that it is important to understand what people think and do, why they think and do it, and how to change that - and also to understand how contextual factors such as social norms and laws affect what people think and do, and how those factors can be changed.

I notice related work in areas such as climate change, and I project that similar work will be needed in AI governance. However, I don't know the extent to which people working on AI governance share that view, or what work, if any, has been done. I'd be interested to hear any thoughts that you have time to share.

Also, I'd really appreciate it if you could suggest any good literature or people to engage with.

Comment by PeterSlattery (Peterslattery) on Release of Existential Risk Research database · 2022-05-26T23:59:22.828Z · EA · GW

I have written something up here. Thanks for your patience! 

Comment by PeterSlattery (Peterslattery) on Introducing Asterisk · 2022-05-26T23:42:15.188Z · EA · GW

Excellent! I am really glad to see this.

Comment by PeterSlattery (Peterslattery) on What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) · 2022-05-11T05:50:02.827Z · EA · GW

Thanks for replying. 

When I say I'd prefer maybe 10x as much research at 0.1x the quality, I don't mean I want to miss out on quality overall. Instead, I'd like more small-scale, incremental, and iterative research, where the rigour and the length increase in proportion to the expected ROI. For instance, this could involve a range of small studies that increase in quality as they show evidence, followed by a rigorous review and replication process.

I also think that the reason for a lot of the current research vomit is that we don't let people publish short and simple articles. I think that if you took most articles and pulled out their method, results and conclusion, you would give the reader about 95% of the value of the article in maybe 1/10th the space/words of the full article. 

If a researcher just had to write these sections and a wrapper rather than plan and coordinate a whole document, they might produce and disseminate their insights in 2-5% of the time that it currently takes. 

Comment by PeterSlattery (Peterslattery) on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-11T02:07:02.241Z · EA · GW

Thanks for sharing, this was helpful. As a meta point, it would be great to have a way to help crowdfund some rigorous work (e.g., reviews, adversarial debates, etc.) exploring the arguments for and against EAs having children. I am increasingly on the fence.

The change in my views is largely driven by emerging beliefs i) that people having EA traits (e.g., being logical, compassionate, and impartial) is perhaps most strongly linked to their genetics (see behavioural genetics, etc.) and ii) that funding EAs to have kids is going to be very hard to do at scale (for various coordination and PR reasons). I'd love someone to explore those and all the other arguments in more detail - ideally a few people, some who want/have kids and some who don't.

Comment by PeterSlattery (Peterslattery) on EA Tours of Service · 2022-05-11T01:52:56.885Z · EA · GW

Thanks for sharing. This is quite compelling.

Comment by PeterSlattery (Peterslattery) on Using TikTok to indoctrinate the masses to EA · 2022-05-11T00:05:16.078Z · EA · GW

Not sure about the post title: I assume it's humour, but we don't want anyone to get the wrong impression. With that said, I just watched the first video, and I really liked it! It's a very engaging way of making an important point. Well done!

I see a lot of unrealised value from disseminating key concepts and insights from within the EA movement without actually referencing the movement, so I would generally encourage this sort of thing if it is done carefully.

Comment by PeterSlattery (Peterslattery) on 2021 EA Mental Health Survey Results · 2022-05-03T00:44:22.654Z · EA · GW

Thanks for this! If you do this again, I'd love to see a comparison of mental health issues/experiences between a random sample of EAs, members of the public and members of other social movements.

Comment by PeterSlattery (Peterslattery) on What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) · 2022-04-29T06:20:25.398Z · EA · GW

See my responses marked >PS> below.

I think that inviting submissions of research in preprints or written up in EA forum posts is a good idea.

Definitely the former, but which ones? 

>PS> Yeah, the only easy options I can suggest now are to consider some of the items in the BS newsletter.

As to EA forum posts, I guess they mainly (with some exceptions) don't have the sort of rigor that would permit the kind of review we want... And that would help unjournal evaluations become a replacement for academic journal publications?

>PS> This is probably a bigger discussion, but this makes me realise that one difference between us is that I probably want the unjournal (and social science in general) to accept a lower level of rigor than most journals (perhaps somewhere between a very detailed forum/blog post and a short journal or conference article).

One reason is that I personally think that most social science journal articles sacrifice too much speed for better quality, given heterogeneity etc. I'd prefer maybe 10x as much research at 0.1x the quality. To be clear, I am keen on keeping the key parts (e.g., a good method and explanation of theory and findings) but not so much of the fluff (e.g., summarising much prior or future potential research).

A second is that I expect a lot more submissions near the level of conference work or a detailed forum post than at journal level. There are probably 100x more forum posts and reports produced than journal articles. Additionally, there is a lot of competition for journal-level submissions: if you expect an article to get accepted at a journal, then you will probably submit it to one. On the other hand, if you wrote up a report pretty close to journal level in some regards and have nowhere to put it, or no patience with the demands of a journal or the uncertainty, then the unjournal is relatively attractive given the lack of alternatives.

A submission would simply be giving you permission to publish/host the original document and reviews in the unjournal. Post review, authors could have the option to provide a link to their revised output, or to add a comment.

Actually, I don't propose to host or publish anything. Just linking a page with a DOI ... and review it based on this, no?

>PS> Yeah, sounds good.

As you know, newsletters such as the EA Behavioral Science Newsletter (https://preview.mailerlite.com/m9i6r0j7h9) curate some options, so this could be an easy place to start.

I think I should go through this carefully, for sure.

One that comes to mind now is "How valuable is movement growth?"

I think reviewing EA forum posts is very valuable, but this is a separate thing from what I'm trying to do with Unjournal. If we include this in the 'same batch of things' it would probably drive away the academics and very serious researchers, no?

>PS> Yeah, so that's a good point. I think that it gets into the points above. Perhaps you could have different types of submissions (e.g., work in progress, opinion, etc.)? You could treat it like some other journals have, and scale up expectations over time once it starts getting known.

At least for me, the Awareness/Inclination Model (AIM) in this post seemed, for a while, to be a popular theory influencing how EA people thought about movement building.

A review of it would help with understanding how confident we should be (or have been) in the empirical data presented for the arguments made, and might also throw up some ideas to build on it or test it.

Again, I don't think it has been written up in a way that is aiming at rigorous peer review, is it?

>PS> Perhaps not. Maybe that's something that authors need to answer. Regardless, I think that there would be a lot of value in these sorts of reports getting peer reviewed by academics/experts, especially where they are influential in the EA community.

Finally, I think that treating the first few rounds of doing this as an experiment is probably a good idea. It might be the case that only a certain type of paper/output works, or that reviews are more useful than you imagined, even for relatively low-level research. It's probably hard to tell until you do a few rounds of the process.

I think you might be right. I should dive in and be less delicate with this. Partial excuse for slowness so far: I'm waiting for a grant to come through that should give me some more hours to work on this (fingers crossed)

>PS> I think you are doing a good job, and I am not sure I am giving good advice! However, it could be the case that you want to use this process to test some assumptions and processes (e.g., about how many people will submit, what sorts of articles you will get, how long things will take, and how best to show outputs).

Thanks for your work on it!

Comment by PeterSlattery (Peterslattery) on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-27T01:48:53.173Z · EA · GW

Also, have you seen this? https://docs.google.com/document/d/1KqbASWSxcGH1WjXrgfFTaDqmOxn3RWzfVw28mrFP74k/edit#

Comment by PeterSlattery (Peterslattery) on What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) · 2022-04-26T00:00:29.813Z · EA · GW

A quick response that I may build on later. I only scanned your write-up of the plan, so sorry if I missed something there.

I think that inviting submissions of research in preprints or written up in EA forum posts is a good idea.

A submission would simply be giving you permission to publish/host the original document and reviews in the unjournal. Post review, authors could have the option to provide a link to their revised output, or to add a comment.

As you know, newsletters such as the EA Behavioral Science Newsletter (https://preview.mailerlite.com/m9i6r0j7h9) curate some options, so this could be an easy place to start.

I know this is your idea so maybe you win your own bounty if you like it!?

Would RP not have many research outputs that could be included? READI might also have some upcoming work on institutional decision-making or moral circle expansion that could be considered (though I'd need to talk with the team, etc).

Aside from those, reviewing relevant old but influential reports and top forum posts could also be valuable.

One that comes to mind now is "How valuable is movement growth?"

At least for me, the Awareness/Inclination Model (AIM) in this post seemed, for a while, to be a popular theory influencing how EA people thought about movement building.

A review of it would help with understanding how confident we should be (or have been) in the empirical data presented for the arguments made, and might also throw up some ideas to build on it or test it.

Finally, I think that treating the first few rounds of doing this as an experiment is probably a good idea. It might be the case that only a certain type of paper/output works, or that reviews are more useful than you imagined, even for relatively low-level research. It's probably hard to tell until you do a few rounds of the process.

Comment by PeterSlattery (Peterslattery) on Release of Existential Risk Research database · 2022-04-25T01:24:47.302Z · EA · GW

I love making work for people :) Ok, great! I think that the best way for me to do this is to write up a short forum post explaining the process and linking to the various documents. Let me know if you disagree. If not, I'll try to do it in the next week or two and email you in case you miss it.

Comment by PeterSlattery (Peterslattery) on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-25T01:09:57.648Z · EA · GW

Hey Peter, thanks for writing this up. 

I agree with (and really appreciate) Max's comment, so maybe there isn't a need for a grand strategy. However, I suspect that there are probably still many good opportunities to do research to understand and change attitudes and behaviours related to AI safety, if that work is carefully co-designed with experts.

With that in mind, I just wanted to comment to ask that READI be kept in the loop about anything that comes out of this. 

We might be interested in helping in some way. For instance, that could be a literature/practice review of what is known about influencing desired behaviours, surveys to understand related barriers and enablers, experimental work to test the impact of potential/ongoing interventions, and/or brainstorming and disseminating approaches for 'systemic change' that might be effective.

Ideally, anything we did together would be done in collaboration with, or supervised by, individuals with more domain-specific expertise (e.g., Max and other people working in the field) who could make sure it is well planned and useful in expectation, and who could leverage and disseminate the resulting insights. We have a process that has worked well with other projects and that could potentially make sense here also.

Comment by PeterSlattery (Peterslattery) on Release of Existential Risk Research database · 2022-04-21T08:17:57.113Z · EA · GW

Thanks for this, Rumtin! Let me know if you want to start an associated newsletter to popularise this database and update people about new additions. If so, I can share the process and template for the EA Behavioral Science Newsletter to build off.

Comment by PeterSlattery (Peterslattery) on To PA or not to PA? · 2022-04-21T00:55:47.430Z · EA · GW

Thanks for this! I agree with other comments that PAing is very important, and I think that it is one of several high impact, low status vocations (community building is another) that EA seems to overlook.

Comment by PeterSlattery (Peterslattery) on Help with the Forum; wiki editing, giving feedback, moderation, and more · 2022-04-21T00:33:21.591Z · EA · GW

I am not going to fill out the form now because I am supposed to be gearing down for a 'pitstop'. However, I still wanted to quickly mention that I expect to be interested in i) working on the wiki and ii) helping to promote the forum to new EA and non-EA audiences at some point in the next few years.

Related to that, I think that if growing the wiki/forum audience is an aim, then there may be value in curating a group of people with decent-sized social media followings. These people could be fed content to share on their networks with the aim of attracting new forum readers and EA community members.

I'll follow up in the future if I decide to commit to something!

Comment by PeterSlattery (Peterslattery) on Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? · 2022-04-13T02:22:47.788Z · EA · GW

Thanks, I didn't realise these existed!

Comment by PeterSlattery (Peterslattery) on Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? · 2022-04-13T02:21:36.075Z · EA · GW

Thanks - let them know that I am happy to discuss if needed. The main logic is that something like this nudges people towards more topic-focused than recency-focused use of the forum and better promotes and leverages our great categorisation system.

Comment by PeterSlattery (Peterslattery) on What interventions influence animal-product consumption? Plain-language summary of a meta-review · 2022-04-13T02:19:17.629Z · EA · GW

Please see this post for a slightly more in-depth summary of the interventions we identified.

Comment by PeterSlattery (Peterslattery) on Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? · 2022-04-11T23:32:06.200Z · EA · GW

No, on the homepage. Maybe as something under pinned posts or front-page posts? Imagine something like this as a section on the page, but with categories of tags (e.g., 'job listing (open)') instead of posts:

Important tags on our wiki

Comment by PeterSlattery (Peterslattery) on Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? · 2022-04-11T22:42:42.814Z · EA · GW

Agree. This is pretty aligned with my desire for community funding mechanisms.

Comment by PeterSlattery (Peterslattery) on Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? · 2022-04-11T22:41:05.565Z · EA · GW

Thanks Ryan/Pablo, I think that 'job listing (open)' seems fine. I agree with the other points.

As an aside, what do you think of the forum having a front-page section with links/shortcuts to some of the more important or popular tags?

I am imagining a 'Popular/key tags' section on the homepage with a list of tags and a search bar. It might help nudge people to use the wiki more. It might also increase access to posts that are important but not popular (e.g., a new research post or open job post that gets little engagement and disappears under the more interesting and common 'hot ideas' posts).

Comment by PeterSlattery (Peterslattery) on Against the "smarts fetish" · 2022-04-11T06:20:20.335Z · EA · GW

More precisely, I'd like to see:

  1. How much you think "smarts" explains absolute variance in impact among EAs.
  2. How much you think "smarts" explains predictable variance in impact among EAs (if smarts explains 10%, but 90% is noise, then smarts is the best and in fact only metric we care about)
  3. How much you think the community currently believes "smarts" explain absolute variance in impact among EAs.
  4. How much you think the community currently believes "smarts" explains predictable variance in impact among EAs.

A very quick response by someone not very numerical and lacking much recent information on the relevant literature related to IQ: 

1/2 - A lot (say 50%) if you assume we measure impact via something like research publications, and assume the presence of mediators such as individual and independent tasks (i.e., no collaboration), good (mental) health, static agents (e.g., no feedback loops from agents engaging in regular reflection/self-improvement/recalibration and changing career paths), motivation, etc. Maybe 10% beyond an IQ of 120 if you assume a variety of impacts (e.g., introducing highly competent people/organisations to EA, doing operations work to amplify the impact of intelligent people, and taking personal risks to set up needed projects that have high expected value) while not assuming that any of the above mediators (e.g., mental health) are present.

3/4 - 50%, but without realising the assumptions that are plugged in and mentioned above. Most of us know smart people who are not able to work with others, are not in good mental health, are not strongly EA aligned, are not very motivated to work, or are not very interested in improving themselves or changing their minds on things.

As this suggests, I think that EAs tend to assume that intelligence is more sufficient for impact than they should. Part of this is my expectation that they tend to i) think of simple, single impact/assessment scenarios and ii) assume the presence of other needed ingredients.

Some tangential thoughts:

Much if not most impact probably comes via collaboration with other smart people. However, some of the smartest people I know could not easily collaborate in a startup-type setting and were therefore, from an entrepreneurial perspective, less valuable than less intelligent but more socially skilled/patient/humble alternatives. In such cases, hiring based on intelligence could produce bad outcomes.

As I see it, many of the highest impacts in EA come from bringing good people into the community rather than from doing work that is seen as high value. This does not seem to load much on intelligence and is instead more about other competencies, such as social skills, access to networks, and networking interest and ability. However, my experience of hiring decisions here suggests that signals of intelligence are overweighted relative to social skills.

Comment by PeterSlattery (Peterslattery) on Against the "smarts fetish" · 2022-04-11T04:34:34.532Z · EA · GW

Thanks for this! Some very quick and somewhat poorly qualified responses:

  • I agree with this. I have felt that IQ/intelligence is overrated by EA for a long time.
    • As an analogy, I think that IQ is like the top speed of a single drone. It's easy to measure the speed of the drone and think that it matters most, but other factors are often more important (e.g., networking capacity, range, and efficiency). Once you have a drone with a top speed of X km/h, you probably don't care much about making it faster, unless you are doing some particularly speed-demanding task. If you have a tough work environment, long-term durability is more important. If you have to work with multiple drones, then their networking and social intelligence are probably more important. If you are selling the drones, then appearance and brand appeal are probably more important. In many of these cases, however, it may be much easier to just measure the top speed of the drones and use that to extrapolate their performance.
  • Similarly, I think that EA has a big evaluability bias related to competence assessment - that it probably focuses on IQ/signals of intelligence because these are easier to access and understand than other factors that matter.
    • This probably sometimes leads to suboptimal outcomes. E.g., if you go to Oxford and excel at writing about EA on the forum, you might be as, or more, likely to be hired as a movement builder for an area than if you got a degree from a less prestigious university where you led 3+ highly successful social groups and an EA group.
  • As an aside, I worry that EA doesn't actually optimise well for finding smarts, because it seems to favour selecting people based on signals of intelligence that aren't as good as getting them to do an IQ or performance test.
    • Many very smart people I know didn't really try hard at school, care about prestige, or focus on status building growing up. Many are very humble and bad at the marketing that gets you status. This situation often condemns them to be disadvantaged by (EA/most) selection criteria. It may also lead to an over-representation of talented self-marketers in EA settings (which is probably the case in all professional settings) and perhaps also a tendency towards more overconfident or arrogant people being hired (I have only very limited anecdotal evidence for this ever happening and could be totally wrong).
    • Admittedly, there are good reasons to select 'high intelligence signalling' people over those who lack those signals but are actually as or more intelligent, as such signals are persuasive to most audiences. It's also not normal to post about your IQ scores but fine to imply them via marks, awards, or degrees. So maybe this approach makes sense as part of a larger theory of change focused on EA professionals looking smart/credible.
    • I will admit that I often make the mistake of instinctively thinking that person X is probably not super competent (or as competent as Y) because of where they went to uni or something like that. It's very hard to avoid. It is also generally a good rule of thumb to think that people who attended X institute are smart. So maybe it is unavoidable, or something that works on aggregate and saves time relative to more demanding approaches.
  • Some quick suggestions re: "what additional traits distinct from IQ do you think are important and worth prioritizing more?"
    • Calibration/Forecasting/betting performance
    • Sustained performance under pressure
    • Sustained commitment to EA
    • Charisma (key for many roles)
    • Appearance (sadly, this matters much more than it should in social settings)
    • Network size (e.g., for marketing)
    • Emotional intelligence/social intelligence (e.g., performance on face recognition or 'reading the eyes' tests)
    • Extraversion/interpersonal warmth/social novelty needs (key to movement building success)
    • Risk tolerance (e.g., for starting new projects)
    • Status indifference (e.g., for low status, high importance 'grunt' work like being an exec assistant or movement builder)
  • As a general comment related to the above - I think that EA is going to need a lot of people of pretty average intelligence for accelerating outputs and spreading messages across networks. I think it should stop holding out for so many elite-level members.
    • As an example, many star performers in research have a huge pool of support from less competent or intelligent researchers, who will produce a first draft of a paper for them so that they can spread their genius more widely across many such papers. If someone like Will thinks that having an exec assistant could double his output/impact (or something similar), then we might be missing out on a lot of impact multipliers by failing to hire such people.
    • Related to that, I sometimes feel that we are trying to slowly recruit teams of 'geniuses' (who may in fact be particularly poorly suited to work with each other), when we more urgently need large teams of people to help 'geniuses'.
Comment by PeterSlattery (Peterslattery) on Making Community Building a more attractive career path · 2022-04-09T02:08:49.656Z · EA · GW

As a really quick thought, I was just chatting with an aspiring community builder, and we thought that (executive) director of community (strategy), or something similar-sounding, could be worth considering. It might be worth looking at the tech community or similar to see their norms.