Posts

The gap between prestige and impact 2022-08-04T21:38:27.688Z
Announcing the Clearer Thinking Regrants program 2022-06-15T09:54:14.566Z

Comments

Comment by Adam Binks on Using the “executive summary” style: writing that respects your reader’s time · 2022-07-22T16:20:44.094Z · EA · GW

it'd be really valuable for more EA-aligned people to goddamn write summaries at all

To get more people to write summaries for long forum posts, we could try adding a nudge to the Forum's new post submission form? E.g. if the post text is over x words, a small message shows up advising you to add a summary.

Or maybe you're thinking more of other formats, like Google docs?

Comment by Adam Binks on EAGxBoston 2022: Retrospective · 2022-07-15T09:14:26.713Z · EA · GW

Great to see this writeup, thank you!

In the run-up to EAG SF I've been thinking a bit about travel funding allocation. I thought I'd take this opportunity to share two problems and tentative solutions, as I imagine they hold across different conferences (including EAGx Boston).

Thing 1: Uncertainty around how much to apply for

In conversations with other people attending I've found that people are often quite uncertain and nervous when working out how much to apply for. 

One way to improve this could be to encourage applicants to follow a simple procedure for working out how much funding to apply for. E.g.: 

1. Go to Google Flights and find a typical cost for a return flight on the conference dates.
2. Open Google Maps and find a rough lower bound for hotel prices near the conference venue, and multiply by the number of nights you plan to stay.
3. Add £x of slack to cover possible price rises and incidental costs.
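As a rough sketch of how that heuristic adds up (every figure below is a made-up placeholder, not a recommended amount):

```python
# Toy version of the procedure above; replace these placeholder numbers with
# the figures you find on Google Flights / Google Maps.
return_flight = 450        # typical return fare on the conference dates, in £
hotel_per_night = 120      # rough lower bound near the venue
nights = 3
slack = 150                # buffer for price rises and incidental costs

funding_request = return_flight + hotel_per_night * nights + slack
print(f"Suggested request: £{funding_request}")  # £960
```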

Thing 2: Slow travel funding approval leading to higher prices

Another experience I've had is that the delay in approving travel funding was quite long (>2 weeks). In this period, the prices of flights rose significantly, and accommodation availability dropped, so the cheap options weren't available any more. Some other attendees I know needed to apply for more funding in response to this, which also took a few days to be approved (during which time prices rose further!).

My guess is that bringing in an extra person to let you respond to funding requests faster might therefore pay for itself. It would also reduce the uncertainty for attendees, which may let them spend less total time on arranging logistics for the conference. In particular, if there is a simple formula/heuristic for travel funding requests, then someone with little prior experience would be able to quickly and easily respond to requests. 


I'd be interested to hear if you've considered these ideas - I'm sure there are a bunch of extra constraints I'm not aware of; conference planning sounds very complex!

Comment by Adam Binks on Announcing the Clearer Thinking Regrants program · 2022-07-15T07:41:43.248Z · EA · GW

Update: deadline extended to July 22nd!

Comment by Adam Binks on Announcing the Clearer Thinking Regrants program · 2022-06-17T11:44:43.380Z · EA · GW

Thanks Ankush! For this first round, we're keeping things intentionally short, but if your project progresses to later rounds there will be plenty of opportunities to share more details.

it is a pdf that I would love to get valued and be shared with the world and anyone who wants to hear about longtermism project

Posting your ideas here on the EA Forum could be a great way to get feedback from other people interested in longtermism!

Comment by Adam Binks on Announcing the Clearer Thinking Regrants program · 2022-06-16T13:26:03.502Z · EA · GW

Thanks Stuart, I'll DM you to work out the details here!

Comment by Adam Binks on AI Twitter accounts to follow? · 2022-06-10T12:32:16.999Z · EA · GW

Maybe something helpful to think about is: what's your goal?

E.g. maybe:

  • You want to stay on top of new papers in AI capabilities
  • You want to feel connected to the AI safety research community
  • You want to build a network of people in AI research / AI safety research, so that in future you could ask people for advice about a career decision
  • You want to feel more motivated for your own self study in machine learning
  • You want to workshop your own ideas around AI, and get rapid feedback from researchers and thinkers

I think for some goals, Twitter is unusually helpful (e.g. workshopping early-stage ideas, building a network). For many other goals, I think there's a higher-fidelity, less addictive route: for example, you can stay on top of new AI safety research papers by reading the Alignment Newsletter.

Comment by Adam Binks on Against “longtermist” as an identity · 2022-05-27T11:05:09.852Z · EA · GW

and the answer is “randomista development, animal welfare, extreme pandemic mitigation and AI alignment”


Some people came up with a set of answers, enough of us agree with this set and they’ve been the same answers for long enough that they’re an important part of EA identities

I think some EAs would consider work on other areas like space governance and improving institutional decision-making highly impactful. And some might say that randomista development and animal welfare are less impactful than work on x-risks, even though the community has focussed on them for a long time.

Comment by Adam Binks on Introducing Asterisk · 2022-05-26T10:35:38.862Z · EA · GW

This is exciting! If you've got this far in your planning, I'd love to hear more about how the journal will be promoted and how you plan for readers to find you. Do you have any examples of "user stories" - stories about the kind of reader you'd hope to attract, how they'd find the journal, and what it might lead them to do subsequently?

Comment by Adam Binks on Bad Omens in Current Community Building · 2022-05-22T23:52:57.127Z · EA · GW

It's also a nice nudge for people to read the books (I remember reading Doing Good Better in a couple of weeks because a friend/organiser had lent it to me and I didn't want to keep him waiting).

Comment by Adam Binks on Fermi estimation of the impact you might have working on AI safety · 2022-05-13T14:51:30.768Z · EA · GW

Great to see tools like this that make assumptions clear - I think it's useful not only as a calculator but as a concrete operationalisation of your model of AI risk, which is a good starting point for discussion. Thanks for creating it!

Comment by Adam Binks on My GWWC donations: Switching from long- to near-termist opportunities? · 2022-04-24T09:48:41.322Z · EA · GW

Hi Tom! I think this idea of giving based on the signalling value is an interesting one.

One idea - I wonder if you could capture a lot of the signalling value while only moving a small part of your donation budget to non-xrisk causes?

How that would work: when you're talking to people about your GWWC donations, if you think they'd be more receptive to global health/animal ideas you can tell them about your giving to those charities. And then (if you think they'd be receptive) you can go on to say that ultimately you think the most pressing problems are xrisks, and therefore you allocate most of your donations to building humanity's capacity to prevent them.

In other words, is the signalling value scale-insensitive (compared to the real-world impact of your donations)?

Comment by Adam Binks on Longtermist EA needs more Phase 2 work · 2022-04-20T21:27:14.294Z · EA · GW

Quick meta note to say I really enjoyed the length of this post - exploring one idea in enough detail to spark thoughts, while staying highly readable. Thank you!

Comment by Adam Binks on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T22:14:47.757Z · EA · GW

You might be aware of this, but for others reading: there's a calculator to help you work out the value of your time.

I think it's worth doing once (and repeating when your circumstances change, e.g. a new job), then just using that as a general heuristic to make time-money tradeoffs, rather than deliberating every time.

Comment by Adam Binks on 13 ideas for new Existential Risk Movies & TV Shows – what are your ideas? · 2022-04-13T17:43:43.629Z · EA · GW

If I was an EA grantmaker, I'd want to start small by maybe hiring an educational-youtube-video personality (like John Green's "Crash Course") to make an Effective Altruism series. 

I think this is in the works! Kurzgesagt got a $2.8m grant from Open Phil.

See also A Happier World and Rational Animations.

Comment by Adam Binks on Unsurprising things about the EA movement that surprised me · 2022-03-30T23:48:29.808Z · EA · GW

Great post, thank you! This is useful as a guide to what to try to add to intro fellowships, in particular:

There are a lot of real professional people in EA, and those people are influencing things in the real world – EA is by no means just a philosophy discussion club, even if your local EA club is one (and it does not have to be one forever!)

I think this is a really important realisation to have as someone doing an intro fellowship/getting into EA. My guess is that realising this makes it a lot easier to think seriously about making career choices based on ideas/methods from EA. 

So, how can we help new people realise this sooner?

A quick brainstorm:

  • Include some readings/podcasts in intro fellowships where people talk in the first person about their EA-aligned work
  • Encourage new members to attend EAG(x)
  • Have talks/Q&As with people currently doing EA-aligned work
  • Include a few bios of individuals and their stories of getting into this kind of work
  • Chat to new members about what previous members of your group have gone on to do (if your group is mature enough)

I think it'd be ideal if people understood, from when they first learn about EA, that it's not just a philosophy discussion group but something they could shape their career around.

Comment by Adam Binks on If you could send an email to every student at your university (to maximize impact), what would you include in it? · 2022-03-30T23:24:26.974Z · EA · GW

Thanks!

Comment by Adam Binks on If you could send an email to every student at your university (to maximize impact), what would you include in it? · 2022-03-30T14:10:51.195Z · EA · GW

Hi! It's been a while since you posted this - I was curious whether you ended up seeing an effect on fellowship signups as a result?

Comment by Adam Binks on Effectiveness is a Conjunction of Multipliers · 2022-03-29T13:44:15.369Z · EA · GW

I agree - and if the multiplier numbers are lower, then some claims don't hold, e.g.:

To get more than 50% of her maximum possible impact, Ana must hit every single multiplier.

This doesn't hold if the set of multipliers includes 1.5x, for example.
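To illustrate with made-up numbers (not figures from the post): suppose the multipliers are 1.5x, 3x and 10x.

```python
import math

# Hypothetical multipliers, just to illustrate the point above.
multipliers = [1.5, 3, 10]

max_impact = math.prod(multipliers)         # 45x if Ana hits every multiplier
miss_smallest = math.prod(multipliers[1:])  # 30x if she misses only the 1.5x one

print(miss_smallest / max_impact)           # ~0.67, still above 50% of the maximum
```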

Instead we might want to talk about the importance of hitting as many big multipliers as possible, and being willing to spend more effort on these than on the smaller (e.g. 1.1x) ones.

(But want to add that I think the post in general is great! Thanks for writing this up!)

Comment by Adam Binks on Valuing research works by eliciting comparisons from EA researchers · 2022-03-18T12:24:22.696Z · EA · GW

I just came across a paper which mentions a loosely related method - pairwise rating for model elicitation. See p13 of this PDF (or Ctrl-F for "pairwise"); it might be of interest.

Comment by Adam Binks on Samotsvety Nuclear Risk Forecasts — March 2022 · 2022-03-18T12:17:30.852Z · EA · GW

Thanks, I'm using Chrome on Windows 10.

Comment by Adam Binks on Samotsvety Nuclear Risk Forecasts — March 2022 · 2022-03-11T15:25:13.057Z · EA · GW

Thanks for the writeup! I tried to copy and paste the simple model into Squiggle but it gave an error - I couldn't immediately spot the cause.

Comment by Adam Binks on The Future Fund’s Project Ideas Competition · 2022-03-07T22:15:10.415Z · EA · GW

Prestigious forecasting tournaments for students

Epistemic institutions, empowering exceptional people

To scale up forecasting efforts, we will need a large body of excellent forecasters to recruit from. Forecasting is a skill that improves over time, and it takes time to build a track record that distinguishes excellent forecasters from the rest - particularly on long-term questions. Additionally, forecasting builds generally useful research and rationality skills, and supports model-building and detailed understanding of question topics. Therefore, getting students to forecast high-impact questions might be particularly useful for both the students' own development and the development of the forecasting community.

While existing forecasting platforms allow students to participate, the prestige and compensation offered by success are limited, especially outside of the narrow forecasting community.

We would be excited to fund highly prestigious forecasting tournaments for students, similar to the Maths Olympiad and iGEM in that they would aim to attract top talent, while being focussed on highly impactful questions. A second option is working with universities to give course credit for participation and success in the tournaments. In either case, excellent student forecasters would be rewarded with a prestigious marker on their CV and fast-tracked applications to superforecasting organisations.

Comment by Adam Binks on The Future Fund’s Project Ideas Competition · 2022-03-07T22:13:36.918Z · EA · GW

Align university careers advising incentives with impact

Effective altruism

Students at top universities often have lots of exposure to a limited set of career paths, such as consulting and finance. Many graduates who would be well-suited to high-impact work don't consider it because they are just unaware of it. Universities have little incentive to improve this state of affairs, as the eventual social impact of graduates is hard to evaluate and has little effect on their alma mater (with some notable exceptions). We would therefore be excited to fund efforts to more directly align university incentives with supporting their students to enter high-impact careers. We would be interested in work identifying simple heuristic metrics of career impact, and lobbying efforts to have university league tables incorporate these measures into their rankings, rewarding universities that support students in entering impactful work.

Comment by Adam Binks on The Future Fund’s Project Ideas Competition · 2022-03-07T22:13:15.668Z · EA · GW

EA Founders Camp

Effective altruism, empowering exceptional people

The EA community is scaling up, and funding ambitious new projects. To support continued growth of new organisations and projects, we would be excited to fund an organisation to run EA Founders Camps. These events would provide an exciting, sparky environment for (1) Potential founders to meet co-founders, (2) Founders to hear about and generate great ideas for impactful projects and organisations, (3) Founders to get key training tailored to their project area, (4) Founders to build a support network of other new and existing founders, (5) Founders to connect with funders and advisers.

Comment by Adam Binks on The Future Fund’s Project Ideas Competition · 2022-03-07T22:12:52.462Z · EA · GW

Find promising candidates for “Cause X” with an iterative forecast-guided thinktank

Epistemic institutions

How likely is it that the EA community is neglecting a cause area that is more pressing than current candidates? We are fairly confident in the importance of the community's current cause areas, but we think it's still important to keep searching for more candidates.

We’d be excited to fund organisations attacking this problem in a structured, rigorous way, to reduce the chance that the EA community is missing huge opportunities. 

We propose an organisation with two streams: generalist research and superforecasting. The generalist researchers create shallow, exploratory evaluations of many different cause areas. Forecasters then use these evaluations to forecast the likelihood of each cause area being a top cause area recommended (e.g. by 80,000 Hours) in 5 years' time. The generalist researchers then perform progressively more in-depth evaluations of the cause areas most favoured by forecasters. Forecasters update their forecasts based on these evaluations. If the forecasted promisingness exceeds a threshold, the organisation recommends that an EA funder fund in-depth research into the cause area.

Comment by Adam Binks on The Future Fund’s Project Ideas Competition · 2022-03-07T22:12:06.045Z · EA · GW

Help high impact academics spend more time doing research

Empowering exceptional people

Top academic researchers are key drivers of progress in priority areas like biorisk, global priorities research and AI research. Yet even top academics are often unable to spend as much time as they want to on their research. 

We’d be excited to fund an organisation providing centralised services to maximise research time for top academics, while minimising the overheads of setting up these systems for academics. It might focus on:

(1) Funding and negotiating teaching buy-outs,

(2) Providing an efficient shared team of PAs to handle admin, streamline academic service duties, submit papers, scout and screen PhD students, and accelerate literature surveys.

As AI research assistants like Elicit improve, this organisation could scalably offload work to these services.

Comment by Adam Binks on Examples of pure altruism towards future generations? · 2022-01-27T16:02:45.968Z · EA · GW

The GitHub Archive Program probably isn't quite what you're looking for, but I think it's interesting. It's not historical, and it does have some short-term effects (especially publicity).

Comment by Adam Binks on Use resilience, instead of imprecision, to communicate uncertainty · 2021-12-10T13:30:14.625Z · EA · GW

This led me to think about the fact that a description of resilience is itself an estimate/prediction. I wonder how related the skills of giving first-order estimates/predictions and second-order resilience estimates are. In other words, if someone is well-calibrated, can we expect their resilience estimates to also be well-calibrated? Or is that an extra skill that would take some learning?

Comment by Adam Binks on Use resilience, instead of imprecision, to communicate uncertainty · 2021-12-10T13:13:19.058Z · EA · GW

To add to your list - Subjective Logic represents opinions with three values: degree of belief, degree of disbelief, and degree of uncertainty. One interpretation of this is as a form of second-order uncertainty. It's used for modelling trust. A nice summary here with interactive tools for visualising opinions and a trust network.
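As a minimal sketch of what such an opinion looks like (my own illustration of the standard binomial-opinion formulation, which also carries a base rate; not code from the linked summary):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float            # degree of belief
    disbelief: float         # degree of disbelief
    uncertainty: float       # degree of (second-order) uncertainty
    base_rate: float = 0.5   # prior used to "fill in" the uncertain mass

    def __post_init__(self):
        # The first three components of a subjective logic opinion sum to 1.
        assert abs(self.belief + self.disbelief + self.uncertainty - 1) < 1e-9

    def expected_probability(self) -> float:
        # Projects the opinion down to a single first-order probability.
        return self.belief + self.base_rate * self.uncertainty

# A fairly uncertain, mildly positive opinion:
print(Opinion(0.4, 0.1, 0.5).expected_probability())  # 0.65
```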

Comment by Adam Binks on What is the wisdom of the EA crowd? · 2021-10-28T12:07:30.429Z · EA · GW

Not sure if this is what you have in mind, but Metaculus records the track record of its users' predictive accuracy (see Brier score for the Community prediction).

A lot of its users are EAs, I think, though they're probably not a representative sample.
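For reference, the Brier score mentioned above is just the mean squared error between forecast probabilities and binary outcomes (lower is better); a quick sketch with made-up forecasts:

```python
# Brier score for binary questions: mean of (forecast probability - outcome)^2,
# where outcome is 1 if the event happened and 0 otherwise. Lower is better.
# These forecasts and outcomes are made up for illustration.
forecasts = [0.9, 0.3, 0.7]
outcomes = [1, 0, 0]

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(round(brier, 3))  # 0.197
```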

Comment by Adam Binks on EA Forum Creative Writing Contest: $22,000 in prizes for good stories · 2021-10-26T12:21:31.552Z · EA · GW

Interesting idea! It reminds me of the excellent Pixar film Soul.

Comment by Adam Binks on How would you gauge random undergrads' "EA potential"? · 2021-09-03T07:07:43.901Z · EA · GW

One consideration with your current question - might it be more impactful to move people from a 5 to a 9 on that question, by informing them about the principles of EA, rather than taking someone who's already a 9 to start with? Of course there's still value in pointing the 9 towards good resources, but you might expect them to find those themselves if they already care so much.

Comment by Adam Binks on Type Checking GiveWell's GiveDirectly Cost Effective Analysis · 2021-06-23T11:20:27.561Z · EA · GW

Fascinating post, thank you! This was a great introduction to dimensional analysis for me.

Comment by Adam Binks on CEA update: Q1 2021 · 2021-04-25T23:22:13.159Z · EA · GW

Thanks for this, really interesting! I'm surprised that the total attendance of fellowships isn't even higher - do you have a feel for whether they're typically constrained by mentors or by signups? In my experience helping run fellowships, many people are surprisingly interested but haven't heard about EA. Have you looked at ways to reach more of these people?

Comment by Adam Binks on A case against strong longtermism · 2020-12-19T00:51:08.014Z · EA · GW

I think this is a good point - I'm really enjoying all your comments in this thread :)

It strikes me that one way that the next century effects of our actions might be instrumentally useful is that they might give some (weak) evidence as to what the longer term effects might be.

All else equal, if some action causes a stable, steady positive effect each year for the next century, then I think that action is more likely to have a positive long-term effect than some other action which has a negative effect in the next century. However, this might easily be outweighed by specific reasons to think that the action's longer-run effects will differ.

Comment by Adam Binks on Ask Rethink Priorities Anything (AMA) · 2020-12-14T12:32:38.581Z · EA · GW

These are fascinating, I would love to see answers to all of these questions!

Comment by Adam Binks on £4bn for the global poor: the UK's 0.7% · 2020-12-02T11:48:06.005Z · EA · GW

Alongside social media ads, could one possible strategy be asking highly motivated constituency members in targeted areas (e.g. EAs, people who email their MP) to post similar content to the ads in local Facebook groups and on their own social media networks? It would be zero cost, and might extend reach beyond the paid adverts.

One risk is that if they're not very well informed they might misrepresent the message. In which case the campaign could provide materials for them to post (maybe identical to the ad content).

Comment by Adam Binks on Please Take the 2020 EA Survey · 2020-11-19T18:27:26.448Z · EA · GW

I think my ethics are less considered than the average EA community member's, so I think I'd rather defer the decision to them. It doesn't seem especially motivating for me personally.