Posts

What work has been done on the post-AGI distribution of wealth? 2022-07-06T18:59:25.904Z
(Even) More Early-Career EAs Should Try AI Safety Technical Research 2022-06-30T21:14:32.820Z
University Groups Should Do More Retreats 2022-04-06T19:20:55.894Z

Comments

Comment by levin on The Tree of Life: Stanford AI Alignment Theory of Change · 2022-07-06T20:42:01.651Z · EA · GW

Agreed with #1: for people doing both AI safety research and AI safety community-building, each plausibly makes you more effective at the other. The time spent figuring out how to communicate these concepts can help you build a full map of the field, and being knowledgeable yourself certainly makes you more credible and a more exciting field-builder. (The flip side of "Community Builders Spend Too Much Time Community-Building" is "Community Builders Who Do Other Things Are Especially Valuable," at least in per-hour terms. This might not be the case for higher-level EA meta people.) I think Alexander Davies of HAIST has a great sense of this and is quite sensitive to how seriously community builders will be taken given various levels of AI technical familiarity.

I also think #3 is important. Once you have a core group of AI safety-interested students, it's important to figure out who is better suited to spend more time organizing events and doing outreach and who should just be heads-down skill-building. (It's important to get a critical mass such that this is even possible; EA MIT finally got enough organizers this spring that one student who really didn't want to be community-building could finally focus on his own upskilling.)

In general, I think modeling this in "quality-adjusted AI safety research years" (or QuASaRs, name patent-pending) could be useful: if you have some reason to think you're exceptionally promising yourself, you're probably unlikely to produce more QuASaRs in expectation by field-building, especially because you should be using your last year of impact as the counterfactual. But if you don't (yet) — a "mere genius" in the language of my post — it seems pretty likely that you could produce lots of QuASaRs, especially at a top university like Stanford.
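
(To make this concrete, here is a minimal back-of-the-envelope sketch of the comparison I have in mind; every number in it is an illustrative placeholder, not an estimate.)

```python
# Toy sketch of the QuASaR comparison; every number is an illustrative
# assumption, not an estimate.

# Counterfactual cost of a year of field-building: your career loses its
# *last* year of research, which is plausibly worth less than an average year.
own_quality = 3.0            # your research quality multiplier (arbitrary units)
last_year_discount = 0.5     # how much less your marginal last year is worth
cost_quasars = own_quality * last_year_discount

# Value of the field-building year: counterfactual recruits times their
# average quality times the research-years each contributes.
recruits = 2
recruit_quality = 1.5
recruit_years = 5
fieldbuilding_quasars = recruits * recruit_quality * recruit_years

print(f"Counterfactual cost (your last research year): {cost_quasars:.1f} QuASaRs")
print(f"Expected value of the field-building year:     {fieldbuilding_quasars:.1f} QuASaRs")
```

With these placeholder numbers field-building wins easily; if you think you're exceptionally promising and expect fewer or weaker counterfactual recruits, the comparison can flip.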

Comment by levin on How I Recommend University Groups Approach the Funding Situation · 2022-07-05T18:08:06.215Z · EA · GW

I think you're probably right that there are elitism risks depending on how it's phrased. It seems like there should be ways to talk about the problem without sounding alienating in this way. Since I'm claiming that the focus really should be on detecting insincerity, I think a good way to synthesize this would be to talk about keeping an eye out for insincerity rather than "rent-seeking" per se.

Comment by levin on What is the top concept that all EAs should understand? · 2022-07-05T15:16:46.273Z · EA · GW

I think this is right, and its prevalence is maybe the single most important difference between EA and the rest of the world.

Comment by levin on How I Recommend University Groups Approach the Funding Situation · 2022-07-05T14:20:19.040Z · EA · GW

I agree that it's very important to continue using EA money to enable people who otherwise wouldn't be able to participate in EA to do so, and it certainly sounds like in your case you're doing this to great effect on socioeconomic representation. And I agree that the amount of funding a group member requests is a very bad proxy for whether they're rent-seeking. But I don't agree with several of the next steps here, and as a result, I think the implication — that increased attention to rent-seeking in EA is dangerous for socioeconomic inclusion — is wrong.

I think my disagreement boils down to the claim:

I'm not sure how you could reliably tell the difference between a 'rent-seeker' and someone who just doesn't know EA in-depth yet, or who is nervous.

In my experience, it is actually pretty easy for group organizers to differentiate. People who are excited about EA and excited about the free flight or their first major travel experience, etc., do not set off "rent-seeking alarms" in my gut. People who ask a lot of questions about getting reimbursed for stuff do not set these alarms off, either. You're right that these things correlate with socioeconomic status (or youth, or random other factors) more than with rent-seeking.

It's people who do these things and don't seem that excited about EA who set off these alarms. And assessing how interested someone is in EA is, like, one of the absolutely essential functions of group organizers.

I think EA group organizers tend to be hyper-cooperators who strongly default towards trusting people, and generally this is fine. It's pretty harmless to allow a suspected rent-seeker to eat the free food at a discussion, and can be pretty costly to stop them (in social capital, time, drama, and possibly getting it wrong). But it's actually pretty harmful, I think, for them to come to EAGs, where the opportunity costs of people's time and attention — and the default trust people give to unfamiliar faces — are much higher. For me, it takes consciously asking the question, "Wait, do I trust this person?" for my decision-making brain to acknowledge the information that my social-observational brain has been gathering that the person doesn't actually seem very interested. But I think this gut-level thing is generally pretty reliable. I'll put it this way: I would be pretty surprised if EA group organizers incorrectly excluded basically anyone from EAGs in the past year, and I think it's very likely that the bar should be moved in the direction of scrutiny — of just checking in with our gut about whether the person seems sincere.

Comment by levin on What is the top concept that all EAs should understand? · 2022-07-05T13:58:27.747Z · EA · GW

This is more for people who are already EAs than for e.g. intro syllabus content or something, but: the HUGE differences in effectiveness between different actions. Not just abstractly knowing this but deeply internalizing it, and how it applies not just to different charities/orgs but to your own potential actions.

Comment by levin on How I Recommend University Groups Approach the Funding Situation · 2022-07-04T23:18:01.600Z · EA · GW

On "attracting rent-seekers" and "be careful how you advertise EAGs": for some reason the rent-seekers seem particularly attracted to the conferences, rather than e.g. free food, etc. This is somewhat interesting because if you were totally uninterested in EA, it would obviously be costlier to go to a conference than to get free food at weekly meetings or something, but I guess it's also the career connections (albeit in sub-spaces that fake-EAs are unlikely to actually want to go into?) and feeling of status that you're getting flown places. I also think it's (maybe obviously) much more damaging for rent-seekers to attend conferences and take up the time of professional EAs who could be meeting non-rent-seekers.

For these reasons, I think EAG's bar for accepting students has gotten a bit too low; specifically, I think they should ask university group leaders for guidance on which group members are high-priority and which shouldn't be accepted. (I know they're capacity-constrained, but this might be worth an additional staff member or something.)

On "Don't advertise 'EA has money'": I endorse your framing throughout this post as "EA doesn't want a lack of money to stop [impactful thing from happening]" rather than "we have all this money, take some and do something with it." I think this both directly attracts rent-seekers and signals that we're in it for the money (both of which probably repel altruists). I totally get why people have the instinct to talk about it, especially mid-funnel people who are just realizing how much there is but don't quite get the nuances and problems described in this post, so it's worth having this conversation with anyone who does community-building in your group.

On humor and talking about EA money in general: In a broad range of IRL social settings, I personally find it very hard not to joke about things. I just naturally gravitate towards observing ironies, referencing memes, and phrasing points in a way that lands on a surprising/humorous beat; when I try to turn this off, e.g. in serious class discussions about heavy topics, I usually fail and have to clarify that I'm not trying to make light of the thing and just go for a tone of "dark irony" instead.

Money in EA is extremely ironic, and it produces lots of opportunities to note surprising results and connections between concepts. When longtime EAs hang out, talking about various funny ways to spend money can be a fun way to push various theories (or maybe brainstorm good galaxy-brain ideas!). But I think it is a very bad look to joke about it in semi-public contexts, and I've worked hard to just not say the things that come to mind because I know it will sound like I'm trivializing suffering, or finding glee in the ridiculous inequality of this situation, or "here for the wrong reasons." Weak anecdotal/subjective evidence: when a top/mid-funnel person has joked about money, it's usually when I'm already smiling/laughing, and when I react with a polite nod but wind down the smile, this seems to actually convey a seriousness/sensitivity that I think is the right vibe. So I've also tried to institute an informal rule of "no jokes about money" and (non-confidently) recommend other group organizers do the same.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-04T14:18:02.265Z · EA · GW

Agreed; it strikes me that I've probably been over-anchoring on this model.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-04T09:48:58.280Z · EA · GW

Hmm. I don't have strong views on unipolar vs. multipolar outcomes, but I think the MIRI-type figure thinks Problem 2 is also easy to solve, due to the last couple of clauses of your comment.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-03T22:32:42.666Z · EA · GW

Edited the post substantially (and, hopefully, transparently, via strikethrough and "edit:" and such) to reflect the parts of this and the previous comment that I agree with.

Regarding this:

I don't see many risk scenarios where a technical solution to the AI alignment problem is sufficient to solve AGI-related risk. For accident-related risk models (in the sense of this framework) solving safety problems is necessary. But even when technical solutions are available, you still need all relevant actors to adopt those solutions, and we know from the history of nuclear safety that the gap between availability and adoption can be big — in that case decades. In other words, even if technical AI alignment researchers somehow "solve" the alignment problem, government action may still be necessary to ensure adoption (whether government-affiliated labs or private sector actors are the developers).

I’ve heard this elsewhere, including at an event for early-career longtermists interested in policy where a very policy-skeptical, MIRI-type figure was giving a Q&A. A student asked: if we solved the alignment problem, wouldn’t we need to enforce its adoption? The MIRI-type figure said something along the lines of:

“Solving the alignment problem” probably means figuring out how to build an aligned AGI. The top labs all want to build an aligned AGI; they just think the odds of the AGIs they’re working on being aligned are much higher than I think they are. But if we have a solution, we can just go to the labs and say, here, this is how you build it in a way that we don’t all die, and I can prove that this makes us not all die. And if you can’t say that, you don’t actually have a solution. And they’re mostly reasonable people who want to build AGI and make a ton of money and not die, so they will take the solution and say, thanks, we’ll do it this way now.

So, was the MIRI-type figure right? Or would we need policy levers to enforce adoption of the solution, even in this model? The post you cite chronicles how long it took for safety advocates to address glaring risks in the nuclear missile system. My initial model says that if the top labs resemble today’s OpenAI and DeepMind, it would be much easier to convince them than the entrenched, securitized bureaucracy described in the post: the incentives are much better aligned, and the cultures are more receptive to suggestions of change. But this does seem like a cruxy question. If the MIRI-type figure is wrong, that would justify a lot of investigation into what those levers would be and how to prepare governments to develop and pull them. If he’s right, that would support more focus on buying time in the first place, as well as on trying to make sure that the top firms are receptive at the time the alignment problem is solved.

(E.g., if the MIRI-type model is right, a US lead over China seems really important: if we expect that the solution will come from alignment researchers in Berkeley, maybe it’s more likely that the leading labs implement it if they are private, "open"-tech-culture companies that speak the same language, live in the same milieu, and broadly have trusting relationships with the proponents. Or maybe not!)
 

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-03T21:58:30.652Z · EA · GW

Agreed, these seem like fascinating and useful research directions.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-03T18:16:56.081Z · EA · GW

Thanks, Locke, this is a series of great points. In particular, the point about even fewer people (~25) doing applied policy work is super important, to the extent that I think I should edit the post to significantly weaken certain claims.

Likewise, the points about the relative usefulness of spending time learning technical stuff are well taken, though I think I put more value on technical understanding than you do; for example, while of course policy professionals can ask people they trust, they have to somehow be able to assess the judgment of these people on the object-level thing. Also, while I think the idea of technical people being in short supply and high demand in policy is generally overrated, that seems like it could be an important consideration. Relatedly, it seems maybe easier to do costly fit-tests (like taking a first full-time job) in technical research and then switch to policy than vice versa.

Edit: for the final point about risk models, I definitely don't have state funding for safety research in mind; what I mean is that since I think it's very unlikely that policy permanently stops AGI from being developed, success ultimately depends on the alignment problem being solved. I think there are many things governments and private decision-makers can do to improve the chances this happens before AGI, which is why I'm still planning on pursuing a governance career!

Comment by levin on Why AGI Timeline Research/Discourse Might Be Overrated · 2022-07-03T15:27:38.754Z · EA · GW

It's hard for me to agree or disagree with timeline research being overrated, since I don't have a great sense of how many total research hours are going into it, but I think Reason #4 is pretty important to this argument and seems wrong. The goodness of these broad strategic goals is pretty insensitive to timelines, but lots of specific actions wind up seeming worth doing or not worth doing based on timelines. I find myself seriously saying something like "Ugh, as usual, it all depends on AI timelines" in conversations about community-building strategy or career decisions like once a week.

For example, in this comment thread about whether and when to do immediately impactful work versus career-capital building, both the shape and the median of the AI x-risk distribution wind up mattering. A more object-level example: "back-loaded" careers like policy look worse relative to "front-loaded" careers like technical research insofar as timelines are earlier.

In community-building, earlier timelines generally support outreach strategies more focused on finding very promising technical safety researchers; moderate timelines support relatively more focus on policy field-building; and long timelines support more MacAskill-style broad longtermism, moral circle expansion, etc.

Of course, all of this is moot if the questions are super intractable, but I do think additional clarity would turn out to be useful for a pretty broad set of decision-makers -- not just top funders or strategy-setters but implementers at the "foot soldier" level of community-building, all the way down to personal career choice.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-03T15:18:17.758Z · EA · GW

Yes! This is basically the whole post condensed into one sentence.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-02T23:49:22.991Z · EA · GW

Thanks for this -- the flaw in using the point estimate of 20-year timelines (and point estimates of the frequency and value of promotions) in this way occurred to me, and when I tried to model it with Guesstimate I got values that made no sense and gave up. Awesome to see this detailed model and to get your numbers!

That said, I think the 5% annual chance is oversimplified in a way that could lead to wrong decisions at the margin for the trade-off I have in mind, which is "do AI-related community-building for a year vs. start a policy career now." If you think the risk is lower for the next decade or so before rising in the 2030s, which I think is the conventional wisdom, then the 5% uniform distribution too heavily discounts work done between now and the 2030s. This makes AI community-building now, which basically produces AI technical research starting in a few years, look like a worse deal than it is, and biases the decision towards starting the policy career.
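
(Here is a minimal sketch of the comparison I mean, with hazard rates I've made up purely for illustration: a flat 5%/year vs. a distribution that stays low until the 2030s and then rises.)

```python
import numpy as np

years = np.arange(2023, 2063)

# Two illustrative AGI-arrival hazard schedules (both assumptions, not estimates):
uniform_hazard = np.full(len(years), 0.05)                 # flat 5% per year
backloaded_hazard = np.where(years < 2032, 0.01, 0.08)     # low this decade, rising in the 2030s

def prob_no_agi_by(hazard):
    """P(AGI has not yet arrived at the start of each year)."""
    return np.concatenate(([1.0], np.cumprod(1 - hazard)[:-1]))

for name, hazard in [("uniform 5%/yr", uniform_hazard), ("back-loaded", backloaded_hazard)]:
    survival = prob_no_agi_by(hazard)
    # Research enabled by community-building now: starts ~3 years out, lasts 10 years.
    usable = survival[3:13].sum()
    print(f"{name}: expected usable research-years (out of 10) = {usable:.1f}")
```

Under these made-up numbers, the back-loaded distribution discounts the research enabled by community-building now much less than the flat 5% does, which is the margin I'm worried about.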

Comment by levin on Run For President · 2022-07-02T07:37:03.910Z · EA · GW

I support some people in the EA community taking big bets on electoral politics, but just to articulate some of the objections:

solving the "how to convince enough people to elect you president" problem is probably easier than a lot of other problems

Even compared to other very difficult problems, I'm not sure this is true; exactly one person is allowed to solve this problem every four years, and it's an extremely crowded competition. (Both parties had to have two debate stages for their most recent competitive cycles, and in both cases someone who had been a famous public figure for decades won.)

And even if you fail to win, even moderately succeeding provides (via predictable media tendencies) a far larger platform to influence others to do Effective things.

It provides a larger platform, but politics is also an extremely epistemically adversarial arena: it is way more likely someone decides they hate EA ideas if an EA is running against a candidate they like. In some cases this trade-off is probably worth it; you might think that convincing a million people is worth tens of millions thinking you're crazy. But sometimes the people who decide you're crazy (and a threat to their preferred candidates) are going to be (e.g.) influential AI ethicists, which could make it much harder to influence certain decisions later.

So, just saying - it is very difficult and risky, so anyone considering working on this needs to plan carefully!

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-01T13:01:35.438Z · EA · GW

Thanks for these points, especially the last one, which I've now added to the intro section.

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-06-30T22:02:04.108Z · EA · GW

With "100-200" I really had FTEs in mind rather than the >1 serious alignment threshold (and maybe I should edit the post to reflect this). What do you think the FTE number is?

Comment by levin on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-06-30T21:48:03.854Z · EA · GW

Hmm, interesting. My first draft said "under 1,000" and I got lots of feedback that this was way too high. Taking a look at your count, I think many of these numbers are way too high. For example:

  • FHI AIS is listed at 34, when the entire FHI staff by my count is 59; that 59 includes lots of philosophers and biosecurity people, and counts GovAI (where I work this summer, though my opinions are of course my own), which is definitely not AI safety technical research. The actual AI safety research group is 4.
  • MIRI is listed at 40, when their "research staff" page has 9 people.
  • CSET is listed at 5.8. Who at CSET does alignment technical research? CSET is a national security think-tank that focuses on AI risks, but is not explicitly longtermist, let alone a hub for technical alignment research!
  • CHAI is listed at 41, but their entire staff is 24, including visiting fellows and assistants.

Should I be persuaded by the Google Scholar label "AI Safety"? What percentage of their time do the listed researchers spend on alignment research, on average?

Comment by levin on Lifeguards · 2022-06-10T22:33:30.682Z · EA · GW

This comment was co-written with Jake McKinnon:

The post seems obviously true when the lifeguards are the general experts and authorities, who just tend not to see or care about the drowning children at all. It's more ambiguous when the lifeguards are highly-regarded EAs.

  • It's super important to try to get EAs to be more agentic and skeptical that more established people "have things under control." In my model, the median EA is probably too deferential and should be nudged in the direction of "go save the children even though the lifeguards are ignoring them." People need to be building their own models (even if they start by copying someone else's model, which is better than copying their outputs!) so they can identify the cases where the lifeguards are messing up.
  • However, sometimes the lifeguards aren't saving the children because the water is full of alligators or something. Like, lots of the initial ideas that very early EAs have about how to save the child in fact reflect ignorance about the nature of the problem (a common one is a version of "let's just build the aligned AI first"). If people overcorrect to "the lifeguards aren't doing anything," then when the lifeguards tell them why their idea is dangerous, they'll ignore them.

The synthesis here is something like: it's very important that you understand why the lifeguards aren't saving the children. Sometimes it's because they're missing key information, not personally well-suited to the task, exhausted from saving other children, or making a prioritization/judgment error in an area where you have some reason to think your judgment is better. But sometimes it's the alligators! Most ideas for solving problems are bad, so your prior should be that if you have an idea and it's not being tried, the idea is probably bad; if you have inside-view reasons to think that it's good, you should talk to the lifeguards to see if they've already considered it or think you will do harm.

Finally, it's worth noting that even when the lifeguards are competent and correctly prioritizing, sometimes the job is just too hard for them to succeed with their current capabilities. Lots of top EAs are already working on AI alignment in not-obviously-misguided ways, but it turns out that it's a very very very hard problem, and we need more great lifeguards! (This is not saying that you need to go to "lifeguard school," i.e. getting the standard credentials and experiences before you start actually helping, but probably the way to start helping the lifeguards involves learning what the lifeguards think by reading them or talking to them so you can better understand how to help.)

Comment by levin on Free-spending EA might be a big problem for optics and epistemics · 2022-04-22T18:03:21.203Z · EA · GW

Hmm, this does seem possible and maybe more than 50% likely. Reasons to think it might not be the case are that I know this person was fairly new to EA and not a longtermist, and that somebody asked a clarifying question about this survey question, which I think I answered in a clarifying way, though I may not have clarified the direction of the scale. I don't know!

Comment by levin on EA retreats are really easy and effective - The EA South Germany retreat 2022 · 2022-04-19T02:34:30.780Z · EA · GW

So glad to hear my post helped convince you to do this and that it went well!

Comment by levin on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T05:19:28.030Z · EA · GW

With the caveat that this is obviously flawed data because the sample is "people who came to an all-expenses-paid retreat," I think it's useful to provide some actual data Harvard EA collected at our spring retreat. I was slightly concerned that the spending would rub people the wrong way, so I included as one of our anonymous feedback questions, "How much did the spending of money at this retreat make you feel uncomfortable [on a scale of 1 to 10]?" All 18 survey respondents answered this question. Mean: 3.1. Median: 3. Mode: 1. High: 9.

I think it's also worth noting that in response to the first question, "What did you think of the retreat overall?", nobody mentioned money, including the person who answered 9 (who said "Excellent arrangements, well thought out, meticulous planning"). On the question "Imagine you're on the team planning the next retreat, and it's the first meeting. Fill in the blank: 'One thing I think we could improve from the last retreat is ____,'" nobody volunteered spending less money; several suggestions involved adding things that would cost more money, including from the person who answered 9, who suggested adding daily rapid tests. The question "Did participating in this retreat make you feel more or less like you want to be part of the EA community?" received mean 8.3, median 9, including a 9 from the person who felt most uncomfortable about the spending.

I concluded from this survey that, again, with the caveats for selection bias, the spending was not alienating people at the retreat, and especially not alienating enough to significantly affect their engagement with EA.

Comment by levin on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T16:49:36.930Z · EA · GW

I've seen the time-money tradeoff reach some pretty extreme, scope-insensitive conclusions. People correctly recognize that it's not worth 30 minutes of time at a multi-organizer meeting to try to shave $10 off a food order, but they extrapolate this to it not being worth a few hours of solo organizer time to save thousands of dollars. I think people should probably adopt some kind of heuristic about how many EA dollars their EA time is worth and stick to it, even when it produces the unpleasant/unflattering conclusion that you should spend time to save money.
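
(As a minimal sketch of what such a heuristic could look like in practice; the hourly conversion rate below is a placeholder assumption, not a recommendation.)

```python
# Minimal sketch of the kind of heuristic described above. The hourly value
# is an assumed placeholder; pick your own number and then apply it
# consistently, whether or not the answer is flattering.
EA_DOLLARS_PER_ORGANIZER_HOUR = 100  # assumption for illustration only

def worth_spending_time(hours: float, dollars_saved: float) -> bool:
    """Return True if spending the organizer time to save the money clears the bar."""
    return dollars_saved > hours * EA_DOLLARS_PER_ORGANIZER_HOUR

print(worth_spending_time(0.5, 10))    # 30 min to shave $10 off a food order -> False
print(worth_spending_time(3, 2000))    # a few solo hours to save thousands  -> True
```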

Also want to highlight "For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive" as what I think is the most clearly correct and actionable suggestion here.

Comment by levin on Good practices for changing minds · 2022-04-07T20:01:14.976Z · EA · GW

Yes, true, avoiding jargon is important!

Comment by levin on University Groups Should Do More Retreats · 2022-04-07T15:45:22.835Z · EA · GW

Thanks, added to resources!

Comment by levin on University Groups Should Do More Retreats · 2022-04-07T15:45:04.520Z · EA · GW

I should also note that we had a bit of a head start: I had organized the DC retreat one month earlier, so I had some recent experience; we already had lots of excited EAs, so we didn't even try to get professional EAs and decided casual hangouts were probably very high-value; and the organizing team basically had workshops ready to go. We also held it at a retreat center that provided food (though not snacks). If any of these had been different, it would have taken much longer to plan.

Comment by levin on Good practices for changing minds · 2022-04-07T15:38:56.854Z · EA · GW

Great post, possibly essential reading for community-builders; adding a link to this in several of my drafts + my retreat post. I think another important thing for CBers is to create a culture where changing your mind is high-status and having strongly held opinions without good reasons is not, which is basically the opposite of the broader culture (though I think EA does a good job of this overall). Ways I've tried to do this in settings with EA newcomers:

1) excitedly changing your mind - I'm thinking of a Robi Rahmanism: "The last time I changed my mind about something was right now." This doesn't just model openness; it also makes changing your mind a two-way street, rather than you having all the answers and them just needing to learn from you, which I think makes it less identity-threatening or embarrassing for them to change their minds.

2) saying, in conversations with already-bought-in EAs that happen in front of newcomers, things like "Hmm, I think you're under-updating." This shows that we expect longtime EAs to keep evaluating new evidence (and that we are comfortable disagreeing with each other) rather than just to memorize a catechism.

Comment by levin on University Groups Should Do More Retreats · 2022-04-07T02:26:03.130Z · EA · GW

It was very much an 80-20'd thing due to organizer capacity. The schedule was something like:

  • Friday evening arrivals + informal hangouts + board games (e.g. Pandemic)
  • Saturday morning: opening session, hikes/informal hangouts
  • Saturday afternoon: three sessions, each with multiple options:
    • 1-on-1 walks, Updating Session, AI policy workshop
    • 1-on-1 walks, Concept Swap, forecasting workshop
    • 1-on-1 walks, AI policy workshop
  • Saturday evening: Hamming Circles, informal hangouts feat. hot tub and fire pit
  • Sunday morning: walks/hangouts
  • Sunday afternoon: career reflection, closing session, departure

Comment by levin on University Groups Should Do More Retreats · 2022-04-06T23:47:16.742Z · EA · GW

Yep, great questions -- thanks, Michael. To respond to your first thing, I definitely don't expect that they'll have those effects on everybody, just that they are much more likely to do so than pretty much any other standard EA group programming.

  • Depends on the retreat. HEA's spring retreat (50 registrations, ~32 attendees) involved booking and communicating with a retreat center (which took probably 3-4 hours), probably 5-6 hours of time communicating with attendees, and like 2 hours planning programming. I ran a policy retreat in DC that was much more time-consuming, probably like 35 hours in figuring out logistics, communicating with guests, etc. I would guess the latter would do better on CBA (unless policy turns out to be very low-value).
  • I think scenic walks are probably the closest thing you can do on campus, but you definitely don't get 80% of the value (even on a per-organizer-time basis). You get to tailor the conversation to their exact interests, but it's not really the kind of sustained interaction in a self-contained social world that retreats offer.
  • Not with much confidence. I get the sense that the median person gets slightly more into EA, but I guess like 5-10% of attendees can have major priorities shifts, on the level of going from "EA seems like a cool way of thinking about climate policy" to "holy shit, x-risk." I personally have shifted in a couple of ways after retreats — from "optimize my time in grad school for a generic policy career provided that I make some attempt at EA community-building" to "EA community-building should be one of my top two priorities" after the group organizer retreat, and from "probably will work in biosecurity" to "probably will work in AI policy or EA meta" after Icecone.

Comment by levin on University Groups Should Do More Retreats · 2022-04-06T19:59:32.886Z · EA · GW

Re: "I'd also encourage the more "senior people" to join retreats from time to time," absolutely; not just (or even primarily) because you can provide value, but because retreats continue to be very useful in sharpening your cause prioritization, increasing your EA context, and building high-trust relationships with other EAs well after you're "senior"!

Comment by levin on Time-Time Tradeoffs · 2022-04-01T16:39:04.279Z · EA · GW

Since you published this on April 1, I read the headline, thought it was satire, and laughed out loud, but it turns out this is a great post! Big fan of time-time tradeoffs.

Comment by levin on Where would we set up the next EA hubs? · 2022-03-16T19:17:29.100Z · EA · GW

We're working on making Boston a much better hub - stay tuned!

In addition to the biosecurity hub, advantages for Boston not listed in the Boston section include immediate proximity to two of the top 2/5/5 global universities (the only place on earth where two are within a mile of each other), an advantage both for outreach/community-building and for the "culture fit" aspects discussed in this post.

It's also nearly ideally positioned between other EA hubs and mini-hubs:

  • Non-horrific distance in both time zone and flight to London (5 hours apart/6.5 hour flight) and San Francisco (3 hours apart/7 hour flight). Decent flight connectivity to Central Europe as well (though NYC is better for this).
  • Easy train ride to NYC (on which I am typing this comment!) and quick flights to NYC/DC.
  • Same time zone as the Bahamas, and a 3.5-hour flight.