Posts

Nice 1min 80,000 Hours ad on YouTube video 2022-08-07T19:10:11.119Z
Consider Not Changing Your Forum Username to Your Real Name 2022-04-28T01:42:01.320Z
Announce summer EA internships farther in advance 2022-03-29T21:51:35.169Z
Expanding Your Optimization Circle 2022-03-24T22:58:04.126Z
Think about EA alignment like skill mastery, not cult indoctrination 2022-01-05T19:07:13.396Z

Comments

Comment by electroswing on Nice 1min 80,000 Hours ad on YouTube video · 2022-08-09T18:46:37.282Z · EA · GW

If it's OK to answer: how much did you end up dictating the content/wording of the ad?

Comment by electroswing on The EA forum post-writing algorithm (95% > 100 karma, n> 100) · 2022-07-23T18:54:12.884Z · EA · GW

This might be better received as an April Fools' Day post.

Comment by electroswing on Think about EA alignment like skill mastery, not cult indoctrination · 2022-07-16T13:07:39.895Z · EA · GW

I should clarify: I think EAs engaging in this behavior are exhibiting cult indoctrination unintentionally, not intentionally.

One specific example would be in my comment here.

I also notice that when more experienced EAs talk to new EAs about x-risk from misaligned AI, they tend to present an overly narrow perspective. Sentences like "Some superintelligent AGI is going to grab all the power and then we can do nothing to stop it" are thrown around casually without stopping to examine the underlying assumptions. Then newer EAs repeat these cached phrases without having carefully formed an inside view, and the movement's overall epistemics get worse.

Here is a recent example of an EA group having a closed-off social circle, to the point where a person who actively embraces EA has difficulty fitting in.

I haven't read the whole post yet, but the start of Zvi's post here lists 21 EA principles which are not commonly questioned.

I am not going to name the specific communities where I've observed culty behavior because this account is pseudonymous.

Comment by electroswing on SERI ML Alignment Theory Scholars Program 2022 · 2022-05-07T01:56:09.500Z · EA · GW

I worry that the current format of this program might filter out promising candidates who are risk-averse. Specifically, the fact that candidates are only granted the actual research opportunity "Assuming all goes well" is a lot of risk to take on. For driven undergraduates, a summer opportunity falling through is costly, and they might not apply just because of this uncertainty.

Currently your structure is like that of PhD programs that admit students to a specific lab (students may be dropped from that lab if they're not a good fit, and in that case have to scramble to find an alternative placement).

Maybe a better model for this program is PhD programs that admit a strong cohort of students. Instead of one two-week research sprint, maybe you could have 2-3 shorter research sprints ("rotations"). From a student's perspective this would probably lower the probability of being dropped (since all of the mentors would have to dislike them for that to happen).

What you're currently doing seems like a fine option for you with little downside for the students if: 

1) "Assuming all goes well" means >90% of students continue on with research

2) The projects are sufficiently disjoint that it's unlikely a student is going to be a good fit for more than one project (I think this is probably false but you know more than me, and maybe you think it's true) 

3) 2-week research sprints are much more valuable than 1-week research sprints (I am not convinced of this but maybe you are)

If not all of these hold, I'd argue it might be better to do rotations or find other ways to make this less risky for candidates.

Another idea to avoid filtering out risk-averse candidates: you could promise that if they don't get matched with a mentor, they can at least do <some other project>; for example, they could be paid to distill AI Safety materials.

Comment by electroswing on Introducing the ML Safety Scholars Program · 2022-05-06T22:51:57.734Z · EA · GW

Can undergraduates who already know ML skip weeks 1-2? Can undergraduates who already know DL skip weeks 3-5?

Comment by electroswing on Introducing the ML Safety Scholars Program · 2022-05-04T21:49:44.507Z · EA · GW

You may already have this in mind, but if you are re-running this program in summer 2023, I think it would be a good idea to announce it further in advance.

Comment by electroswing on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-04T05:27:08.845Z · EA · GW

I was in the process of writing a comment trying to debunk this. My counterexample didn't work, so now I'm convinced this is a pretty good post. It's a nice way of thinking about ITN quantitatively.

The counterexample I was trying to make might still be interesting for some people to read as an illustration of this phenomenon. Here it is:

Scale "all humans" trying to solve "all problems" down to "a single high school student" trying to solve "math problems". Then tractability (measured as % of problem solved / % increase in resources) for this person to solve different math problems is as follows:

  • A very large arithmetic question like "find 123456789123456789^2 by hand" requires ~10 hours to solve
  • A median international math olympiad question probably requires ~100 hours of studying to solve 
  • A median research question requires an undergraduate degree (~2000 hours) and then specialized studying (~1000 hours) to solve
  • A really tough research question takes a decade of work (~20,000 hours) to solve
  • A way-ahead-of-its-time research question (think, maybe, developing ML theory results before there were even computers) could take 100,000+ hours of work, I'd guess

Here tractability varies by 4 orders of magnitude (10-100,000 hours) if you include all kinds of math problems. If you exclude very easy or very hard things (as Thomas was describing), you end up with 2 orders of magnitude (~1000-100,000 hours).
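A minimal sketch of that orders-of-magnitude arithmetic, taking the hour figures above at face value (they are my rough guesses, not measured data):

```python
import math

# Rough hour estimates from the bullets above (my own guesses, not data).
hours_to_solve = {
    "large arithmetic by hand": 10,
    "median IMO problem": 100,
    "median research question": 3_000,      # ~2000 h degree + ~1000 h specialized study
    "tough research question": 20_000,
    "way-ahead-of-its-time question": 100_000,
}

# Treating tractability as (% of problem solved) / (% increase in resources),
# with each problem fully solved by its listed hours, tractability is inversely
# proportional to hours, so the spread in tractability equals the spread in hours.
full_spread = math.log10(max(hours_to_solve.values()) / min(hours_to_solve.values()))
research_spread = math.log10(100_000 / 1_000)  # excluding the very easy problems

print(f"All math problems: ~{full_spread:.0f} orders of magnitude")       # ~4
print(f"Research-level only: ~{research_spread:.0f} orders of magnitude")  # ~2
```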

Comment by electroswing on 'Dropping out' isn't a Plan · 2022-04-28T22:39:34.906Z · EA · GW

I think the diagram that differentiates "Stay in school" versus "Drop out" before splitting further actually makes some sense. The way I read that split, it is saying "Stay in school" versus "Do something strange".

In some cases it might be helpful, in the abstract, to figure out the pros and cons of staying in school before recursing down the "Drop out" path. Otherwise, you could imagine a pro/con list for ORGs 1-3 having a lot of repetition: "Not wasting time taking useless required classes" is a pro for all 3, "Losing out on connections / credential" is a con for all 3, etc.

Comment by electroswing on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-30T15:25:57.570Z · EA · GW

Yannic Kilcher's YouTube channel profiles fairly recent papers and "ML news" events. The videos on papers are 30-60 minutes, so more in-depth than reading an abstract, and less time-consuming than reading the paper yourself. The "ML news" videos are less technical but still a good way to keep up to date on what DeepMind, Meta, NVIDIA, etc. are up to.

Comment by electroswing on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2022-01-27T17:53:26.119Z · EA · GW

Another post on this topic: https://forum.effectivealtruism.org/posts/aeobxpZXQb7X9MPbM/free-money-from-new-york-gambling-websites

Comment by electroswing on Free money from New York gambling websites · 2022-01-25T20:36:54.505Z · EA · GW

You must be located in New York or another eligible state while signing up and making the bets.

Just to confirm -- do these bets require New York residency, or just being physically present in New York? What forms of identification are requested -- does it have to be a New York state ID (e.g. a driver's license)?

Comment by electroswing on [deleted post] 2022-01-21T22:36:57.613Z

I often run into the problem of EA coming up in casual conversation and not knowing exactly how to explain what it is, and I know many others run into this problem as well.

Not rigorously tested or peer-reviewed, but here is an approach I've found works decently. The intended audience is a "normal person".

My short casual pitch of EA:

"Effective altruism is about doing research to improve the effectiveness of philanthropy. Researchers can measure the effects of different interventions, like providing books versus providing malaria nets. GiveWell, an effective altruist charity evaluator, has identified a few high-impact interventions: malaria medicine and nets, vitamin A supplements, encouraging childhood vaccinations, and so on."

If I have a couple more sentences to introduce a bit of longtermism:

"There is also a part of effective altruism which is concerned with preventing future catastrophes. Climate change is one well-known example. Another example is global catastrophic biological risks—as we saw with COVID-19, pandemics can cause a lot of harm, so effective altruists see research in biosecurity and pandemic prevention as highly effective. There is also the field of "AI Safety", which is based on the premise that AI systems will become more prevalent in the future, so it is important we thoroughly research their capabilities before deploying them. The unifying theme here is a "longtermist" worldview—the idea that we can do good things now which will have positive effects on the far future."

The ideas that make up this pitch are:

  • Start with broadly accepted premises ("AI systems will become more prevalent in the future") before putting the EA spin on them ("so we need to do AI safety research"). This principle also applies to writing abstracts.
  • Sacrifice precision in definitions of concepts for the sake of getting the intuitive idea across. For example, describing longtermism as "doing things which positively affect the future" does not perfectly capture the concept, but it's an easier starting point than "future-people are just as morally relevant as present-people". 

These principles can similarly be applied to describe AI safety, animal welfare, etc. in simple terms.

Comment by electroswing on Think about EA alignment like skill mastery, not cult indoctrination · 2022-01-06T22:49:42.818Z · EA · GW

When I say "repeating talking points", I am thinking of: 

  1. Using cached phrases and not explaining where they come from. 
  2. Conversations which go like
    • EA: We need to think about expanding our moral circle, because animals may be morally relevant. 
    • Non-EA: I don't think animals are morally relevant though.
    • EA: OK, but if animals are morally relevant, then quadrillions of lives are at stake.

(2) is kind of a caricature as written, but I have witnessed conversations like these in EA spaces. 

My evidence for this claim comes from my personal experience watching EAs talk to non-EAs, and listening to non-EAs talk about their perception of EA. The total number of data points in this pool is ~20. I would say that I don't have exceptionally many EA contacts compared to most EAs, but I do make a particular effort to seek out social spaces where non-EAs are looking to learn about EA. Thinking back on these experiences, and which conversations went well and which ones didn't, is what inspired me to write this short post.

Ultimately my anecdotal data can't support any statistical statements about the EA community at large. The purpose of this post is more to describe two mental models of EA alignment and advocate for the "skill mastery" perspective.