Posts

Dialectic of Enlightenment 2022-06-15T04:58:02.642Z
You Don't Need To Justify Everything 2022-06-12T18:36:28.770Z
Open Problems in AI X-Risk [PAIS #5] 2022-06-10T02:22:41.568Z
Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4] 2022-05-30T20:37:40.789Z
Complex Systems for AI Safety [Pragmatic AI Safety #3] 2022-05-24T00:04:04.326Z
EA Housing Slack 2022-05-14T20:16:14.144Z
Look Out The Window 2022-05-12T21:16:37.466Z
A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] 2022-05-09T17:15:26.133Z
Introduction to Pragmatic AI Safety [Pragmatic AI Safety #1] 2022-05-09T17:02:00.389Z
Introducing the ML Safety Scholars Program 2022-05-04T13:14:07.422Z
[$20K In Prizes] AI Safety Arguments Competition 2022-04-26T16:21:40.314Z
The Place Where We Survived 2022-04-13T06:08:03.707Z
Get In The Van 2022-01-21T16:49:12.990Z
Why Student Groups Should Run Summer Fellowships 2022-01-03T00:54:19.178Z
Why Undergrads Should Take History Classes 2021-11-07T14:25:56.390Z
Yale EA House Seeking Housemates 2021-11-04T03:29:29.384Z
Why Planning Events Early Can Be Important 2021-10-20T19:18:40.135Z
Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement 2021-01-28T02:50:29.258Z

Comments

Comment by ThomasWoodside on Community Builders Spend Too Much Time Community Building · 2022-06-28T20:32:58.834Z · EA · GW

Thanks for writing this post!

I do think there are some cases where there isn't a clear line between what you call "marketing" and "skilling up."

If I do the "menial operations work" of figuring out how to easily get people to go to an EA conference, is that "marketing" or "skilling up"? It depends: if my goal is to do technical research only, then it probably isn't a useful skill for me, but in general operations is a very useful skill that you can build while doing EA community building.

If I know a group organizer has done the gruntwork of operations, I know that they can handle work that may not be that intellectually stimulating (regardless of the kind of work). I know that they are highly conscientious and able to not let tasks slip through the cracks. These are extremely useful traits in anyone. Of course, group organizing isn't the only way you can get these skills, but it's a pretty good one.

Comment by ThomasWoodside on What actions most effective if you care about reproductive rights in America? · 2022-06-27T03:28:26.594Z · EA · GW

I wasn't intending to single out you or any specific person when asking that question. My point was more that the community overall seems to have responded differently (judging by up/downvotes). Because different people see different posts, it's hardly a controlled experiment, so it could have been just chance who happened to see the post first and make a first impression.

Comment by ThomasWoodside on What actions most effective if you care about reproductive rights in America? · 2022-06-26T22:55:55.927Z · EA · GW

I notice a similarity to this post.

Somebody writes about an issue that happens to be a popular mainstream cause and asks, "how can I be most effective at doing good, given that I want to work specifically on this cause?"

I'm not saying the two issues are remotely equivalent. Obviously, to argue "this should be an EA cause area" would require very different arguments, and one might be much stronger than the other. With Ukraine, maybe you could justify it as being adjacent to nuclear risk, but the post wasn't talking about nuclear risk. Maybe close to being about preventing great power conflict, but the post wasn't talking about that, either. So, like this post, it is outside of the "standard" EA cause areas.

This comment seems to imply that if somebody is posting about a cause that isn't within the "standard" cause areas, then they need to justify it: they must explain why the cause would be better to work on than other cause areas, and cannot "leave that exercise to the reader." The first paragraph of the comment makes a meta-level point suggesting that people shouldn't even post about an issue and let readers debate it in the comments (which, in fairness, is not what the author of this post did: they explicitly asked for the cause not to be debated in the comments, though only after that comment was written). Instead, the author themselves must make a case for the object-level merits of the cause.

It seems others might agree, given that this comment has more karma than the original post (edit: this may or may not still be true, but it was true at the time of this comment). If people on the forum hold these beliefs about meta-level discussion norms, then I ask: why apply them to abortion and not to Ukraine?

I strongly suspect that the answer is that people are letting their object-level opinions of issues subtly motivate their meta-level opinions of discussion norms. I'd rather that not happen.

Comment by ThomasWoodside on 20 Critiques of AI Safety That I Found on Twitter · 2022-06-23T16:01:08.097Z · EA · GW

I think it's essential to ask some questions first:

  • Why do people hold these views? (Is it just their personality, or did somebody in this community do something wrong?)
  • Is there any truth to these views? (As can be seen here, anti-AI safety views are quite varied. For example, many are attacks on the communities that care about them rather than the object-level issues.)
  • Does it even matter what these particular people think? (If not, then leave them be.)

Only then should one even consider engaging in outreach or efforts to improve optics.

Comment by ThomasWoodside on Ways money can make things worse · 2022-06-21T19:08:58.917Z · EA · GW

Wanted to make a very small comment on a very small part of this post.

An assistant professor in AI wants to have several PhDs funded. Hearing about the abundance of funding for AI safety research, he drafts a grant proposal arguing why the research topic his group would be working on anyway helps not only with AI capabilities, but also with AI alignment. In the process he convinces himself this is the case, and as a next step convinces some of his students.

Yes, this certainly might be an issue! This particular issue can be mitigated by having funders do lots of grant followups to make sure that differential progress in safety, rather than capabilities, is achieved.

X-Risk Analysis by Dan Hendrycks and Mantas Mazeika provides a good roadmap for doing this. There are also some details in this post (edit, since my connection may not have been obvious: I work with Dan and I'm an author of the second post).

Comment by ThomasWoodside on Dialectic of Enlightenment · 2022-06-15T16:19:10.059Z · EA · GW

Curious why people are downvoting this? If it's because of some substantive criticism of the work, I'd be interested in hearing it.

If it's just because it's not very thought through, then what do you think the "not front page" function of the forum is for? (This might sound accusatory but I mean it genuinely).

One of the reasons I posted was because I wanted to hear thoughts/criticisms of the work overall, since I felt I didn't have a good context. Or maybe to find somebody who knew it better. But downvotes don't help with this.

Comment by ThomasWoodside on The totalitarian implications of Effective Altruism · 2022-06-14T17:59:55.819Z · EA · GW

This reminds me of Adorno and Horkheimer's The Dialectic of Enlightenment, which argues, for some of the same reasons you do, that "Enlightenment is totalitarian." A piece that feels particularly related:

For the Enlightenment, whatever does not conform to the rule of computation and utility is suspect.

They would probably say "alienation" rather than "externalization," but they have some of the same criticisms.

(I don't endorse the Frankfurt School or critical theory. I just wanted to note the similarities.)

One thing to consider is moral and epistemic uncertainty. The EA community already does this to some extent, for instance with MacAskill's Moral Uncertainty, Ord's Moral Parliament, the unilateralist's curse, etc., but there is an argument that it could be taken more seriously.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-06-13T17:47:31.209Z · EA · GW

This document will include all of that information (some of it isn't ready yet).

Comment by ThomasWoodside on You Don't Need To Justify Everything · 2022-06-12T22:31:15.375Z · EA · GW

This is a good point which I don't think I considered enough. This post describes this somewhat.

I do think the signal for which actions are best to take has to come from somewhere. You seem to be suggesting the signal can't come from the decisionmaker at all since people make decisions before thinking about them. I think that's possible, but I still think there's at least some component of people thinking clearly about their decision, even if what they're actually doing is trying to emulate what those around them would think.

We do want to generate actual signal for what is best, and maybe we can do this somewhat by seriously thinking about things, even if there is certainly a component of motivated reasoning no matter what.

A leaderboard on the forum, ranking users by (some EA organization's estimate of) their personal impact could give rise to a whole bunch of QALYs.

If this estimate is based on social evaluations, won't the people making those evaluations have the same problem with motivated reasoning? It's not clear this is a better source of signal for which actions are best for individuals.

If signal can never truly come from the decision-maker's own evaluation, it seems like the problem wouldn't be solved by moving to social evaluation. Concrete, measurable metrics would avoid this, but those seem way harder to come by in some fields than in others.

Comment by ThomasWoodside on You Don't Need To Justify Everything · 2022-06-12T19:51:03.831Z · EA · GW

Yes, people will always have motivated reasoning, for essentially every explanation of their actions they give. That being said, I expect it to be weaker for the small set of things people actually think about deeply, rather than things they're asked to explain after the fact that they didn't think about at all. Though I could be wrong about this expectation.

Comment by ThomasWoodside on Lifeguards · 2022-06-10T22:32:23.727Z · EA · GW

EA groups often get criticized by university students for "not doing anything." The answer usually given (which I think is mostly correct!) is that the vast majority of your impact will come from your career, and university is about gaining the skills you need to be able to do that. I usually say that EA will help you make an impact throughout your life, including after you leave college; the actions people usually think of as "doing things" in college (like volunteering), though they may be admirable, don't.

Which is why I find it strange that the post doesn't mention the possibility of becoming a lifeguard.

In this story, the lifeguards aren't noticing. Maybe they're complacent. Maybe they don't care about their jobs very much. Maybe they just aren't very good at noticing.  Maybe they aren't actually lifeguards at all, and they just pretend to be lifeguards. Maybe the entire concept of "lifeguarding" is just a farce.

But if it's really just that they aren't noticing, and you are noticing, you should think about whether it really makes sense to jump into the water and start saving children. Yes, the children are drowning, but no, you aren't qualified to save them. You don't know how to swim that well, you don't know how to carry children out of the water, and you certainly don't know how to do CPR. If you really want to save lives, go get some lifeguard training and come back and save far more children.

But maybe the children are dying now, and this is the only time they're dying, so once you become a lifeguard it will be too late to do anything. Then go try saving children now!

Or maybe going to lifeguard school will destroy your ability to notice drowning children. In that case, maybe you should try to invent lifeguarding from scratch.

But unless all expertise is useless and worthless, which it might be in some cases, it's at least worth considering whether you should be focused on becoming a good lifeguard.

Comment by ThomasWoodside on Jobs at EA-organizations are overpaid, here is why · 2022-06-09T00:06:21.381Z · EA · GW

This is the third time I've seen a suggestion like this, and antitrust law is always brought up. I feel like maybe it's worth a post that just says "no, you can't coordinate salaries/hiring practices/etc., here's why" since that would be helpful for the general EA population to know.

Comment by ThomasWoodside on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T17:20:15.179Z · EA · GW

Aaron didn't link it, so if people aren't aware,  we are running that competition (judging in progress).

Comment by ThomasWoodside on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T15:26:16.161Z · EA · GW

I think I disagree with this.

To me, short timelines would mean the crunch in movement building was in the past.

It's also really not obvious when exactly "crunch time" would be. 10 years before AGI? 30 years?

If AGI is in five years, I expect movement building among undergrads to not matter at all. If it's in ten years, maybe you could say "movement building has almost run its course," but I think "crunch time" would probably still be in the past.

Edit: I'm referring to undergrad movement building here. Talking to tech executives, policymakers, existing ML researchers etc. would have a different timeline.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-06-06T15:26:35.982Z · EA · GW

No, we're still working on it! All decisions will be sent by tomorrow, June 7th, as indicated in this post.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-27T16:14:41.572Z · EA · GW

Total time including assignments. Don't worry, there will not be 30-40 hours of lecture videos every week!

Comment by ThomasWoodside on Complex Systems for AI Safety [Pragmatic AI Safety #3] · 2022-05-26T17:33:40.727Z · EA · GW

The terminology around AI (AI, ML, DL, RL) is a bit confused sometimes. You're correct that deep reinforcement learning does indeed use deep neural nets, so it could be considered a part of deep learning. However, colloquially deep learning is often taken to mean the parts that aren't RL (so supervised, unsupervised, and self-supervised deep learning). RL is pretty qualitatively different from those in the way it is trained, so it makes sense that there would be a different term, but it can create confusion.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-24T20:50:25.401Z · EA · GW

That shouldn't be a problem. For synchronous activities, we will have multiple sessions you can attend (we will have people from all over the world so we need to do this anyway).

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-24T07:04:52.722Z · EA · GW

Sorry, I missed replying to this comment while we were working on this doc; this is indeed the resource we recommend!

Comment by ThomasWoodside on EA is more than longtermism · 2022-05-23T03:50:24.930Z · EA · GW

It reminded me a bit of Charity Navigator shooting themselves in the foot with the phrase "defective altruism".


It's not that their claims have zero truth, but they are over the top and it harms whatever argument they did have.

Comment by ThomasWoodside on EA is more than longtermism · 2022-05-23T00:52:11.761Z · EA · GW

The title of this post (and a link to it) was quoted here as supporting the claim that EA is mostly just longtermism.

https://reboothq.substack.com/p/ineffective-altruism?s=r

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-20T14:30:41.812Z · EA · GW

You can certainly add it to your resume, but you wouldn't be able to get a reference letter.

The program uses public recorded online classes, and while we have TAs, none of them are professors.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-20T06:54:27.125Z · EA · GW

Not clear right now whether we will need more TAs, but if we do, we'll make a post soon with an application. I'll reply to this if/when that happens. Thanks for your interest!

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-18T02:10:54.124Z · EA · GW

This will depend on the number of TAs we can recruit, our yield rate, and other variables so I can't give a good figure on this right now, sorry.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-17T02:25:43.753Z · EA · GW

A confirmation email is not expected. We received your application!

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-16T19:10:29.524Z · EA · GW

Yes, but please note this on your application. In general, short periods of unavailability are fine, but we won't give any extensions for them so you will likely have to complete the material at an accelerated pace at the times when you are available.

Comment by ThomasWoodside on EA Housing Slack · 2022-05-14T23:21:33.425Z · EA · GW

Yes, it's possible that would be better (though I can see pros and cons to both approaches). I just saw a need and wanted to fill it, and the people I talked to about this idea beforehand seemed generally happy about it (none of them suggested this alternative, which I agree could work!).


That being said, I'm not attached to it. If you think this would be better and people on the slack seem to agree then I wouldn't be opposed to shutting down the slack.

Comment by ThomasWoodside on EA Housing Slack · 2022-05-14T22:43:01.383Z · EA · GW

Yes, this is what I had in mind.

Comment by ThomasWoodside on EA Housing Slack · 2022-05-14T20:45:10.488Z · EA · GW

The idea is for there to be a channel for each location in the slack (e.g. Oxford, Berkeley, etc.). I think that would be unwieldy as part of another slack.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-13T18:47:17.813Z · EA · GW

We may consider people in this situation, but it's not the focus of our program and we will prioritize undergraduates.

Comment by ThomasWoodside on EA and the current funding situation · 2022-05-11T12:21:13.401Z · EA · GW

I think it's easier than it might seem to do something net negative, even ignoring opportunity cost. For example: actively competing with another, better project; interfering with politics or policy in counterproductive ways; creating a negative culture shift in the overall ecosystem; and so on.

Besides, I don't think the attitude that our primary problem is spending down the money is prudent. This is putting the cart before the horse and, as Habryka said, might lead to people asking "how can I spend money quick?" rather than "how can I ambitiously do good?" EA certainly has a lot of money, but I think people underestimate how fast $50 billion can disappear if it's mismanaged (see, for an extreme example, Enron).

Comment by ThomasWoodside on EA and the current funding situation · 2022-05-10T21:41:44.038Z · EA · GW

I thought this comment was valuable and it's also a concern I have.

It makes me wonder if some of the "original EA norms", like donating a substantial proportion of income or becoming vegan, might still be quite important for building trust, even as they seem less important in the grand scheme of things (mostly because of the increase in the proportion of people who believe in longtermism). This post makes a case for signalling.

It also seems to increase the importance of vetting people in somewhat creative ways. For instance, did they demonstrate altruistic things before they knew there was lots of money in EA? I know EAs who spent a lot of their childhoods volunteering, told their families to stop giving them birthday presents and instead donate to charities, became vegan at a young age at their own initiative, were interested in utilitarianism very young, adopted certain prosocial beliefs their communities didn't have, etc. When somebody did such things long before it was "cool" or they knew there was anything in it for them, this demonstrates something, even if they didn't become involved with EA until it might help their self-interest. At least until we have Silicon Valley parents making sure their children do all the maximally effective things starting at age 8.

It's kind of useful to consider an example, and the only example I can really give on the EA forum is myself. I went to one of my first EA events partially because I wanted a job, but I didn't know that there was so much money in EA until I was somewhat involved (also this was Fall 2019, so there was somewhat less money). I did some of the things I mentioned above when I was a kid (or at least, so I claim on the EA forum)! Would I trust me immediately if I met me? Eh, a bit but not a lot, partially because I'm one of the hundreds of undergrads somewhere near AI safety technical research and not (e.g.) an animal welfare person. It would be significantly easier if I'd gotten involved in 2015 and harder if I'd gotten involved in 2021.

Part of what this means is that we can't rely on trust so much anymore. We have to rely on cold, hard accomplishments. It's harder, it's more work, and it feels less warm and fuzzy, but it seems necessary in this second phase. This means we have to be better about evaluating accomplishments in ways that don't rely on social proof. I think this is easier in some fields (e.g. earning to give, distributing bednets) than others (e.g. policy), but we should try in all fields.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-07T04:07:14.084Z · EA · GW

We'll consider this if there's enough demand for it! But especially for the latter option, it might make sense for students to work through the last three weeks on their own (ML Safety lectures will be public by then).

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-06T12:17:11.623Z · EA · GW

It will be mostly asynchronous, with a few hours of synchronous content per week. We also expect to have sections at different times for people in different timezones so there should be one that works for you.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-05T01:49:11.473Z · EA · GW

I completely agree! Summer plans are often solidified quite early, so promoting earlier is better. I'm no stranger to the idea of doing things early!

In this case, we only saw the need for this program a few weeks ago, and we're now trying to fill it. If we do run it again next year, we'll announce it earlier, though there's definitely still some benefit to having applications open fairly late (e.g. for people who may not have gotten other positions because they lacked ML knowledge).

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-05T01:40:34.702Z · EA · GW

We'll have a website, and probably also make a forum post here when we release it.

Comment by ThomasWoodside on Why CEA Online doesn’t outsource more work to non-EA freelancers · 2022-05-04T18:24:44.354Z · EA · GW

This post, written by somebody at Wave, argues that in many cases you should outsource even less than the original post suggests. I found it interesting so wanted to link it here.

Comment by ThomasWoodside on Introducing the ML Safety Scholars Program · 2022-05-04T17:52:09.600Z · EA · GW

We're prioritizing undergraduates, but depending on the strength/size of our undergraduate applicant pool, we may also admit graduate students. Feel free to apply!

The ML Safety curriculum is not yet fully ready, but when it is we will release it publicly, not just for this program. We'll post again when we do.

Comment by ThomasWoodside on Brief Presentation and Considerations for an EA Common Application · 2022-05-04T15:51:24.489Z · EA · GW

Mauricio mentioned in this comment that something like this could run into problems with antitrust law. https://www.ftc.gov/system/files/documents/public_statements/992623/ftc-doj_hr_red_flags.pdf

Neither of us is a lawyer, and I certainly have no idea if this is true. But it's perhaps something to think about.

Comment by ThomasWoodside on My bargain with the EA machine · 2022-05-01T22:55:34.785Z · EA · GW

I just copied the data manually since I don't have that many comments; it doesn't seem like it would be too hard to throw together a scraper for it, though.
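
(For anyone curious, here's a minimal sketch of that kind of quick tally in Python, assuming the karma scores have already been copied into a list by hand; the numbers below are placeholders rather than my real data:)

```python
# Quick look at whether comment karma is heavy-tailed, given hand-copied scores.
# The list below is a placeholder, not actual forum data.
import numpy as np
import matplotlib.pyplot as plt

karma = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 12, 14, 18, 25, 41, 78])

print(f"The top comment holds {karma.max() / karma.sum():.0%} of total karma.")

plt.hist(karma, bins=15)
plt.xlabel("comment karma")
plt.ylabel("number of comments")
plt.title("Distribution of one user's comment karma")
plt.show()
```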

Comment by ThomasWoodside on My bargain with the EA machine · 2022-05-01T16:37:03.592Z · EA · GW

Fittingly, this comment is in the tail of comments I've written on the forum:

Comment by ThomasWoodside on My bargain with the EA machine · 2022-04-28T11:51:38.081Z · EA · GW

This is tricky, because it's really an empirical claim for which we need empirical evidence, and I don't currently have such evidence about anyone's counterfactual choices. But I think even if you zoom in on the top 10% of a skewed distribution, it's still going to be skewed. Within the top 10% (or even 1%) of researchers or nonprofits, it's likely that only a small subset are making most of the impact.
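
(As a rough illustration of that "still skewed at the top" point, here's a minimal sketch using a Pareto distribution as a stand-in for impact; the shape parameter is just the classic 80/20 value, not an empirical estimate:)

```python
# Demonstration that the top 10% of a heavy-tailed distribution is itself heavy-tailed.
# The Pareto shape parameter (1.16, roughly the 80/20 rule) is an assumption.
import numpy as np

rng = np.random.default_rng(0)
impact = np.sort(rng.pareto(1.16, size=100_000) + 1)

top10 = impact[-10_000:]  # zoom in on the top 10% of "careers"
print(f"The top 10% account for {top10.sum() / impact.sum():.0%} of total impact.")
print(f"Within the top 10%, its own top 10% still account for "
      f"{top10[-1_000:].sum() / top10.sum():.0%} of that group's impact.")
```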

I think it's true that "the higher we aim, the higher uncertainty we have" but you make it seem as if that uncertainty always washes out. I don't think it does. I think higher uncertainty often is an indicator that you might be able to make it into the tails. Consider the monetary EV of starting a really good startup or working at a tech company. A startup has more uncertainty, but that's because it creates the possibility of tail gains.

Anecdotally I think that certain choices I've made have changed the EV of my work by orders of magnitude. It's important to note that I didn't necessarily know this at the time, but I think it's true retrospectively. But I do agree it's not necessarily true in all cases.

Comment by ThomasWoodside on My bargain with the EA machine · 2022-04-28T03:22:44.914Z · EA · GW

This is an interesting post! I agree with most of what you write. But when I saw the graph, I was suspicious. The graph is nice, but the world is not.

I tried to create a similar graph to yours:

In this case, fun work is pretty close to impactful toll. In fact, its impact value is only about 30% less than the impact value of impactful toll. This is definitely sizable, and creates some of the considerations above. But mostly, everywhere on the Pareto frontier seems like a pretty reasonable place to be.

But there's a problem: why is the graph so nice?  To be more specific: why are the x and y axes so similarly scaled?

Why doesn't it look like this?

Here I just replaced  in the ellipse equation with . It seems pretty intuitive that our impact would be power-law distributed, with a small number of possible careers making up the vast majority of our possible impact. A lot of the time, when people are trying to maximize something, it ends up power-law distributed (money donated, citations for researchers, lives saved, etc.). Multiplicative processes, as Thomas Kwa alluded to, will also make something power-law distributed. This doesn't really look power-law distributed quite yet, though. Maybe I'll take the log again:

Now, fun work is 100x less impactful than impactful toll. That would be unfortunate. Maybe the entire Pareto frontier doesn't look so good anymore.
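
(For concreteness, here's a minimal sketch of this kind of rescaling, assuming the frontier starts as the ellipse x^2 + y^2 = 1 and that the impact axis is stretched as 10^(2y); the particular transform is just for illustration, chosen so the top of the frontier ends up roughly 100x the bottom:)

```python
# Sketch: how an innocuous-looking Pareto frontier changes when impact is heavy-tailed.
# The ellipse and the 10**(2*y) rescaling are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 200)
y = np.sqrt(1 - x**2)  # "nice" frontier: impact and fun similarly scaled

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, y)
ax1.set(xlabel="fun / personal fit", ylabel="impact", title="nicely scaled frontier")

ax2.plot(x, 10 ** (2 * y))  # same frontier, impact axis stretched ~100x top-to-bottom
ax2.set(xlabel="fun / personal fit", ylabel="impact (heavy-tailed)", title="after rescaling impact")

plt.tight_layout()
plt.show()
```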

I think this is an inherently fatal flaw with attempts to talk about trading off impact and other personal factors in making choices. If your other personal factors are your ability to have fun, have good friendships, etc., you now have to make the claim that those things are also power-law distributed, and that your best life with respect to those other values is hundreds of times better than your impact maximizing life. If you don't make that claim, then either you have to give your other values an extremely high weight compared with impact, or you have to let impact guide every decision.

In my view, for most people the numbers are probably pretty clear that impact should be the overriding factor. But I think there can be problems with thinking that way about everything. Some of those problems are instrumental: if you think impact is all that matters, you might try to do the bare minimum of self-care, but that's dangerous.

I think people should think in the frame of the original graph most of the time, because the graph is nice, and it's a reminder that you should be nice to yourself. If you had one of the other graphs in your head, you wouldn't really have any good reason to be nice to yourself that isn't arbitrary or purely instrumental.

But every so often, when you face down a new career decision with fresh eyes, it can help to remember that the world is not so nice.

Comment by ThomasWoodside on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-25T13:11:16.282Z · EA · GW

Distill was never really about distillations in the sense this post is referring to. It was a journal that focused on having very high-quality presentation/visualizations. It's also no longer active: https://distill.pub/2021/distill-hiatus/

Comment by ThomasWoodside on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-23T22:13:55.966Z · EA · GW

Dan Hendrycks and I would love for somebody to distill some of his papers! https://arxiv.org/abs/2008.02275 https://arxiv.org/abs/2110.13136

Comment by ThomasWoodside on The Place Where We Survived · 2022-04-13T06:10:23.682Z · EA · GW

I unfortunately did not take a picture that would work well in the main body of this, but here is me flying away from the place where we survived.

Comment by ThomasWoodside on I feel anxious that there is all this money around. Let's talk about it · 2022-03-31T12:59:02.843Z · EA · GW

One thing you didn't mention is grant evaluation. I personally don't mind grants being given out somewhat quickly and freely at the beginning of a project. But before somebody asks for money again, their last grant should be evaluated to see whether it accomplished anything. My sense is that this is not common (or thorough) enough, even for bigger grants. As the movement gets bigger, this seems pretty likely to lead to a lack of accountability.

Maybe more happens behind the scenes than I realize though, and there actually is a lot more evaluation than I think.

Comment by ThomasWoodside on Meditations on careers in AI Safety · 2022-03-27T01:50:12.449Z · EA · GW

Related: https://80000hours.org/articles/applying-an-unusual-skill-to-a-needed-niche/

Comment by ThomasWoodside on Effectiveness is a Conjunction of Multipliers · 2022-03-27T01:44:28.512Z · EA · GW

It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to "make it EA".


I think it also applies here (which, by the way, is one of the most thought-provoking and useful parts of this post). I think some alternative phrasing like the one below might actually make the point even more self-evident:


"It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to make it the most maximally impactful project you could be working on."

Comment by ThomasWoodside on Meditations on careers in AI Safety · 2022-03-27T01:31:54.418Z · EA · GW

If the community has so much money, and we believe this is such an important problem, why can't we just hire/fund world experts in AI/ML to work on it?


Food for thought: LeCun and Hinton both hold academic positions in addition to their industry positions at Meta and Google, respectively. Yoshua Bengio is still in academia entirely. Do you think that tech companies haven't tried to buy every minute of their attention? Why are the three pioneers of deep learning not all in the highest-paying industry job? Clearly, they care about something more than this.