Posts

Isaac Asimov: The Last Question 2021-11-11T17:00:41.018Z
How many people should get self-study grants and how can we find them? 2021-11-10T09:28:54.589Z
EA is vetting-constrained 2019-03-09T01:25:55.689Z
EA Hotel Fundraiser 1: The story 2018-12-27T12:15:55.157Z

Comments

Comment by toonalfrink on EA Internships Board Now Live! · 2021-12-22T10:46:44.513Z · EA · GW

As a non-student with self-funding who is looking for things to try, is this board also for me?

Comment by toonalfrink on EA is vetting-constrained · 2021-12-14T09:34:17.884Z · EA · GW

This topic seems even more relevant today than in 2019, when I wrote it. At EAG London I saw an explosion of initiatives, and there is even more money that isn't being spent. I've also seen an increase in the attention EA is giving to this problem, both from the leadership and on the forum.

Increase fidelity for better delegation

In 2021 I still like to frame this as a principal-agent problem.

First of all, there's the risk of Goodharting. One prominent grantmaker recounted to me that, back when a well-known org was giving out grants, people would simply frame whatever they were already doing as EA, and then keep doing it anyway.

This is not actually an unsolved problem if you look elsewhere in the world. Just look at your average company. Surely employees like to sugarcoat their work a bit, but we don't often see a total departure from what their boss wants from them. Why not?

Well, I recently applied for funding to the EA Meta Fund. The project was a bit wacky, so we gave it a 20% chance of being approved. The rejection e-mail contained a whopping ~0.3 bits of information: "No". It's like that popular meme where a guy asks his girlfriend what she wants to eat, makes a lot of guesses, and she just keeps saying "no" without giving him any hints.
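For anyone wondering where that number comes from: it's the surprisal of the answer, computed from our own estimate. A quick sketch, assuming the 20% approval odds above:

$$ I(\text{``no''}) = -\log_2 P(\text{no}) = -\log_2(0.8) \approx 0.32\ \text{bits} $$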

So how are we going to find out what grantmakers want from us, if not by the official route? Perhaps this is why it seems so common for people close to the grantmaker to get funded: they do get to have high-fidelity communication.

If this reads as cynicism, I'm sorry. For all I know, they've got perfectly good reasons for keeping me guessing. Perhaps they want me to generate a good model by myself, as a proof of competence? There's always a high-trust interpretation, and despite everything I insist on mistake theory.

The subscription model

My current boss talks to me for about an hour, about once a month. This is where I tell him how my work is going. If I'm off the rails somehow, this is where he would tell me. If my work were to become a bad investment for him, this is where he would fire me.

I had a similar experience back when I was doing RAISE. Near the end, there was one person from Berkeley who was funding us. About once a month, for about an hour, we would talk about whether it was a good idea to continue this funding. When he updated away from my project being a good investment, he discontinued it. This finally gave me the high-fidelity information I needed to decide to quit. If not for him, who knows how much longer I would have continued.

So if I were to attempt a practical solution: train more grantmakers. Allow grantmakers to make exploratory grants unilaterally, to speed things up. Fund applicants according to a subscription model: be especially liberal with the first grant, but only fund for a short period; talk to them after every period; discontinue funding as soon as you stop believing in their project; and give them a cooldown period between projects so they don't leech off of you.
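To make the subscription model concrete, here's a minimal sketch of that funding loop. The names, period length, and threshold are all invented for illustration; this is not an actual grantmaking policy:

```python
import random

def check_in(credence: float) -> float:
    """Stand-in for the monthly high-fidelity conversation: the grantmaker
    re-evaluates their credence that the project is a good investment.
    Modeled as a noisy update here; a real update would come from the talk."""
    return max(0.0, min(1.0, credence + random.uniform(-0.2, 0.1)))

def run_subscription(credence: float = 0.8, monthly_budget: float = 2000.0) -> float:
    """Fund in short periods; discontinue as soon as the grantmaker stops
    believing in the project. Returns the total amount paid out."""
    total_paid = 0.0
    while credence > 0.5:              # be liberal at first, keep periods short
        total_paid += monthly_budget   # fund one more period
        credence = check_in(credence)  # talk after every period
    return total_paid                  # then: cooldown before the next project

print(run_subscription())
```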

Comment by toonalfrink on AGI Safety Fundamentals curriculum and application · 2021-11-27T12:53:06.291Z · EA · GW

I have added a note to my RAISE post-mortem, which I'm cross-posting here:

Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare it with RAISE. Why is that project succeeding where this one did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing: you can just focus on the delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.

Comment by toonalfrink on A Red-Team Against the Impact of Small Donations · 2021-11-25T10:12:24.811Z · EA · GW

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate, I would proceed to write up my thoughts on EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.

I would like to respond specifically to this reasoning.

Consider the scenario that a random (i.e. probably not EA-affiliated) genius comes up with an idea that is, as a matter of fact, high value. 

Simplifying a lot, there are two possibilities here: (X) their idea falls within the window of what the EA community regards as effective, or (Y) it does not.

The probabilities of X and Y could be hotly debated, but I'm comfortable stating that the probability of X is less than 0.5; i.e., we may have a high success rate within our scope of expertise, but the share of good ideas that EA can recognize as good is not that high.

The ideas that reach OpenPhil via the EA community might be good, but not all good ideas make it through the EA community.
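To put that funnel in toy form (every probability below is an invented assumption, not an estimate):

```python
# Toy funnel: what fraction of genuinely good ideas reach a large funder
# via the EA community? All probabilities are invented for illustration.
p_recognizable = 0.4  # P(X): the idea falls inside EA's window of "effective"
p_written_up   = 0.5  # the originator actually posts it to the EA Forum
p_noticed      = 0.6  # the post gets enough traction to reach a grantmaker

p_through = p_recognizable * p_written_up * p_noticed
print(f"{p_through:.0%} of good ideas make it through")  # -> 12%
```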

Comment by toonalfrink on How do EAs deal with having a "weird" appearance? · 2021-11-16T08:47:18.492Z · EA · GW

To me, reducing your weirdness is equivalent to defection in a prisoner's dilemma, where the least weird person gets the most reward but the total reward shrinks as the total weirdness shrinks.

Of course you can't just go all-out on weirdness, because the cost you'd incur would be too great. My recommendation is to be slightly more weird than average. Or: be as weird as you perceive you can afford, but not weirder. If everyone did that, we would gradually expand the range of acceptable things outward.
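Here's a toy payoff structure for that dilemma; the functional form and all numbers are made up purely to illustrate the incentive:

```python
# Toy weirdness game: the common pool grows with average weirdness, but each
# person pays an individual cost for standing out above the average.
def payoffs(weirdness):
    avg = sum(weirdness) / len(weirdness)
    return [avg - max(0.0, w - avg) ** 2 for w in weirdness]

print(payoffs([0.5, 0.5, 0.5]))  # everyone conforms: equal, modest payoffs
print(payoffs([0.9, 0.5, 0.5]))  # the lone weirdo pays; conformists free-ride
print(payoffs([0.9, 0.9, 0.9]))  # everyone slightly weirder: everyone gains
```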

Comment by toonalfrink on How many people should get self-study grants and how can we find them? · 2021-11-11T15:14:30.520Z · EA · GW

Because if there is excess funding and fewer applicants, I'd assume such applicants would also get funding.

I have seen examples of this at EA Funds, but it's not clear to me whether this is being broadly deployed.

Comment by toonalfrink on How many people should get self-study grants and how can we find them? · 2021-11-11T15:09:44.863Z · EA · GW

Let's interpret "study" as broadly as we can: is there not anything that someone can do on their own initiative, and do better if they have time, that increases their leadership capacity?

Comment by toonalfrink on How many people should get self-study grants and how can we find them? · 2021-11-11T15:07:06.913Z · EA · GW

I think the biggest constraint for having more people working on EA projects is management and leadership capacity. But those aren't things you can (solely) self-study; you need to practice management and leadership in order to get good at them.

What about those people who already have management and leadership skills, but lack things like:

  • Connections with important actors
  • Awareness of the incentives and the models of the important actors
  • Awareness of important bottlenecks in the movement
  • Background knowledge as a source of legitimacy
  • Skin in the game / a track record as a source of legitimacy

If I take my best self as a model for leadership (which feels like a status grab, but I hope you'll excuse me; it's the best data I have), then good leadership requires a lot of affinity, domain knowledge, vision, and previous interactions with the thing that is being led. Can this not be cultivated?

Comment by toonalfrink on How many people should get self-study grants and how can we find them? · 2021-11-11T14:56:40.272Z · EA · GW

There is also significant loss caused by moving to a different town, i.e. loss of important connections with friends and family at home, but we're tempted not to count those.

Comment by toonalfrink on What high-level change would you make to EA strategy? · 2021-11-05T22:10:45.057Z · EA · GW

I would train more grantmakers. Not because they're necessarily overburdened but because, if they had more resources per applicant, they could double as mentors.  

I suspect there is a significant set of funding applicants that don't meet the bar but would if they received regular high-quality feedback from a grantmaker.

(like myself in 2019)

Comment by toonalfrink on List of EA funding opportunities · 2021-11-05T15:18:15.043Z · EA · GW

I'd recommend putting the Airtable at the top of your post to make it the Schelling point.

Comment by toonalfrink on Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? · 2021-04-07T19:51:50.853Z · EA · GW

What would it have taken to do something about this crisis in the first place? Back in 2008, central bankers were under the assumption that the theory of central banking was completely worked out. Academics were mostly talking about details (tweaking the Taylor rule, basically).

The theory of central banking is already centuries old. What would it have taken for a random individual to overturn that establishment, including the culture and all the institutional interests of banks, etc.? Are we sure that no one was trying to do exactly that anyway?

It seems to me that it would have taken a major crisis to change anything, and that's exactly what happened. And now there are all kinds of regulations being implemented for posting collateral around swaps and stuff. It seems that regulators are fixing the issues as they come up (making the system antifragile), and I don't see how a marginal young naive EA would have the domain knowledge to make a meaningful difference here.

And that goes for most fields. Unless we basically invent the field (like AI safety) or the strategy (like comparing charities), if the field is sufficiently saturated with smart and motivated people, I don't think EAs have enough domain knowledge to do anything. In most cases it takes decades of work to get anywhere.

Comment by toonalfrink on [deleted post] 2021-03-20T11:25:32.911Z

I think your title could be a bit more informative.

Holden's writing seems to follow a hype cycle on the idea of transparency: first you apply a fresh new idea too radically, then you run into its drawbacks, then you regress to a healthy, moderate application of it.

As someone who has felt some of the drawbacks of being outside this "inner ring", I wouldn't complain about the transparency per se. Lack of engagement, maybe, but that turned out to be me. 

I'm still waiting for concrete suggestions. I also think your project would be more fruitful if you interviewed these people in person and published the result.

Comment by toonalfrink on [deleted post] 2021-03-20T10:43:31.929Z

Would removing the “crap” have been sufficient to make it polite? I like to be direct.

Comment by toonalfrink on How can I handle depictions of suffering better emotionally? · 2021-03-19T17:58:43.956Z · EA · GW

I can’t look inside your head, but if the mere thought of something makes you suffer, it probably means it reminds you of something that you are trying to ignore, i.e. trauma.

Assuming that this is indeed the case, I would further speculate that you are ignoring this memory or unpalatable insight because you subconsciously expect that thinking of it would disturb you to the point of getting in the way of whatever you would prefer to be doing, like idk, whatever your daily pursuits are.

The solution then, given these assumptions, would be to set aside some time (a week or two) to sit on a pillow and have nothing to do. This tends to bring unresolved trauma to the forefront by itself, simply because there is finally space for it.

Unfortunately you always find that there is more stuff to deal with, so this kind of spiritual work is a lifelong process (of getting progressively happier). I wholeheartedly recommend it.

Comment by toonalfrink on [deleted post] 2021-03-19T14:53:48.032Z

(lots of downvotes, so where are all the comments?)

I want to reward you for bringing up the topic of power dynamics in EA. Those exist, like in any community, but in EA especially there seems to be a strong current of denying the fact that EAs are constrained by their selfish incentives like everyone else. It requires heroism to go against that current.

But by just insinuating and not delivering any concrete evidence or constructive suggestions for change, you haven't really done your homework. I advise you to withdraw this post, cut out half the narrative crap, add some evidence and a model, make a recommendation, then repost it.

Comment by toonalfrink on EA considerations regarding increasing political polarization · 2020-06-30T22:04:55.087Z · EA · GW

What does "cancelling" mean, concretely? I don't imagine the websites will be closed down. What will we lose?

Comment by toonalfrink on EA considerations regarding increasing political polarization · 2020-06-30T21:52:55.654Z · EA · GW

I've been trying to figure out why cancel culture is so powerful. If only ~7% of people identify as pro-social-justice, why are social media platforms so freely bending to their will? Surely it's not out of the goodness of their hearts; what is the commercial motive? I don't buy the idea that it is simply a marketing stunt. AFAICT a pro-SJ stance does not make a company look much more favorable at this point.

But then I found this:

For context, Facebook is the social media company that has been most reluctant to be political, and apparently this is really making them bleed financially.

Why are marketing people so willing to go out of their way to do "the right thing" instead of the profitable thing? Is this something cultural? Some more digging showed that the NAACP and the ADL are leading this charge of boycotting Facebook, but I don't know what to make of that.

Comment by toonalfrink on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T19:50:04.346Z · EA · GW

Re exercise: I worry that by putting myself in a catabolic state (by exercising particularly hard), I temporarily increase my risk. Also by being at the gym around sweaty strangers. Is this worry justified?

Comment by toonalfrink on Moloch and the Pareto optimal frontier · 2020-01-14T19:24:49.428Z · EA · GW

I like this model but I think a more interesting example can be made with different variables.

Imagine x and y are actually both good things. You could then claim that a common pattern is for people to be pushing back and forth between x and y. But meanwhile, we may not be at the frontier at all if you add z. So let's work on z instead!

In that sense, maybe we are never truly at the frontier, all variables considered.
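A toy version of that picture, with invented numbers:

```python
# Three positions that look Pareto-optimal in (x, y): no point beats another
# on both axes, so the x-vs-y tug-of-war continues. Numbers are invented.
points = [(0.9, 0.1, 0.0), (0.5, 0.5, 0.0), (0.1, 0.9, 0.0)]  # (x, y, z)

for x, y, z in points:
    dominated = any(x2 >= x and y2 >= y and (x2, y2) != (x, y)
                    for x2, y2, _ in points)
    print((x, y), "dominated in (x, y)?", dominated, "| untapped z:", 1.0 - z)
```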

Related to this line of thinking: affordance widths.

Comment by toonalfrink on Does climate change deserve more attention within EA? · 2019-10-29T18:56:52.520Z · EA · GW

If you take this model a step further, it suggests working on whatever the most tractable problem is that others are spending resources on, regardless of its impact, because that will maximally free up energy for other causes.

Sounds like something someone should simulate to see if this effect is strong enough to take into account.
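A minimal sketch of what such a simulation could look like; the dynamics and every number below are invented assumptions:

```python
# Toy simulation: finishing a tractable cause frees its workers for a
# higher-impact cause.
causes = {"tractable": {"work_left": 10.0, "workers": 50, "impact": 1.0},
          "neglected": {"work_left": 100.0, "workers": 5, "impact": 10.0}}

total_impact = 0.0
for year in range(20):
    for c in causes.values():
        done = min(c["work_left"], c["workers"] * 0.1)  # progress this year
        c["work_left"] -= done
        total_impact += done * c["impact"]
    if causes["tractable"]["work_left"] == 0:  # solved: its workers move on
        causes["neglected"]["workers"] += causes["tractable"]["workers"]
        causes["tractable"]["workers"] = 0

print(f"total impact after 20 years: {total_impact:.0f}")
```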

Comment by toonalfrink on Announcing the launch of the Happier Lives Institute · 2019-06-19T16:16:38.767Z · EA · GW
[Our] research group is investigating the most promising giving opportunities among mental health interventions in lower and middle-income countries.

Any reason why you're focusing on interventions that target mental health directly and explicitly, instead of any intervention that might increase happiness indirectly (like bednets)?

Comment by toonalfrink on Please use art to convey EA! · 2019-05-26T20:29:05.120Z · EA · GW

Can we come up with a list of existing pieces of art that come close to this? I don't expect good ideas to come from first principles, but there might be some type of art out there that is non-cringy and conveys elements of EA thinking properly.

I'll start with Schindler's List, and especially this scene, where the protagonist breaks down while calculating just how many more lives he could have saved if he had sold his car, his jewelry, etc.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T15:19:27.267Z · EA · GW

Okay, you've convinced me that a US-based EA organisation should consider raising its wages to attract top talent.

This data does make me doubt the wisdom of basing non-local activities in the US, but that is another matter.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T14:03:27.351Z · EA · GW

It does provide clarity, and I can imagine that there are unfortunate cases where those entry level salaries aren't enough.

As I said elsewhere in this thread, I think this problem would be best resolved simply by asking how much an applicant needs, instead of raising wages across the board. The latter would cause all kinds of problems: it would worsen the already latent center/periphery divide in EA by increasing inequality, it would make it harder for new organisations to compete, it would reduce the net number of people we can employ, etc.

But I could be wrong, and I sense that some of my thoughts might be ideologically tainted. If you feel the urge to point me at some econ 101, please do.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T13:33:26.704Z · EA · GW

30 was just an arbitrary number. Is London still hard to live in for 60? Mind that the suggestion is to raise salaries from 75k to 100k. I can't imagine many cases where 75k is prohibitive, except for those who feel a need to be competitive with their peers from industry (which, fwiw, is not something I outright disapprove of).

We should probably operationalize this argument with actual data instead of reasoning from availability.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T10:26:19.789Z · EA · GW

Given the numbers that we have in mind, these examples are all very specific to the US.

Medical expenses don't get much past $2k per year in most European countries. The only place where the cost of living is prohibitively high past a ~$30k income is San Francisco.

I'm not arguing against the idea that some people exist who should be given the $150k that is needed to unlock their talents. I'm arguing that this group of people might be very small, and concentrated in your bubble.

I think that's the crux of the argument. If a majority of senior people needed $150k to get by, I'd agree that that should be the wage you offer. If these people make up just 1% of the population (which seems true to me), offering $150k to everyone else is just going to cause a lot of subtle cultural damage.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T10:13:26.810Z · EA · GW
a lot of resentment would emerge

To the extent that this would cause resentment, I'd interpret that as a perception of a higher counterfactual, which means that the execution wasn't done well.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-14T10:57:15.034Z · EA · GW

It's unclear to me what you mean by privilege. I'm trying to imagine a situation where making 75k is not enough for a low-privilege person, but I can't think of any. AFAIK 75k is an extremely high wage. I know a CEO of a bank who makes that.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-14T10:13:31.432Z · EA · GW

Don't advertise the wage in the ad. Ask candidates how much they need to be satisfied, then give them that amount or the amount they are economically worth to you, whichever is lower. Discourage employees from disclosing how much they make.
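In code form, the rule I'm proposing is just this (a sketch; the function and parameter names are mine):

```python
def salary_offer(candidate_ask: float, economic_value: float) -> float:
    """Pay what the candidate says they need to be satisfied,
    capped at what they are economically worth to the organisation."""
    return min(candidate_ask, economic_value)

print(salary_offer(candidate_ask=40_000, economic_value=75_000))  # -> 40000
```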

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-14T10:09:53.183Z · EA · GW

In preventing wage dissatisfaction, I think it's better to look at perceived counterfactuals. This can come from being used to a certain wage, or a certain counterfactual wage being very obvious to you. Or it can come from your peers making a certain wage.

You seem to assume something like "people don't like to accept a wage that is lower than they can get". I suggest replacing that with "people don't like to accept a wage that is lower than they feel they can get".

I know some people who are deliberately keeping their income frozen at 15k so they won't get used to more. They reason that if they did, not only would they be psychologically attached to that wage; to a lesser extent, so would their peers. In some sense they are keeping up a healthy cultural environment where it's possible to make little and still be satisfied.

I've heard of some organisations that don't have a fixed wage for a job, but a maximum. They ask their applicants "how much would you need to be satisfied?", and that's how much they get. I'd expect that this practice, combined with a culture that doesn't overly discuss income or flaunt wealth, would be the best way to keep everyone satisfied, compete with industry, and still keep the average wage low.

Comment by toonalfrink on Open Thread #44 · 2019-05-03T16:31:10.394Z · EA · GW

I sometimes think about seeking funding outside of EA to increase the amount of available EA funding.

But I've never seriously pursued it. I have no idea what is available or where to look. Governments? Foundations? With which ones does an x-risk project have a chance? What's a good strategy for applying to them?

I'd be very happy if someone dived into this.

Comment by toonalfrink on Psychedelics Normalization · 2019-04-30T10:58:50.766Z · EA · GW

You forgot ibogaine, which seems to be the most compelling example. According to lots of anecdotes across the internet, it reliably cures decades-old heroin addictions in a single sitting.

Still, I don't think psychedelic use is necessarily a good thing. It makes people more open to experience, which for some will be a door to madness. See, for example, Scott Alexander's writings about it.

Comment by toonalfrink on Does climate change deserve more attention within EA? · 2019-04-24T20:49:15.953Z · EA · GW

Another consideration comes to mind: climate change is currently taking up a large amount of attention from competent altruistic people. If the issue were to be solved or its urgency reduced, some of those resources might flow into EA causes.

Comment by toonalfrink on EA Hotel Fundraiser 4: Concrete outputs after 10 months · 2019-04-21T15:07:51.839Z · EA · GW

fwiw, I personally give it >75% probability that we will be able to survive at least until the next round.

Comment by toonalfrink on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T19:37:01.113Z · EA · GW

I'm certainly open to considering this business model for the hotel.

Comment by toonalfrink on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T19:32:16.668Z · EA · GW

The hotel did apply.

The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

It's about $7,500 per person per year.

Comment by toonalfrink on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T13:38:06.437Z · EA · GW

As a potential grant recipient (not in this round) I might be biased, but I feel like there is a clear answer to this. No one is able to level up without criticism, and the quality of your decisions will often be bottlenecked by the amount of feedback you receive.

Negative feedback isn't inherently painful; it only hurts if there is an alief that failure is not acceptable. Of course the truth is that failure is necessary for progress, and if you truly understand this, negative feedback feels good. Even if it's in bad faith.

Given that grantmakers are essentially at the steering wheel of EA, we can't afford for those people not to internalize this. They need to know all the criticism to make a good decision; they should cherish it.

Of course, we can help them reach this state of mind by celebrating their willingness to open up to scrutiny, along with the scrutiny itself.

Comment by toonalfrink on When should EAs allocate funding randomly? An inconclusive literature review. · 2019-04-01T17:20:25.079Z · EA · GW
For this specific post, I probably won't add a summary because my guess is that in this specific case the size of the beneficial effect doesn't justify the cost.

I still think you should write it. This looks like an important bit of information, but not worth the full read, and I estimate a summary would increase the number of readers fivefold.

Comment by toonalfrink on The Case for the EA Hotel · 2019-04-01T01:50:20.151Z · EA · GW

I wrote that intense model, and I agree that it's not a good post. My apologies.

Comment by toonalfrink on The Case for the EA Hotel · 2019-04-01T01:47:09.219Z · EA · GW

I imagine EAs getting into all sorts of fields and industries while staying in the community, and this seems so valuable that it makes me second-guess the hotel.

People don't stay in the community because, if you're not involved professionally, there's not much left to gain. We should change that.

I've proposed a solution to this problem here and here.

Comment by toonalfrink on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T01:41:34.539Z · EA · GW
I think part of why Y Combinator is so successful is because funding so many startups has allowed them to build a big dataset for what factors do & don't predict success. Maybe this could become part of the EA Hotel's mission as well.

Good idea. It will be somewhat tricky since we don't have the luxury of measuring success in monetary terms, but we should certainly brainstorm about this at some point.

Comment by toonalfrink on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-31T20:38:01.234Z · EA · GW

Thank you.

With the hotel, I see a bunch of little hints that it's not worth my time to attempt an in-depth evaluation of the hotel's leaders. E.g. the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

Your post suggests that there is some class of EAs who are a lot more competent than everyone else, which means that what everyone else is doing doesn't matter all that much. While I haven't met (or recognized) a lot of people who impress me this much, I still give this idea a lot of credence. I'd like to verify it for myself, to get on the same page with you (and perhaps even change my plans). Could you name some examples, besides Drexler and Bostrom, of EAs who are on this level of competence?

I'm not looking for credentials; I'm looking for resources that demonstrate how these people think, or stories about impressive feats, so I can convince my S1 to sit down and be humble (and model their minds so I can copy the good bits).

Podcasts, maybe?

Comment by toonalfrink on The Case for the EA Hotel · 2019-03-31T15:00:12.337Z · EA · GW

I have burned out slightly, but this has happened every 6 months or so for the past 5 years, so it's probably not caused by the hotel.

Comment by toonalfrink on Altruistic action is dispassionate · 2019-03-31T12:38:24.124Z · EA · GW

At the very least, I agree that one coherent thread is more healthy and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives. I find that counting myself and my needs into my moral framework makes my moral framework more robust.

Comment by toonalfrink on Altruistic action is dispassionate · 2019-03-31T02:01:44.665Z · EA · GW

I'd argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you're just one agent, but that's certainly not a human universal.

Comment by toonalfrink on EA Hotel Fundraiser 4: Concrete outputs after 10 months · 2019-03-31T01:55:43.840Z · EA · GW

I would rather not. This would pressure people into goodharting their projects for legibility, which is one of the things our setup is supposed to prevent.

(tl;dr: an agent is legible if a principal can easily monitor them, but legibility limits their options to what is easy for the principal to measure, which might reduce performance)

Quite a few of our guests are not even on this list, but this doesn't mean they're sitting around doing nothing all day. They're doing illegible work that is hard or even impossible to evaluate at a distance. I put a few examples in the second caveat of the post.

(I realise this is at odds with the EA maxim of measuring outcomes. That's why we published this post: so the hotel could at least be evaluated in aggregate. I think it's neat that people with illegible work can hide behind legible ones)

Comment by toonalfrink on Altruistic action is dispassionate · 2019-03-30T17:53:05.728Z · EA · GW

I realise that I've been implicitly assuming this is true, which made me resist optimizing for impressions: when I did optimize for them, I could no longer convince myself that I was acting altruistically. The awful and hard-to-accept reality is that you sometimes do have to convince people in order for your work to be supported.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-28T20:10:14.366Z · EA · GW
1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long ways towards resolving this issue.

Not yet, but it's certainly a project that is on our radar. We also want to find ways to measure innate talent, so that people can tell earlier whether AIS research would be a good fit for them.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-28T14:11:23.167Z · EA · GW

I do think it affects their behavior; I just refuse to let it affect mine more than is strictly necessary, because I think it's a negative-sum game.