Comment by toonalfrink on The career coordination problem · 2019-03-25T11:11:33.834Z · score: 1 (1 votes) · EA · GW

Just wanted to mention that this problem is orthogonal to the related problem of generating enough work to do in the first place, and before you start thinking about how to cut up the pie better, you might want to consider making the pie bigger instead.

Unless making the pie bigger is less neglected. I guess this problem can be applied to itself :)

Task Y: representing EA in your field

2019-03-24T18:37:00.498Z · score: 7 (8 votes)

The Home Base of EA

2019-03-22T05:07:54.017Z · score: 11 (13 votes)
Comment by toonalfrink on EA jobs provide scarce non-monetary goods · 2019-03-22T03:10:23.474Z · score: 3 (2 votes) · EA · GW

Nope, it's full-time. Right now two of us are doing a side project, but that's not usual.

Comment by toonalfrink on Announcement: Join the EA Careers Advising Network! · 2019-03-21T04:23:15.863Z · score: 4 (3 votes) · EA · GW

Hey, this is great! I'm not sure for which role I should apply. Do you have some definition of what makes a proper advisor? How many years of experience? What level of investment? Or shall I just write my own hero license?

Comment by toonalfrink on EA jobs provide scarce non-monetary goods · 2019-03-21T04:02:48.365Z · score: 7 (5 votes) · EA · GW

Data points:

  • We advertised a job with no monetary reward at all (except a place in the EA Hotel) and still got 10 applications.
  • When we offered a job with a negative salary, we didn't get any applicants (yet).

Obviously, these numbers might be influenced by many factors besides pay.

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T18:04:48.025Z · score: 9 (3 votes) · EA · GW

I'm glad that someone mentions this. I have a strong alief that misrepresenting your opinions to make them more palatable is a bad idea if you're right. It pulls you into a bad equilibrium.

If you preach the truth, you might lose the respect of those who are wrong, but you will gain the respect of those who are right, and those are the people you want in your community.

Having said that, you really do have to be right, and I feel like not even EA's are up to the herculean task of clearly seeing outside of their political intuitions. I for one have so far been wrong about many things that felt obvious to me.

I guess that's why we focus on meta-truth instead. It seems that the set of rules that leads to truth is much more easily described than the truth itself.

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T17:50:10.186Z · score: 6 (4 votes) · EA · GW

Downvoted because I felt that the "though not linked to" and the hyperboles in your comment suggest you're coming from a subtly adversarial mindset.

(I'm telling you this because I like to see more people explain their downvotes. They carry great information value. No bad feels!)

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T17:38:07.722Z · score: 12 (7 votes) · EA · GW

Appreciate the data!

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T02:30:07.880Z · score: 0 (3 votes) · EA · GW

"any rules we make will be reasonable"

Nah, it does apply to itself :)

"and we won't push people out for having an unfashionable viewpoint"

But you think pushing them out is the right thing to do, correct?

Let me just make sure I understand the gears of your model.

Do you think one person with an unfashionable viewpoint would inherently be a problem? Or would it only become a problem once that viewpoint becomes a majority position? Or perhaps is the boundary the point where the viewpoint starts to influence decisions?

Do you think any tendency exists for the consensus view to drift towards something reasonable and considerate, or do you think that it is mostly random, or perhaps there is some sort of moral decay that we have to actively fight with moderation?

Surely, well-kept gardens die by pacifism, and so you want some measures in place to keep the quality of discussion high, both in the inclusivity/consideration sense and in the truth sense. I just hope this is possible without banning topics, for most of the reasons stated by the OP. Before we start banning topics, I would want to look for less intrusive measures.

Case in point: it seems like we're doing just fine right now. Maybe this isn't a coincidence (or maybe I'm overlooking some problems, or maybe it's because we already ignore some topics)

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T01:35:00.654Z · score: 6 (3 votes) · EA · GW

I wonder where this fear of extreme viewpoints comes from. It seems to be a crux.

I personally don't have an alief that there is a slippery slope here. It seems to me that there are some meta rules for discussion in place that will keep this from happening.

For example, it seems to me that EA's are very keen to change their minds, take criticism and data seriously, bring up contrarian viewpoints, and practice epistemic humility, to name a few things. I would like to call this Epistemic Honor.

Do you think that our culture of epistemic honor is insufficient for preventing extreme viewpoints, to the point that we need drastic measures like banning topics? My impression is that it's more than enough, but please prove me wrong!

Comment by toonalfrink on SHOW: A framework for shaping your talent for direct work · 2019-03-14T00:07:31.227Z · score: 5 (5 votes) · EA · GW

I don't think you've read too much Robin Hanson; his work clarifies a lot of things :)

In some sense, I don't even think these people are wrong to be frustrated. You have to satisfy your own needs before you can effectively help others, and one of those needs just happens to be the need to feel relevant. Like everything else, this is a systemic problem: EA should make people feel relevant if and only if they're doing good. If doing good doesn't earn you recognition unless you're in a prestigious organisation, then we have to fix that.

Comment by toonalfrink on SHOW: A framework for shaping your talent for direct work · 2019-03-13T23:58:34.378Z · score: 3 (2 votes) · EA · GW

I'd love to hear why this got downvoted. Am I missing something?

Comment by toonalfrink on SHOW: A framework for shaping your talent for direct work · 2019-03-13T23:10:11.028Z · score: 16 (19 votes) · EA · GW

My 2 cents is a shift in mindset. It's related to point 3.

In my experience, the elephant in the brain is that most of us, most of the time, are still optimizing 90% for approval and 10% for value. If you manage to snap out of that, you'll suddenly see that there is a lot of unacknowledged value out there.

Not because anyone is intentionally ignoring things, but because the people and organisations in a position to acknowledge value are busy and imperfect. They haven't thought of everything. They don't have everything under control. They're not your superiors. The only difference between them and you is that they realized there's no fire alarm for adulthood: you won't wake up one day and realise that you are now wise and able to handle everything.

The terrifying truth is that there are not enough adults in the room. You are, in some broad sense, the most responsible individual here. No one can tell you what to do. So now you have to take ownership of these problems and personally see to it that they are solved. It's terrifying, but it's necessary.

In some sense, we're not going to make it without your help.

It's been said that EA is vetting-constrained, but in some deep sense it's more that EA (and the world) is constrained on the number of people who don't need to be told what to do.

So build up the skill of making decisions you would feel comfortable with even if a large number of people scrutinized you for them. Then start acting as if a large number of superintelligent and reasonable people are scrutinizing you, expecting you to personally take care of everything. If you can handle that pressure, it's the best prompt I've found for getting myself to generate plenty of work to do. Much more than I can do on my own.

Comment by toonalfrink on EA is vetting-constrained · 2019-03-09T03:58:43.496Z · score: 4 (2 votes) · EA · GW

I'm quite interested in your project. I have some skill and interest in designing people systems; is there some way I can help you?

Comment by toonalfrink on EA is vetting-constrained · 2019-03-09T03:45:54.811Z · score: 9 (6 votes) · EA · GW

I like this frame of maximizing the learning of the vetting skill. How can we get as many EA's as possible to gain as much experience as possible in evaluating charities, while also ensuring some minimum level of quality among the charities that actually get funded?

It sounds like we want every (potential) grantmaker to be vetting the orgs that are at the edge of their skill; that's how you maximize learning.

Also, re Jan's comment: some kind of "upward delegation" system, where juniors defer to seniors only when they can't handle an application themselves, sounds like it would have this property, and it would also minimize the time seniors have to spend.
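
To make that concrete, here's a minimal sketch of how such a triage could work. This is purely illustrative; the `Vetter` class, the `route` function, and the numeric "skill" proxy are my own assumptions, not an existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Vetter:
    name: str
    skill: int                 # rough proxy for vetting experience (assumed)
    handled: list = field(default_factory=list)

def route(difficulty: int, vetters: list[Vetter]) -> Vetter:
    """Assign an application to the least senior vetter able to handle it,
    so juniors get maximal practice and seniors only see the hard cases."""
    for vetter in sorted(vetters, key=lambda v: v.skill):
        if vetter.skill >= difficulty:
            vetter.handled.append(difficulty)
            return vetter
    # No one is confident enough: escalate to the most senior vetter.
    senior = max(vetters, key=lambda v: v.skill)
    senior.handled.append(difficulty)
    return senior

vetters = [Vetter("junior", 2), Vetter("mid", 5), Vetter("senior", 9)]
for d in [1, 4, 7, 10]:
    print(route(d, vetters).name)  # junior, mid, senior, senior
```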

I also like to imagine sending small teams to the EA Hotel to start an org for 3 months, explicitly with the intention of just testing an idea, then writing up their results and feeding them back to the vetters.

Just shooting random ideas. Seems like we have some nice building blocks to create something here.

Comment by toonalfrink on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T18:28:07.971Z · score: 6 (6 votes) · EA · GW

Perhaps that's true, but I'm not sure the EA Hotel residents would end up in those top-1% opportunities otherwise. It's not that they lack the talent; it's that it takes a certain kind of personality to be willing to play all the negative-sum games*, do the goodharting and the hoop-jumping, to get those opportunities in the first place.

Taking myself as an example: I can't stand subordinating myself to someone who seems unaligned or who seems to have bad judgment, and I can't stand competing. This means my options are either to create my own work, or to do something part-time and minimum-effort and spend my life on something that isn't my career.

Given the personal sacrifice of pursuing a career in EA, I wouldn't be surprised if many EA's are like that.

*like job applications

Comment by toonalfrink on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T17:58:02.286Z · score: 2 (2 votes) · EA · GW

I see a lot of people from EA orgs reply this way. It's a good sign!

Comment by toonalfrink on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-03-01T17:46:30.659Z · score: 2 (3 votes) · EA · GW

Some thoughts:

  • Many fairly complicated tasks can be broken down into tasks that are fairly simple to carry out. All it takes is some ingenuity and the investment of enough time to systematize the thing well enough that people can do it (Amazon Mechanical Turk comes to mind).
  • I've worked somewhat extensively with volunteers, and I find that only a small minority is actually willing to put in work completely pro bono. Most volunteer work confers at least some benefit on the volunteer; if it doesn't, you usually find that turnover is so high that the overhead isn't worth it. In regular volunteering, these benefits would be upgrading the community you're a part of, learning skills, upgrading your CV, or maybe just fun. In EA I find that the motivation is often social interaction with like-minded peers.
  • The first two points imply that, at least in my limited experience, the bottleneck is incentivizing people to show up consistently.
  • It seems that this requirement is sometimes automatically met if the volunteering happens offline. There's something about physicality and interacting with people that can be rewarding enough for volunteers to keep showing up. That kind of magic is much less potent online. If we could do something about that, it might be a breakthrough.
  • Should there be no single task Y, but a bag of small tasks Y1, Y2, ..., there might still be an "incentive Z" that all of them could employ to help motivate people. The most obvious candidate for Z is money, but there might be cultural incentives that are much more scalable.
  • An example to illustrate the last two points: if there were some kind of cozy online "EA living room" that was fun to hang around in, but that also repeatedly prompted people to do things to "score points", that might be both scalable and keep people showing up. Maybe this wouldn't scale into the millions, but it would at least keep "soft EA's" meaningfully involved.

Comment by toonalfrink on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T19:06:17.225Z · score: 24 (31 votes) · EA · GW

In case it isn't clear: EA is funding-constrained. (I can't imagine it ever not being. Have we run out of imagination?) It is similarly difficult for a budding EA organisation to raise funding. The idea that EA is not funding-constrained came from surveying established orgs that are indeed talent-constrained, but in the meantime I could easily name 5 (even 10) startup EA charities that are perpetually seeking funding they could instantly turn into jobs.

Even if that weren't the case, higher pay would allow orgs to attract better talent, outsource more, etc. There's almost always a use for additional funding.

Especially if you compare the damage of too little money with the damage of too much, earning to give is still a great idea.

Comment by toonalfrink on EA Hotel Fundraiser 2: current guests and their projects · 2019-02-07T19:31:16.515Z · score: 1 (1 votes) · EA · GW

We hope to quantify as much of this as possible in our 4th post.

Comment by toonalfrink on Bottlenecks and Solutions for the X-Risk Ecosystem · 2019-01-14T15:25:44.168Z · score: 3 (1 votes) · EA · GW

How about this: you, as someone already grappling with these problems, present some existing problems to a candidate and ask them to come up with one-paragraph descriptions of original solutions. You read these and introspect on whether they give you a sense of traction/quality, or whether they match solutions proposed by experts you trust (that the candidate hasn't heard of).

I'm looking to do a pilot for this. If anyone would like to join, message me.

Comment by toonalfrink on EA Hotel Fundraiser 1: the story · 2018-12-31T21:16:27.704Z · score: 5 (3 votes) · EA · GW

You might precommit to only spending some amount of your own money on expansion once that amount has been matched by donations from others. I'd personally be happy to refrain from expansion until we get the green light from external parties. It would be a good incentive to document our work.

Comment by toonalfrink on EA Hotel Fundraiser 1: the story · 2018-12-27T13:03:22.410Z · score: 9 (9 votes) · EA · GW

I think the best way to think of them is as types derived from the data. For each type there are a few guests who closely resemble it, and together the types are meant to cover most of the cases.

EA Hotel Fundraiser 1: the story

2018-12-27T12:15:55.157Z · score: 63 (32 votes)
Comment by toonalfrink on Burnout: What is it and how to Treat it. · 2018-11-07T18:11:07.782Z · score: 8 (7 votes) · EA · GW

I have a hypothesis about burnout that feels true, but I can't validate it because it's based purely on introspection and things other people have said. Still, it might inspire a fix that works:

Most fatigue is emotional fatigue, and most (or all?) emotional fatigue comes from what I call cognitive dissonance: subagents that disagree. These subagents are relatively independent agents in you that represent, and try to meet, needs that you have. For example, the reason it's hard to concentrate when you have to go to the toilet is that the subagent that wants you to go is interfering with your otherwise aligned coalition of agents aiming at something else.

If you repeatedly do something that increases cognitive dissonance, by not meeting a specific need or by acting against something you want, you build up a debt. Your subagents become increasingly "distrustful" of one another, until they stop playing along altogether and stage a "coup", so to speak. This is when parts of you become so at odds with your usual motivations that they completely block you. We call that burnout.

Most of the time, we're barely aware that we're doing this. We put on tight clothes, sit in noisy places, deprive ourselves of sleep, tolerate scary people, skip lunch. We think we're getting used to it, but we're just forcefully ignoring it. We take stimulants to dull the senses just so we can keep our focus. That's how we unwittingly build up the dissonance: taking on emotional debt one escape at a time.

Suggestions for putting this model to use:

  • Identify the things you secretly need but are hiding from yourself. For example, I might find that I'm really not happy with my insecure financial situation.
  • Strive to be altruistic, but only on the condition that those needs are already mostly met. For example, I might reduce my hours from 60 to 40 so that I have enough time for rejuvenation.
  • Routinely check in with yourself, to make sure you're not unknowingly damaging yourself: "Am I hungry/thirsty? Am I cold/warm? Can I handle what this person just said? Do I feel safe?"
  • Notice the failure mode of trying to please someone else just so they will give you something you need. Be self-sufficient, and see social anxiety as an indication that you're not. For example, I might put more attention on my self-care and housekeeping skills, and get a side job, so that failing in my EA efforts won't damage me.
  • Recognize that stimulants are an excellent tool for ignoring your needs. Coffee has wrecked me more than once.
  • Schedule downtime (like meditation or just staring at a wall) so that it becomes impossible to ignore your feelings, forcing you to deal with them.

Comment by toonalfrink on Burnout: What is it and how to Treat it. · 2018-11-07T17:39:14.280Z · score: 2 (2 votes) · EA · GW

"a good start would be to simply recognize management as what you are doing and a skill that needs to be learned."

I would like to second this, and add that it seems very hard to switch from management to direct work. As a result I would often do neither, as I tried to focus on deep work but didn't quite get into it. You have to schedule specific time for deep work, or barely do it at all. I opted for the latter, which I find more efficient.

Comment by toonalfrink on Burnout: What is it and how to Treat it. · 2018-11-07T17:36:01.478Z · score: 3 (3 votes) · EA · GW

I used to be out of balance all the time, but grokking the phase response curve seems to have given me full control of my circadian rhythm. Taking time-release melatonin at 16:00 and again at 20:00 can move my sleep onset a good 3 hours earlier.

However, there doesn't seem to be anything in the world that cures sleeplessness caused by an overload of stress. If I'm sufficiently overwhelmed, I'm going to lie awake until 04:00 no matter what I do. There is no substitute for opening up to, and fixing, the underlying issues.

Comment by toonalfrink on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-06-30T18:44:46.755Z · score: 2 (2 votes) · EA · GW

I appreciate this model and feel like it has the potential to be foundational in a rigorous account of group rationality.

This is especially because I find that the higher levels lack a proper feedback mechanism. A common pattern seems to be that we discuss high-level moral philosophy without ever making hard decisions about which philosophy to employ. I even use "so have we fixed morality yet?" as a joke. People laugh, not because I pretend it's easy, but because I pretend that making a decision is the point. I suspect this lack of pressure leads to impoverished thinking.

Imagine a world where one institution strived to deliver solutions to moral problems as input to another institution that then carried them out. I'd expect the sense of responsibility to lead to much better thinking. No more belief as attire.

But then maybe this is already happening in places I haven’t been. Still, I’d love to see more people take responsibility for providing workable answers to philosophy. Even when they’re just chatting at social events.

Comment by toonalfrink on EA Hotel with free accommodation and board for two years · 2018-06-18T13:53:40.470Z · score: 4 (4 votes) · EA · GW

Hi Vollmer, appreciate your criticism. Upvoted for that.

"While it's really impressive how low the rent at the hotel will be, rent cost is rarely a major reason for a project's funding constraints"

Do you realise that the figure cited (3-4k a year) isn't rent cost? It's total living cost. At least in my case, that's a quarter of what I'm currently running on, and I'm pretty cheap. For others the difference might be much larger.

For example, a project might have a genuinely high-impact idea that doesn't depend on location. Instead of receiving $150k from CEA to run for half a year in the Bay with 3 people, it could receive $50k and run for 3 years in Blackpool with 6 people. CEA could then fund 3 times as many projects, and its impact would effectively stretch 6 × 2 × 3 = 36 times further. From that perspective, staying in the world's most expensive cities is hard to justify, at least for projects (coding, research, etc.) that wouldn't get an even stronger multiplier from being on location. And this isn't just projection: I know of at least one project that is most likely moving its team to the EA Hotel.
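
To sanity-check that multiplier, here's the arithmetic spelled out, taking the dollar figures, durations, and team sizes above as given:

```python
# Hypothetical figures from the example above, not real CEA grant data.
projects = 150_000 / 50_000   # 3x as many projects funded per dollar
duration = 3 / 0.5            # 6x the runtime per project
people   = 6 / 3              # 2x the team size per project
print(projects * duration * people)  # 36.0 -> "36 times further"
```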

"Instead, the hotel could become a hub for everyone who doesn't study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality."

I'm pretty sure EA donors find many projects net-positive even if they don't find them worth funding, for the same reason that I'd buy a car if I could afford one. Does that mean I find cars lower-quality than my bicycle? Nope.

Imo it's a very simple equation. EA's need money to live, so they trade (waste) a major slice of their resources on ineffective endeavors in exchange for money. We can take away those needs for <10% of the cost, effectively moving a large number of people from part-time to full-time EA. Assuming that the distribution of EA effectiveness isn't too steeply unequal (i.e. there are still effective EA's out there), this intervention is the most effective I've seen thus far.

Comment by toonalfrink on Remote Volunteering Opportunities in Effective Altruism · 2018-06-08T13:34:38.961Z · score: 1 (1 votes) · EA · GW

Thank you for the mention!