Posts

Two reasons we might be closer to solving alignment than it seems 2022-09-24T17:38:24.188Z
EA Dating Spreadsheet: see EA dating profiles 2022-09-21T18:34:59.992Z
Perhaps the highest leverage meta-skill: an EA guide to hiring 2022-08-22T10:18:38.732Z
The Parable of the Boy Who Cried 5% Chance of Wolf 2022-08-15T14:22:56.301Z
How and why to turn everything into audio 2022-08-11T08:49:32.749Z
The most important lesson I learned after ten years in EA 2022-08-03T12:28:37.752Z
Meditation course claims 65% enlightenment rate: my review 2022-08-01T11:26:58.941Z
Three common mistakes when naming an org or project 2022-07-23T13:22:15.737Z
Four reasons I find AI safety emotionally compelling 2022-06-28T14:01:33.340Z
Doing good easier: how to have passive impact 2022-05-02T14:48:29.794Z
EA Houses: Live or Stay with EAs Around The World 2022-04-15T18:51:06.954Z
New: use The Nonlinear Library to listen to the top EA Forum posts of all time 2022-04-09T20:29:15.962Z
So you want to be a charity entrepreneur. Read these first. 2022-01-26T15:53:11.925Z
EA needs a hiring agency and Nonlinear will fund you to start one 2022-01-17T14:51:57.650Z
We summarized the top info hazard articles and made a prioritized reading list 2021-12-14T19:46:59.391Z
Effective Slacktivism: why somebody should do prioritization research on slacktivism 2021-12-03T20:32:57.400Z
Why fun writing can save lives: the case for it being high impact to make EA writing entertaining 2021-11-11T22:17:31.993Z
Listen to more EA content with The Nonlinear Library 2021-10-19T12:24:40.013Z
List of AI safety courses and resources 2021-09-06T14:26:42.397Z
Introducing The Nonlinear Fund: AI Safety research, incubation, and funding 2021-03-18T14:07:06.240Z
Poll - what research questions do you want me to investigate while I'm in Africa? 2020-02-07T12:47:25.356Z
How to increase your odds of starting a career in charity entrepreneurship 2019-12-03T17:40:22.411Z
How to come to better conclusions by playing steelman solitaire 2019-11-13T13:00:05.486Z
How to have cost-effective fun 2018-06-30T21:23:58.089Z
“EA” doesn’t have a talent gap. Different causes have different gaps. 2018-05-20T22:07:53.761Z
What if you’re working on the wrong cause? Preliminary thoughts on how long to spend exploring vs exploiting. 2017-02-06T22:13:54.295Z

Comments

Comment by Kat Woods (katherinesavoie) on Two reasons we might be closer to solving alignment than it seems · 2022-09-26T13:01:18.106Z · EA · GW

That's really interesting and unexpected! Seems worth figuring out why. What are your top hypotheses?

My first guess would be epistemic humility norms. 

My second would be that the first people in a field are often disproportionately talented compared to people coming in later. (Although you could also tell a story where, at the beginning, the field is too socially weird to attract a lot of top talent.)

My third is that since alignment is so hard, it's easier for people to latch onto existing research agendas instead of creating new ones. At the beginning there were practically no agendas to latch onto, so people had to make new ones, but now there are a few, so most people just sort themselves into those. 

Comment by Kat Woods (katherinesavoie) on The $100,000 Truman Prize: Rewarding Anonymous EA Work · 2022-09-25T12:18:16.623Z · EA · GW

I don't know why sphor's comment was downvoted (I'm also confused by that), but for Ryan's, I can at least speak for myself about why I downvoted it:

  1. I strongly disagree with the comment and think that
    1.  This sort of thinking is paralyzing for the EA movement and leads to way more potential founders giving up on ideas, bouncing from the EA movement, not posting on the Forum, or moving so slowly that a lot of impact is lost.  (I might write a post about this because I think it's important and neglected in the movement)
    2. It derails the conversation onto something I consider a small detail about an improbable, small-downside outcome, and I wanted more people focusing on more fruitful potential criticisms or points about the prize.
  2. While a lot of the comment was polite and constructive, it also said that we were being "shifty", which felt unnecessarily accusatory. I think if that word were changed, I would change it from a strong downvote to just a downvote.

Of note, I just strongly disagree with this comment/idea. In general, I think Ryan is great and consider him a friend. 

Comment by Kat Woods (katherinesavoie) on Two reasons we might be closer to solving alignment than it seems · 2022-09-25T12:00:01.407Z · EA · GW

Large companies are usually much less innovative than small companies

I think this is still in the framework of thinking that large groups of people having to coordinate leads to stagnation. To change my mind, you'd have to make the case that having a larger number of startups leads to less innovation, which seems like a hard case to make. 

the larger EA gets, the more people are concerned about someone "destroying the reputation of the community"

I think this is a separate issue that might be caused by the size of the movement, but a different hypothesis is that it's simply an idea that has traction in the movement, one that's been around for a long time, even when we were a lot smaller. Considerations like spending your "weirdness points" have been around since the very beginning.

(On a side note, I think we're overly concerned about this, but that's a whole other post. Suffice it to say here that a lot of the probability mass is on this not being caused by the size of the movement, but rather by a particularly sticky idea.)

I think there exist potential configurations of a research field that can scale substantially better, but I don't think we are currently configured that way

🎯 I 100% agree. I'm planning to spend some more time thinking about and writing up ways the movement could usefully take on more researchers. I also encourage others to think about this, because it could unlock a lot of potential.

I expect by default exploration to go down as scale goes up

I think this is where we disagree. It'd be very surprising if ~150 researchers were the optimal number, or if having fewer would lead to more innovation and more/better research agendas.

in general, the number of promising new research agendas and direction seems to me to have gone down a lot during the last 5 years as EA has grown a lot, and this is a sentiment I've heard mirrored from most people who have been engaged for that long

An alternative hypothesis is that the people you've been talking to have been becoming more pessimistic about having hope at all (if you hang out with MIRI folk a lot, I'd expect this to be more acute). It might not be that there are more people having bad ideas, or that having more people in the movement leads to a decline in quality, but rather that a certain contingent thinks alignment is impossible or deeply improbable, so all ideas seem bad. In that paradigm/POV, the default is that all new research agendas seem bad. It's not that the agendas got worse. It's that people think the problem is even harder than they originally thought.

Another hypothesis is that the idea of epistemic humility has been spreading, combined with the idea that you need intensive mentorship. This makes new people less likely to come up with new research agendas and more likely to defer to authority. (A whole other post there!)

Anyways, just some alternatives to consider :) It's hard to convey tone over text, but I'm enjoying this discussion a lot and you should read all my writing assuming a lot of warmth and engagement. :) 

Comment by Kat Woods (katherinesavoie) on Two reasons we might be closer to solving alignment than it seems · 2022-09-24T19:35:17.342Z · EA · GW

Also, I'm surprised at the claim that more people don't lead to more progress. I've heard that one major cause of progress so far has simply been that there's a much larger population of people to try things (of course, progress also causes there to be more people, so the causal chain goes both ways). Similarly, the reason cities tend to have more innovation than small towns is that there's a denser concentration of people around each other.

You can also think of it from the perspective of adding more explore. Right now there are surprisingly few research agendas. Having more people would lead to more of them, increasing the odds that one of them is correct.

Of note, I do share your concerns about making sure the field doesn't just end up maximizing proxy metrics. I think that will be tricky and will require a lot of work (as it already does right now!).

Comment by Kat Woods (katherinesavoie) on Two reasons we might be closer to solving alignment than it seems · 2022-09-24T18:32:56.429Z · EA · GW

I agree that 10k people working in the same org would be unwieldy. I'm thinking more of having 10k people working in hundreds of orgs, and sometimes independently. Each of these people would be in their own little microcosm, dealing with the same normal amount of interactions. That should address the concern about the social environment getting worse. It might even make it better, because people could more easily find their "tribe".

And I agree right now we wouldn't be able to absorb that number usefully. That's currently an unsolved problem that would be good to make progress on.

Comment by Kat Woods (katherinesavoie) on The most important lesson I learned after ten years in EA · 2022-09-24T08:46:01.414Z · EA · GW

Thanks! 

Good idea about linking to the hiring post. I wrote that after this one, but I've gone back and added it. Thanks for the suggestion! 

Comment by Kat Woods (katherinesavoie) on Let's advertise infrastructure projects · 2022-09-24T08:42:35.667Z · EA · GW

Couldn't agree more! 

Some others to add to the list:

Comment by Kat Woods (katherinesavoie) on Introducing the Existential Risks Introductory Course (ERIC) · 2022-08-22T13:47:55.615Z · EA · GW

Thank you for making this! This looks great. I've added it to the list of AI safety courses.

It's not just on technical AI safety, but I feel like it's related enough that anybody looking at the list will also be interested in this resource.

Comment by Kat Woods (katherinesavoie) on New: use The Nonlinear Library to listen to the top EA Forum posts of all time · 2022-08-20T14:35:55.065Z · EA · GW

I'd love to add that but unfortunately it would be really difficult technically speaking, so we probably won't make it happen.

Comment by Kat Woods (katherinesavoie) on The Parable of the Boy Who Cried 5% Chance of Wolf · 2022-08-16T08:39:55.045Z · EA · GW

Great point! I didn't give it much thought, honestly. I think you're right and saying 5% each time is better. Gonna update it now.

Thanks for the suggestion!

Comment by Kat Woods (katherinesavoie) on The most important lesson I learned after ten years in EA · 2022-08-04T21:11:19.224Z · EA · GW

Aww. Thanks for the kind words!

Comment by Kat Woods (katherinesavoie) on Another call for EA distillers · 2022-08-04T06:19:11.939Z · EA · GW

I totally agree on being able to listen to EA content being super useful!

If you haven't heard of it yet, there's the Nonlinear Library which automatically turns the top EA and rationalist content into podcast format using text-to-speech software.
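
(For the curious, the rough shape of the pipeline is something like the sketch below. It's just an illustration, not our actual code: fetch_top_posts is a hypothetical placeholder, and gTTS stands in for whatever text-to-speech engine you prefer.)

```python
# Illustrative sketch only (not the Nonlinear Library's actual code).
# fetch_top_posts is a hypothetical placeholder; gTTS is the open-source
# Google Text-to-Speech wrapper, standing in for any TTS engine.
import os
from gtts import gTTS

def fetch_top_posts():
    # Placeholder: in practice this would pull posts from a forum API or RSS feed.
    return [{"title": "example-post", "body": "Post text goes here."}]

def post_to_audio(post, out_dir="episodes"):
    # Narrate the title, then the body, and save one MP3 per post.
    os.makedirs(out_dir, exist_ok=True)
    narration = f"{post['title']}. {post['body']}"
    gTTS(text=narration, lang="en").save(os.path.join(out_dir, f"{post['title']}.mp3"))

for post in fetch_top_posts():
    post_to_audio(post)
```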

We have multiple channels for different use cases:

Comment by Kat Woods (katherinesavoie) on Three common mistakes when naming an org or project · 2022-07-25T10:18:18.800Z · EA · GW

Interesting thoughts!

With a vague name, you do trade away people remembering what your org does in exchange for option value. I'd say the option value is more important, though, since most of the variance in impact comes not from people remembering what you do but from what strategy you follow. You want to minimize the friction for updating your strategy based on new evidence and considerations.

And then if you change strategy without changing your name (name changes are indeed quite costly), it also causes problems. For example, Open Phil and OpenAI have both been criticized for not being as open as their names would suggest.

Comment by Kat Woods (katherinesavoie) on Book a chat with an EA professional · 2022-07-21T11:28:05.334Z · EA · GW

I'd be happy to chat with anybody about charity entrepreneurship in the longtermist space. You can book a time here on my year-long, location-independent EAG calendly.

Also feel free to book a time there for any other topic. I want to replicate the serendipitous meetings of an EAG year-round, and I'm endlessly extroverted, so I love talking to new people.

Thanks Ben for setting this up!

Comment by Kat Woods (katherinesavoie) on When Utilitarianism is Actually ✨Fun✨ · 2022-07-18T08:45:24.570Z · EA · GW

Thanks for writing this! It brightened my day, and I put moderate odds on it improving my impact long term. Adding it to my list of motivational things to look at when I'm feeling down.

Comment by Kat Woods (katherinesavoie) on Demandingness and Time/Money Tradeoffs are Orthogonal · 2022-05-20T17:29:27.989Z · EA · GW

Love this post! I’d build on this question by slightly re-framing it. Instead of asking “should EA be more demanding?”, I’d ask:

  1. What should we promote as a general norm about demandingness?
  2. What should you personally aspire to in terms of demandingness?

I usually have completely different answers to these questions. General norms have to be simpler and work for a larger number of people. Personal aspirations can be tailor-made to your particular circumstances, personality, and goals.

Of course, you could make the general norm for each person to think specifically about what they should personally aspire to, and maybe we should just do that. One of my favorite things about EA is that we don’t tend to oversimplify things for people, but rather push people to really engage with the complexities and nuances of ideas.

Comment by Kat Woods (katherinesavoie) on EA Houses: Live or Stay with EAs Around The World · 2022-05-10T14:15:42.228Z · EA · GW

Good idea! I messaged them

Comment by Kat Woods (katherinesavoie) on Doing good easier: how to have passive impact · 2022-05-02T19:23:21.883Z · EA · GW

Yeah, it's an interesting question whether, all else being equal, it's better to set up many passive impact streams or build one very amazing and large organization.

I think it all depends on the particulars. Some factors are:

  • What's your personal fit? I think personal fit is a really important factor. Some people love the idea of staying at one organization for ten years, deeply optimizing all of it, and scaling it massively. Others have an existential crisis just thinking of the scenario. Passive impact is a better strategy if you like things when they're small and have a startup vibe, or if you find it hard to stay interested in the same thing for years on end.
  • What sort of passive impact are you setting up? I think obsessively optimizing an amazing organization and working hard to replace yourself with a stellar person, such that it continues to run as an amazing org without you, probably beats starting and staying at the same org. On the other hand, digital automation tends to decay a lot more without at least somebody staying on to maintain the project, and that would on average be beaten by optimizing a single org.

Comment by Kat Woods (katherinesavoie) on Doing good easier: how to have passive impact · 2022-05-02T17:02:07.525Z · EA · GW

Definitely! It's a specific instance of a potential meta-trap (another piece here about the idea).

The big questions are:

1. What ratio of meta to direct work should there be in the community?

2. How do we allocate credit?

These questions are well beyond the scope of this post, but very important to discuss!

Comment by Kat Woods (katherinesavoie) on EA needs a hiring agency and Nonlinear will fund you to start one · 2022-04-29T15:06:32.314Z · EA · GW

We've found three stellar people to incubate. 🥳 More details to be announced soon.

Comment by Kat Woods (katherinesavoie) on New: use The Nonlinear Library to listen to the top EA Forum posts of all time · 2022-04-29T15:03:20.626Z · EA · GW

Thanks for the suggestions!

Yeah, links in the episode notes are the most requested feature. We have them in all of our channels except the static playlists (such as the top-of-all-time lists) and the main channel, for technical reasons. We're working on the main channel, but it might take a bit because it's surprisingly difficult. Kind of reminds me of this comic.

For the intros, at least on PocketCast you can set it to skip the first X seconds, which I recommend.

Comment by Kat Woods (katherinesavoie) on EA Houses: Live or Stay with EAs Around The World · 2022-04-15T19:36:46.820Z · EA · GW

Good question! We're planning on pinging listings on the sheets roughly every four months to see if they're still up to date. We also have a column that shows when each listing was last updated.

Comment by Kat Woods (katherinesavoie) on New: use The Nonlinear Library to listen to the top EA Forum posts of all time · 2022-04-10T18:26:58.059Z · EA · GW

Absolutely. We use Asana, and we'll add a check to our "Making a new channel" template to make sure we've removed people who've opted out.

We have an automatic rule for the main channel. The problem here was that it was a one-off, static channel, so it wasn't using the same code we usually use.

I'm really sorry that that happened. I think this fix should do it.
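
(To make it concrete, the kind of check we want every channel - including one-off static ones - to run looks roughly like the sketch below. Purely illustrative: the OPTED_OUT set and the post records are hypothetical, not our real data or code.)

```python
# Purely illustrative opt-out check (not our real code or data).
OPTED_OUT = {"author_who_opted_out"}  # hypothetical set of authors who asked not to be narrated

def filter_opted_out(posts):
    """Drop posts whose authors have opted out before narrating a channel."""
    return [p for p in posts if p["author"] not in OPTED_OUT]

posts = [
    {"author": "author_who_opted_out", "title": "Should be skipped"},
    {"author": "someone_else", "title": "Fine to narrate"},
]
print(filter_opted_out(posts))  # only the second post remains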

Comment by Kat Woods (katherinesavoie) on New: use The Nonlinear Library to listen to the top EA Forum posts of all time · 2022-04-10T17:38:58.267Z · EA · GW

They should be removed now. It might take a while to update on all the platforms. We could only find ones on the EA Forum, so let us know if you posted anything on the other forums.

Comment by Kat Woods (katherinesavoie) on New: use The Nonlinear Library to listen to the top EA Forum posts of all time · 2022-04-10T17:21:49.208Z · EA · GW

Oh, I'm so sorry. Where is it? It probably was a mishap with our data entry person. We'll remove it ASAP.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:21:06.087Z · EA · GW

EA Marketing Agency

Improve Marketing in EA Domains at Scale

Problem: EAs aren’t good at marketing, and marketing is important.

Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:18:37.324Z · EA · GW

Top ML researchers to AI safety researchers

Pay top ML researchers to switch to AI safety

Problem: <.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.

Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:14:11.264Z · EA · GW

X-risk Art Competitions

Fund competitions to make x-risk art to create emotion

Problem: Some EAs find longtermism intellectually compelling but not emotionally compelling, so they don’t work on it, yet feel guilty.

Solution: Hold competitions where artists make art explicitly intended to make x-risk emotionally compelling. Use crowd voting to determine winners.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:13:34.672Z · EA · GW

AGI Early Warning System

Anonymous Fire Alarm for Spotting Red Flags in AI Safety

Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.

Solution: Create an anonymous form - a "fire alarm" (like a whistleblowing Andon Cord of sorts) - where these employees can report what they're seeing. We could restrict the audience to a small council of AI safety leaders, who can then determine next steps. This could, in theory, provide days to months of additional response time.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:12:27.863Z · EA · GW

Teaching buy-out fund

Allocate EA Researchers from Teaching Activities to Research

Problem: Professors spend a lot of their time teaching instead of researching. Many don't know that many universities offer "teaching buy-outs", where if you pay a certain amount of money, you don't have to teach. Many also don't know that a lot of EA funders would be interested in paying for that.

Solution: Make a fund that's explicitly for this, so that more EAs know about the option. This is the 80/20 of promoting the idea. Alternatively, funders can just advertise this offering in other ways.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:10:55.987Z · EA · GW

Academic AI Safety Journal

Start an Academic Journal for AI Safety Research

Problem: There isn’t one. There should be. It would boost prestige and attract more talent to the field.

Solution: Fund someone to start one.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:07:54.098Z · EA · GW

Alignment Forum Writers

Pay Top Alignment Forum Contributors to Work Full Time on AI Safety

Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.

Solution: Offer them enough money to quit their job and work on AI safety full time.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:07:34.387Z · EA · GW

Bounty Budgets

Like Regranting, but for Bounties

Problem: In the same way that regranting decentralizes grantmaking, we could do the same thing for bounties. For example, give the top 20 AI safety researchers up to $100,000 to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.

In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.

Bounties are great because you only pay out if the work is successful. If, hypothetically, each researcher created 5 bounties at $10,000 each, that'd be 100 bounties - lots of experiments.
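
(Spelling that arithmetic out, with the same hypothetical figures as above and nothing new added:)

```python
# Multiplying out the hypothetical figures above.
researchers = 20      # top AI safety researchers given bounty budgets
bounties_each = 5     # hypothetical bounties per researcher
bounty_size = 10_000  # dollars per bounty

total_bounties = researchers * bounties_each   # 20 * 5 = 100 bounties
max_payout = total_bounties * bounty_size      # $1,000,000, and only paid out for successes
print(total_bounties, max_payout)
```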

RFPs are great because they put less risk on the applicants while also being a scalable, low-management way to turn money into impact.

Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:05:45.494Z · EA · GW

EA Forum Writers

Pay top EA Forum contributors to write about EA topics full time

Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.

Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:04:36.992Z · EA · GW

AI Safety “school” / More AI safety Courses

Train People in AI Safety at Scale

Problem: Part of the talent bottleneck is caused by there not being enough people who have the relevant skills and knowledge to do AI safety work. Right now, there’s no clear way to gain those skills. There’s the AGI Fundamentals curriculum, which has been a great success, but aside from that, there’s just a handful of reading lists. This ambiguity and lack of structure lead to way fewer people getting into the field than otherwise would.

Solution: Create an AI safety “school” or a bunch more AI safety courses. Make it so that if you finish the AGI Fundamentals course there are a lot more courses where you can dive deeper into various topics (e.g. an interpretability course, values learning course, an agent foundations course, etc). Make it so there’s a clear curriculum to build up your technical skills (probably just finding the best existing courses, putting them in the right order, and adding some accountability systems). This could be funded course by course, or funded as a school, which would probably lead to more and better quality content in the long run.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:02:15.189Z · EA · GW

EA Productivity Fund

Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.

Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can't afford them or would be put off by the cost (imposter syndrome, or just because it feels selfish).

Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T18:00:22.131Z · EA · GW

Translate EA content at scale

Reach More Potential EAs in Non-English Languages

Problem: Lots of potential EAs don't speak English, but most EA content hasn't been translated.

Solution: Pay people to translate the top EA content of all time into the most popular languages, then promote it to the relevant language communities.

Comment by Kat Woods (katherinesavoie) on The Future Fund’s Project Ideas Competition · 2022-03-06T17:58:31.177Z · EA · GW

Incubator for Independent Researchers

Training People to Work Independently on AI Safety

Problem: AI safety is bottlenecked by management and jobs. There are <10 orgs you can do AI safety full time at, and they are limited by the number of people they can manage and their research interests.

Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.

This could also be structured as a research organization instead of an incubator.

Comment by Kat Woods (katherinesavoie) on Some thoughts on recent Effective Altruism funding announcements · 2022-03-06T12:15:28.106Z · EA · GW

Nonlinear is launching a longtermist incubator! Given my background co-founding Charity Entrepreneurship and nobody else moving forward on the idea, I thought it was a good fit for us.

Details to be announced soon.

Comment by Kat Woods (katherinesavoie) on Chris Blattman on the chaotic nature of running surveys · 2022-03-02T23:20:56.798Z · EA · GW

So true! I wrote about my own experience running surveys in Rwanda and it was a similar thing. Same with surveys in India.

I agree that it would be the same in richer countries too. Whenever you ask somebody a question and they say "it depends", that's something that wouldn't show up in a typical survey.

So far my main update has been towards wider confidence intervals across the board and towards updating less drastically on data, regardless of the source - all while keeping in mind that just because science isn't all-knowing, that doesn't mean anecdotes or intuition are better :P

Comment by Kat Woods (katherinesavoie) on Managing 'Imposters' · 2022-01-30T11:36:37.998Z · EA · GW

Thank you for writing this! Going to work on implementing some of the advice at Nonlinear.

I'd add lovingkindness meditation to the list of possible intervention points, too. That's been working wonders for me. Charlie wrote a good post on it here.

The biggest trick for me has been to practice lovingkindness on general recurring issues where I feel insecure. For example, sending acceptance and kindness to myself for: my inbox, my to do list, looming deadlines, difficult tasks, responsibilities I feel anxiety around, etc.

Also, the general structure of lovingkindness practice (generate emotion, apply to target) can be used for pretty much any emotion (e.g. motivation, confidence, etc), and I've been having a lot of success there. Will be writing up what I've found so far soon.

Comment by Kat Woods (katherinesavoie) on What self-help topics would you like better research/ resources on? · 2022-01-19T16:06:52.135Z · EA · GW

I'd love to see a more evidence-based look into different types of meditation techniques if possible.

Also, I'd like a better breakdown of how much exercise, what types, and for what benefits. (E.g. how much should you exercise to optimize for well-being vs longevity? To what extent does it matter whether it's high-intensity or low-intensity exercise for well-being?)

Comment by Kat Woods (katherinesavoie) on EA needs a hiring agency and Nonlinear will fund you to start one · 2022-01-19T15:04:45.606Z · EA · GW

It depends on their experience, what they need, and how many good candidates there are. It can range from $30k-$150k.

Comment by Kat Woods (katherinesavoie) on You are probably underestimating how good self-love can be · 2022-01-12T16:15:52.153Z · EA · GW

In part inspired by this post, I did one hour a day of loving-kindness meditation for ten days and the results were phenomenal. It's too soon to tell if it'll stick, but I think it's fixed about 80% of my impostor syndrome and anxiety around impact, which have been a major source of stress for me for years.

I've tried everything before, like CBT, ACT, concentration practice, IFS, exercise, therapy, etc etc. Nothing had worked. And this has been by far the most successful thing I've tried.

Will be writing about it in more detail on LessWrong when I write the review about the Finder's Course in a few weeks. Thank you so much for writing this article that gave me the extra push and framework I needed.

Comment by Kat Woods (katherinesavoie) on Listen to more EA content with The Nonlinear Library · 2022-01-11T15:11:14.921Z · EA · GW

I'm so glad to hear that! I shared it with the team to much party-parrot emoji reaction, and it totally made my day. Thank you for letting us know!

Comment by Kat Woods (katherinesavoie) on Field-specific LE (Longterm Entrepreneurship) · 2021-12-28T19:22:08.236Z · EA · GW

We are actually doing this! For all the reasons you mention and more.

We’ll be announcing the specifics relatively soon, but in the meantime, we’re incubating an EA hiring agency for longtermists, with an initial focus on PAs, and are working on finding a founder for it. We are also incubating a promising woman for an as-yet-unspecified charity.

Our model will be similar to CE but adjusted based on the different needs of longtermism and the lessons I learned from CE’s limiting factors.

We will soon be fundraising to increase our capacity to take on more incubatees. Details to be announced soon.

Comment by Kat Woods (katherinesavoie) on External Evaluation of the EA Wiki · 2021-12-14T12:45:26.617Z · EA · GW

I'm so impressed that Pablo asked for an external review when he was feeling potentially burnt out and not sure about the impact of the wiki. That takes some incredible epistemic (and emotional!) chops. This is an example of EA at its finest.

Comment by Kat Woods (katherinesavoie) on AI Governance Course - Curriculum and Application · 2021-11-30T14:14:02.969Z · EA · GW

Fantastic! You're right, we'd just put it into podcast form so people could listen on their podcast players, so no need to host the audio files or anything. I'll DM you with more details.

Comment by Kat Woods (katherinesavoie) on AI Governance Course - Curriculum and Application · 2021-11-29T19:07:05.160Z · EA · GW

This is fantastic! Thank you for making this.

Would you like us to convert the readings into audio to make it easier for people to participate? This would be pretty easy on our end.

Comment by Kat Woods (katherinesavoie) on Can we influence the values of our descendants? · 2021-11-20T21:21:23.196Z · EA · GW

So, can we influence our great great great grandchildren's values? The answer is a very scientific and very disappointing maybe.

This made me laugh out loud. And then I found out I can take partial credit!

In all seriousness, the jokes and turns of phrase definitely contributed to me finishing the article, so it's already working on me at least. :)

Also, great title choice! I predict that question-based titles will be especially good for the forum, being both engaging and in keeping with EA culture.