Announcing AI Safety Support 2020-11-19T20:19:58.031Z
Should you do a PhD? 2020-07-24T10:15:29.420Z
The Case for Impact Purchase | Part 1 2020-04-14T13:08:48.664Z
Announcing Web-TAISU, May 13-17 2020-04-05T22:26:14.186Z
What is the funding situation for AI Safety? 2020-03-21T13:38:29.687Z
Coronavirus Tech Handbook 2020-03-11T14:44:00.478Z
TAISU - Technical AI Safety Unconference 2020-02-04T18:26:37.057Z
Two AI Safety events at EA Hotel in August 2019-05-21T18:57:00.683Z


Comment by linda-linsefors on Ethical offsetting is antithetical to EA · 2021-01-21T10:12:19.567Z · EA · GW

Edit: I posted this before reading the other comments. Others have already made this and similar points.

Here is a story of how ethical offsetting can be effective.

I was trying to decide if I should fly or go by train. Flying is much faster and slightly cheaper, but the train is much more environmentally friendly. Without the option of environmental offsetting, I had no idea how to compare these values, i.e. [my time and money] vs. [direct environmental effect of flying].

What I did was to calculate what offsetting would cost, and it turned out to be around one USD, so basically nothing. I could now conclude that:

Flying + offsetting > Going by train

Because I would save time, and I could easily afford to offset more than the harm I would do by flying, and still pay less in total.

Now, since I'm an EA, I could also take the next step:

Flying + donating to the most effective thing > Flying + offsetting > Going by train.

But I needed at least the idea of offsetting to simplify the calculation to something I could manage myself in an afternoon. In the first step I compare things that are similar enough that the comparison is mostly straightforward. The second step is actually super complicated, but it's the sort of thing EAs have been doing for years, so for this I can fall back on others.

But I'm not sure how I would have done the direct comparison between [flying + donating] vs. [going by train]. I'm sure it's doable somehow, but with the middle step it was so much easier.

Comment by linda-linsefors on Open and Welcome Thread: January 2021 · 2021-01-18T16:35:42.790Z · EA · GW

Hi Guy
I'd be happy to talk to you. I'm co-founder of AI Safety Support, a new organization dedicated to helping people who want to work on AI Safety.

I'd like to see how we can help you, and to learn from you how we can better support people in your situation. Please reach out by email, book a call, or both.

Comment by linda-linsefors on Open and Welcome Thread: January 2021 · 2021-01-18T16:31:39.256Z · EA · GW

AI Safety Support is running an AI Safety Careers Bottleneck survey.

Please help us spread it around. 
We want responses from anyone who is currently doing AI Safety work, or who would like to do so in the future.

It only takes 5-20 minutes to answer (these are empirical numbers).

Comment by linda-linsefors on 2018-19 Donor Lottery Report, pt. 2 · 2020-12-22T03:22:10.386Z · EA · GW

Funding proposal: AI Safety Support 

Our goal is to enable aspiring AI Safety researchers to do the things they are trying to achieve. We provide operational and community support to early career and transitioning researchers to fill gaps in the AI Safety career pipeline. (For more info, see this blogpost)

Suggested donation: Anything in the range $30k - $60k. 
We would not turn away smaller amounts, since we are not trying to get fully funded from a single donation anyway. But you suggested $30k as a lower limit.

Regarding "Relative opinions", I'm happy to discuss that in private, if you want.

Edit: I don't think this reasoning applies to us. Though I'm still happy to talk.

Comment by linda-linsefors on 2018-19 Donor Lottery Report, pt. 2 · 2020-12-22T03:04:36.624Z · EA · GW

Here are a number of EA funding requests

Comment by linda-linsefors on What are some potential coordination failures in our community? · 2020-12-19T03:00:49.341Z · EA · GW

An aspect of the funding problem is that money allocation is bad everywhere. (On a larger scale, the market mostly works, but if you get into the details of being a human wanting to trade your time for money, most things around job applications and grant applications are more or less terrible.) If we design a system that doesn't suck, over time EA will attract people who are here for the money, not for the mission.

A solution should have these features:
1) It doesn't suck if you are EA aligned.
2) If you are not EA aligned, it should not be easier to get money from us than from other places. (It is possible to get non-EA-aligned people to do EA aligned actions. But that requires a very different level of oversight.)

I think a grant lottery, where the barrier to entry is to have done some significant amount of EA volunteer work or EA donation or similar, would be an awesome experiment.

Comment by linda-linsefors on What are some potential coordination failures in our community? · 2020-12-13T22:02:02.002Z · EA · GW

Funding is a mess. 

Distributing money is hard and we should not expect to have a good solution anytime soon. But it would be helpful if people were aware of how inadequate our current funding ecosystem is. Even though money supposedly exists, funding is still the main bottleneck for most new EA initiatives.

My current analysis is that grant evaluation is hard because it is inherently low bandwidth. I would therefore recommend that people donate through their own personal networks rather than giving to one of the EA Funds. I'd also expect that we'll see a greater and healthier diversity of projects this way.

I know the argument for having centralized funding. We pool all the money and all the applications in one place, and then let some trusted people sort it out. In theory this both saves time and optimizes money distribution. But in practice it has a lot of problems. It's slow, it's low bandwidth, and the biases of a few will affect everyone.

I've personally lost a lot of time to grant agencies. Waiting for answers that were late. Or waiting for a promised application opening that was canceled. If you have not experienced these things yourself, it's hard for me to describe how much they can mess up everything. And that's just one of the problems.

Dealing with individual funders has been sooooo much easier, and just overall a much nicer and more supportive experience. 

I have a lot more to say about this, but I have not found the best way to express it yet. But feel free to reach out for more of my thoughts. 

(An alternative hypothesis is that EA is cash constrained. I.e. the bottleneck is not about distributing the money, it's about there not being enough of it. In that case we should upgrade the importance of earning to give.)

Comment by linda-linsefors on What are some potential coordination failures in our community? · 2020-12-13T21:08:22.215Z · EA · GW

EA Hub has evolved a lot since I last had a look. I was going to complain that it has limited usefulness since you can only search based on location and not interests and expertise, but that is no longer true. This is great!

Comment by linda-linsefors on Where are you donating in 2020 and why? · 2020-12-08T13:57:08.988Z · EA · GW

My friend's wrist was hurting from clicking, so we tried getting a second mouse, which we taped to the floor as a foot pedal. Now he moves the cursor with his hand and clicks with his foot. It works surprisingly well.

Comment by linda-linsefors on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-07T13:19:13.277Z · EA · GW

I am in favour of people posting requests, including for money. Even if these posts are not of interest to most readers, I think they can be of great value when read by the right person, but the chances of that go down dramatically if the posts are not on the front page.

On the other hand, we don't want the front page to be filled up with various requests. It takes up space, and also doesn't look very good. But I do think there is a simple win-win here.

Create a top-level post called something like "Requests for funding and other favours", where people can leave their requests as comments. This will only take up a single line on the front page, and it will be more accessible for the people who are looking to donate.

Comment by linda-linsefors on Donor Lottery Debrief · 2020-08-10T02:11:06.043Z · EA · GW

Then maybe all these people should gang up and start a new hub, literally anywhere else. Funding problem mostly solved.

If people are not seriously trying this, then it's hard for me to take seriously any claims of lack of funding. But as I said, I might be missing something. If so, please tell me.

Comment by linda-linsefors on Donor Lottery Debrief · 2020-08-08T22:14:30.908Z · EA · GW

You are correct that people in the Bay can find out about projects in other places. The projects I know about are also not in the same location as me. I don't expect being in the Bay is an advantage for finding out about projects in other places, but I could be wrong.

When it comes to projects in the Bay, I would not expect people who lack funding to be there in the first place, given that it is ridiculously expensive. But I might be missing something? I have not investigated the details, since I'm not allowed to just move there myself, even if I could afford it. (Visa reasons; I'm Swedish.)

Comment by linda-linsefors on Donor Lottery Debrief · 2020-08-06T01:43:58.018Z · EA · GW
Looking for more projects like these

AI Safety Support is looking for both funding and fiscal sponsorship. We have two donation pledges which are conditional on the donations being tax-deductible (one from Canada and one from the US). But even if we solve that, we still have a bit more room for funding.

The money will primarily be used for salaries for me and JJ Hepburn.

AI Safety Support's mission is to help aspiring and early career AI Safety researchers in any way we can. There are currently lots of people who want to help with this problem but who don't have the social and institutional support from organisations and people around them.

We are currently running monthly online AI Safety discussion days, where people can share and discuss their research ideas, independent of their location. These events are intended as a complement to the Alignment Forum and other written forms of publication. We believe that live conversations are a better way to share early-stage ideas, and that blog posts and papers come later in the process.

We also have other projects in the pipeline, e.g. our AI Safety career bottleneck survey. However, these are currently on hold until we've secured enough funding to know we will be able to keep going for at least one year (to start with).

AI Safety Support has only existed since May, but both of us have a track record of organising similar events in the past, e.g. the AI Safety Camps.

Comment by linda-linsefors on Donor Lottery Debrief · 2020-08-05T10:35:44.884Z · EA · GW
I have come to believe that living and working in the EA/Rationality community in the Bay Area made it much more likely I would hear about attractive opportunities that weren't yet funded by larger donors

I am sceptical about this. There are *lots* of non-Bay-area projects and my impression (low confidence) is that it is harder for us to get funding. This is because even the official funding runs mostly on contacts, so the funders also mostly fund stuff in the hubs.

I know of two EA projects (not including my own) which I think should be funded, and I live in Sweden.

Comment by linda-linsefors on Donor Lottery Debrief · 2020-08-05T09:54:30.071Z · EA · GW

Registering predictions:

1) You will hear about 10-50 EA projects looking for funding, over the next 2 months (80%).

2) >70% of these projects will not be registered tax-deductible charities (but might be able to get fiscal sponsorship). (80%)

Becoming a registered charity is a lot of work. It would be interesting for someone to look into when it is and isn't worth the time investment.

Comment by linda-linsefors on Should you do a PhD? · 2020-07-26T22:19:17.494Z · EA · GW

I did some googling.

In the UK there are 4 ways to get a PhD (according to this website), and only one of them is the traditional PhD program.

Here is a discussion on independent PhDs. People disagree on whether it is possible to do a PhD without a supervisor, pointing towards different practices in different countries.

Several people claim that "The PhD process is about learning, not just publishing.", but my impression is that this is a very modern idea. A PhD used to be about proving your capability, not monitoring your learning process.

Comment by linda-linsefors on Should you do a PhD? · 2020-07-26T22:07:21.725Z · EA · GW
I have noticed based on my search that nearly 60% of research roles in think-tanks in Europe have PhDs.

So almost half of them don't. If you want a job at one of those think tanks, I would strongly recommend that you just go straight for that.

If you want to do research, then do the research you want to do. If the research you want to do mainly happens at a company or think tank, but not really in academia, go for the company or think tank.

There are other ways of getting a PhD degree that do not involve enrolling in a PhD program. In many countries, the only thing that actually matters for getting the degree is to write and defend a PhD thesis, which should contain original research done by you. For example, if you just keep publishing in academic journals until your body of work is about the same as can be expected from a PhD (or maybe some more, to be on the safe side), you can put it all in a book, approach a university, and ask to defend your work.

This may differ between countries. But universities mostly accept foreign students, so if you can't defend your independent thesis at home, go somewhere else.

Comment by linda-linsefors on Should you do a PhD? · 2020-07-26T01:01:45.542Z · EA · GW

Some of the questions on the checklist I would endorse more as guidelines, or warning signs, than as strict rules.

Is there a substantial amount of literature in your field?
Was there a major discovery in the field in recent years?

Both those questions measure how much you can learn from others in academia. If you can't take advantage of colleagues, then going into academia at all (even if you don't intend to stay) will be lower value. So you might be more productive elsewhere.

The first one also says something about how easy/hard it will be to publish and generally get recognised. If you do something non-established, you will have a much harder time.

But there are two main reasons you might want to step into academia anyway.

1) To influence other academics. (I think this is the main reason FLI chooses to be an academic institution.)

2) To get paid. (In cases where there are no other options.)

Do you want a career in academia?
Is there a better option for prospective PhD students who want a career in research outside of academia?

Lots of places outside academia do research: companies, non-profits, think tanks, independent AI Safety researchers with Long Term Future Fund grants.

Which is the better option depends on what research you want to do. The more abstract it is, the more likely academia is a good choice; the more concrete, the more likely it is not. E.g. charity evaluation is a type of research that I don't think would do well in academia (though this is not my field at all, so I might be wrong).

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-06-26T20:29:55.271Z · EA · GW

Sort of, and it might take some time. The short of it is that I'm less enthusiastic about impact purchase.

I want some sort of funding system that is flexible, and I think the best way to do this is to sponsor people, not projects. If someone has, through their past work, shown competence and good judgement, I think they should be given a salary and the freedom to do what they think is best.

I thought the way to achieve this was impact purchase, but as someone pointed out in a comment, this makes for a very economically uncertain situation for the people living this way, which causes stress and short-sightedness, which is not the best.

When I wrote this post, I assumed that I needed to have a plan to get a grant in the current system. But after talking to one of the fund managers of the Long Term Future Fund, I found out that it is possible to get a grant by simply producing a track record and some vague plan to do more of the same. I've decided to try this out for myself. I'm waiting for an answer from the Long Term Future Fund, and plan to write some update after I know how that goes.

If I get the grant, this would prove that it is at least possible to get funding without a clear plan. If I get rejected, the conclusions I draw from that depend on what feedback I get with my rejection. Either way, I decided to wait and see how the grant application goes before writing the follow-up.

Comment by linda-linsefors on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-13T21:14:35.531Z · EA · GW

AI Safety Career Circle

Putting this suggestion out there, because there are always people looking for AI Safety career advice, and this is a tried and tested format.

First round, everyone shares their career plans (or lack of plans).

Second round, everyone who wants to shares career advice that they think might be helpful for others in the circle.

It must be a late session if you want me to lead it.

Comment by linda-linsefors on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-13T18:30:14.980Z · EA · GW

I want to listen to this podcast!

Comment by linda-linsefors on I Want To Do Good - an EA puppet mini-musical! · 2020-05-25T14:23:01.868Z · EA · GW

Watching it yet again, I think it would feel more right if the guy were not so easily convinced, but instead it ended with him going "hm, that sounds promising, I'm going to learn some more".

Both the puppets really felt like real people with actual personalities to me, up until t=1:57. But then the guy just completely changes his mind, which broke my suspension of disbelief. I think that's the point where it mostly started to sound like "yet another commercial".

Comment by linda-linsefors on I Want To Do Good - an EA puppet mini-musical! · 2020-05-25T14:08:56.323Z · EA · GW

The format of the video is basically: "Do you worry about these things? Then we have the solution." Integrated with some back and forth that I really like.

"Do you worry about these things? Then we have the solution." is a standard pattern in commercials, for a good reason. I think this is a good pattern for selling ideas like EA too. But it also means that you can't just say you understand my concerns and that you have solutions; you have to give me some evidence, or else it is just another empty commercial.

The person singing about their doubts felt relatable, in that they brought up real concerns about charity that I could imagine having before EA. I don't remember exactly, but these seemed like standard and very reasonable concerns. And I got the impression that you (the video maker) really understand "my" (the viewer's) worries about giving to charity.

But when you were singing about the solutions, you fell a bit short. I don't think this video would win the trust of an alternative Linda that your suggested charities are actually better. I think it would help to put in some argument for why treatable diseases are the right focus, and how to lift the barriers you mention.

Every charity says they are special, so just saying it doesn't count for much. But if you give me some arguments that I can understand for why your way is better, then that is evidence that you're onto something, and I might go and check it out some more.


All that said, I re-watched the video, and I like it even more now. The energy and the mood shifts are amazing.

On re-watching I also feel that a viewer should be able to easily figure out the connection between focusing on diseases and avoiding building dependency. But I remember that the first time I watched it, it felt like there was a major missing link there. I think it's because now, when I know what they will say, I have some more time to reflect and make those connections myself.

But people seeing this on the internet might only watch once, so...

Comment by linda-linsefors on I Want To Do Good - an EA puppet mini-musical! · 2020-05-21T21:09:56.418Z · EA · GW

I very much enjoyed the video. But I don't think it would have been able to change my mind in some alternative reality where I didn't already know about EA.

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-25T16:54:15.721Z · EA · GW

Some more additions:

I) I found out what happened to

Paul Christiano (from a private email, with permission to quote):

Basically just a lack of time, and a desire to focus on my core projects. I'd be supportive of other people making impact purchases or similar efforts work, I hope our foray into the space doesn't discourage anyone.

II) Justin Shovelain told me (and gave me permission to share this information) that he would probably have focused more on Coronavirus stuff early on, if he had thought there was a way to get paid for this work.

This is another type of situation where grants are too slow.

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-25T16:16:04.606Z · EA · GW


I have changed my mind quite a bit since writing this blogpost. The updates come from the discussions with you in the comments, so thanks to everyone discussing with me.

Everything in this comment is still work in progress. I'll write something more formal and well thought through later, when I have a more stable opinion. But my views have already changed enough that I wanted to add this update.


What I actually want is some sort of trust-based funding. If I have proven myself enough (e.g. by doing good work), then I get money, no questions asked. The reason I want this is because of flexibility (see main post).

Giving away money = Giving away power

Impact purchases have the neat structure that if I have done X amount of good, I get X amount worth of trust (i.e. money). This seems to be exactly the right amount, because it is the most you can give away and still be protected from exploitation. If someone who is not aligned with the goals of the funder tries to use impact purchase as a money pump, they still have to do an amount of good equal to the payout they want.



A project to project lifestyle doesn't seem conducive to focusing on impact.

We actually know this from another field. In most of academia, the law of the land is publish or perish. Someone living off impact purchases will face a similar situation, and it is not good, at least not in the long run.


I think the high impact projects are often very risky, and will most likely have low impact.

To the extent that this is true, impact purchase will not work.

In theory we could have impact investors, who fund risky projects and earn money by selling the impact of the few projects whose impact reached the stars (literally and/or figuratively). But this requires another layer which may or may not happen in reality (probably won't happen). Also, from the perspective of the applicant, how is this any different from applying for a grant? So what have we gained?

If not impact purchase, then what?

I still would like to solve the problem of inflexibility that grants have. And actually, I think the solutions already exist (to some extent).

1) Get a paid job, with high autonomy.

2) Start an organisation and fundraise. I did not think of this until now, but when orgs fundraise, they typically don't present a plan for what they will do with the money. They mainly point towards what they have done so far, and ask for continued trust.

3) ...? I'd be very interested in other suggestions. I would not be surprised if there are other obvious things I have missed.

There are also other solutions that don't exist yet (or not very much) in EA, but could be implemented by any institution or person with spare money:

a) "Trusted person"-job: a generic employment you offer to anyone whom you would like to keep up the good work, or something like that.

b) Support people on Ko-fi or Patreon, or similar, and generally encourage this behaviour from others too. (I know this is happening already, but not enough for people to make a living.)

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-25T14:32:32.879Z · EA · GW

I'm ok with hit based impact. I just disagree about events.

I think you are correct about this for some work, but not for other work. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those who are served.

Events that are focused on sharing information and networking fall into this category. People in a small field will get to know each other and each other's work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.

But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation.

I also notice that I'm less interested in doing those more hits-based events. This is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking at.

Thanks for providing the links, I should read them.

(Of course everything relating to X-risk is all or nothing in terms of impact, but we can't measure and reward that until it doesn't matter anyway. Therefore, in terms of AI Safety, I would measure success in terms of research output, which can be shifted incrementally.)

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-22T19:22:42.856Z · EA · GW

It can't take more than ~50 events for every AI Safety researcher to get to know each other.

And key ideas are not seeded at a single point in time; they are something that comes together from lots of reading and talking.

There is not *the one event* that made the difference while all the others were practically useless. That's not how research works. Sure, there is randomness, and some meetings are more important than others.

But if it took on average 50 000 events for one such key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone's time.

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-22T19:08:42.388Z · EA · GW

Wait, what?

100 000 AI Safety Events?

Like 100 000 individual events?

There is a typo here, right?

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-22T19:05:31.836Z · EA · GW

I'm confused by this response. I answered all of this in the blogpost. Did I fail to communicate? I am not saying that you have to agree, but if you read what I wrote and still don't understand why *I* think paying after a project is sometimes a good idea, that is confusing to me, and I would like to understand better what part of the blogpost you found confusing.

Comment by linda-linsefors on Making Impact Purchases Viable · 2020-04-20T14:46:53.947Z · EA · GW
I guess this would be a key point where we differ. I haven't thought deeply about this, but my intuition would be that adjustments would greatly improve impact. For example, a small project extremely competently implemented and a big project poorly implemented might have the exact same impact, but the former would be a stronger signal.

In this case, the competent person can just do more great small projects and get more money.

Comment by linda-linsefors on Making Impact Purchases Viable · 2020-04-20T14:39:56.166Z · EA · GW

I did get help from my parents later, so if that was their assumption, they were not wrong. But I did not know this at the time, and when I asked why I could not get funding, I got answers of the type: "that is not how things are done", which made no sense to me.

It is possible that not funding me back then was the right decision for the right reasons. But since I was not told the reason, the experience for me was very discouraging and antagonising. That's why transparency is important!

(I'm not really blaming anyone. I think that the people I was talking to did not have explicit knowledge, and were therefore not even able to answer me. But I think we can do better.)

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-19T16:46:55.030Z · EA · GW
I'm not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?

I'm unsure what size you have in mind when you say small.

I don't think small monetary rewards (~£10) are very useful for anything (unless lots of people are giving small amounts, or if I do lots of them that add up to something that matters).

I also don't think small impact projects should be encouraged. If we respect people's time and effort, we should encourage them to drop small impact projects and move on to bigger and better things.

I think the high impact projects are often very risky, and will most likely have low impact.

If you think that the projects with the highest expected impact also typically have a low success rate, then standard impact purchase is probably not a good idea. Under this hypothesis, what you want to do is reward people for expected success rather than actual success.

I talk about success rather than impact, because for most projects you'll never know the actual impact. By "success" I mean your best estimate of the project's impact, from what you can tell after the project is over. (I really meant success, not impact, from the start; I probably should have clarified that somehow?)

I'd say that for most events, success is fairly predictable, and more so with more experience as an organiser. If I keep doing events, the randomness will even out. Would you say that events are low impact? Would you say events are worth funding?

Can you give an example of the type of high impact project you have in mind? How does your statement about risk change if we are talking about success instead?

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-19T16:01:06.864Z · EA · GW
But on the other hand, refusing to pay someone who's good idea didn't work out and 'have impact' for no fault of their own also seems exploitative!

Letting the person running the project take all the risk might not be optimal, but I would also say it is not exploitative, as long as they know this from the start.

I'm not yet sure whether I think the amount of money should be 100% based on actual impact, or if we also want to reward people for projects that had high expected impact but low actual impact. The main argument for focusing on actual impact is that it is more objective.

I think people who are using this type of work as a living should get paid a salary with benefits and severance. A project to project lifestyle doesn't seem conducive to focusing on impact.

Um, I was going to argue with this. But actually I think you are right.

Something like: "We like what you have done so far, so we will hire you to keep doing good things based on your own best judgment."

Comment by linda-linsefors on Making Impact Purchases Viable · 2020-04-19T14:00:45.183Z · EA · GW
Regardless of whether you use an impact purchase or provide funding normally, there's always a chance that the project would have taken place regardless. However, impact purchases greatly increase this problem since the person already did the project without knowing whether or not they would be funded.

I agree. This is not a crux for me.

Another argument is that people who had an impact in the past are likely to spend the money to have an impact in the future. This might be the case, but if this is the primary vehicle through which these purchases have an impact, it might be worthwhile crafting a mechanism that more narrowly focuses on this.

I currently think "people who had an impact in the past are likely to spend the money to have an impact in the future" is the main argument for impact purchase. It is possible that impact purchase is not the optimal format; I am still thinking about this. But I think it is important that "look at all the stuff I did in the past" is *enough* to get funded, with no explanation of what I will do next needed, because anything else is too inflexible (see my post).

And if we are going to trust people and hand them money based on what they did in the past, it makes a lot of sense to me to trust them in proportion to how much good they have done in the past. We want to give our trust and money to people who are competent (can successfully complete their plans) and have good judgement (have a good idea of which projects are potentially very important). Impact tracks both those metrics.

When making a guess, one should start with the outside view, and then adjust from there. In most cases, the best outside view for what a person will do in the future is what they have done in the past. Then maybe we want to do some adjustment from there?

If a person seems very reckless, maybe don't fund them. Or if you think an outcome was mostly bad luck, fund them more than a straight impact purchase would. But in most cases I would suggest a straight-up impact purchase, because anything else is really hard and you'll probably get the adjustments wrong.

Another issue with impact purchases is the potential to harm relations between people, lead to bitterness or to demotivate people.

This is already happening. People do start things in the hope of getting funding later along the way. And it is not just about projects. There is a reason posts complaining about how EA treats people get heavily upvoted.

I think everything would be much better if we stopped worrying so much and started treating people as adults.

If a person does something that is good, but not good enough to be worth paying for, then this means that EA would rather have the money than this work. This means that if this person wants to maximise impact, they should find something better to do, or up-skill, or switch to earning to give. Under most circumstances they should not keep doing that thing, so we should not encourage it.

(People are of course allowed to do things that are less than optimal, if this is what they want. I am very much in favour of people doing what they want. But we should not pretend that what they do is more important than it is, just for encouragement.)

I went to my first EAG in 2016. Earlier that year I had finished my physics PhD and found out about AI Safety. At EAG almost everyone I talked to encouraged me to retrain to do AI Safety, and I felt super motivated. But I was running out of savings, so I asked for some money to take me through the transition, and most people just thought that I was weird for asking. That was demotivating as hell. These people were encouraging me to use my last savings to retrain for a risky career, but putting in their own money was out of the question. This told me that they considered spending my time and my resources to be costless. I was seen as a tool to be used, not an ally.

(Things have gotten better for me after that, but this is still a painful memory. And to be honest, I am still a bit bitter about it.)

People are not dumb. If encouragement is not backed up by action or money, they will notice.

If there is not enough money to go around to pay reasonable salaries for everyone who does important work, then this means that we are money constrained, which means that we would be better off if some of these people switched to earning to give. If this is the case, we as a community should be upfront about it. People will understand and adjust.

The way to not antagonise people is to be upfront about everything.

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-18T18:38:48.654Z · EA · GW

In this situation I would think you evaluated my project as "small impact", which is possibly useful information, depending on how reliable I think your evaluation is. If I trust your judgement, this would obviously be discouraging, since I thought it was much more impressive. But in the end I'd rather be right than proud, so that I can make sure to do better things in the future.

How I react would also depend on whether your £10 is all I get, or whether I get £10 each from lots of people, because that could potentially add up, maybe?

What it mainly comes down to in the end is: do I get paid enough to sustainably afford to do this work, or do I need to focus my effort on getting a paid job instead?

If you are a funder, and you think what I'm doing is good, but not good enough to pay me a liveable wage, then I'd much prefer that you don't try to encourage me, but instead just be upfront about this. Encouraging people to keep up an unsustainable work situation is exploitative and will backfire in the long run.

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-18T00:05:10.768Z · EA · GW

I sort of agree with this, but I want to add some things.

I agree that money is not the best motivator. If I was trying to solve [people are not motivated enough] I would probably suggest some community measure rather than a new funding structure.

Money is for buying people time (i.e. not having to do some day job just to earn a living), or funding other things they need for whatever awesome project they are doing.

However, money can definitely influence motivation. 80k lists "Pay you feel is unfair" as one of four "major negatives" which "tend to be linked to job dissatisfaction."

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-17T17:30:25.980Z · EA · GW

Let's assume for now that impact is clustered at the tails.

(I don't have a strong prior, but this at least doesn't seem implausible to me.)

Then how would you like to spend funding? Since there will be a limited amount of money, what is your motivation for giving the low impact projects anything at all?

Is it to support the people involved to keep working, and eventually learn and/or get lucky enough to do something really important?

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-16T10:32:12.789Z · EA · GW

Prizes and impact purchases are very similar. I would say that impact purchases would be an improvement, though.

For an impact purchase, the amount of money is decided based on how good the impact of the project was. For a prize, the prize money is usually set in advance, and there is often a winner-takes-all dynamic.

Prizes feel aversive to me because I win by being better than others, which means I'm disincentivised to help. This is fine whenever I don't expect to win anyway, like with the forum prizes. Very few blog posts get awarded. Because the base rate is low, I don't feel like I'm losing much by helping others. But if I were trying to make a living out of selling impact more regularly, I would not want this comparative aspect.

Although, it is not that simple, because there is inevitably competition in the market for selling impact. So in practice maybe it is not so different? I'm honestly a bit confused about whether I think there is a real difference or not.

Maybe the difference is that a prize often has a narrower focus, which pushes the competition to be between very similar work.

I would hate for there to be an EA events prize, because that would make me view fellow organisers as my competition, and those are exactly the people I should exchange experience and advice with. It would be much less bad to compete on "team organisers" against everyone else, about who can have the biggest overall impact.

Comment by linda-linsefors on The Case for Impact Purchase | Part 1 · 2020-04-14T23:55:40.939Z · EA · GW

For me the most important consideration is flexibility, i.e. not having to wait for a grant committee to make up their mind before I can start. For this problem, the hybrid model is no better than a grant, unless it can speed up the application process by an order of magnitude.

Also, any major improvement in evaluation time would need to be combined with rolling applications. Otherwise the applicant still has to wait a few months (in expectation) for the next grant round to come around. I guess there is a reason no major grant agency has rolling applications?

Let's say you have rolling applications where evaluation time is proportional to the amount of money (I have no idea if that is true). Then funding something 20% up front would take 1/5 of the time to evaluate, which is not bad. But I'm not sure how useful that would be for the applicant. I think most applications will fall into one of two cases:

1) The applicant is fine with taking the risk of not getting paid anything

2) The applicant needs to know that the majority of the budget will get covered

For example, if I run an event online or at CEEALAR (formerly the EA Hotel), I'm ok with taking the risk of not getting paid; I'll just adjust my calculations for deciding if I want to run that event again. But if I run an event that has actual costs for me (other than my time), like travel and/or venue, then I need to know that those costs will be covered; 20% up front is probably not good enough.

But if an applicant is willing to put up with the hassle of applying for a grant (because they need the guaranteed money), then having some token amount depend on the outcome might be motivating. However, this also means that the grant maker needs to evaluate the project twice, which takes even more time. But if I imagine myself as the recipient, I would very much welcome a post-project evaluation from the grant maker, if this is something they want to do.

I think an improvement on this suggestion is to cover any necessary costs in an initial grant (whether that be 0% or 90%), and offer an additional payment as a bonus if the project is successful, where projects that request 0% in advance are "auto accepted" for the first half (which is £0). There might still be some point to pre-registering projects with the grant makers, I think? Maybe they can say what metrics to track for the post evaluation? E.g. what questions they want in an event evaluation survey, and similar?

Comment by linda-linsefors on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-21T18:36:11.866Z · EA · GW

Who are you?

Hi I'm Linda.

I've been involved in AI Safety for a few years now, mainly learning and organizing events. Once I had the ambition to be an AI Safety researcher, but I think I'm just too impatient (or maybe I'll get back to it one day, I don't know). At the moment I am mainly focusing on helping others because I have found that I like this role. But I am always up for discussing technical research, because it is just so interesting.

What are some things people can talk to you about?

  • AI Safety - I'll discuss your research idea with you and/or share some career advice
  • Physics (I have a PhD in Quantum Cosmology)
  • Productivity coaching - This is a skill I'm developing, so really you are doing me a favor if you let me practice on you.

What are things you'd like to talk to other people about?

  • I want to talk to aspiring and early career AI Safety researchers, to learn about your situation and what your bottlenecks are.
  • I want to talk to anyone who is doing or wants to do any sort of AI Safety career support.
  • Help me review my plans, and if warranted give me social validation.

How can people get in touch with you?


Meeting: Calendly

Comment by linda-linsefors on Two AI Safety events at EA Hotel in August · 2019-07-05T15:33:30.742Z · EA · GW

There is still room for more participants at TAISU, but sleeping space is starting to fill up. The EA Hotel dorm rooms are almost fully booked. For those who don't fit in the dorm or want some more private space, there are lots of nearby hotels. However, since TAISU happens to fall on a UK bank holiday, these might fill up too.

Comment by linda-linsefors on Two AI Safety events at EA Hotel in August · 2019-06-13T13:24:30.620Z · EA · GW

The Learning-by-doing AI Safety workshop is now full, but due to the enthusiasm I have received, I am going to organize a second Learning-by-doing AI Safety workshop some time in October/November this year. If you want to influence when it will be, you can fill in our doodle:

I am leaving the application form open. You can fill it in to show interest in the second Learning-by-doing AI Safety workshop and future similar events.

Comment by linda-linsefors on Two AI Safety events at EA Hotel in August · 2019-05-22T11:37:40.491Z · EA · GW

Thanks for pointing this out :)

Should be fixed now