Posts

AI Safety Career Bottlenecks Survey Responses 2021-05-28T10:41:37.166Z
Announcing AI Safety Support 2020-11-19T20:19:58.031Z
Should you do a PhD? 2020-07-24T10:15:29.420Z
The Case for Impact Purchase | Part 1 2020-04-14T13:08:48.664Z
Announcing Web-TAISU, May 13-17 2020-04-05T22:26:14.186Z
What is the funding situation for AI Safety? 2020-03-21T13:38:29.687Z
Coronavirus Tech Handbook 2020-03-11T14:44:00.478Z
TAISU - Technical AI Safety Unconference 2020-02-04T18:26:37.057Z
Two AI Safety events at EA Hotel in August 2019-05-21T18:57:00.683Z

Comments

Comment by Linda Linsefors on Open Philanthropy is seeking proposals for outreach projects · 2021-08-09T17:23:33.221Z · EA · GW

I can't find any deadline.  How long should I expect this opportunity to stay open? 
(I'm not applying myself but I'll probably encourage some other people to do so.)

Comment by Linda Linsefors on EA Survey 2019 Series: Community Information · 2021-06-03T23:33:50.930Z · EA · GW

Thanks for doing this!

When comparing whites and non-whites, did you do anything to control for location?

I noticed non-whites ranked EAG as less important. Could this be because they are more likely to live far away from EAG events?

Or maybe there are so few EAs living in non-white-majority countries that they don't skew the statistics? I.e. non-white EAs in majority-white countries massively outnumber non-white EAs in non-white-majority countries?

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2021-04-26T13:40:45.057Z · EA · GW

That would also give you all the drawbacks of grants. See "Reasons to evaluate a project after it is completed" in the original post.

If you want to give me a living wage without me first having to prove myself in some way, please give me money.

For most people, grants aren't simply "available". There has to be some evidence. This can be provided either by arguing your case (normal grant application) or by just doing the work. I think many people (including me) would prefer to just do the work, and let that speak for itself (for the reasons explained in the original post).

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-03-04T03:15:13.795Z · EA · GW

But I'd love to be proven wrong here.

I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than they had. In a way, an org is just a crystallized collaboration of previously independent researchers.

I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment.

However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of a "normal" (not AI Safety focused) PhD program is sorta an independent researcher.
 

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers.

We are working on that. I'm not optimistic about current orgs keeping up with the growth of the field, and I don't think it is healthy for the career path to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-03-04T02:22:05.085Z · EA · GW

I'm not going to lead this, but would be happy to join.

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-02-27T16:07:03.047Z · EA · GW

I've been told a few times that I belong in the group organizers' Slack, but I never actually felt at home there, because I feel like I'm doing something very different from most group organizers.

The main requirement of such a chat is that it attracts other ecosystem organizers, which is a marketing problem more than a logistical problem. There are lots of platforms that would be adequate.

Making a separate ecosystem channel in the group organizers' Slack, and marketing it here, may work (30% chance of success), and since it is low effort, it seems worth a try.

A somewhat higher-effort option, but one with a higher expected payoff, would be to find all ecosystem organizers, contact them personally, and invite them to a group call. Or invite them to fill in a when2meet for deciding when to have said group call.

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-02-12T12:30:26.353Z · EA · GW

Thanks for the much improved source!

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-02-09T21:05:34.376Z · EA · GW

We (AI Safety Support) are literally doing all of these things:

There is no CEA for people working on AI safety, that creates websites, discussion platforms, conferences, connects mentors, surveys members etc.


I don't blame DavidNash for not knowing about us. I did not know about EA Consultancy Network. So maybe what we need is a meta ecosystem for ecosystems? There is a Slack group for local group organizers, and a local group directory at EA Hub. Similarly, it would be nice to have a dedicated chat somewhere for ecosystem organizers, and a public directory somewhere.

CEA has said that they are currently not focusing on supporting this type of project (source: private conversation). So if someone wants to set it up, just go for it! And let me know if I can help.

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-09T13:41:18.271Z · EA · GW

That's surprisingly short, which is great by the way. 

I think most grants are not like this. That is, you can usually increase your chance of funding by spending a lot of time polishing an application, which leads to a sort of arms race among applicants where more and more time is wasted on polishing applications.

I'm happy to hear that the LTFF does not reward such behavior. On the other hand, the same dynamic will still happen as long as people don't know that more polish will not help.

You can probably save a lot of time on the side of the applicants by:

  • Stating how much time you recommend people spend on the application
  • Sharing some examples of successful applications (with the permission of the applicants) to show others what level and style of writing to aim for.

I understand that no one application will be perfectly representative, but even just one example would still help, and several examples would help even more. Preferably the examples would be examples of good-enough rather than optimal writing, assuming that you want people to be satisficers rather than maximizers with regard to application writing quality.

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T13:25:14.215Z · EA · GW

What do you think is a reasonable amount of time to spend on an application to the LTFF?

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T13:19:45.080Z · EA · GW

What percentage of people who apply for a transition grant from something else to AI Safety get approved? Anything you want to add to put this number in context?

What percentage of people who apply for funding for independent AI Safety research get approved? Anything you want to add to put this number in context?

For example, if there is a clear category of people who don't get funding because they clearly want to do something different than saving the long-term future, then this would be useful contextual information.

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T12:02:50.739Z · EA · GW

I want to see a compelling case that there's not an organisation that would be a good home for the applicant.


My impression is that it is not possible for everyone who wants to help with the long term to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there.

Do you disagree with this?

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T11:52:40.523Z · EA · GW

What do you mean by "There haven't previously been many options available"? What is stopping you from just giving people money? Why do you need an institute as an intermediary?

Comment by Linda Linsefors on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-30T19:08:55.384Z · EA · GW

What type of funding opportunities related to AI Safety would OpenPhil want to see more of?

Anything else you can tell me about the funding situation with regard to AI Safety would also be appreciated. I'm very confused about why more people and projects don't get funded. Is it because there is not enough money, or is there some bottleneck related to evaluation and/or trust?

Comment by Linda Linsefors on Ethical offsetting is antithetical to EA · 2021-01-21T10:12:19.567Z · EA · GW

Edit: I posted before reading others' comments. Others have already made this and similar points.

Here is a story of how ethical offsetting can be effective.

I was trying to decide if I should fly or go by train. Flying is much faster and slightly cheaper, but the train is much more environmentally friendly. Without the option of environmental offsetting, I have no idea how to compare these values, i.e. [my time and money] vs. [direct environmental effect of flying].

What I did was calculate what offsetting would cost, and it turned out to be around one USD, so basically nothing. I could now conclude that:

Flying + offsetting > Going by train

Because I would save time, and I could easily afford to offset more than the harm I would do by flying, and still pay less in total.

Now, since I'm an EA, I could also do the next step:

Flying + donating to the most effective thing > Flying + offsetting > Going by train.

But I needed at least the idea of offsetting to simplify the calculation to something I could manage myself in an afternoon. In the first step I compare things that are similar enough that the comparison is mostly straightforward. The second step is actually super complicated, but it's the sort of thing EAs have been doing for years, so for this I can fall back on others.

But I'm not sure how I would have done the direct comparison between [flying + donating] vs. [going by train]. I'm sure it's doable somehow, but with the middle step it was so much easier.
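
To make the two-step comparison concrete, here is a minimal sketch of the calculation in Python. All the numbers (ticket prices, travel times, the per-hour value of time, and the ~1 USD offset cost) are hypothetical placeholders, not the actual figures from my trip.

    # Minimal sketch of the flight-vs-train comparison via offsetting.
    # All numbers are hypothetical placeholders.

    VALUE_OF_TIME_USD_PER_HOUR = 15.0  # hypothetical value of an hour of my time

    def total_cost(ticket_usd, hours, offset_usd=0.0):
        # Monetized cost of a trip: ticket + offset + time spent traveling.
        return ticket_usd + offset_usd + hours * VALUE_OF_TIME_USD_PER_HOUR

    # Flying: faster, slightly cheaper, plus ~1 USD to offset the emissions.
    fly = total_cost(ticket_usd=60.0, hours=2.0, offset_usd=1.0)
    # Train: slower, slightly pricier, already low-emission (no offset needed).
    train = total_cost(ticket_usd=70.0, hours=10.0)

    # If fly < train in monetized cost, then flying + offsetting beats the
    # train, since the offset covers (at least) the direct environmental harm.
    print(f"fly + offset: {fly:.0f}, train: {train:.0f} (USD-equivalent)")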

Comment by Linda Linsefors on Open and Welcome Thread: January 2021 · 2021-01-18T16:35:42.790Z · EA · GW

Hi Guy
I'd be happy to talk to you. I'm a co-founder of AI Safety Support, a new organization dedicated to helping people who want to help with AI Safety.

I'd like to see how we can help you, and to learn from you how we can better support people in your situation. Please reach out by email, or book a call, or both.

Comment by Linda Linsefors on Open and Welcome Thread: January 2021 · 2021-01-18T16:31:39.256Z · EA · GW

AI Safety Support is running an AI Safety Career Bottlenecks survey.

Please help us spread it around. 
We want responses from anyone who is currently doing AI Safety work, or who would like to do so in the future.

It only takes 5-20 minutes to answer (these are empirical numbers).
https://www.guidedtrack.com/programs/n8cydtu/run

Comment by Linda Linsefors on 2018-19 Donor Lottery Report, pt. 2 · 2020-12-22T03:22:10.386Z · EA · GW

Funding proposal: AI Safety Support 

Our goal is to enable aspiring AI Safety researchers to do the things they are trying to achieve. We provide operational and community support to early career and transitioning researchers to fill gaps in the AI Safety career pipeline. (For more info, see this blogpost)

Suggested donation: Anything in the range $30k - $60k. 
We would not turn away smaller amounts, since we are not trying to get fully funded from a single donation anyway. But you suggested $30k as a lower limit.

Regarding "Relative opinions", I'm happy to discuss that in a private, if you want. 

Edit: I don't think this reasoning applies to us. Though I'm happy to talk anyway.

Comment by Linda Linsefors on 2018-19 Donor Lottery Report, pt. 2 · 2020-12-22T03:04:36.624Z · EA · GW

Here are a number of EA funding requests.

Comment by Linda Linsefors on What are some potential coordination failures in our community? · 2020-12-19T03:00:49.341Z · EA · GW

An aspect of the funding problem is that money allocation is bad everywhere. (On a larger scale, the market mostly works, but if you get into the details of being a human wanting to trade your time for money, most things around job applications and grant applications are more or less terrible.) If we design a system that doesn't suck, over time EA will attract people who are here for the money, not for the mission.

A solution should have these features:
1) It doesn't suck if you are EA aligned.
2) If you are not EA aligned, it should not be easier to get money from us than from other places. (It is possible to get non-EA-aligned people to do EA-aligned actions. But that requires a very different level of oversight.)

I think a grant lottery, where the barrier to entry is having done some significant amount of EA volunteer work, EA donations, or similar, would be an awesome experiment.
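
For concreteness, here is a minimal sketch of such a grant lottery in Python. The eligibility thresholds, the record fields, and the applicants are all hypothetical assumptions, just to illustrate the barrier-to-entry idea.

    import random

    # Hypothetical records of past EA contributions (the barrier to entry).
    applicants = {
        "alice": {"volunteer_hours": 120, "donations_usd": 0},
        "bob":   {"volunteer_hours": 0,   "donations_usd": 2000},
        "carol": {"volunteer_hours": 5,   "donations_usd": 50},
    }

    def eligible(record, min_hours=100, min_donations_usd=1000):
        # Entry requires a significant amount of volunteer work OR donations.
        return (record["volunteer_hours"] >= min_hours
                or record["donations_usd"] >= min_donations_usd)

    pool = [name for name, record in applicants.items() if eligible(record)]
    winner = random.choice(pool)  # everyone past the barrier gets one equal ticket
    print(f"This round's grant goes to: {winner}")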

Comment by Linda Linsefors on What are some potential coordination failures in our community? · 2020-12-13T22:02:02.002Z · EA · GW

Funding is a mess. 

Distributing money is hard, and we should not expect to have a good solution anytime soon. But it would be helpful if people were aware of how inadequate our current funding ecosystem is. Even though money supposedly exists, funding is still the main bottleneck for most new EA initiatives.

My current analysis is that grant evaluation is hard because it is inherently low bandwidth. I would therefore recommend that people donate through their own personal networks rather than giving to one of the EA Funds. I'd also expect that we'll see a greater and healthier diversity of projects this way.

I know the argument for having centralized funding. We pool all the money and all the applications in one place, and then let some trusted people sort it out. In theory this both saves time and optimizes money distribution. But in practice it has a lot of problems. It's slow, it's low bandwidth, and the biases of a few will affect everyone.

I've personally lost a lot of time to grant agencies. Waiting for answers that were late. Or waiting for a promised application opening that was canceled. If you have not experienced these things yourself, it's hard for me to describe how much it can mess up everything. And that's just one of the problems.

Dealing with individual funders has been sooooo much easier, and just overall a much nicer and more supportive experience. 

I have a lot more to say about this, but I have not found the best way to express it yet. But feel free to reach out for more of my thoughts. 

(An alternative hypothesis is that EA is cash constrained, i.e. the bottleneck is not around distributing the money; there is simply not enough of it. In that case we should upgrade the importance of earning to give.)

Comment by Linda Linsefors on What are some potential coordination failures in our community? · 2020-12-13T21:08:22.215Z · EA · GW

EA Hub has evolved a lot since I last had a look. I was going to complain that it has limited usefulness since you can only search based on location and not interests and expertise, but that is no longer true. This is great!

Comment by Linda Linsefors on Where are you donating in 2020 and why? · 2020-12-08T13:57:08.988Z · EA · GW

My friend's wrist was hurting from clicking, so we tried getting a second mouse, which we taped to the floor as a foot pedal. Now he moves the cursor with his hand and clicks with his foot. It works surprisingly well.

Comment by Linda Linsefors on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-07T13:19:13.277Z · EA · GW

I am in favour of people posting requests, including for money. Even if these posts are not of interest to most readers, I think they can be of great value when read by the right person, but the chances of that go down dramatically if the posts are not on the front page.

On the other hand, we don't want the front page to be filled up with various requests. It takes up space, and also doesn't look very good. But I do think there is a simple win-win here.

Create a top-level post called something like "Requests for funding and other favours", where people can leave their requests as comments. This will only take up a single line on the front page, and it will be more accessible for the people who are looking to donate.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-10T02:11:06.043Z · EA · GW

Then maybe all these people should gang up and start a new hub, literally anywhere else. Funding problem mostly solved.

If people are not seriously trying this, then it's hard for me to take seriously any claims of lack of funding. But as I said, I might be missing something. If so, please tell me.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-08T22:14:30.908Z · EA · GW

You are correct that people in the Bay can find out about projects in other places. The projects I know about are also not in the same location as me. I don't expect being in the Bay gives an advantage for finding out about projects in other places, but I could be wrong.

When it comes to projects in the Bay, I would not expect people who lack funding to be there in the first place, given that it is ridiculously expensive. But I might be missing something? I have not investigated the details, since I'm not allowed to just move there myself, even if I could afford it. (Visa reasons; I'm Swedish.)

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-06T01:43:58.018Z · EA · GW
Looking for more projects like these

AI Safety Support is looking for both funding and fiscal sponsorship. We have two donation pledges which are conditional on the donations being tax-deductible (one from Canada and one from the US). But even if we solve that, we still have a bit more room for funding.

The money will primarily be used for salaries for me and JJ Hepburn.

AI Safety Support's mission is to help aspiring and early-career AI Safety researchers in any way we can. There are currently lots of people who want to help with this problem but who don't have the social and institutional support of organisations and people around them.

We are currently running monthly online AI Safety discussion days, where people can share and discuss their research ideas, independent of their location. These events are intended as a complement to the Alignment Forum and other written forms of publication. We believe that live conversations are a better way to share early-stage ideas, and that blogposts and papers come later in the process.

We also have other projects in the pipeline, e.g. our AI Safety career bottleneck survey. However, these things are currently on hold until we've secured enough funding to know we will be able to keep going for at least one year (to start with).

AI Safety Support has only existed since May, but both of us have a track record of organising similar events in the past, e.g. AI Safety Camps.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-05T10:35:44.884Z · EA · GW
I have come to believe that living and working in the EA/Rationality community in the Bay Area made it much more likely I would hear about attractive opportunities that weren't yet funded by larger donors

I am sceptical about this. There are *lots* of non-Bay-area projects, and my impression (low confidence) is that it is harder for us to get funding. This is because even the official funding runs mostly on contacts, so it too mostly funds stuff in the hubs.

I know of two EA projects (not including my own) which I think should be funded, and I live in Sweden.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-05T09:54:30.071Z · EA · GW

Registering predictions:

1) You will hear about 10-50 EA projects looking for funding, over the next 2 months (80%).

2) >70% of these projects will not be registered tax-deductible charities (but might be able to get fiscal sponsorship). (80%)


Becoming a registered charity is a lot of work. It would be interesting for someone to look into when it is and isn't worth the time investment.

Comment by Linda Linsefors on Should you do a PhD? · 2020-07-26T22:19:17.494Z · EA · GW

I did some googling.

In the UK there are four ways to get a PhD (according to this website) and only one of them is the traditional PhD program.

Here is a discussion on independent PhDs. People are disagreeing on whether it is possible to do a PhD without a supervisor, pointing towards different practices in different countries.

Several people claim that "the PhD process is about learning, not just publishing", but my impression is that this is a very modern idea. A PhD used to be about proving your capability, not monitoring your learning process.

Comment by Linda Linsefors on Should you do a PhD? · 2020-07-26T22:07:21.725Z · EA · GW
I have noticed based on my search that nearly 60% of research roles in think-tanks in Europe have PhDs.

So almost half of them don't. If you want a job at one of those think tanks, I would strongly recommend that you just go straight for that.

If you want to do research, then do the research you want to do. If the research you want to do mainly happens at companies or think tanks, but not really in academia, go for the company or think tank.

There are other ways of getting a PhD degree that do not involve enrolling in a PhD program. In many countries, the only thing that actually matters for getting the degree is to write and defend a PhD thesis, which should contain original research done by you. For example, if you just keep publishing in academic journals until your body of work is about the same as can be expected during a PhD (or maybe some more, to be on the safe side), you can put it all in a book, approach a university, and ask to defend your work.

This may be different in different countries. But universities mostly accept foreign students, so if you can't defend your independent thesis at home, go somewhere else.

Comment by Linda Linsefors on Should you do a PhD? · 2020-07-26T01:01:45.542Z · EA · GW

Some of the questions on the checklist I would endorse more as guidelines or warning signs than as strict rules.

Is there a substantial amount of literature in your field?
Was there a major discovery in the field in recent years?

Both of those questions measure how much you can learn from others in academia. If you can't take advantage of colleagues, then going into academia at all (even if you don't intend to stay) will be lower value. So you might be more productive elsewhere.

The first one also says something about how easy/hard it will be to publish and generally get recognised. If you do something non-established, you will have a much harder time.

But there are two main reasons you might want to step into academia anyway.

1) To influence other academics. (I think this is the main reason FLI chooses to be an academic institution.)

2) To get paid. (In cases where there are no other options.)

Do you want a career in academia?
Is there a better option for prospective PhD students who want a career in research outside of academia?

Lots of places outside academia do research: companies, non-profits, think tanks, independent AI Safety researchers with Long Term Future Fund grants.

Which is the better option depends on what research you want to do. The more abstract it is, the more likely academia is a good choice; the more concrete, the more likely it is not. E.g. charity evaluation is a type of research that I don't think would do well in academia (though this is not my field at all, so I might be wrong).

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-06-26T20:29:55.271Z · EA · GW

Sort of, and it might take some time. The short of it is that I'm less enthusiastic about impact purchases now.

I want some sort of funding system that is flexible, and I think the best way to do this is to sponsor people, not projects. If someone has, through their past work, shown competence and good judgement, I think they should be given a salary and the freedom to do what they think is best.

I thought the way to achieve this was impact purchases, but as someone pointed out in a comment, this makes for a very economically uncertain situation for the people living this way, which causes stress and short-sightedness, which is not the best.

When I wrote this post, I assumed that I needed to have a plan to get a grant in the current system. But after talking to one of the fund managers of the Long Term Future Fund, I found out that it is possible to get a grant by simply producing a track record and some vague plan to do more of the same. I've decided to try this out for myself. I'm waiting for an answer from the Long Term Future Fund, and plan to write some update after I know how that goes.

If I get the grant, this would prove that it is at least possible to get funding without a clear plan. If I get rejected, the conclusions I draw from that depend on what feedback I get with my rejection. Either way, I decided to wait and see how the grant application goes before writing the follow-up.

Comment by Linda Linsefors on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-13T21:14:35.531Z · EA · GW

AI Safety Career Circle

Putting this suggestion out there, because there are always people looking for AI Safety career advice, and this is a tried and tested format.

First round, everyone shares their career plans (or lack of plans).

Second round, everyone who wants to shares career advice that they think might be helpful for others in the circle.

It must be a late session if you want me to lead it.

Comment by Linda Linsefors on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-13T18:30:14.980Z · EA · GW

I want to listen to this podcast!

Comment by Linda Linsefors on I Want To Do Good - an EA puppet mini-musical! · 2020-05-25T14:23:01.868Z · EA · GW

Watching it yet again, I think it would feel more right if the guy were not so easily convinced, and instead it ended with him being "hm, that sounds promising, I'm going to learn some more".

Both puppets really felt like real people with actual personality to me, up until t=1:57. But then the guy just completely changes his mind, which broke my suspension of disbelief. I think that's the point where it mostly started to sound like "yet another commercial".

Comment by Linda Linsefors on I Want To Do Good - an EA puppet mini-musical! · 2020-05-25T14:08:56.323Z · EA · GW

The format of the video is basically "Do you worry about these things? Then we have the solution", integrated with some back and forth that I really like.

"Do you worry about these things, then we have the solution." is a standard panther in commercials, for a good reason. I think this is a good panther also for selling idea ideas like EA. But it also means that you can just say you understand my concerns and that you have solutions, you have to give me some evidence, or else is is just another empty commercial.

The person singing about their doubts felt relatable, in that they brought up real concerns about charity that I could imagine having before EA. I don't remember exactly, but these seemed like standard and very reasonable concerns. And I got the impression that you (the video maker) really understand "my" (the viewer's) worries about giving to charity.

But when you were singing about the solutions, it fell a bit short. I don't think this video would win the trust of an alternative Linda, or convince her that your suggested charities are actually better. I think it would help to put in some argument for why to focus on treatable diseases, and how to lift the barriers you mention.

Every charity says they are special, so just saying it doesn't count for much. But if you give me some arguments that I can understand for why your way is better, then that is evidence that you're onto something, and I might go and check it out some more.

******

All that said, I re-watched the video, and I like it even more now. The energy and the mood shifts are amazing.

On re-watching, I also feel that a viewer should be able to easily figure out the connection between focusing on diseases and avoiding building dependency. But I remember that the first time I watched it, it felt like there was a major missing link there. I think it is because now, when I know what they will say, I have some more time to reflect and make those connections myself.

But people seeing this on the internet might only watch once, so...

Comment by Linda Linsefors on I Want To Do Good - an EA puppet mini-musical! · 2020-05-21T21:09:56.418Z · EA · GW

I very much enjoyed the video. But I don't think it would have been able to change my mind in some alternative reality where I didn't already know about EA.

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-25T16:54:15.721Z · EA · GW

Some more additions:

I) I found out what happened to impactpurchase.org

Paul Christiano (from a private email, with permission to quote):

Basically just a lack of time, and a desire to focus on my core projects. I'd be supportive of other people making impact purchases or similar efforts work, I hope our foray into the space doesn't discourage anyone.

II) Justin Shovelain told me (and gave me permission to share this information) that he would probably have focused more on Coronavirus stuff early on, if he had thought there was a way to get paid for this work.

This is another type of situation where grants are too slow.

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-25T16:16:04.606Z · EA · GW

Update:

I have changed my mind quite a bit since writing this blogpost. The updates come from the discussions with you in the comments, so thanks to everyone discussing with me.

Everything in this comment is still work in progress. I'll write something more formal and well thought through later, when I have a more stable opinion. But my views have already changed enough that I wanted to add this update.

------------------------------------------------

What I actually want is some sort of trust-based funding. If I have proven myself enough (e.g. by doing good work) then I get money, no questions asked. The reason I want this is flexibility (see main post).

Giving away money = Giving away power

Impact purchases have the neat structure that if I have done X amount of good, I get X amount worth of trust (i.e. money). This seems to be exactly the right amount, because it is the most you can give away and still be protected from exploitation. If someone who is not aligned with the goals of the funder tries to use impact purchases as a money pump, they still have to do an amount of good equal to the payout they want.

But...

Khorton:

A project to project lifestyle doesn't seem conducive to focusing on impact.

We actually know this from another field. In most of academia, the law of the land is publish or perish. Someone living off impact purchases will face a similar situation, and it is not good, at least not in the long run.

Halffull

I think the high impact projects are often very risky, and will most likely have low impact.

To the extent that this is true, impact purchase will not work.

In theory we could have impact investors, who fund risky projects and earn money by selling the impact of the few projects whose impact reached the stars (literally and/or figuratively). But this requires another layer which may or may not happen in reality (probably won't happen). Also, from the perspective of the applicant, how is this any different from applying for a grant? So what have we gained?

If not impact purchase, then what?

I still would like to solve the problem of inflexibility that grants have. And actually I think the solutions already exist (to some extent).

1) Get a paid job, with high autonomy.

2) Start an organisation and fundraise. I did not think of this until now, but when orgs fundraise, they typically don't present a plan for what they will do with the money. They mainly point towards what they have done so far, and ask for continued trust.

3) ...? I'd be very interested in other suggestions. I would not be surprised if there are other obvious things I have missed.

There are also other solutions that don't exist yet (or not very much) in EA, but could be implemented by any institution or person with spare money:

a) "Trusted person"-job: A generic employment you offer to anyone who you like to keep up the good work, or something like that.

b) Support people on Ko-fi or Patreon, or similar, and generally encourage this behaviour from others too. (I know this is happening already, but not enough for people to make a living.)

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-25T14:32:32.879Z · EA · GW

I'm ok with hits-based impact. I just disagree about events.

I think you are correct about this for some work, but not for other work. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those who are served.

Events that are focused on sharing information and networking fall in this category. People in a small field will get to know each other and each other's work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.

But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation.

I notice that I'm less interested in doing those types of events. This is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking at.

Thanks for providing the links, I should read them.

(Of course, everything relating to X-risk is all or nothing in terms of impact, but we can't measure and reward that until it does not matter anyway. Therefore, in terms of AI Safety I would measure success in terms of research output, which can be shifted incrementally.)

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-22T19:22:42.856Z · EA · GW

It can't take more than ~50 events for every AI Safety researcher to get to know each other.

And key ideas are not seeded at a single point in time; they are something that comes together from lots of reading and talking.

There is not *the one event* that made the difference while all the others were practically useless. That's not how research works. Sure, there is randomness, and some meetings are more important than others.

But if it took on average 50 000 events for one such key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone's time.

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-22T19:08:42.388Z · EA · GW

Wait, what?

100 000 AI Safety Events?

Like 100 000 individual events?

There is a typo here, right?

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-22T19:05:31.836Z · EA · GW

I'm confused by this response. I answered all of this in the blogpost. Did I fail to communicate? I am not saying that you have to agree, but if you read what I wrote and still don't understand why *I* think paying after a project is sometimes a good idea, that is confusing to me, and I would like to understand better what part of the blogpost you found confusing.

Comment by Linda Linsefors on Making Impact Purchases Viable · 2020-04-20T14:46:53.947Z · EA · GW
I guess this would be a key point where we differ. I haven't thought deeply about this, but my intuition would be that adjustments would greatly improve impact. For example, a small project extremely competently implemented and a big project poorly implemented might have the exact same impact, but the former would be a stronger signal.

In this case, the competent person can just do more great small projects and get more money.

Comment by Linda Linsefors on Making Impact Purchases Viable · 2020-04-20T14:39:56.166Z · EA · GW

I did get help from my parents later, so if that was their assumption, they were not wrong. But I did not know this at the time, and when I asked why I could not get funding, I got answers of the type "that is not how things are done", which made no sense to me.

It is possible that not funding me back then was the right decision for the right reason. But since I was not told the reason, the experience was very discouraging and antagonising for me. That's why transparency is important!

(I'm not really blaming anyone. I think that the people I was talking to did not have explicit knowledge of the reasons, and were therefore not even able to answer me. But I think we can do better.)

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-19T16:46:55.030Z · EA · GW
I'm not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?

I'm unsure what size you have in mind when you say small.

I don't think small monetary rewards (~£10) are very useful for anything (unless lots of people are giving small amounts, or I do lots of small things that add up to something that matters).

I also don't think small-impact projects should be encouraged. If we respect people's time and effort, we should encourage them to drop small-impact projects and move on to bigger and better things.

I think the high impact projects are often very risky, and will most likely have low impact.

If you think that the projects with the highest expected impact also typically have a low success rate, then standard impact purchase is probably not a good idea. Under this hypothesis, what you want to do is reward people for expected success rather than actual success.

I talk about success rather than impact, because for most projects, you'll never know the actual impact. By "success" I mean your best estimate of the project's impact, from what you can tell after the project is over. (I really meant success, not impact, from the start; I probably should have clarified that somehow?)

I'd say that for most events, success is fairly predictable, and more so with more experience as an organiser. If I keep doing events, the randomness will even out. Would you say that events are low impact? Would you say events are worth funding?

Can you give an example of the type of high impact project you have in mind? How does your statement about risk change if we are talking about success instead?

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-19T16:01:06.864Z · EA · GW
But on the other hand, refusing to pay someone who's good idea didn't work out and 'have impact' for no fault of their own also seems exploitative!

Letting the person running the project take all the risk might not be optimal, but I would also say it is not exploitative, as long as they know this from the start.

I'm not yet sure if I think the amount of money should be 100% based on actual impact, or if we also want to reward people for projects that had high expected impact but low actual impact. The main argument for focusing on actual impact is that it is more objective.

I think people who are using this type of work as a living should get paid a salary with benefits and severance. A project to project lifestyle doesn't seem conducive to focusing on impact.

Um, I was going to argue with this. But actually I think you are right.

Something like: "We like what you have done so far, so we will hire you to keep doing good things based on your own best judgment."

Comment by Linda Linsefors on Making Impact Purchases Viable · 2020-04-19T14:00:45.183Z · EA · GW
Regardless of whether you use an impact purchase or provide funding normally, there's always a chance that the project would have taken place regardless. However, impact purchases greatly increase this problem since the person already did the project without knowing whether or not they would be funded.

I agree. This is not a crux for me.

Another argument is that people who had an impact in the past are likely to spend the money to have an impact in the future. This might be the case, but if this is the primary vehicle through which these purchases have an impact, it might be worthwhile crafting a mechanism that more narrowly focuses on this.

I currently think "people who had an impact in the past are likely to spend the money to have an impact in the future" is the main argument for impact purchases. It is possible that the impact purchase is not the optimal format; I am still thinking about this. But I think it is important that "look at all the stuff I did in the past" is *enough* to get funded, with no explanation needed of what I will do next, because requiring that is too inflexible (see my post).

And if we are going to trust people and hand them money based on what they did in the past, it makes a lot of sense to me to trust them to the extent of how much good they have done in the past. We want to give our trust and money to people who are competent (can successfully complete their plans) and have good judgement (have a good idea of which projects are potentially very important). Impact tracks both those metrics.

When making a guess, one should start with the outside view, and then adjust from there. In most cases, the best outside view for what a person will do in the future is what they have done in the past. Then maybe we want to do some adjustment from there?

If a person seems very reckless, maybe don't fund them. Or if you think an outcome was mostly bad luck, fund them more than a straight impact purchase would suggest. But in most cases I would suggest a straight-up impact purchase, because anything else is really hard and you'll probably get the adjustments wrong.

Another issue with impact purchases is the potential to harm relations between people, lead to bitterness or to demotivate people.

This is already happening. People do start things in the hope of getting funding later along the way. And it is not just about projects. There is a reason posts complaining about how EA treats people get heavily upvoted.

I think everything would be much better if we stopped worrying so much and started treating people as adults.

If a person does something that is good, but not good enough to be worth paying for, then this means that EA would rather keep the money than have this work. This means that if this person wants to maximise impact, they should find something better to do, or up-skill, or switch to earning to give. Under most circumstances they should not keep doing that thing, so we should not encourage it.

(People are of course allowed to do things that are less than optimal, if this is what they want. I am very much in favour of people doing what they want. But we should not pretend that what they do is more important than it is, just for encouragement.)

I went to my first EAG in 2016. Earlier that year I had finished my physics PhD and found out about AI Safety. At EAG almost everyone I talked to encouraged me to retrain to do AI Safety, and I felt super motivated. But I was running out of savings, so I asked for some money to take me through the transition, and most people just thought that I was weird for asking. That was demotivating as hell. These people were encouraging me to use my last savings to retrain for a risky career, but putting in their own money was out of the question. This told me that they considered spending my time and my resources to be costless. I was seen as a tool to be used, not an ally.

(Things have gotten better for me since then, but this is still a painful memory. And to be honest, I am still a bit bitter about it.)

People are not dumb. If encouragement is not backed up by action or money, they will notice.

If there is not enough money to go around to pay reasonable salaries for everyone who does important work, then this means that we are money constrained, which means that we would be better off if some of these people switched to earning to give. If this is the case, we as a community should be upfront about this. People will understand and adjust.

The way to not antagonise people is to be upfront about everything.

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-04-18T18:38:48.654Z · EA · GW

In this situation I would think you evaluated my project as "small impact", which is possibly useful information, depending on how reliable I think your evaluation is. If I trust your judgement, this would obviously be discouraging, since I thought it was much more impressive. But in the end I would rather be right than proud, so that I can make sure to do better things in the future.

How I react would also depend on whether your £10 is all I get, or if I get £10 each from lots of people, because that could potentially add up, maybe?

What it mainly comes down to in the end is: do I get paid enough to sustainably afford to do this work, or do I need to focus my effort on getting a paid job instead?

If you are a funder, and you think what I'm doing is good, but not good enough to pay me a liveable wage, then I'd much prefer that you don't try to encourage me, but instead just be upfront about this. Encouraging people to keep up an unsustainable work situation is exploitative and will backfire in the long run.