Posts

AI Safety Career Bottlenecks Survey Responses Responses 2021-05-28T10:41:37.166Z
Announcing AI Safety Support 2020-11-19T20:19:58.031Z
Should you do a PhD? 2020-07-24T10:15:29.420Z
The Case for Impact Purchase | Part 1 2020-04-14T13:08:48.664Z
Announcing Web-TAISU, May 13-17 2020-04-05T22:26:14.186Z
What is the funding situation for AI Safety? 2020-03-21T13:38:29.687Z
Coronavirus Tech Handbook 2020-03-11T14:44:00.478Z
TAISU - Technical AI Safety Unconference 2020-02-04T18:26:37.057Z
Two AI Safety events at EA Hotel in August 2019-05-21T18:57:00.683Z

Comments

Comment by Linda Linsefors on Impact Markets: The Annoying Details · 2022-08-18T15:34:27.618Z · EA · GW

I would very much want there to be a "money after the project" funding system for smaller projects in EA. 

https://forum.effectivealtruism.org/posts/7iptwuSyzDzxsEY5z/the-case-for-impact-purchase-or-part-1

Although, after I wrote this post, I updated towards thinking it is not a good idea for anyone to get their income from this system long term. But I still think it would be a good alternative to other funding systems for new EAs. Retrospective funding has definite disadvantages, but for someone without enough reputation, it may be better than the available alternatives.

Comment by Linda Linsefors on Impact Markets: The Annoying Details · 2022-07-20T11:45:19.979Z · EA · GW

Someone who is excited about impact markets should do Goal Factoring on their preferred version of impact markets.

Comment by Linda Linsefors on Impact Markets: The Annoying Details · 2022-07-20T11:37:59.975Z · EA · GW

What do we want from impact markets?

Mostly, we want the prediction power of markets.

Do we want the market to take on some of the risk? Probably not. Most current funds are ok with risk, as long as the odds are net positive. Scott seems to think that transferring risk to investors is a bad externality of impact markets.

This is bad both because we don’t want people to lose all their money, and because this might create moral hazard on the part of final oracular funders to recoup some of people’s losses if they seem like an especially pitiful case.

I have not thought a lot about this, but I think I agree with Scott here.


So what we want is a way of tracking and rewarding good predictions. But we already have a solution for that: prediction markets. Granted, there are still design details to be worked out for prediction markets, but:

  1. It seems like an easier problem
  2. There are already several experiments to learn from

 

In a prediction market, good predictors provide information and get rewarded for it. Funders can use the prediction market to guide what they fund. If they choose to fund something risky, some of the risk falls on the predictors (we can't remove this completely, and I don't think we want to), but most of the risk falls on the funder, which I think is what we want.

This also solves some other problems. 

  1. Funders can just not fund things with obvious large downside risk.
  2. Investors can't capture almost all the value just by being fast at buying obvious opportunities.

Although I don't think 2 will be a big problem even in an impact market. If an idea is obviously good, the people who want to run the project can simply skip the market and reach out to a funder directly.

 

predictor = person who invests in the prediction market
investor  = person who invests in the impact market
funder = person or institution who wants to fund altruism, with no expectation of financial return.
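A minimal sketch of how these three roles interact, assuming a simple binary market where a YES share pays out 1 if the project succeeds. All the numbers (price, stake, impact valuation, cost) are hypothetical:

```python
# Minimal sketch of a binary prediction market on "project X succeeds".
# All numbers are hypothetical; a YES share pays 1 if the event happens, else 0.

def predictor_pnl(stake: float, price: float, outcome: bool) -> float:
    """Profit or loss for a predictor who buys YES shares at `price` (0..1)."""
    shares = stake / price
    payout = shares if outcome else 0.0
    return payout - stake

# The funder reads the market price as a probability estimate and funds
# whenever expected impact exceeds cost; the funder bears the project cost.
price = 0.30                        # market-implied chance the project succeeds
expected_impact_if_success = 100.0  # funder's own valuation, hypothetical units
cost = 20.0
fund_it = price * expected_impact_if_success > cost  # 30 > 20 -> fund it

# A predictor only ever risks their own stake:
print(predictor_pnl(stake=5.0, price=price, outcome=False))  # -5.0
print(predictor_pnl(stake=5.0, price=price, outcome=True))   # ~11.67
```

The point of the sketch: the predictor's downside is capped at their stake, while the project's cost, and so most of the risk, stays with the funder.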

 

Am I missing something that we want from an impact market that we can't get from a prediction market?

(I've written about impact purchase before. However, the thing I want out of such a system is completely removed from Scott's suggestion, and vice versa, so it should probably not count as the same type of system.)

Comment by Linda Linsefors on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-27T13:25:16.501Z · EA · GW

Great initiative!

I'm registering these predictions:

  • They'll get more than 1 000 responses -- 90% chance
  • They'll get more than 10 000 responses -- 40% chance

Comment by Linda Linsefors on The availability bias in job hunting · 2022-05-02T18:26:09.948Z · EA · GW

The 80k podcast does this (identifying more than rewarding, but still). But I agree that more would be good.

Comment by Linda Linsefors on Introducing Canopy Retreats · 2022-04-25T22:15:55.366Z · EA · GW

Thanks :)

Comment by Linda Linsefors on Introducing Canopy Retreats · 2022-04-25T09:52:48.952Z · EA · GW

From your website:

Canopy Retreats aids EA orgs and community members in the planning and running of mid-sized, multi-day retreats.

What size is "mid-sized"?

Comment by Linda Linsefors on The Vultures Are Circling · 2022-04-18T15:39:07.576Z · EA · GW
  • In general, being a good organizer isn’t even something that seems to get you much clout in this community, see other post today about this (i haven’t read it yet)

 

Which post is this?

Comment by Linda Linsefors on Issues with centralised grantmaking · 2022-04-14T21:36:56.244Z · EA · GW

I agree that grantmaking is hard! 

There are gaps in the system exactly because grantmaking is hard.

No, this is not about grantmaking skill, or at least not directly. But skill in relation to the difficulty of the task is very relevant. Neither is it about fairness. Slowing down to worry about fairness within EA seems dumb.

This is about not spreading harmful, misleading information to applicants, and to other potential donors who are considering whether or not to make their own donation decisions.

I'm mostly just trying to say: can we please acknowledge that the system is not perfect? How do I say this without anyone feeling attacked?

Getting rejected hurts. If you tell everyone that EA has heaps of money and that the grantmakers are perfect, then it hurts about 100x more. This is a real cost. EA is losing members because of this, and almost no one talks about it. But it would not be so bad if we could just agree that grantmaking is hard, and that grantmakers therefore sometimes make mistakes.

https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection

My current understanding is that the biggest difficulty in grantmaking is the information bandwidth. The text in the application is usually not nearly enough information, which is why grantmakers rely on other channels of information. This information is necessarily biased by their network: mainly, it is much easier to get funded if you know the right people. This is all fine! I want grantmakers to use all the information they can, even if this causes unfairness. All successful networks rely heavily on personal connections, because it's just more efficient. Personal trust beats formal systems every day. I just wish we could be honest about what is going on.

I don't expect rich people to delegate their funding decisions to unknown people outside their network just for fairness. I don't think that would be a good idea.

But I do want EAs who happen to have some money to give, and who happen to have significantly different networks compared to the super-donors, to be aware of this: to be aware of their comparative advantage in donating within their own network, instead of delegating this away to EA Funds.

What is owed is honesty. That is all.

It's not even the case that the grantmakers themselves exaggerate their own infallibility, at least not explicitly. But others do, which leads to the same problems. This makes it harder to answer "who owes what". Fortunately, I don't care much about blame. I just want to spread more accurate information, because I've seen the harm of the misinformation. That's why I decided to argue against your comment. Leaving those claims unchallenged would add to the problems I tried to explain here.

_____________________

Regarding spelling: I usually try harder. But this topic makes me very angry, so I tried to minimise the time I spent writing this. Sorry about that.

Comment by Linda Linsefors on Should you do a PhD? · 2022-04-13T11:48:51.857Z · EA · GW

I do not recommend going to France if you don't already know some French. I got through my PhD OK in English, and learning enough French to buy food and the like is not hard. But I did not have a social life for over two years, which was terrible, and eventually I left to finish my PhD from Sweden (my home country).

My plan was to learn French when I got there, and I tried. But I'm also slow at languages, and never got good enough to have a real conversation.

I recommend going to an English-speaking country, or to one of the small western European countries (Nordics, Netherlands, etc.) where most people speak good English.

If you decide to go to Grenoble anyway, I can't help you with courses. My PhD program required very few courses, and I think all the ones I took were for PhD students only. The only good one was a one-time course on particle physics given by a German post-doc, who is probably not there anymore.

I don't know much about master's programs in general. I did an undergraduate and master's rolled into one program, which is common in Sweden, so I never had to look for a master's.

I recommend joining this Slack and asking in the applying-for-gradschool channel.

Here's more AI Safety grad school advice.

Comment by Linda Linsefors on Issues with centralised grantmaking · 2022-04-11T16:43:43.256Z · EA · GW

Let's say Charles He starts some meta EA service, let's say an AI consultancy, "123 Fake AI". 

Charles's service is actually pretty bad: he obscures his methods, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.

Charles sort of entrenches, rewards his friends, etc., so any normal individual raising issues is shouted down.

Someone has to kibosh this, and a set of unified grant makers could do this.

 

I don't understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.

In a centralised system, Charles only has to convince the unified grantmakers that he is better in order to stay on top. In a decentralised system he has to convince everyone.

 

As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don't want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the latter. But my experience from the other side (applying, and talking to other applicants) is that the effect is the complete opposite. If you fund a type of project, others will see that this is the type of project that can be funded, and you'll get more similar applications.

Comment by Linda Linsefors on Issues with centralised grantmaking · 2022-04-11T16:26:11.370Z · EA · GW

The chess analogy doesn't work. We don't have grant experts in the same way we have chess experts.

Expertise is created by experience coupled with high-quality feedback. This kind of expertise exists in chess, but not so much in grantmaking. EA grantmaking is not old enough to have experts. This is especially true in longtermist grantmaking, where you don't get true feedback at all but have to rely on proxies.

I'm not saying that there are no differences in relevant skills. Being generally smart and having related knowledge is very useful in areas where no one is an expert. But the level of skill you seem to be claiming is not believable. And if the grantmakers have convinced themselves of that level of superiority, that's evidence of groupthink.

 

Multiple grantmakers with different heuristics will help develop expertise, since this means we can compare different strategies, and sometimes a grantmaker gets to see what happens to projects they rejected that got funded somewhere else.

So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.

I agree, but this doesn't require that there are only a few funders.
 

 

Now we happen to be in a situation where almost all EA money comes from a few rich people. That's just how things are, whether I like it or not. It's their money to distribute as they want. Trying to argue that the EA billionaires should not have the right to direct their donations as they want would be pointless or counterproductive.

Also, I do think that these big donors are awesome people and that the world is better for their generosity. As far as I can see, they are spending their money on very important projects.

But they are not perfect! (This is not an attack!)

I think it would be very bad for EA to spread the idea that the large EA funders are somehow infallible, and that small donors should avoid making their own grant decisions.

Comment by Linda Linsefors on Issues with centralised grantmaking · 2022-04-10T11:47:07.084Z · EA · GW

Yes! Exactly!

If you want a system to counter the unilateralist's curse, then design a system with the goal of countering the unilateralist's curse. Don't rely on an unintended side effect of a coincidental system design.

Comment by Linda Linsefors on Issues with centralised grantmaking · 2022-04-10T11:38:09.135Z · EA · GW

I don't think there is a negative bias against centralised funding in the EA network.

I've discussed funding with quite a few people, and my experience is that EAs like experts and efficiency, which matches well with centralised funding, at least in theory. I had never heard anyone compare it to the USSR or the like before.

Even this post is not against centralised funding. The author is just arguing that any system has blind spots, and that we should have other systems too.

Comment by Linda Linsefors on We are giving $10k as forecasting micro-grants · 2022-02-16T01:18:35.196Z · EA · GW

Rethink can help with charity status https://rethink.charity/fiscal-sponsorship

Comment by Linda Linsefors on Donor Lottery Debrief · 2021-12-18T13:45:17.987Z · EA · GW

I don't know; I totally forgot about this. Tell me if you find out what the outcome is. Unfortunately I will not make it a priority to find out, but I would appreciate knowing.

Comment by Linda Linsefors on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-10-22T20:59:19.135Z · EA · GW

Did you succeed in guiding the values? Did 'evidence-based policy' become part of the neoliberal internet identity?

Comment by Linda Linsefors on Open Philanthropy is seeking proposals for outreach projects · 2021-08-09T17:23:33.221Z · EA · GW

I can't find any deadline.  How long should I expect this opportunity to stay open? 
(I'm not applying myself but I'll probably encourage some other people to do so.)

Comment by Linda Linsefors on EA Survey 2019 Series: Community Information · 2021-06-03T23:33:50.930Z · EA · GW

Thanks for doing this!

When comparing whites and non-whites, did you do anything to control for location?

I noticed non-whites ranked EAG as less important. Could this be because they are more likely to live far away from EAG events?

Or maybe there are so few EAs living in non-white-majority countries that they don't skew the statistics? I.e., non-white EAs in majority-white countries massively outnumber non-white EAs in non-white-majority countries?

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2021-04-26T13:40:45.057Z · EA · GW

That would also give you all the drawbacks of grants; see "Reasons to evaluate a project after it is completed" in the original post.

If you want to give me a living wage without me first having to prove myself in some way, please give me money.

For most people, grants aren't simply "available". There has to be some evidence, which can be provided either by arguing your case (a normal grant application) or by just doing the work. I think many people (including me) would prefer to just do the work and let it speak for itself (for the reasons explained in the original post).

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-03-04T03:15:13.795Z · EA · GW

But I'd love to be proven wrong here.

I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than those founders had. In a way, an org is just a crystallized collaboration of previously independent researchers.

I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment.

However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of a "normal" (not AI Safety focused) PhD program is sort of an independent researcher.
 

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers.

We are working on that. I'm not optimistic about current orgs keeping up with the growth of the field, and I don't think it is healthy for the career path to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-03-04T02:22:05.085Z · EA · GW

I'm not going to lead this, but would be happy to join.

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-02-27T16:07:03.047Z · EA · GW

I've been told a few times that I belong in the group organizers' Slack, but I never actually felt at home there, because I feel like I'm doing something very different from most group organizers.

The main requirement of such a chat is that it attracts other ecosystem organizers, which is a marketing problem more than a logistical problem. There are lots of platforms that would be adequate.

Making a separate ecosystem channel in the group organizers' Slack, and marketing it here, may work (30% chance of success), and since it is low effort, it seems worth a try.

A somewhat higher-effort, but also higher-expected-payoff, option would be to find all the ecosystem organizers, contact them personally, and invite them to a group call. Or invite them to fill in a when2meet to decide when to have said group call.

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-02-12T12:30:26.353Z · EA · GW

Thanks for the much improved source!

Comment by Linda Linsefors on Ecosystems vs Projects in EA Movement Building · 2021-02-09T21:05:34.376Z · EA · GW

We (AI Safety Support) are literally doing all these things

There is no CEA for people working on AI safety, that creates websites, discussion platforms, conferences, connects mentors, surveys members etc.


I don't blame DavidNash for not knowing about us; I did not know about the EA Consultancy Network. So maybe what we need is a meta-ecosystem for ecosystems? There is a Slack group for local group organizers, and a local group directory at EA Hub. Similarly, it would be nice to have a dedicated chat somewhere for ecosystem organizers, and a public directory somewhere.

CEA has said that they are currently not focusing on supporting this type of project (source: private conversation). So if someone wants to set it up, just go for it! And let me know if I can help.

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-09T13:41:18.271Z · EA · GW

That's surprisingly short, which is great by the way. 

I think most grants are not like this. That is, you can increase your chance of funding by spending a lot of time polishing an application, which leads to a sort of arms race among applicants where more and more time is wasted on polishing applications.

I'm happy to hear that the LTFF does not reward such behavior. On the other hand, the same dynamic will still happen as long as people don't know that more polish will not help.

You can probably save a lot of time on the side of the applicants by:

  • Stating how much time you recommend people spend on the application
  • Sharing some examples of successful applications (with the permission of the applicants) to show others what level and style of writing to aim for

I understand that no one application will be perfectly representative, but even just one example would still help, and several examples would help even more. Preferably the examples should be examples of good-enough rather than optimal writing, assuming that you want people to be satisficers rather than maximizers with regard to application-writing quality.

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T13:25:14.215Z · EA · GW

What do you think is a reasonable amount of time to spend on an application to the LTFF?

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T13:19:45.080Z · EA · GW

What percentage of people who apply for a grant to transition from something else to AI Safety get approved? Anything you want to add to put this number in context?

What percentage of people who apply for funding for independent AI Safety research get approved? Anything you want to add to put this number in context?

For example, if there is a clear category of people who don't get funding because they clearly want to do something other than saving the long-term future, then this would be useful contextual information.

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T12:02:50.739Z · EA · GW

I want to see a compelling case that there's not an organisation that would be a good home for the applicant.


My impression is that it is not possible for everyone who wants to help with the long term to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there.

Do you disagree with this?

Comment by Linda Linsefors on Long-Term Future Fund: Ask Us Anything! · 2021-02-04T11:52:40.523Z · EA · GW

What do you mean by "There haven't previously been many options available"? What is stopping you from just giving people money? Why do you need an institute as an intermediary?

Comment by Linda Linsefors on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-30T19:08:55.384Z · EA · GW

What types of funding opportunities related to AI Safety would OpenPhil want to see more of?

Is there anything else you can tell me about the funding situation with regard to AI Safety? I'm very confused about why more people and projects don't get funded. Is it because there is not enough money, or is there some bottleneck related to evaluation and/or trust?

Comment by Linda Linsefors on Ethical offsetting is antithetical to EA · 2021-01-21T10:12:19.567Z · EA · GW

Edit: I posted this before reading the other comments. Others have already made this and similar points.

Here is a story of how ethical offsetting can be effective.

I was trying to decide whether I should fly or go by train. Flying is much faster and slightly cheaper, but the train is much more environmentally friendly. Without the option of an environmental offset, I would have no idea how to compare these values, i.e. [my time and money] vs. [the direct environmental effect of flying].

What I did was to calculate what offsetting would cost, and it turned out to be around one USD, so basically nothing. I could now conclude that:

Flying + offsetting > Going by train

Because I would save time, and I could easily afford to offset more than the harm I would do by flying, and still pay less in total.

Now, since I'm an EA, I could also take the next step:

Flying + donating to the most effective thing > Flying + offsetting > Going by train.

But I needed at least the idea of offsetting to simplify the calculation to something I could manage myself in an afternoon. In the first step I compare things that are similar enough that the comparison is mostly straightforward. The second step is actually super complicated, but it's the sort of thing EAs have been doing for years, so for this I can fall back on others.

But I'm not sure how I would have done the direct comparison between [flying + donating] vs. [going by train]. I'm sure it's doable somehow, but with the middle step it was so much easier.
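A minimal sketch of the two comparison steps, with made-up numbers (only the roughly one-USD offset cost comes from the story above; the ticket prices and hours saved are hypothetical):

```python
# Hypothetical numbers; only the ~$1 offset cost is from the story above.
train_cost = 100.0         # USD, made-up ticket price
flight_cost = 90.0         # USD, made up: flying is slightly cheaper
offset_cost = 1.0          # USD, roughly what offsetting the flight cost
hours_saved_by_flying = 8  # made up: flying is much faster

# Step 1: offsetting neutralises the environmental harm, so flying + offsetting
# can be compared with the train on money and time alone.
fly_total = flight_cost + offset_cost  # 91 USD
assert fly_total < train_cost and hours_saved_by_flying > 0
# Flying + offsetting is both cheaper and faster, so it dominates the train.

# Step 2 (the EA step): redirect the offset dollar to the most effective cause,
# which by assumption does at least as much good as the offset:
# flying + donating >= flying + offsetting > going by train.
```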

Comment by Linda Linsefors on Open and Welcome Thread: January 2021 · 2021-01-18T16:35:42.790Z · EA · GW

Hi Guy,
I'd be happy to talk to you. I'm a co-founder of AI Safety Support, a new organization dedicated to helping people who want to help with AI Safety.

I'd like to see how we can help you, and to learn from you how we can better support people in your situation. Please reach out by email, or book a call, or both.

Comment by Linda Linsefors on Open and Welcome Thread: January 2021 · 2021-01-18T16:31:39.256Z · EA · GW

AI Safety Support is running an AI Safety Careers Bottleneck survey.

Please help us spread it around.
We want responses from anyone who is currently doing AI Safety work, or who would like to do so in the future.

It only takes 5-20 minutes to answer (these are empirical numbers).
https://www.guidedtrack.com/programs/n8cydtu/run

Comment by Linda Linsefors on 2018-19 Donor Lottery Report, pt. 2 · 2020-12-22T03:22:10.386Z · EA · GW

Funding proposal: AI Safety Support 

Our goal is to enable aspiring AI Safety researchers to do the things they are trying to achieve. We provide operational and community support to early career and transitioning researchers to fill gaps in the AI Safety career pipeline. (For more info, see this blogpost)

Suggested donation: anything in the range $30k-$60k.
We would not turn away smaller amounts, since we are not trying to get fully funded from a single donation anyway. But you suggested $30k as a lower limit.

Regarding "Relative opinions", I'm happy to discuss that in a private, if you want. 

Edit: I don't think this reasoning applies to us anyway, though I'm still happy to talk.

Comment by Linda Linsefors on 2018-19 Donor Lottery Report, pt. 2 · 2020-12-22T03:04:36.624Z · EA · GW

Here are a number of EA funding requests.

Comment by Linda Linsefors on What are some potential coordination failures in our community? · 2020-12-19T03:00:49.341Z · EA · GW

An aspect of the funding problem is that money allocation is bad everywhere. (On a larger scale, the market mostly works, but if you get into the details of being a human wanting to trade your time for money, most things around job applications and grant applications are more or less terrible.) If we design a system that doesn't suck, over time EA will attract people who are here for the money, not for the mission.

A solution should have these features:
1) It doesn't suck if you are EA-aligned.
2) If you are not EA-aligned, it should not be easier to get money from us than from other places. (It is possible to get non-EA-aligned people to do EA-aligned actions, but that requires a very different level of oversight.)

I think a grant lottery, where the barrier to entry is having done some significant amount of EA volunteer work, EA donations, or similar, would be an awesome experiment. A sketch of what this could look like follows below.
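A minimal sketch of such a grant lottery (the eligibility thresholds and applicant data are entirely made up for illustration):

```python
import random

# Hypothetical grant lottery with a barrier to entry.
applicants = [
    {"name": "A", "ea_volunteer_hours": 120, "ea_donations_usd": 0},
    {"name": "B", "ea_volunteer_hours": 0,   "ea_donations_usd": 2000},
    {"name": "C", "ea_volunteer_hours": 5,   "ea_donations_usd": 50},
]

def eligible(a: dict) -> bool:
    # Barrier to entry: a significant amount of EA volunteer work or donations.
    # These thresholds are made-up placeholders.
    return a["ea_volunteer_hours"] >= 100 or a["ea_donations_usd"] >= 1000

pool = [a for a in applicants if eligible(a)]  # A and B qualify; C does not
winner = random.choice(pool)  # equal odds among eligible applicants
print(winner["name"], "receives the grant")
```

The entry barrier addresses feature 2 (it is not easier for non-aligned people to get money here than elsewhere), while the cheap, non-competitive application addresses feature 1.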

Comment by Linda Linsefors on What are some potential coordination failures in our community? · 2020-12-13T22:02:02.002Z · EA · GW

Funding is a mess. 

Distributing money is hard, and we should not expect to have a good solution anytime soon. But it would be helpful if people were aware of how inadequate our current funding ecosystem is. Even though money supposedly exists, funding is still the main bottleneck for most new EA initiatives.

My current analysis is that grant evaluation is hard because it is inherently low-bandwidth. I would therefore recommend that people donate through their own personal networks rather than giving to one of the EA Funds. I'd also expect that we'll see a greater and healthier diversity of projects this way.

I know the argument for having centralized funding: we pool all the money and all the applications in one place, and then let some trusted people sort it out. In theory this both saves time and optimizes money distribution. But in practice it has a lot of problems. It's slow, it's low-bandwidth, and the biases of a few will affect everyone.

I've personally lost a lot of time to grant agencies: waiting for answers that were late, or waiting for a promised application opening that was cancelled. If you have not experienced these things yourself, it's hard for me to describe how much they can mess everything up. And that's just one of the problems.

Dealing with individual funders has been sooooo much easier, and just overall a much nicer and more supportive experience. 

I have a lot more to say about this, but I have not found the best way to express it yet. But feel free to reach out for more of my thoughts. 

(An alternative hypothesis is that EA is cash-constrained, i.e. the bottleneck is not distributing the money but there not being enough of it. In that case we should upgrade the importance of earning to give.)

Comment by Linda Linsefors on What are some potential coordination failures in our community? · 2020-12-13T21:08:22.215Z · EA · GW

EA Hub has evolved a lot since the last time I had a look. I was going to complain that it has limited usefulness, since you can only search based on location and not interests or expertise, but that is no longer true. This is great!

Comment by Linda Linsefors on Where are you donating in 2020 and why? · 2020-12-08T13:57:08.988Z · EA · GW

My friend's wrist was hurting from clicking, so we tried getting a second mouse, which we taped to the floor as a foot pedal. Now he moves the cursor with his hand and clicks with his foot. It works surprisingly well.

Comment by Linda Linsefors on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-07T13:19:13.277Z · EA · GW

I am in favour of people posting requests, including for money. Even if these posts are not of interest to most readers, I think they can be of great value when read by the right person, but the chances of that go down dramatically if the posts are not on the front page.

On the other hand, we don't want the front page to be filled up with various requests. It takes up space and doesn't look very good. But I do think there is a simple win-win here.

Create a top-level post called something like "Requests for funding and other favours", where people can leave their requests as comments. This will only take up a single line on the front page, and it will be more accessible to the people who are looking to donate.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-10T02:11:06.043Z · EA · GW

Then maybe all these people should gang up and start a new hub, literally anywhere else. Funding problem mostly solved.

If people are not seriously trying this, then it's hard for me to take seriously any claims of lack of funding. But as I said, I might be missing something. If so, please tell me.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-08T22:14:30.908Z · EA · GW

You are correct that people in the Bay can find out about projects in other places. The projects I know about are also not in the same location as me. I don't expect being in the Bay gives an advantage for finding out about projects in other places, but I could be wrong.

When it comes to projects in the Bay, I would not expect people who lack funding to be there in the first place, given that it is ridiculously expensive. But I might be missing something? I have not investigated the details, since I'm not allowed to just move there myself, even if I could afford it. (Visa reasons; I'm Swedish.)

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-06T01:43:58.018Z · EA · GW
Looking for more projects like these

AI Safety Support is looking for both funding and fiscal sponsorship. We have two donation pledges which are conditional on the donations being tax-deductible (one from Canada and one from the US). But even if we solve that, we still have a bit more room for funding.

The money will primarily be used for salaries for me and JJ Hepburn.

AI Safety Support's mission is to help aspiring and early-career AI Safety researchers in any way we can. There are currently lots of people who want to help with this problem but who don't have social and institutional support from the organisations and people around them.

We are currently running monthly online AI Safety discussion days, where people can share and discuss their research ideas, independent of their location. These events are intended as a complement to the Alignment Forum and other written forms of publication. We believe that live conversations are a better way to share early-stage ideas, and that blog posts and papers come later in the process.

We also have other projects in the pipeline, e.g. our AI Safety career bottleneck survey. However, these are currently on hold until we've secured enough funding to know we will be able to keep going for at least one year (to start with).

AI Safety Support has only existed since May, but both of us have a track record of organising similar events in the past, e.g. AI Safety Camps.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-05T10:35:44.884Z · EA · GW
I have come to believe that living and working in the EA/Rationality community in the Bay Area made it much more likely I would hear about attractive opportunities that weren't yet funded by larger donors

I am sceptical about this. There are *lots* of non-Bay-Area projects, and my impression (low confidence) is that it is harder for us to get funding. This is because even the official funding runs mostly on contacts, so it too mostly funds stuff in the hubs.

I know of two EA projects (not including my own) which I think should be funded, and I live in Sweden.

Comment by Linda Linsefors on Donor Lottery Debrief · 2020-08-05T09:54:30.071Z · EA · GW

Registering predictions:

1) You will hear about 10-50 EA projects looking for funding over the next 2 months (80%).

2) >70% of these projects will not be registered tax-deductible charities (but might be able to get fiscal sponsorship) (80%).


Becoming a registered charity is a lot of work. It would be interesting for someone to look into when it is and isn't worth the time investment.

Comment by Linda Linsefors on Should you do a PhD? · 2020-07-26T22:19:17.494Z · EA · GW

I did some googling.

In the UK there are four ways to get a PhD (according to this website), and only one of them is the traditional PhD program.

Here is a discussion on independent PhDs. People disagree on whether it is possible to do a PhD without a supervisor, pointing towards different practices in different countries.

Several people claim that "the PhD process is about learning, not just publishing", but my impression is that this is a very modern idea. A PhD used to be about proving your capability, not monitoring your learning process.

Comment by Linda Linsefors on Should you do a PhD? · 2020-07-26T22:07:21.725Z · EA · GW
I have noticed based on my search that nearly 60% of research roles in think-tanks in Europe have PhDs.

So almost half of them don't. If you want a job at one of those think tanks, I would strongly recommend that you just go straight for that.

If you want to do research, then do the research you want to do. If the research you want to do mainly happens at a company or think tank, but not really in academia, go for the company or think tank.

There are other ways of getting a PhD degree that do not involve enrolling in a PhD program. In many countries, the only thing that actually matters for getting the degree is to write and defend a PhD thesis containing original research done by you. For example, if you just keep publishing in academic journals until your body of work is about the same as what can be expected from a PhD (or maybe a bit more, to be on the safe side), you can put it all in a book, approach a university, and ask to defend your work.

This may differ between countries. But universities mostly accept foreign students, so if you can't defend your independent thesis at home, go somewhere else.

Comment by Linda Linsefors on Should you do a PhD? · 2020-07-26T01:01:45.542Z · EA · GW

Some of the questions on the checklist I would endorse more as guidelines or warning signs than as strict rules.

Is there a substantial amount of literature in your field?
Was there a major discovery in the field in recent years?

Both of those questions measure how much you can learn from others in academia. If you can't take advantage of colleagues, then going into academia at all (even if you don't intend to stay) will be lower value, so you might be more productive elsewhere.

The first one also says something about how easy/hard it will be to publish and generally get recognised. If you do something non-established, you will have a much harder time.

But there are two main reasons you might want to step into academia anyway.

1) To influence other academics. (I think this is the main reason FLI chooses to be an academic institution.)

2) To get paid. (In cases where there are no other options.)

Do you want a career in academia?
Is there a better option for prospective PhD students who want a career in research outside of academia?

Lots of places outside academia do research: companies, non-profits, think tanks, independent AI Safety researchers with Long-Term Future Fund grants.

Which is the better option depends on what research you want to do. The more abstract the research, the more likely academia is a good choice; the more concrete, the more likely it is not. E.g. charity evaluation is a type of research that I don't think would do well in academia (though this is not my field at all, so I might be wrong).

Comment by Linda Linsefors on The Case for Impact Purchase | Part 1 · 2020-06-26T20:29:55.271Z · EA · GW

Sort of, and it might take some time. The short of it is that I'm less enthusiastic about impact purchases.

I want some sort of funding system that is flexible, and I think the best way to do this is to sponsor people, not projects. If someone has, through their past work, shown competence and good judgement, I think they should be given a salary and the freedom to do what they think is best.

I thought the way to achieve this was impact purchases, but as someone pointed out in a comment, this makes for a very economically uncertain situation for the people living this way, which causes stress and short-sightedness, which is not the best.

When I wrote this post, I assumed that I needed to have a plan to get a grant in the current system. But after talking to one of the fund managers of the Long-Term Future Fund, I found out that it is possible to get a grant by simply producing a track record and some vague plan to do more of the same. I've decided to try this out for myself. I'm waiting for an answer from the Long-Term Future Fund, and plan to write an update after I know how that goes.

If I get the grant, it would prove that it is at least possible to get funding without a clear plan. If I get rejected, the conclusions I draw will depend on what feedback I get with my rejection. Either way, I've decided to wait and see how the grant application goes before writing the follow-up.