Posts

Giving What We Can & EA Funds now operate independently of CEA 2020-12-22T03:47:48.140Z
How to best address Repetitive Strain Injury (RSI)? 2020-11-19T09:15:27.271Z
Why you should give to a donor lottery this Giving Season 2020-11-17T12:40:02.134Z
Apply to EA Funds now 2020-09-15T19:23:38.668Z
The EA Meta Fund is now the EA Infrastructure Fund 2020-08-20T12:46:31.556Z
EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z

Comments

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2021-01-27T11:05:58.063Z · EA · GW

I think you, Adam, and Oli covered a lot of the relevant points.

I'd add that the LTFF's decision-making is based on the average score vote from the different fund managers, which allows grants to go through in scenarios where one person is very excited and the others aren't particularly excited about, or are even against, the grant. I.e., the mechanism allows an excited minority to make a grant that wouldn't be approved by the majority of the committee. Overall, the mechanism strikes me as near-optimal. (Perhaps we should lower the threshold for making grants a bit further.)
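
To make the mechanism concrete, here is a minimal sketch of an average-score voting rule of the kind described above. The score range, threshold, and function name are illustrative assumptions, not the LTFF's actual parameters.

```python
# Hypothetical average-score voting rule: each fund manager scores a grant
# (here from -5 to +5); the grant is approved if the mean score clears a
# threshold. All numbers are illustrative assumptions, not LTFF parameters.

def approve_grant(scores, threshold=1.0):
    """Approve if the managers' mean score meets or exceeds the threshold."""
    return sum(scores) / len(scores) >= threshold

# One strong advocate can carry a grant that the rest are neutral on ...
print(approve_grant([5, 0, 0, 0]))      # mean 1.25 -> True
# ... while mild support plus mild opposition does not pass.
print(approve_grant([2, -1, -1, -1]))   # mean -0.25 -> False
```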

I do think the LTFF might be slightly too risk-averse, and splitting the LTFF into a "legible longtermist fund" and a "judgment-driven longtermist fund" to remove pressure from donors towards the legible version seems a good idea and is tentatively on the roadmap.

Comment by jonas-vollmer on Why do content blockers still suck? · 2021-01-25T12:04:42.531Z · EA · GW

This is not what you asked for, but I wanted to share some general skepticism of content blocking tools. Over time, I've come to the conclusion that they do more harm than good for me personally:

  • Content blockers have an adversarial vibe to them, like the different agents in my brain are fighting each other, and one blocks the other from doing what it likes. I prefer something that feels more like I'm being nice to myself.
  • I've had more success with setting up good nudges and more 'peaceful' negotiation between the different agents in my head. Not in the sense of compromise à la "just 15 minutes of YouTube, then back to work", but more in the sense "Ok, what does the YouTube-craving part of my brain really want, and can I make it happy in some other way?" For me, the answer is often "take a break from work, get away from the screen, and spend some time with friends."
  • In general, it seems to me that content blockers shift the focus from "why do I do X and how can I do Y instead?" to "how can I prevent myself from doing X?", which doesn't seem fruitful.
  • Content blockers lead me to replace bad behavior X with bad behavior Y (e.g., watching YouTube videos → watching videos on some other site that isn't blocked).
  • As you said, there's often some scenario where I need to make an exception (e.g., accessing Facebook because a work-related conversation took place there).

Overall, I've found these tools useful to occasionally break particularly bad (addiction-like) habits, but not for being more focused in general. I've tried many of them but haven't used any for a while.

Comment by jonas-vollmer on CEA update: Q4 2020 · 2021-01-24T13:11:23.082Z · EA · GW

I agree with Denise's description and Max's, but I don't see how it follows that focusing on professionals is more useful than focusing on students. In fact, I think that Germans taking gap years, changing degrees, etc. makes it more plausible that student groups are a promising target audience, as this allows students to spend more time thinking about EA ideas and make relatively large changes to their careers.

Comment by jonas-vollmer on CEA update: Q4 2020 · 2021-01-24T13:04:07.130Z · EA · GW

Echoing what Vaidehi said, I think it's somewhat unusual for a wiki to be built into a tag system, so I'd like to advocate for seeing if there's a way to make it more intuitive and less confusing, ideally in a way that doesn't require explaining/tutorials but is immediately obvious to users. Perhaps the solution could simply be to use the term "wiki" instead of "tags" in most places.

Comment by jonas-vollmer on CEA's strategy as of 2021 · 2021-01-15T14:35:32.049Z · EA · GW

Very cool that you decided to share this publicly, thanks!

Comment by jonas-vollmer on Donating to EA funds from Germany · 2021-01-07T12:40:14.543Z · EA · GW

Yeah, what Denis wrote sounds correct to me.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-29T12:10:20.441Z · EA · GW

Not everyone uses Sci-Hub, and even if they do, open access still removes trivial inconveniences. But yeah, Sci-Hub and the fact that PDFs (often preprints) are usually easy to find even when a paper isn't open access make me a bit less excited.

Comment by jonas-vollmer on EA Meta Fund Grants – July 2020 · 2020-12-29T11:24:02.583Z · EA · GW

I think FHF can be argued to fall within the scope of either fund. I'm sure you saw this part of the above report:

We see this as a promising meta initiative because The Future of Humanity Foundation is aiming to leverage FHI’s operations and increase its overall impact. (FHI itself also acts as a meta initiative to some degree, because it provides scholarships, promotes important ideas through popular science books, and trains early-career researchers through its Research Scholars Programme.)

I perceive this grant to be worldview-specific rather than cause-area-specific: there are several longtermist cause areas (AI safety, pandemic prevention, etc.) that FHI contributes to. Other grants (e.g., Happier Lives Institute, Charity Entrepreneurship) are also based on particular worldviews or even cause areas, so this is not unprecedented.

In general, I think it makes sense for the EA Infrastructure Fund (EAIF) to support both cause-neutral and cause-specific projects, as long as they have a meta component and the EAIF fund managers are well-placed to evaluate the projects.

I personally think it's pretty unclear what the EAIF's funding threshold and benchmark should be. The GHDF aims to beat GiveWell top charities, the AWF should match/beat OP's animal welfare grantmaking, and the LTFF aims to beat OP's last longtermist dollar, but there's no straightforward benchmark for the EAIF given that it's kind of cause-agnostic. I plan to work with the fund managers to define this more clearly going forward. Let me know if you have any ideas.

Comment by jonas-vollmer on Giving What We Can & EA Funds now operate independently of CEA · 2020-12-24T10:42:03.375Z · EA · GW

I personally am very much in favor of sharing internal documents, both to increase transparency and accountability to donors, and also to help others who are running similar projects and generally advance EA discourse. So my current plan is to publish these guidelines. That said, there's some chance I end up concluding that preventing misunderstandings and responding to questions/comments is too much work (e.g., with these guidelines, I worry that people may come away thinking we're more risk-averse than we actually are), so I'm not sure whether I'll actually publish them.

Comment by jonas-vollmer on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T16:00:52.332Z · EA · GW

Depending on how you interpret this comment, the LTFF is looking for funding as well.

(Disclosure: I run EA Funds.)

Comment by jonas-vollmer on Asking for advice · 2020-12-14T11:40:35.383Z · EA · GW

I've found it difficult to find a clear takeaway from this discussion. I think the relevant considerations are:

  1. Making each other feel respected
  2. Finding a time that actually works well for both (i.e. not overly inconvenient times)
  3. Saving time scheduling meetings

Some of the suggestions emphasize #1 at the expense of #3 (and possibly #2). E.g., if I send my Calendly and make concrete suggestions, that removes the time-saving aspects because I have to check my calendar and there's a risk of double-booking (or I have to hold the slots if I want to prevent that).

My current guess is that the following works best: send the Calendly link, click it yourself briefly to make sure it shows a reasonable number of options in the recipient's time zone, and tell the recipient "feel free to just suggest whichever times work best for you."

Not sure that works for those who are most skeptical/unhappy about Calendly.

Comment by jonas-vollmer on Asking for advice · 2020-12-14T11:32:20.329Z · EA · GW

Would you be fine with Claire's suggestion? This one:

Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy. 

Comment by jonas-vollmer on Asking for advice · 2020-12-14T11:27:30.933Z · EA · GW

I personally have this tech aversion to Calendly and Doodle specifically, but not to other, similar tools that I find more user-friendly, such as When2Meet. The main reason is that I would much prefer a "week view" rather than having to click on each date to reveal the available slots. That said, Calendly is still my most preferred option for scheduling meetings.

Comment by jonas-vollmer on Ask Rethink Priorities Anything (AMA) · 2020-12-14T11:01:07.981Z · EA · GW

How funding-constrained is your longtermist work? I.e., how much funding have you raised for your 2021 longtermist budget so far, how much do you expect to be able to deploy usefully, and how much are you short?

Comment by jonas-vollmer on 2018-19 Donor Lottery Report, pt. 1 · 2020-12-13T14:45:42.107Z · EA · GW

Thanks a lot for publishing this report, it's great to see that so much careful thought has gone into your decision.

I want to highlight that giving to the donor lottery is a highly effective way to donate even if you don't publish such a report. I've heard people say that they were hesitant to give to the donor lottery because they didn't want to be obliged to publish articles like these. Just like some people choose to report publicly about their ordinary donations and others don't, it's fine if some report about their donor lottery decision and others don't. Your decision whether to participate in the donor lottery doesn't affect the probability that someone else will win and publish a report.
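
To illustrate the last point, here is a minimal sketch of how win probabilities work in a guarantor-backed donor lottery, assuming the usual setup where a guarantor covers any unallocated share of a fixed block. The block size and amounts are illustrative assumptions, not this year's actual parameters.

```python
# Illustrative sketch: with a guarantor-backed lottery block, each entrant's
# chance of winning equals their donation divided by the block size,
# regardless of how many others enter (the guarantor covers any unallocated
# share). The block size here is an assumption for illustration only.

def win_probability(donation, block_size=100_000):
    return donation / block_size

# A $5,000 entry wins the $100k block with probability 0.05, whether the
# block has two other entrants or twenty.
print(win_probability(5_000))  # 0.05
```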

You can read more about the lottery here.

Comment by jonas-vollmer on My recommendations for RSI treatment · 2020-12-13T14:24:14.664Z · EA · GW

Thanks, very helpful!

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-13T14:23:39.801Z · EA · GW

It's not public. If you like, you can PM me your email address and I can try asking someone to get in touch with you.

Comment by jonas-vollmer on Uncorrelated Investments for Altruists · 2020-12-13T14:21:52.137Z · EA · GW

Based on this source, art might be interesting for large investors to look into: it may be largely uncorrelated with global equities, and total asset value is estimated at $3 trillion, or ~3% of global equities. To get exposure, one could try to buy shares of major auction houses (though none of them are currently publicly traded). Curious to hear what people think.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T19:19:51.597Z · EA · GW

Thanks!

Comment by jonas-vollmer on Local priorities research: what is it, who should consider doing it, and why · 2020-12-11T11:10:02.328Z · EA · GW

I liked this post and personally think that local priorities research (LPR) seems like one of the most effective things EA groups can do.

I wish the article spent a lot more time discussing the concrete research projects people could undertake, in addition to the brief bullet point list provided in section 2. The article spends a lot of time discussing the pros and cons of LPR, but gives little guidance on how to actually do LPR. Perhaps I'll give some more ideas and examples sometime if I have time.

(As always, personal opinion, not my employer's.)

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T10:45:55.097Z · EA · GW

Again, I agree with Asya. A minor side remark:

- Pay for virtual assistants and all other things that could speed researchers up.

As someone who has experience with hiring all kinds of virtual and personal assistants for myself and others, I think the problem here is not the money, but finding assistants who will actually do a good job, and organizing the entire thing in a way that’s convenient for the researchers/professionals who need support. More than half of the assistants I’ve worked with cost me more time than they saved me. Others were really good and saved me a lot of time, but it’s not straightforward to find them. If someone came up with a good proposal for this, I’d want to fund them and help them.

Similar points apply to some of the other ideas. We can’t just spend money on these things; we need to receive corresponding applications (which generally hasn’t happened) or proactively work to bring such projects into existence (which is a lot of work).

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T10:44:09.089Z · EA · GW

The second sentence on that page (i.e. the sentence right after this one) reads:

In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

"Predominantly" would seem redundant with "in addition", so I'd prefer leaving it as-is.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T18:24:52.299Z · EA · GW

Some of the LTFF grants (forecasting, long-term institutions, etc.) are broader than GCRs, and my guess is that at least some Fund managers are pretty excited about trajectory changes, so I'd personally think the current name seems more accurate.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T18:18:22.448Z · EA · GW

Thanks, I appreciate the detailed response, and agree with many of the points you made. I don't have the time to engage much more (and can't share everything), but we're working on improving several of these things.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T09:54:17.041Z · EA · GW

The very first sentence on that page reads (emphasis mine):

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics.

I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?

An important reason why we don't make more grants to prevent pandemics is that we receive only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he's equally excited about AI safety and biosecurity at the margins, and I personally mostly agree with him on this.

Here's a spreadsheet with all EA Funds grants (though without categorization). I agree a proper grants database would be good to set up at some point; I have now added this to my list of things we might work on in 2021.

We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil's report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-08T20:45:03.602Z · EA · GW

(As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds.)

I agree with Adam and Asya. Some quick further ideas off the top of my head:

  • More academic teaching buy-outs. I think there are likely many longtermist academics who could get a teaching buy-out but aren’t even considering it.
  • Research into the long-term risks (and potential benefits) of genetic engineering.
  • Research aimed at improving cause prioritization methodology. (This might be a better fit for the EA Infrastructure Fund, but it’s also relevant to the LTFF.)
  • Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.
  • Research assistants for academic researchers (and for independent researchers if they have a track record and there’s no good organization for them).
  • Books about longtermism-relevant topics.
Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-08T20:13:01.041Z · EA · GW

Makes sense, thanks!

Comment by jonas-vollmer on American policy platform for total welfare · 2020-12-08T20:07:53.780Z · EA · GW

(As always, personal opinion, not my employer's.)

While I agree that it could be good for EAs to become more politically active, I don't think there are good arguments for an EA branding.

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. By choosing an EA branding for your project, you selectively increase the downside risk, but not the upside/benefits.

Quoting from that link:

You might be worried that Effective Altruism will get a public relations problem if members associate with something politically controversial.

I'm not worried about this; I think EAs doing something politically controversial is both a risk worth taking and mostly unavoidable. I'm only worried about associating the EA brand itself with something politically controversial (or, perhaps as big a risk, something that's perceived as amateurish).

Where political work earns criticism from some, it earns accolades from others. 

The concern is not that political work earns criticism (I think that's a risk worth taking), but that this criticism would be perceived as being relevant to all of EA (rather than just your project).

People are glad to see Effective Altruists supporting their rights and interests. 

I think this is not a strong argument:

  • The EA community is small and isn't widely perceived as having a lot of resources.
  • A lot of EA issues are inherently controversial, with a small supporter base. Partly by definition, EA focuses on neglected issues, helping those who don't have a supporter base. Non-human animals and people in the long-term future might be glad about the support we provide, but they can't help us gain more political influence now.

The movement is perceived as more serious and potent when it tackles political issues in addition to regular charities and careers. 

I think this mainly holds if your project is successful; see my point about option value above.

I perceive your website as framed in an EA-ingroup-y way. I don't think this is bad; in fact, I really like some work of this type (e.g., Brian Tomasik's essays). But I don't think it's a great way to get more ordinary people to perceive EA as "more serious and potent" – instead, I think it'll make EA look somewhat weird and niche.

Finally, as time goes by, our efforts will probably be increasingly regarded as being on the right side of history, due to our generally superior epistemics and ethics.

I appreciate your optimism, but I think it'll be a relatively small minority of people who perceive it that way – most will just believe whatever is most advantageous given the short-term incentives they face. E.g., I don't think atheists/deists and experts are very highly regarded in politics, despite being on the right side of history.

most people do not think about politics in the same way as the Very Online left and right … Don’t let Twitter define your understanding of what counts as good or bad PR. … The people who get most outraged about political disagreement generally wouldn’t contribute positively to EA causes anyway, so we can let them go.

I agree; I'm mainly worried about the perception by public intellectuals, policy professionals, and politicians.

EA already has a contentious reputation among some people who are highly politically animated, either because they cannot stand the diversity of political opinions within the EA movement, or because we do not often support certain political causes. Those people are simply a lost cause.

(I don't think this is an important point here, but you could still make things much worse by causing major backlash, shitstorms, etc.)

Finally, Effective Altruism grows best when it offers something for everyone. And for people who are not well equipped or interested in our other cause areas, civic action may be that something.

You can do this just as well without putting "EA" into the name of the project.

Additionally, prominent EA organizations and individuals have already displayed enough politically contentious behavior that a lot of people already perceive EA in certain political ways. Restricting politically contentious public EA behavior to those few  orgs and individuals maximizes the problems of 1) and 2) whereas having a wider variety of public EA points of view mitigates them. 

I agree with this. As far as I know, none of these orgs and individuals currently use an EA branding. That seems good to me, and I hope that everyone launching a political EA project will follow suit.

I hope this is helpful, and I hope it’s clear that I wrote this comment trying to help you improve the project and have more impact, and I’m overall excited about this work. I haven’t looked at the handbook in detail, but based on skimming it, it looks really interesting, so thanks for putting that together!

Comment by jonas-vollmer on Why you should give to a donor lottery this Giving Season · 2020-12-08T11:44:26.044Z · EA · GW

We currently don't implement any measures to prevent people from making donations to their employer, whether through the donor lottery or as ordinary donations through the EA Funds website. The due diligence process for grants to individuals is much more thorough; if there were a potential COI, we would investigate it carefully before making a grant. Most likely, we wouldn't allow people to fund themselves.

Comment by jonas-vollmer on EA Meta Fund Grants – July 2020 · 2020-12-07T22:01:31.611Z · EA · GW

Update: The current LTFF AMA elaborates on common reasons for rejecting applications to some degree. 

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T21:02:20.265Z · EA · GW

I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:

If you are the kind of person who thinks carefully about these risks, is likely to change course after critical feedback, and proactively syncs up with the main people/orgs in your space to ensure you're not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused harm, including myself (I mentioned one of them here), and while it would have been good to avoid them, learning from those mistakes also helped us improve our work.

My perception is that “taking carefully calculated risks” won’t lead to your grant application being rejected (perhaps it would even improve your chances of being funded because it’s hard to find people who can do that well) – but “taking risks without taking good measures to prevent/mitigate them” will.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T19:00:05.384Z · EA · GW

As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds, and I’ve also been thinking about the longer-term strategy for EA Funds as a whole.

Some thoughts on this question:

  • LTFF strategy: There is no official 3-10 year vision or strategy for the LTFF yet, but I hope we will get there sometime soon. My own best guess for the LTFF’s vision (which I haven’t yet discussed with the LTFF) is: ‘Thoughtful people have the resources they need to successfully implement highly impactful projects to improve the long-term future.’ My best guess for the LTFF’s mission/strategy is ‘make judgment-driven grants to individuals and small organizations and proactively seed new longtermist projects.’ A plausible goal could be to allocate $15 million per year to effective longtermist projects by 2025 (where ‘effective’ means something like ‘significantly better than Open Phil’s last dollar, similar to the current quality of grants’).
  • Grantmaking capacity: To get there, we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing these ideas. EA Funds can primarily improve the first factor, and I think this is the main limiting factor right now (though this could change within a few months). I am currently implementing the first iteration of a fund manager appointment process, where we invite potential grantmakers to apply as Fund managers, and we are also considering hiring a full-time grantmaking specialist. Hopefully, this will allow the LTFF to increase the number of grants it can evaluate, and its active grantmaking capacity in particular.
  • Types of grants: Areas in which I expect the LTFF to be able to substantially expand its current grantmaking include academic teaching buy-outs, scholarships and top-up funding for poorly paid academics, research assistants for academics, and proactively seeding new longtermist organizations and research projects (active grantmaking).
  • Structural changes: I think having multiple fund managers on a committee rather than a single decision-maker leads to improved diversity of networks and opinions, and increased robustness in decision-making. Increasing the number of committee members on a single committee leads to disproportionately larger coordination overhead, so the way to scale this might be to create multiple committees. I also think a committee model would benefit from having one or more full-time staff who can dedicate their full attention to EA Funds or the LTFF and collaborate with a committee of part-time/volunteer grantmakers, so I may want to look into hiring for such positions.
  • Legible longtermist fund: Donating to the LTFF currently requires a lot of trust in the Fund managers because many of the grants are speculative and hard to understand for people less involved in EA. While I think the current LTFF grants are plausibly the most effective use of longtermist funding, there is significant donor demand for a more legible longtermist donation option (i.e., one that isn't subject to massive information asymmetry and thus doesn't rely on trust as much). This may speak in favor of setting up a second, more 'mainstream' long-term future fund. That fund might give to most longtermist institutes and would have a lot of fungibility with Open Phil's funding, but it would likely be a better way to introduce interested donors to longtermism.
  • Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else. Regarding the LTFF and longtermism in particular, Open Phil has expanded its activities, Survival And Flourishing (SAF) has launched, and other donors and grantmakers (such as Longview Philanthropy) continue to be active in the area to some degree, which means that effective projects may get funded even if the LTFF doesn’t expand its grantmaking. It’s pretty plausible to me that EA Funds should pursue a strategy that’s less focused on grantmaking than what I wrote in the above paragraphs, which would mean that I might not dedicate as much attention to expanding the LTFF in the ways suggested above. I’m still thinking about this; the decision will likely depend on external feedback and experiments (e.g., how quickly we can make successful active grants).

If anyone has any feedback, thoughts, or questions about the above, I’d be interested in hearing from you (here or via PM).

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T18:47:04.773Z · EA · GW

(I drafted this comment earlier and feel like it's largely redundant by now, but I thought I might as well post it.)

I agree with what Adam and Asya said. I think many of those points can be summarized as ‘there isn’t a compelling theory of change for this project to result in improvements in the long-term future.’ 

Many applicants have great credentials, impressive connections, and a track record of getting things done, but their ideas and plans seem optimized for some goal other than improving the long-term future, and it would be a suspicious convergence if they were excellent for the long-term future as well. (If grantseekers don’t try to make the case for this in their application, I try to find out myself if this is the case, and the answer is usually ‘no.’) 

We’ve received applications from policy projects, experienced professionals, and professors (including one with tens of thousands of citations), but ended up declining largely for this reason. It’s worth noting that these applications aren’t bad – often, they’re excellent – but they’re only tangentially related to what the LTFF is trying to achieve.

Comment by jonas-vollmer on American policy platform for total welfare · 2020-12-07T18:01:54.562Z · EA · GW

(As always, personal opinion, not my employer's.)

I think this looks like an interesting project and think it would be great if more EAs were more involved in politics.

One piece of feedback though – I hope it's useful: I generally recommend against using the EA branding for such projects, for several reasons:

  1. It likely discourages others from attempting similar projects, as they think the space is already covered.
  2. If you don't do a great job, that could reflect badly on all of EA, as some people will automatically perceive your project as representative of EA if you're using that branding.
  3. You might unnecessarily limit your target audience: not everyone might like or understand the EA philosophy, but they might still be interested in your project.

(For these reasons, I have recommended against using EA branding in the projects I've been involved with myself over the past years, unless they represent a central part of EA infrastructure. For instance, I've rebranded EAF to CLR.)

I hope that's helpful feedback!

Comment by jonas-vollmer on My mistakes on the path to impact · 2020-12-07T17:41:33.708Z · EA · GW

Lots of emphasis on avoiding accidentally doing harm by being uninformed

I gave a talk about this, so I consider myself to be one of the repeaters of that message. But I also think I always tried to add a lot of caveats, like "you should take this advice less seriously if you're the type of person who listens to advice like this" and similar. It's a bit hard to calibrate, but I'm definitely in favor of people trying new projects, even at the risk of causing mild accidental harm, and in fact I think that's something that has helped me grow in the past.

If you think these sorts of framing still miss the mark, I'd be interested in hearing your reasoning about that.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T17:31:36.788Z · EA · GW

If a private company applied for funding to the LTFF and they checked the "forward to other funders" checkbox in their application, I'd refer them to private donors who can directly invest in private companies (and have done so once in the past, though they weren't funded).

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T17:29:49.164Z · EA · GW

Another one is that people assume we are inflexible in some way (e.g., constrained by maximum grant sizes or fixed application deadlines), but we can often be very flexible in working around those constraints, and have done that in the past.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T14:17:39.122Z · EA · GW

Thanks for the input; we'll take this into account. We do provide tax advice for the US and UK, but we've also looked into expanding this. Edit: If you don't mind, could you let me know which jurisdiction was relevant to you at the time?

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T13:58:28.253Z · EA · GW

There will likely be a more elaborate reply, but these two links could be useful.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T13:47:50.463Z · EA · GW

Interested in talking more about this – sent you a PM!

EDIT: I should mention that this is generally pretty hard to implement, so there might be a large fee on such grants, and it might take a long time until we can offer it.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T11:39:32.006Z · EA · GW

Whether we are at the "hinge of history" is a question of degree; different moments in history have different degrees of influentialness. I personally think the current moment is likely very influential, such that I want to spend a significant fraction of the resources we have now, and I think on the current margin we should probably be spending more. I think this could change over the coming years, though.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T11:26:58.406Z · EA · GW

I'm personally actually pretty excited about trying to make some quick forecasts for a significant fraction (say, half) of the grants that we actually make, but this is something that's on my list to discuss at some point with the LTFF. I mostly agree with the issues that Habryka mentions, though.

Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T11:19:27.433Z · EA · GW

I agree with Habryka and Adam.

Regarding the LTFF (Long-Term Future Fund) / AWF (Animal Welfare Fund) comparison in particular, I'd add the following:

  • The global longtermist community is much smaller than the global animal rights community, which means that the animal welfare space has a lot more existing organizations and people trying to start organizations that can be funded.
  • Longtermist cause areas typically involve a lot more research, which often implies funding individual researchers, whereas animal welfare work is typically more implementation-oriented.
Comment by jonas-vollmer on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T18:47:05.044Z · EA · GW

(I’m not a Fund manager, but I’ve previously served as an advisor to the fund, and I now run EA Funds, which involves advising the LTFF.)

In addition to what Adam mentions, two further points come to mind:

1. I personally think some of the April 2019 grants weren’t good, and I thought that some (but not all) of the critiques the LTFF received from the community were correct. (I can’t get more specific here – I don’t want to make negative public statements about specific grants, as this might have negative consequences for grant recipients.) The LTFF has since implemented many improvements that I think will prevent such mistakes from occurring again.

2. I think we could have communicated better around conflicts of interest. I know of some 2019 grants donors perceived to be subject to a conflict of interest, but there actually wasn’t a conflict of interest, or it was dealt with appropriately. (I also can recall one case where I think a conflict of interest may not have been dealt with well, but our improved policies and practices will prevent a similar potential issue from occurring again.) I think we’re now dealing appropriately with COIs (not in the sense that we refrain from any grants with a potential COI, but that we have appropriate safeguards in place that prevent the COI from impairing the decision). I would like to publish an updated policy once I get to it.

Comment by jonas-vollmer on My recommendations for RSI treatment · 2020-12-05T17:14:08.688Z · EA · GW

Thanks, I've ordered that and will be trying those exercises!

Comment by jonas-vollmer on How to best address Repetitive Strain Injury (RSI)? · 2020-11-19T11:20:15.893Z · EA · GW

Thanks, very helpful. Knowing about your experience (what you said in the first paragraph) was especially useful.

Would love to get some more of that vitamin EA! ;)

Comment by jonas-vollmer on How to best address Repetitive Strain Injury (RSI)? · 2020-11-19T11:16:59.468Z · EA · GW

Thank you! :)

Comment by jonas-vollmer on Why you should give to a donor lottery this Giving Season · 2020-11-19T09:28:53.996Z · EA · GW

Good point. I think this would probably involve some coding effort which I'm not sure is worth it, but it's worth considering.

Comment by jonas-vollmer on How to best address Repetitive Strain Injury (RSI)? · 2020-11-19T09:23:42.968Z · EA · GW

Some things I've tried and found mildly helpful:

  • Using an ergonomic keyboard (I use a split keyboard, which also helps with back pain)
  • Avoiding typing while feeling cold (this means sometimes wearing a coat at my desk)
  • Wearing a wrist brace at night
  • Adjusting the height of my desk, and using a desk with sufficient depth so I can rest my forearms on it while typing
  • Using my phone with the other hand that isn't affected
  • Generally trying to avoid straining movements with the affected hand (during cooking, etc.)

 

Things I've considered:

  • Learning touch typing with the Dvorak layout (or some other alternative layout) – takes ~20h to learn, and the benefits seem disputed (academic research and lifehackers report mixed results). Might look into it if things don't get better.
  • Using a foot pedal for clicking and modifier keys. Takes some time to learn and set up, though some people seem to like it a lot.
Comment by jonas-vollmer on Why you should give to a donor lottery this Giving Season · 2020-11-18T14:51:26.918Z · EA · GW

Some reasons for entering anonymously:

  • You might generally care about anonymity/privacy on the internet
  • You might want to avoid public attention in case you win (e.g., perhaps some people might send you unsolicited fundraising pitches (which I hope they don't), or you worry that you might feel pressured by other EAs to publish your grants or thinking)
  • You can always publish your name later, so entering anonymously has more option value

Some reasons for entering with your name attached:

  • It’s generally more exciting to know who else in the community is participating
  • It shows other people that you personally endorse the lottery as a serious way to donate effectively, which might encourage others to participate

I'm sure I forgot some points, so I'd be curious to hear what people think.