I sent an email with a group application on September 18th (https://docs.google.com/document/d/1ERZ7spGHZYJjixaY3ln4eVxu_GKkOWqvB9rpz3Abd6M/), but still haven't received a reply; I hadn't used the Google Form given it was a group application -- Did you receive my application? :/
Read quickly, but basically: value that is harder for the market to capture is more neglected, so there are actually a lot of opportunities to help more people per employee in altruistic sectors, and not doing that is an opportunity cost.
MATS is extending applications until May 22nd for its SERI ML Alignment Theory Scholars Program 2022. More info: https://forum.effectivealtruism.org/posts/nSyvMy3QQTyBzybNx/seri-ml-alignment-theory-scholars-program-2022
My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring me people, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).
This is a part-time gig / sideline. Tasks often include web searches, problem solving over the phone, and Google Sheet formatting. A full description of our services is here: https://bit.ly/PantaskServices
First human to achieve some level of intelligence (as measured by some test); prize split between the person themselves, the parents, and the genetic engineering lab if applicable. (This is more about the social incentive than the economic one, as I suppose there's already an economic one.)
1M USD for the first to create a gamete (sperm and/or egg) from stem cells that results in a successful birth in one of the following species: humans, mice, dogs, pigs (more options should probably be added).
Hi all, Haydn and I figured this post was a good place to plug our startup, Pantask. While the services we provide are not as advanced as those listed here, Pantask can offer assistance to EA orgs that need help with day-to-day operations but can’t afford to hire full-time employees. We provide general virtual assistance services, such as organizing chaotic troves of data, managing schedules and emails, and helping with brain debugging. We also offer graphic design, copyediting, transcription, and writing services. Our assistants can also perform certain kinds of research (the kind you can do in <8 hours, generally speaking), such as finding service providers, information on grants, etc.
Essentially, if the task can be done by a reasonably competent person without a specialized skill set, our assistants can very likely do it for you. In addition to being EA-owned, some of our assistants are also EAs, and even more are familiar with and interested in EA. We’ve served EA charities before. We charge 30 USD per hour. If you're not used to delegating tasks, we can help you review the tasks you delegate to make sure they are clear, at no additional cost.
You can send tasks to email@example.com, or email either of us at firstname.lastname@example.org or email@example.com, or call us at (570) 509-3366. You can also schedule me on: https://calendar.google.com/calendar/u/0/appointments/schedules/AcZssZ0Dc0qvV3EbGsGR39_dhoeusVtX6rwnpfXpGVHwRHPGPuIjTd1GPiCRz9qMwTkIZKCPPVB0AQQm
I intend to update this answer as I think of more.
Creating a gamete from a stem cell (to enable [iterated embryo selection](https://www.lesswrong.com/tag/iterated-embryo-selection))
Reanimating a cryonics patient (although creating a prize that far in advance will probably not create market pressure in the short term)
First human to achieve some level of intelligence (as measured by some IQ test); prize split between the person and the genetic engineering lab. (This is more about the social incentive than the economic one, as I suppose there's already an economic one.)
So I'm understanding that you have short AI timelines, and so don't think genetic engineering would have time to pay off, but psychedelics would, and that you think it's of similar relevance to working directly on the problem.
This prize will total $1000 between multiple recipients, with a minimum first place prize of $500. We will aim for 2-5 recipients in total. The prize will be paid for by the Quantified Uncertainty Research Institute (QURI).
Why was that not respected? It's not mentioned in this post either, AFAICT.
In terms of augmenting humans, my impression is that genetic engineering is by far the most effective intervention. My understanding is that we're currently making a lot of progress in that area, yet some important research aspects seem neglected and could have a transformative impact on the world.
I guess I was working on the assumption that it was rare for people to want to split their donation between local and effective charities a priori, and my point was that GM wasn't useful to people who didn't already want to split their donations that way before GM existed -- but maybe this assumption is actually wrong.
Hmm, I guess it's fine after all; I've changed my mind. People can just give whatever fraction they were going to give to local charities and then be matched. And the extra matching to effective charities is a signal from the matcher about their model of the world. I don't think someone who was going to give 100% to a charity other than those 9 should use GivingMultiplier, though (unless they've changed their mind about effective charities). But my guess is that this project has good consequences.
I'm henceforth offering a MetaGivingMultiplier. It has the same structure as GivingMultiplier, but with "local charities" replaced by "GivingMultiplier" and "super-effective charities" replaced by "a cryonics organization" (I recommend https://www.alcor.org/rapid/ or https://www.brainpreservation.org/). Does anyone want to take advantage of my donation match?
I don't want to argue about what to do in general, but here in particular my "accusation" consists of doing the math. If I got it wrong, I'm sure others got it wrong too, and it would be useful to clarify that publicly.
On that note, I've sent this post along to Lucius of the GivingMultiplier team.
This doesn't change the "indistinguishable from if I gave X" property, but it is a thing that would have been easy to check before posting.
I did check. As you said, it doesn't change the conclusion (it actually makes it worse).
Second, point (b) matters. It seems like a bold assumption that EA charities have reached "market efficiency".
I'm >50% sure that it doesn't fare better, but maybe. In any case, I specified in my OP that my main objection was (a).
Thus, if you actually think one of the "EA" choices at GivingMultiplier is more valuable than the rest, it seems very likely that you contribute more to their work by choosing them to be matched.
Yep, I did mention that in my OP.
Did you see anything on the site that actually seemed false to you?
No, I also mentioned this in OP.
Give people an incentive to think about splitting their donation between "heart" and "head", by...
There isn't really a real incentive, though. I feel like there's a motte-and-bailey: the motte is that you get to choose one of the 9 charities; the bailey is that the matching to the local charity is actually meaningful.
and the local charity of their choice
That's meaningless, as I showed in the OP.
If you think they could have been even more clear, or think that most donors will believe something different despite the FAQ, you could say so. But to say that people who use the match "don't understand what's going on" is both uncharitable and, as best I can tell, false.
Importance: not really important to read this comment
Update: I updated; see my reply
GivingMultiplier's description according to the EA newsletter^1:
Let's assume Effective_Charity and Local_Charity.
If you were going to give 100 USD to Local_Charity, but instead donate 10 USD to Effective_Charity and 90 USD to Local_Charity, GivingMultiplier will give 9 USD to Local_Charity and 1 USD to Effective_Charity, so there's now 99 USD going to Local_Charity and 11 USD going to Effective_Charity. But GivingMultiplier would have given that money to Effective_Charity anyway. So for the donor, this is indistinguishable from donating 99 USD to Local_Charity and 1 USD to Effective_Charity, just done in a more obscure way.^2
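The equivalence claimed above can be checked with a few lines of arithmetic. This is a minimal sketch using the post's illustrative numbers (not GivingMultiplier's actual matching rates), and the 10 USD match-pool figure is the post's assumption that the matcher would have given that money to Effective_Charity anyway:

```python
# Illustrative numbers from the post, not GivingMultiplier's actual rates.
MATCH_POOL = 10  # USD the matcher would give to Effective_Charity regardless

# Scenario: donor splits their 100 USD and gets matched.
donor_local, donor_effective = 90, 10
match_local, match_effective = 9, 1  # the 10 USD match pool, split 9/1

total_local = donor_local + match_local              # 99 USD to Local_Charity
total_effective = donor_effective + match_effective  # 11 USD to Effective_Charity

# Baseline (donor doesn't participate): the match pool goes to
# Effective_Charity anyway, so Effective_Charity gets MATCH_POOL regardless.
marginal_local = total_local                          # 99
marginal_effective = total_effective - MATCH_POOL     # 1

print(marginal_local, marginal_effective)  # 99 1
```

So relative to the baseline, the donor's participation moves 99 USD to the local charity and only 1 USD to the effective one, matching the post's conclusion.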
Also, sure, they are rather transparent about their process (at least in the newsletter; it wasn't obvious from the main page of the website), but still, their scheme mostly works only insofar as people don't understand what's going on.
A bunch of people don't know Why you shouldn’t let “donation matching” affect your giving, and so they will be misled by donation matches. If EA charities don't use them, they might be at a disadvantage. So the reasoning might be that game theory favors also using this technique under a consequentialist moral framework – sort of like tit-for-tat with other charities, with deceiving donors as an externality.
One could argue that they should link to the piece against donation matching on their website, but maybe the two memes fit different environments – maybe it would mostly reduce how much people use that specific service to fill their donation-matching need, or something like that. I don't know; I'm trying to steelman it.
They might also want to know where people donate money, so they allow people to choose where some money goes among those 9 charities in exchange for knowing where they donate the rest of the money. And at the same time, they signal support for those 9 charities.
Consequences on the donors
If donation matches don't change how much donors give, but just where they give (which seems plausible to me), biasing them equally against all charities might actually help them make decisions that are more aligned with their worldview than if they were less biased with only a subset of them.
1) Their website actually gives different numbers, but the idea is the same.
2) Sure, there's the real choice of which of the 9 Effective Charities receives the money, but:
a) The part about local charities is a red herring
b) Those charities have probably sort of reached market efficiency (in the sense that large donors can rebalance their donations according to how much total funding they want each of them to have)