Thanks for writing this up! Really appreciate the clear and transparent writeup across hiring, output, and financial numbers, and think that more orgs (including Manifold!) should strive for this level of clarity. One thing I would have been curious to see is how much money came in from each funding source, haha.
I set up a prediction market to see how RP will do against its funding goals:
I don't see that security/privacy is especially important as a feature of a messaging system, when compared to something like "easy to use" or "my friends are already on it"
Basically all sensitive/important EA communication already happens over Slack or Gmail. This means that the switching consideration isn't especially relevant to "EA" specifically, vs just regular consumers.
This post reads as fairly alarmist against FB messenger, but doesn't do a good job explaining or quantifying what the harms of a possible security breach are, nor how likely such a breach might be
I don't think EA wants to be spending weirdness points convincing people to use a less-good system - switching costs are quite high!
Fwiw, I do agree that choosing good software is quite important - for example, I think EA orgs are way overindexed on Google Docs, and a switch to Notion would make any one org something like 10% more productive within 3 months.
This is really awesome! Along the things that Hauke mentioned around scientometrics, I'd love to figure out a native integration for predicting different kinds of metrics for new research papers. Then other scientists browsing on Arxiv can quickly submit their own thoughts on the quality and accuracy of different aspects of each paper, as a more quantitative and public way of delivering feedback to the authors.
A quick sketch: on every new paper submission, we automatically create markets for:
"How many citations will this paper have?"
"Will this paper successfully replicate within 1 year?"
"Will this paper be retracted in the next 6 months?"
Along with letting the author set up markets on any key claims made within the paper, or the sources the paper depends on
Manifold would be happy to provide the technical expertise/integration for this; we've previously explored this space with the folks behind Research.bet, whom I would highly encourage reaching out to as well.
Hey! Thanks for writing this up, I'm a huge fan of weird funding proposals haha. Let me try and summarize the proposal to see if I understand it. I found some of the terms confusing, so I'll refer to Donald's terminology in "quotes" and give my personal interpretation in (italics)
Set aside a "subsidy market" (aka funding pool) to match "investor" (allocator) money
Each "venture" (project) starts with a "cost bar" (funding target)
Investors buy into each venture via Dutch auction; call the total raised T.
The subsidy market scales up T by a multiplicative factor R; e.g. if R = 2.5, then the subsidy market provides a 150% match.
If R*T > cost bar, the venture is good to go; excess is returned to the subsidy market.
Later, once the venture is complete, "funders" (final oracular funders) decide how much good the venture achieved, and pay the total to the investors.
Excess funds are also returned to the investors.
None of the examples seem to illustrate the investors actually earning positive return, so I'll draw one up:
Project with cost bar $2m
Raises a total of 10k shares * $100 per share = $1m
Subsidy market scales this up to $2.5m (then takes back $0.5m since that's above the bar)
Project ends up spending $1.5m and generating 3m utils; the $0.5m excess is scaled down to $0.2m and sent to the investors ($20 per share)
Funder pays $3m for the project ($300 per share)
So in total:
Investors paid $1m and got $3.2m ($3m from the funder + $0.2m excess) = +2.2m
Project gained 1.5m to spend = +1.5m
Funders spent 3m = -3m
Subsidy market spent 1.5m, took back 0.5m, scaled down 0.3m = -0.7m
which all balances out.
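To sanity-check the ledger above, here's a quick sketch in Python of my accounting (variable names are my own; R = 2.5 and dollar figures in millions, as in the example):

```python
R = 2.5  # subsidy market's multiplicative factor

invested = 1.0          # $1m raised from investors (10k shares * $100)
scaled = invested * R   # $2.5m after the subsidy match
cost_bar = 2.0
returned_above_bar = scaled - cost_bar  # $0.5m back to the subsidy market

spent = 1.5
excess = cost_bar - spent               # $0.5m unspent by the project
to_investors = excess / R               # scaled down to $0.2m
to_subsidy = excess - to_investors      # remaining $0.3m

funder_payment = 3.0    # funder values the outcome at 3m utils -> $3m

investors = -invested + funder_payment + to_investors             # +2.2
project = spent                                                   # +1.5 to spend
funders = -funder_payment                                         # -3.0
subsidy = -(scaled - invested) + returned_above_bar + to_subsidy  # -0.7

# The four parties' gains and losses net out to zero.
assert abs(investors + project + funders + subsidy) < 1e-9
```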
The main new thing in this proposal seems to be the "subsidy market", which 1) pays out as a matching pool for projects which counterfactually wouldn't have been funded, and 2) absorbs surplus when a project is overfunded? And 2) seems to be an attempt to solve Scott's question of "who gets the surplus from a profitable venture"? It's this second part I'm most confused about
It's not clear to me that this subsidy market leads to better outcomes -- specifically, it seems to mess with the incentives such that the people running the venture don't care about spending the money well? Your first counterexample with the 20m utils seems to address this, but it's not very reassuring - the case where "the fact that $10m and $1m buy the same thing is known up front" is a pretty big ask, IMO.
Also, with the way the system is set up, the subsidy market seems to earn money when projects don't actually need its funding (in the High Returns example), and lose money when its funds are actually useful (in my example). This is deeply weird to me -- if I were viewing the subsidy market as a lender, it would seem "fair" somehow to pay it back extra if its funds were actually used, rather than when it sits by twiddling its thumbs.
One adjustment/framing that makes more intuitive sense to me is to make the subsidy market as just another shareholder; e.g. if it scales up T to 2.5T and thus is bankrolling 1.5T/2.5T = 60% of the operation, it should just get 60% of the total profit among all investors.
I've had the pleasure of meeting Isaak in person, and it's clear that thoughtfulness and agency are both values that he not only espouses, but also embodies. (Ask him sometime about his experience starting a utilitarian student movement -- before ever having heard of "effective altruism"!)
The Future Forum looks incredibly exciting and I would highly encourage you to apply~
I think impact markets should be viewed through that experimental lens, for what it's worth (it's barely been tested outside of a few experiments on the Optimism blockchain). I'm not sure if we disagree much!
Curious to hear what experiments and better funding mechanisms you're excited about~
I'm not sure that "uniqueness" is the right thing to look at.
Mostly, I meant: the for-profit world already incentivizes people to take high amounts of risk for financial gain. In addition, there are no special mechanisms to prevent for-profit entities from producing large net-negative harms. So asking that some special mechanism be introduced for impact-focused entities is an isolated demand for rigor.
There are mechanisms like pollution regulation, labor laws, etc which apply to for-profit entities - but these would apply equally to impact-focused entities too.
We should be cautious about pushing the world (and EA especially) further towards the "big things happen due to individuals following their local financial incentives" dynamics.
I think I disagree with this? I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
Agree that xrisk/catastrophe can happen via eg AI researchers following local financial incentives to make a lot of money - but unless your proposal is to overhaul the capitalist market system somehow, I think building a better competing alternative is the correct path forward.
Hm, naively - is this any different than the risks of net-negative projects in the for-profit startup funding markets? If not, I don't think this is a unique reason to avoid impact markets.
My very rough guess is that impact markets should be at a bare minimum better than the for-profit landscape, which already makes it a worthwhile intervention. People participating as final buyers of impact will at least be looking to do good rather than generate additional profits; it would be very surprising to me if the net impact of that was worse than "the thing that happens in regular markets already".
Additionally - I think the negative externalities may be addressed with additional impact projects, further funded through other impact markets?
Finally: on a meta level, the amount of risk you're willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. Basically, if you think existing funding mechanisms are doing a good job, and we're likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That's not my current read of our xrisk situation, but would love to be convinced otherwise!)
In July 2022, there still aren’t great forecasting systems that could deal with this problem. The closest might be Manifold Markets, which allows for the fast creation of different markets and the transfer of funds to charities, which gives some monetary value to their tokens. In any case, because setting up such a system might be laborious, one could instead just offer to set such a system up only upon request.
Manifold would be enthusiastic about setting up such a system for improving grant quality, through either internal or public prediction markets! Reach out (firstname.lastname@example.org) if you participate in any kind of grantmaking and would like to collaborate.
The primary blocker, in my view, is a lack of understanding on our team of how the grantmaking process operates - meaning we have less understanding of the use case (and thus how to structure our product) than eg our normal consumer product. A few members of OpenPhil have previously spoken with us; we'd love to work more to understand how we could integrate.
A lesser blocker is the institutional lack of will to switch over to a mostly untested system. I think here "Prediction Markets in the Corporate Setting" is a little pessimistic wrt motives; my sense is that decisionmakers would happily delegate decisions, if the product felt "good enough" - so this kind of goes back to the above point.
Specifically, we actually DO have a matching pool, but there are some properties of fixed-matching-pool QF that are not super desirable; e.g. it turns into a zero-sum competition for the fixed pool. We're trying to address this with a growing matching pool; would love to see if your mechanism here is the right fix. More discussion: https://github.com/manifoldmarkets/manifold/pull/486#issuecomment-1154217092
allocating "tips" on the comments of a particular market
And in the latter scenario, we had been thinking of a matching-pool-less approach of redistributing contributions according to the quadratic funding equation. But of course, the "I wanted to tip X but the commenter is getting less!" effect is always kind of weird. I like this idea of proportionally increasing commitments out of a particular limit; it seems like a much easier psychological sell.
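For reference, the standard quadratic funding equation (from Buterin/Hitzig/Weyl) that such a redistribution would be based on - a minimal sketch, where `qf_allocation` is my own illustrative helper, not Manifold code:

```python
from math import sqrt

def qf_allocation(contributions):
    """Standard quadratic funding: a project's total funding is the
    square of the sum of square roots of individual contributions;
    the match is the gap between that and the raw sum."""
    total = sum(sqrt(c) for c in contributions) ** 2
    match = total - sum(contributions)
    return total, match

# QF rewards breadth of support: four $25 tips earn a large match,
# while a single $100 tip earns none.
broad_total, broad_match = qf_allocation([25, 25, 25, 25])  # 400, 300
single_total, single_match = qf_allocation([100])           # 100, 0
```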
Really appreciate the animations btw - super helpful for giving a visual intuition for how this works!
Missing-but-wanted children now substantially outnumber unwanted births. Missing kids are a global phenomenon, not just a rich-world problem. Multiplying out each country’s fertility gap by its population of reproductive age women reveals that, for women entering their reproductive years in 2010 in the countries in my sample, there are likely to be a net 270 million missing births—if fertility ideals and birth rates hold stable. Put another way, over the 30 to 40 years these women would potentially be having children, that’s about 6 to 10 million missing babies per year thanks to the global undershooting of fertility.
It becomes clear that there's a lot of value in really nailing down your intervention the best you can. Having tons of different reasons to think something will work. In this case, we've got:
It's common sense that not being bit by mosquitos is nice, all else equal.
The global public health community has clearly accomplished lots of good for many decades, so their recommendation is worth a lot.
Lots of smart people recommend this intervention.
There are strong counterarguments to all the relevant objections, and these objections are mostly shaped like "what about this edge case" rather than taking issue with the central premise.
Even if one of these fails, there are still the others. You're very likely to be doing some good, both probabilistically and in a more fuzzy, hard-to-pin-down sense.
I really liked this framing, and think it could be a post on its own! It points at something fundamental and important like "Prefer robust arguments".
You might visualize an argument as a toy structure built out of building blocks. Some kinds of arguments are structured as towers: one conclusion piled on top of another, capable of reaching tremendous heights. But: take out any one block and the whole thing comes crumbling down.
Other arguments are like those Greek temples with multiple supporting columns. They take a bit more time to build, and might not go quite as high; but they are less reliant on any one column to hold the entire weight. I call such arguments "robust".
By preferring robustness, you are more likely to avoid Pascalian muggings, more likely to work on true and important areas, more likely to have your epistemic failures be graceful.
Some signs that an argument is robust:
Many people who think hard about this issue agree
People with very different backgrounds agree
The argument does a good job predicting past results across a lot of different areas
Robustness isn't the only, or even main, quality of an argument; there are some conclusions you can only reach by standing atop a tall tower! Longtermism feels shaped this way to me. But also, this suggests that you can do valuable work by shoring up the foundations and assumptions that are implicit in a tower-like argument, eg by red-teaming the assumption that future people are likely to exist conditional on us doing a good job.
I actually do think that getting Flynn elected would be quite good, and would be open to other ways to contribute. eg if phonebanking seems to be the bottleneck, could I pay for my friends to phonebank, or is there some rule about needing to be "volunteers"?
I have donated $2900, and I'm on the fence about donating another $2900. Primarily, I'm not sure what a marginal dollar to the campaign will accomplish -- is the campaign still cash-constrained?
Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer among my non-EA acquaintances, defending the importance of (1), (2), or (3) - great breakdown, btw. I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your thoughts!
I actually do have some amount of confidence in this view, and do think we should think about fulfilling past preferences - but totally agree that I have not made those counterpoints, alternatives, or further questions available. Some of this is: I still just don't know - and to that end your review is very enlightening! And some is: there's a tradeoff between post length and clarity of argument. On a meta level, EA Forum posts have been ballooning to somewhat hard-to-digest lengths as people try to anticipate every possible counterargument; I'd push for a return to more of Sequences-style shorter chunks.
I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can't change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary.
I still believe in (2), but I'm not confident I can articulate why (and I might be wrong!). Once again, I'd draw upon the framing of deceptive or counterfeit utility. For example, I feel that involuntary wireheading or being tricked into staying in a simulation machine is wrong, because the utility provided is not a true utility. The person would not actually realize that utility if they were cognizant that this was a lie. So too would the conservationist laboring to preserve biodiversity feel deceived/not gain utility if they were aware of the future supplanting their wishes.
Can we change the past? I feel like the answer is not 100% obviously "no" -- I think this post by Joe Carlsmith lays out some arguments for why:
Overall, rejecting the common-sense comforts of CDT, and accepting the possibility of some kind of “acausal control,” leaves us in strange and uncertain territory. I think we should do it anyway. But we should also tread carefully.
(but it's also super technical and I'm at risk of having misunderstood his post to service my own arguments.)
In terms of one specific claim: Large EA Funders (OpenPhil, FTX FF) should consider funding public goods retroactively instead of prospectively. More bounties and more "this was a good idea, here's your prize", and less "here's some money to go do X".
I'm not entirely sure what % of my belief in this comes from "this is a morally just way of paying out to the past" vs "this will be effective at producing better future outcomes"; maybe 20% compared to 80%? But I feel like many people would only state 10% or even less belief in the first.
To this end, I've been working on a proposal for equity for charities -- still in a very early stage, but as you work as a fund manager, I'd love to hear your thoughts (especially your criticism!)
I deeply do not share the intuition that younger versions of me are dumber and/or less ethical. Not sure how to express this but:
17!Austin had much better focus/less ADHD (possibly as a result of not having a smartphone all the time), and more ability to work through hard problems
17!Austin read a lot more books
17!Austin was quite good at math
17!Austin picked up new concepts much more quickly, had more fluid intelligence
17!Austin had more slack, ability to try out new things
17!Austin had better empathy for the struggles of young people
This last point is a theme in my all-time favorite book, Ender's Game - that the lives of children and teenagers are real lives, but society kind of systematically underweights their preferences and desires. We stick them into compulsory schooling, deny them the right to vote and the right to work, and prevent them from making their own choices.
Thanks - this comparison was clarifying to me! The point about past people being poorer was quite novel to me.
Intuitively for me, the strongest weights are for "it's easier to help the future than the past" followed by "there are a lot of possible people in the future", so on balance longtermism is more important than "pasttermism" (?). But I'd also intuit that pasttermism is under-discussed compared to long/neartermism on the margin - basically the reason I wrote this post at all.
A friend of mine (Eric Jang) has proposed a mechanism to game the "Donating Appreciated Securities" aspect of US tax law. Say Alice and Bob both want to send $100 to Givewell, and deductions are worth 30% to them. If they just sent the money, Givewell would get $200: $140 from Alice and Bob, and $60 from the US Govt.
Instead, they create a security that costs $100 and says "In 1 year, flip a coin; the winner gets $200". The winner could send that to Givewell and take a $60 deduction. Meanwhile, the loser can take a $30 capital loss on their taxes. In effect, Givewell is still getting $200, but now Alice + Bob only pay a total of $110, with the US Govt contributing $90.
I'm not an accountant, and don't know if it actually works this way -- but if so, this might be a natural fit for the EA Funds Donor Lottery haha.
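A quick sanity check of the two scenarios (my own accounting; this assumes deductions and capital losses are both worth exactly 30 cents on the dollar, which real tax treatment may well not match):

```python
DEDUCTION_RATE = 0.30

# Scenario 1: Alice and Bob each donate $100 directly.
givewell_1 = 200
donors_1 = 2 * 100 * (1 - DEDUCTION_RATE)  # $140 net out of pocket
govt_1 = givewell_1 - donors_1             # $60 from the US Govt

# Scenario 2: each buys $100 of the coin-flip security.
winner_cost = 100 - DEDUCTION_RATE * 200  # donates $200, deducts $60 -> $40 net
loser_cost = 100 - DEDUCTION_RATE * 100   # $30 capital loss -> $70 net
givewell_2 = 200                          # the winner's full $200 payout
donors_2 = winner_cost + loser_cost       # $110 total
govt_2 = givewell_2 - donors_2            # $90 from the US Govt
```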
Thanks for writing this post! All three of these strategies are things I've separately worked out and have tried recommending people, and it's especially cool to have them all gathered together in one place, with background and how-tos for each. I'll definitely be linking more to this in the future.
FWIW: highly, highly recommend LessWrong's editing services! Justis Mills in particular gave excellent feedback, across the gamut of low-level formatting to high-level content ideas, in a stunningly short amount of time (<48h, iirc).
"Structural capital is the ability of the holder to absorb resources (e.g. people or money) and turn them into useful things". What useful things has EA produced (exclusive of fundraising and converting more EAs)? I think e.g. the outcomes around developing world health interventions are really great, but it's not clear to me how much of that is counterfactually attributable to EA; would the Gates foundation or somebody else have picked it up anyways?
Competent management: it feels like excellent management and managers are in short supply; there are a lot of people who do direct work (research, community work), but few managers and even fewer execs on the level of "VP or director at top series-A Silicon Valley startup"
Well written code: maybe the comparison to SV is especially harsh here, but I've been thinking that EA needs better software (still WIP). Software is an incredibly high-leverage activity, and I'd claim that eg most of the world's productivity gains in the last two decades can be attributed to software; but EA draws from a philosophical/academic tradition and thus wayyy overvalues "blogging" over "coding"
I love the framing of "structural capital", and would tentatively state that EA as a movement has much less structural capital than I would expect, relative to its amount of financial/human/network capital. In fact, I would argue that EA is bottlenecked on structural capital.
It seems to me like EA has a ton of money, a bunch of really smart people, and the ear of decisionmakers... but has had at best mixed results converting this into effective organizations, good ops, or good code. This is relative to my experience in the Silicon Valley tech scene, which feels like the best point of comparison. (You may draw different conclusions compared to e.g. academia)
One question I would be very interested in: how much of the money & people are being spent acquiring more money & people, vs being converted into structural capital?
Setting up information markets for these questions here:
FWIW we'd love something like this in Manifold too, but that's probably a bit farther out; Metaculus is much better developed in terms of complex in-depth estimation/forecasting, while Manifold is trying to focus on being as simple as possible.
Logically structure your writing as a pyramid. Present information as it is needed. Your reader shouldn't have to jump around your document, like they would for a piece of code.
I'd recommend structuring your code to not require jumping around either! E.g. group logic together in functions; put function and variable declarations close to where they are used; use the most local scope possible.
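A toy sketch of what I mean (hypothetical example, not from the post) - the same computation written so the reader jumps around vs written with logic grouped and names declared where they're used:

```python
# Harder to read: state declared up front and mutated across separate
# loops, forcing the reader to jump around to follow one value.
def order_total_scattered(orders):
    total = 0
    discounted = []
    TAX = 0.08
    for price in orders:
        discounted.append(price * 0.9)  # apply a 10% discount
    for price in discounted:
        total += price * (1 + TAX)      # then add tax
    return total

# Easier to read: the constant sits next to its only use, and the
# whole discount-then-tax pipeline reads top to bottom in one line.
def order_total_local(orders):
    TAX = 0.08  # used only in the line below
    return sum(price * 0.9 * (1 + TAX) for price in orders)
```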
A coworking space ($300/mo), or someplace which isn't your own house/bedroom, to have a better separation of work and life
A second monitor! Doubling my screen real estate increased my productivity by a (wild guess) 10%, easily an incredible investment. If you have a laptop, I'd recommend this portable second monitor ($250): https://www.amazon.com/gp/aw/d/B087792CQT
Books! Ramit Sethi has an excellent heuristic that goes something like "If you're thinking about buying a book, just buy it". Don't worry about finishing (or even starting!) every book you buy; just increase your access to them to increase the expected amount of book content you consume.
That could be the case! Although, this would only be harmful if 1) Metaculus provides really good forecasts 2) in a way Manifold could not. I'm curious which parts of Manifold feel less good for forecasting compared to Metaculus, and would love to see how we can improve.
I actually think the products are mostly aimed towards different segments, just due to the nature of complexity of the products; superforecasters will probably enjoy the power of modeling a specific probability distribution in Metaculus, while more everyday users will appreciate the simple buy YES/NO choices in Manifold.
Donor lotteries are a good case study; they seem great (I sent my donations last year to the EA Funds lottery) but also very underutilized, which should be a warning sign for other kinds of weird altruistic schemes. I'm wondering how much of this is lack of promotion, and how much that the arguments in favor are too hard to grok. I'm also hopeful that the "skill" aspect of a prediction market makes it more attractive than a straight lottery.
3 -- There's two kinds of incentive alignment, I think?
First, there's "do I win the most points for putting down accurate predictions", and I think both proper scoring rules & prediction markets do a reasonable job of this? Possibly proper scoring rules do even better than markets here.
Second, there's "why do I care about points", which afaict is much harder to get right outside of prediction market settings. (This might not be standard usage of "incentive-alignment", for which I apologize). Metaculus, for example, is a positive-sum system, and it's not obvious how to pay people for good predictions through that. Metaculus does run cash tournaments on specific topics; my vague understanding is that the tournament structure encourages high-variance bets, in order to have a better shot at top prizes?
As a novice to the forecasting space, I'm sure I'm missing a lot of options; I'm very curious about which forecasting incentive structures work well in your experience!
I think your list covered it! Primarily, we'd appreciate advice on organizational structure (for our main product; nonprofit branch; offshore crypto subsidiary) and on regulatory barriers (figuring out what kinds of prediction market payouts would be acceptable)
This sounds cool - Manifold Markets absolutely would have benefited (and currently would still benefit!) from easy to access, EA-aligned legal help. We are based in the US, though, so consider this a call for "a lawyer admitted in a United States jurisdiction" to take on similar work!
Thank you for taking the time to write this response!
I'm not exactly sure what premise downvoters are reading from my question. To be clear, I think the war is a horrible idea and it's important to punish defection in a negative-sum way (i.e. impose sanctions on countries in violation of international law).
The main point I wanted to entertain was: it's sad when we have to impose sanctions on countries; lots of people will suffer. In the same way it's sad when a war is fought, and lots of people suffer. We should be careful not to treat economic punishment as qualitatively different from, or intrinsically superior to, direct violence; it's a question of how much net utility different responses produce for the world.
What's the QALY cost of the sanctions on Russia? How does it compare to the QALY lost in the Ukraine conflict?
My sense of the media narrative has been "Russia/Putin bad, Ukraine good, sanctions good". But if you step back (a lot) and squint, both direct warfare and economic sanctions share the property of being negative-sum transactions. Has anyone done an order-of-magnitude calculation for the cost of this?
Quick stab: Valuing one QALY at $100k (rough figure for US), Russian GDP was $1.4T; the ruble has lost 30% of its value. If we take that to be a 10% contraction, $140B/$100k = 1.4M QALY lost; if 80 QALY = 1 life, then 17.5k lives lost.
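Spelling that arithmetic out (same rough inputs as above; purely an order-of-magnitude sketch, and the 10% contraction figure is my guess, not data):

```python
QALY_VALUE = 100_000   # rough $/QALY figure for the US
RUSSIAN_GDP = 1.4e12   # $1.4T
CONTRACTION = 0.10     # guessed GDP contraction from the 30% ruble drop

gdp_lost = RUSSIAN_GDP * CONTRACTION  # $140B
qalys_lost = gdp_lost / QALY_VALUE    # 1.4M QALYs
lives_lost = qalys_lost / 80          # ~17.5k lives, at 80 QALYs per life
```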