Experiment in Retroactive Funding: An EA Forum Prize Contest 2022-06-01T21:15:09.031Z
Chaining Retroactive Funders to Borrow Against Unlikely Utopias 2022-04-19T18:25:57.992Z
Thoughts on the Transparent Newcomb’s Problem 2022-04-15T21:05:37.948Z
Toward Impact Markets 2022-03-15T22:08:02.787Z
Are there highly leveraged donation opportunities to prevent wars and dictatorships? 2022-02-26T03:31:43.400Z
How do you stay emotionally motivated while working on highly specific problems? 2021-05-02T14:22:42.185Z
How to get up to speed on a new field of research? 2021-03-01T00:36:02.124Z
How to work with self-consciousness? 2021-02-03T18:53:09.341Z
How do you balance reading and thinking? 2021-01-17T13:47:57.526Z
How do you approach hard problems? 2021-01-04T14:00:25.588Z
How might better collective decision-making backfire? 2020-12-13T11:44:43.758Z
Summary of Evidence, Decision, and Causality 2020-09-05T20:23:04.019Z
Self-Similarity Experiment 2020-09-05T17:04:14.619Z
Modelers and Indexers 2020-05-12T12:01:14.768Z
Denis Drescher's Shortform 2020-04-23T15:44:50.620Z
Current Thinking on Prioritization 2018 2018-03-13T19:22:20.654Z
Cause Area: Human Rights in North Korea 2017-11-20T20:52:15.674Z
The Attribution Moloch 2016-04-28T06:43:10.413Z
Even More Reasons for Donor Coordination 2015-10-27T05:30:37.899Z
The Redundancy of Quantity 2015-09-03T17:47:20.230Z
My Cause Selection: Denis Drescher 2015-09-02T11:28:51.383Z
Results of the Effective Altruism Outreach Survey 2015-07-26T11:41:48.500Z
Dissociation for Altruists 2015-05-14T11:27:21.834Z
Meetup : Effective Altruism Berlin Meetup #3 2015-05-10T19:40:40.990Z
Incentivizing Charity Cooperation 2015-05-10T11:02:46.433Z
Expected Utility Auctions 2015-05-02T16:22:28.948Z
Telofy’s Effective Altruism 101 2015-03-29T18:50:56.188Z
Meetup : EA Berlin #2 2015-03-26T16:55:04.882Z
Common Misconceptions about Effective Altruism 2015-03-23T09:25:36.304Z
Precise Altruism 2015-03-21T20:55:14.834Z
Telofy’s Introduction to Effective Altruism 2015-01-21T16:46:18.527Z


Comment by Dawn Drescher (Telofy) on The Next EA Global Should Have Safe Air · 2022-09-24T20:01:19.562Z · EA · GW

Another note on 4: A friend of mine contracted Covid at EAGx and says that she knows of many people who have. That’s just one pick from almost a thousand people. Her bubble may be unusually Covidious due to being a bubble with Covid, though. So I don’t think Microcovid overestimates the risk of infection.

I’ve so far used the individual’s risk of infection and multiplied it by the number of individuals. But of course these people infect each other, so they are very much not independent. I would imagine that an EAG has either very few or very many infections. So that would require tracking the number over the course of several events to be able to average over them.
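A toy simulation makes the non-independence point concrete: with the same average per-person risk, correlated transmission produces a much wider spread of outcomes across events, so a single event tells you little. All parameters here are hypothetical, chosen only to illustrate the shape of the problem:

```python
import random
from statistics import mean, pstdev

random.seed(0)

N = 1000        # attendees (hypothetical)
P_BASE = 0.009  # per-person infection risk per event (assumed, Microcovid-style)

def simulate(events, correlated):
    results = []
    for _ in range(events):
        if correlated:
            # Shared factor: a minority of events are "hot" (superspreading),
            # most are quiet. The factors average to 1, preserving the mean risk.
            p = P_BASE * random.choice([0.1, 0.1, 0.1, 3.7])
        else:
            p = P_BASE
        results.append(sum(random.random() < p for _ in range(N)))
    return results

indep = simulate(2000, correlated=False)
corr = simulate(2000, correlated=True)
print(f"independent: mean={mean(indep):.1f}, sd={pstdev(indep):.1f}")
print(f"correlated:  mean={mean(corr):.1f}, sd={pstdev(corr):.1f}")
```

The means come out nearly identical, but the correlated version's standard deviation is several times larger, which is why averaging over several events would be needed.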

But a relatively Covid-conscious event like the Less Wrong Community Weekend may also cause or be correlated with more people afterwards reporting their Covid infections. A more Covid-oblivious EAG probably suffers underreporting afterwards. Maybe 10x from the same source that causes people not to fill in feedback surveys unless they are strongly coerced to and maybe another 10x from bad tests and bad sample-taking.

Some people don’t have the routine figured out of rubbing the swab first against the tonsils and then sticking it through the nose all the way down into the throat. Plus there are order-of-magnitude differences in the sensitivity of the self-tests. Bad tests and bad sample-taking can easily make a difference of 10x among the people who think they just had a random cold. So maybe a follow-up survey should ask about symptoms rather than confirmed positive tests, be embedded among various other feedback questions (so that it’s not just filled in by people with Covid), and then be used as a sample to extrapolate to the whole attendee population.

I’ve been trying to find studies on medical conferences but the only one I could find had various safety mechanisms in place, very much unlike EAGx, so it’s unsurprising that very few people got Covid. (I’m assuming that the vaccination statuses of the attendees are similar between a medical conference and an EAG.)

Comment by Dawn Drescher (Telofy) on The Next EA Global Should Have Safe Air · 2022-09-24T19:46:44.141Z · EA · GW

I see! Yeah, I don’t have an overview of the bottlenecks in the biosecurity ecosystem, so that’s good to consider.

Comment by Dawn Drescher (Telofy) on The Next EA Global Should Have Safe Air · 2022-09-22T08:34:31.098Z · EA · GW
  1. Yeah, but I can see Guy’s point that there’s some threshold where an event is short enough that a social intervention is cheaper than a technical one, so that different solutions are best for different contexts. But I don’t really have an opinion on that.
  2. Hmm, true. Testing for fever maybe?
  3. Thanks!
  4. My model (based on Microcovid) would’ve predicted about 9 cases (3–26) for a 1,000-person event around nowish in Berlin. I don’t have easy access to the data of London back then, but the case count must’ve been higher. With these numbers we “only” lose about a year of EA time in expectation and have less than one case of long Covid.
Comment by Dawn Drescher (Telofy) on The Next EA Global Should Have Safe Air · 2022-09-21T16:27:26.062Z · EA · GW

At EAGx Berlin just now, I and a few others discussed 80/20 interventions.

My first suggestion was mandatory FFP2 or better masks indoors and many outdoor activities, ideally with some sort of protection from rain – a roof or tent.

Another participant anticipated the objection that it’s harder to read facial expressions with masks, which could make communication harder for those who are good at using and reading facial expressions. A counter-suggestion was hence to mandate masks only for the listeners during talks, since that is a time when they might fill a room with Covid spray but don’t need to talk.

Improving air quality is another good option – something I do a lot at home but haven’t modeled. It feels like one that is particularly suitable for EA offices and group houses.

The Less Wrong Community Weekend in Berlin was successful with very rigorous testing every day with the most sensitive test that is available.

All in all I would just like to call for a lot more risk modeling to get a better idea of the magnitude of the risks to EA and EAs, and then proportionate solutions (technical or social) to mitigate the various sources of risk. Some solutions may be better suited to short events, others to offices and group houses.

This seems all easily important enough that someone should quantitatively model it. 

I did the math for the last EAG London, though I underestimated the attendee count by 3–4x. (Does someone know the number?)

Without masks, the event cost 6 years of EA time (continuous, so 24 hours in a day, not 8 h). Maybe it was worth it, maybe not – hard to tell. But if everyone had worn N95 or better masks, that would’ve been down to about 17 days. They could’ve kept about 100% of the value of EAG while reducing the risk to < 1%.

If the event really had more like 900 attendees, then that’s almost 20 years of EA time that is lost in expectation through these events. I’m not trying to model this conservatively; I don’t know in which direction I’m erring.
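For illustration, the rough shape of that arithmetic can be sketched as follows. All inputs are hypothetical stand-ins chosen to land in the same ballpark, not the actual values from my Guesstimate model:

```python
# Rough expected-loss arithmetic (all inputs are hypothetical assumptions)
attendees = 900
infection_risk = 0.04      # per-attendee infection risk without masks (assumed)
mask_filtering = 0.99      # fraction of transmission blocked by universal N95s (assumed)
days_lost_acute = 7        # continuous days lost per acute case (assumed)
p_long_covid = 0.1         # probability of long Covid per case (assumed)
days_lost_long = 2000      # continuous days lost per long-Covid case (assumed)

def expected_days_lost(risk):
    cases = attendees * risk
    return cases * (days_lost_acute + p_long_covid * days_lost_long)

no_mask = expected_days_lost(infection_risk)
with_mask = expected_days_lost(infection_risk * (1 - mask_filtering))
print(f"no masks:  {no_mask / 365:.1f} years of EA time")
print(f"N95 masks: {with_mask:.0f} days of EA time")
```

The point the sketch makes is structural: the expected loss is linear in the infection risk, so anything that filters 99% of transmission cuts the expected time cost by 100x regardless of the other inputs.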

One objection that I can see is that maybe this increases the time lost from EAGs by some low single digit factor, and since the event is only 3 days long, that doesn’t seem so bad on an individual level. (Some people spend over a week on a single funding application, so if it’s rejected, maybe that comes with a similar time cost.)

Another somewhat cynical objection could be that maybe there’s the risk that someone doesn’t contribute to the effective altruism enterprise throughout two decades of their life because they were put off by having to wear a mask and so never talked to someone who could answer their objections to EA. Maybe losing a person like that is as bad as a few EAs losing a total of 20 years of their lives. This seems overly cynical to me, but I can’t easily argue against it either.

My Guesstimate model is here.

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-08-18T19:28:20.305Z · EA · GW

Indeed! I think this transition from impact markets to other sources of funding can happen quite naturally. A new, unknown researcher may enjoy the confidence of some close friends in her abilities but has little to show that would convince major funders that she can do high-quality research. But once she has used impact markets to fund her first few high-quality pieces of research, she will have a good track record to show and can plausibly access other sources of funding. Then she can choose between them freely and is no longer dependent on impact markets alone.

Comment by Dawn Drescher (Telofy) on By how much should Meta's BlenderBot being really bad cause me to update on how justifiable it is for OpenAI and DeepMind to be making significant progress on AI capabilities? · 2022-08-10T12:37:11.025Z · EA · GW

I’m quite confused about that too. I don’t know of any real statistics, but my informal impression is that almost everyone is on board with not speeding up capabilities work. There’s the vague argument floating around that actively impeding capabilities work would do nothing but burn bridges (which doesn’t seem right in full generality since animal rights groups also manage to influence whole production chains to switch to more humane methods that form a new market equilibrium), but all the pitches for AI safety work always stress all the ways in which the groups will be careful not to work on anything that might differentially benefit capabilities and will keep everything secret by default unless they’re very sure that it won’t enhance capabilities. So I think my intuition that this is the dominant view is probably not far off the mark.

But the recruiting for non-safety roles is (seemingly) in complete contradiction to that. That’s what I’m completely confused about. Maybe the idea is that the organizations can be pushed in safer directions if there are more safety-conscious people working at them, so that it’s good to recruit EAs into them, since they are more likely to be safety-conscious than random ML people. (But the EAs you’d want to recruit for that are not the usual ML EAs but probably rather ML EAs who are also really good at office politics.) Or maybe these groups are actually very safety-conscious and are years ahead of everyone else and are only gradually releasing stuff that they completed years ago to keep the investors happy, while keeping all the really dangerous stuff completely secret.

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-26T21:53:01.667Z · EA · GW

An alternative that we’ve been toying with is reverse charity fundraisers of sorts. You do your thing, and when you’re done, you publish it, and then there’s a reward button where anyone can reward you for it. “Your thing” can be doing research, funding research, copyediting research, etc.

I love the simplicity of it, but there are a few worries that we have when it comes to incentives for collaboration when participants have different levels of social influence. Still, it’s a very promising model in my mind.

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-26T21:36:12.648Z · EA · GW

By “measurement” do you mean the measurements of metrics that the payouts are conditional on (we don’t use those) or measurements of the extent to which the prizes encourage efforts that would not otherwise have happened (we’d be very interested in those)?

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-26T21:33:20.538Z · EA · GW

I haven’t but I’m aware of a forthcoming report by Rethink Priorities that covers prize contests. If you find more research on that – or on the similar dynamic of hopes for acquisitions incentivizing entrepreneurship – I’d be very interested!

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-26T21:29:28.448Z · EA · GW

Prizes: Yes, totally! I’ve found that people understand what I’m getting at with impact markets much quicker if I frame it as a prize contest!

That point system sounds interesting. Maybe you can explain it again in our call as I’m not sure I follow the explanation here. But we’re currently betting on indirect normativity, so it won’t be immediately applicable for us.

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-26T21:20:36.936Z · EA · GW

I think there were already prediction markets for future grants at the time when I did my first research into impact markets. Maybe they still exist.

The risk and prediction aspects that you mention factor somewhat into my value proposition for impact markets, but I’m not sure how important they are or generally how I feel about them.

My/our thinking is rather what I mentioned in the comment above: that we want to (1) radically reduce the time cost borne by funders, (2) expand the hiring pool for funders, and (3) enlist all the big networks of angel investors and impact investment funds in the search for the best funding opportunities. My thinking about the relative emphasis changes from time to time here as I become aware of new considerations. The risk angle could also make the top list, but of course only with fully informed investors who know exactly what they’re doing.

The biggest factor in my mind is currently the third one, followed by the first one. A quick Guesstimate model says that the third factor could 20x (5–85x) the accessible funding opportunities (access to thousands more social circles worldwide of people who are great at networking), and if hits-based funders currently aim for funding 10% successful projects, that translates to a reduction of about 10x in time cost.
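As a sketch of how such a Guesstimate-style estimate works, here is a lognormal whose 90% interval is set to roughly span the 5–85x range; the median and spread are assumptions chosen to match the numbers above, not outputs of the actual model:

```python
import math
import random
from statistics import median

random.seed(0)

# Lognormal multiplier for accessible funding opportunities (assumed parameters):
# median ~20x, with the 90% interval spanning roughly 5x to 85x.
mu = math.log(20)
sigma = math.log(85 / 5) / (2 * 1.645)  # solve for the 90% CI width

samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
print(f"median multiplier ≈ {median(samples):.0f}x")

# Separately: if hits-based funders aim for a ~10% success rate, then letting
# investors pre-filter projects cuts the funders' per-success time cost ~10x.
```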

I’m having a hard time guessing how the hiring pool might change. It will probably get vastly greater once grantmakers only have to know the priorities research and don’t also need to be great judges of character and of others’ project-specific skills, but funders might choose not to scale to absorb all of them.

I don’t know enough to exclude that this could be done with prediction markets, but to attract angel investors and impact investment funds, the prediction markets would have to involve real money, and that causes lots of legal hurdles. But even if we clear those, they would only put their money in if they can expect the risky equivalent of a riskless 10–30% APY. I’m a bit hazy on how the initial liquidity gets onto new prediction markets that people create (e.g., on Polymarket or Augur) and how the initial values of Yes and No are set, so it’ll take me a lot more learning before I can repeat the profitability math for prediction markets. Also, I don’t know yet how the markets would be resolved.

It seems all weird when I try to play through how this could work: You could establish the norm that researchers create markets on Augur that resolve positively if any of some set of retro funders endorse them. Then the researchers buy Yes. Then their impact investor friends buy Yes. Then the researchers sell out of some of their positions again to pay their rent etc. Then, if the project is successful, the retro funder comes in, buys just the right amount of No, and resolves the market to Yes. And by “just the right amount” I mean the sort of amount that just incentivizes investors to keep investing into projects like that without wasting money beyond that point. But that’s such a weird use of prediction markets… (On second thought, that wouldn’t work at all because there would not be enough liquidity on the Yes side for the researcher and the investor to buy it, right?)

Maybe someone with deeper knowledge of prediction markets can come up with a system that works well and has all the advantages though!

Update: Since there are maybe people here with more knowledge of prediction markets, it’s probably more productive to phrase this as a question: Would it be possible to create an Augur-like system where:

  1. altruistic funders can provide funds, a “prize pool” of sorts, with only a one-time effort,
  2. researchers can create markets at a low cost that try to predict whether any of a set of funders will endorse their research project and its results,
  3. investors can bet money on such endorsements,
  4. researchers can sell into those buys from investors to fund their project,
  5. funders can eventually resolve the markets with their endorsements and will thereby reward the investor and researcher to just the right extent, and
  6. funders have no costs if markets don’t resolve or somehow resolve to no?
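To make the intended flow concrete, here is a toy walkthrough of steps 1–6 with hypothetical numbers. It deliberately ignores how an Augur-style AMM would actually price shares and just tracks who pays whom:

```python
# Toy walkthrough of the six steps (all numbers hypothetical)

pot = 10_000          # (1) one-time prize pool provided by altruistic funders
yes_price = 0.20      # (2) initial market-implied endorsement probability (assumed)

# (3)+(4) The researcher mints 1,000 Yes shares; an investor buys them,
# which is what funds the project upfront.
shares = 1_000
researcher_funding = shares * yes_price
investor_cost = researcher_funding

# (5) The project succeeds, the funder endorses, and the market resolves Yes:
# each Yes share pays out 1.0, funded from the prize pool.
investor_payout = shares * 1.0
funder_cost = investor_payout - investor_cost

print(f"researcher raised: {researcher_funding:.0f}")
print(f"investor profit:   {investor_payout - investor_cost:.0f}")
print(f"funder pays:       {funder_cost:.0f} of the {pot} pot")
# (6) Had the market resolved No, the funder would have paid nothing and the
# investor would have eaten the 200 loss — the risk transfer the scheme needs.
```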
Comment by Dawn Drescher (Telofy) on Hiring? Avoid the Candidate Bystander Effect · 2022-07-26T09:41:46.938Z · EA · GW

[I feel like I don’t approach this topic as dispassionately as I usually do with epistemics, so please bear in mind this “epistemic status.”]

Even if it won't be that easy, I claim it will be easier with actual data than trying to solve the theoretical problem.

Indeed! I imagine that that trades off against personal biases. When you feel like you’ve won in a 1:1000 lottery at your dream job while being worried about your finances, it’s hard to think objectively about whether taking the job is really the best thing impartially considered. I’d much rather stand on the shoulders of a crowd of people who are biased in many different directions and have homed in on some framework that I can just apply when I have to make such a decision.

What?? Who says that?

Oh, sorry, not explicitly, but when I run into an important, opaque, confusing problem and most other people act like it’s not there, my mind goes to, “They must understand something that makes this a solved problem or non-issue that I don’t understand.” But of course there’s also the explanation that they’ve all concluded that someone else should solve it or that it’s too hard to solve.

Back in the day before EA, orgs that I was in touch with were also like, “The library, the clinic, and the cat yoga are all important projects, so we should split our funds evenly between them,” and I was secretly like, “Why? What about all the other projects besides these three? How do you know they’re not at least equally important? Are those three things really equally important? How do they know that? If I ask, will they hate me and our org for it? Or is it an infohazard, and if I ask, they’ll think about it and it’ll cause anomie and infighting that has much worse effects than any misallocation, especially if I’m wrong about the misallocation?”

It’s hard to say in retrospect, but I think my credence was split like “30% they know something I don’t; 30% it’s an infohazard; 30% something else is going on; and 10% I’m right.” I failed to take into account that the “10% I’m right” should have much more weight because of how important it would be if it turned out true despite the low probability, even though, conversely, I was very concerned about the dire effects of spreading a viral infohazard.

(After 1–2 years I started talking about it in private with close friends, and after 4 years, in 2014, I was completely out of the closet, when I realized that Peter Singer had beaten me to the realization by a few decades and hadn’t destroyed civilization with it.)

Now I feel like the situation is vaguely similar, and I want to at least talk about it to not repeat that mistake.

Would you link to the post(s) you're talking about?

Just need to try to find them again.

If productivity is really power-law distributed, that’d be a strong reason not to worry much about it because the top candidate is probably easy to identify. But without having engaged much with it, I’m worried that seeming outliers are often carried

  1. by network effects (i.e. they are a random person that got cast into the right spot at the right time and thus were super successful, but so would’ve been half of the rest of the candidates);
  2. by psychological effects  (e.g., maybe almost anyone who gets cast into a leading position will become more confident, have less self-doubt, and so create more output);
  3. by skillful Goodharting of the most legible metrics, including skillful narcissism (because these are the metrics that bestow social credit and that researchers might also have to rely on when studying job performance in general), at the expense of social cohesion, collaboration, and any other qualities that are harder to attribute to someone.

What makes things worse, or harder to study, is that there are probably always many necessary conditions for outsized success, some of which stem from the candidate and others from the position or people the candidate ends up working with. These need to be teased apart somehow.

Brian has this interesting article about the differences in expected cost-effectiveness among the top 50% of charities. It contains a lot of very general considerations that limit the differences that we can reasonably expect between charities. Maybe a similar set of considerations applies to candidates so that it’s unlikely that there are even 10x differences between the top 50% of candidates in subjective expectation.

With Less Wrong in 2013 maybe having an average and median IQ almost three standard deviations above average and the overlap of LW and EA, it’s easy for anyone but about 1:300 people to conclude that they probably don’t need to apply for most jobs. (Not that they’d be right not to – that’s an open question imo.) Whatever crystallized skills they have that are unique and relevant for the job, the IQ 150+ people can probably pick them all up within a year. That’s an oversimplification since there are smart people who will just refuse to learn something or otherwise to adapt to the requirements of the situation, but it feels like it’ll apply by and large.

An org I know did an IQ test as part of the application process, but one that was only calibrated up to IQ 130. That could be an interesting data point since my model would predict that some majority (don’t know how to calculate it) of serious applicants must’ve maxed out the score on it if the average IQ among them is in the 140 area. (By “serious” I mean to exclude ones who only want to fill some quota of applications to keep receiving unemployment benefits and similar sources of noise.)
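For what it’s worth, the majority in question can be computed directly under a normality assumption: with the guessed mean of 140 and a hypothetical SD of 15, roughly three quarters of serious applicants would sit above the test’s 130 ceiling.

```python
from statistics import NormalDist

# Assumed applicant IQ distribution: mean 140 (the guess above), SD 15 (hypothetical)
applicants = NormalDist(mu=140, sigma=15)

# Fraction of applicants scoring above the test's calibration ceiling of 130
maxed_out = 1 - applicants.cdf(130)
print(f"{maxed_out:.0%} of applicants would max out the test")
```

The result is sensitive to the assumed SD: a selected population would plausibly have a narrower spread than 15, which would push the fraction even higher.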

What’s ironic is that as a kid I thought that my 135 score was unlikely to be accurate because it’s a 1:100 unlikely score, so it’s more likely that I got lucky during the test in some fashion or it was badly calibrated.  (Related to the Optimizer’s Curse, but I didn’t know that term at the time.) Now among people whose average IQ is 140ish, it seems perfectly plausible. Plus several more tests all came out at 133 or 135. Yay, reference class tennis! Doesn’t reduce my confusion about what to do though.

This seems like a very interesting model that has made me much less worried about applications to very large hiring rounds in fields where the applicants’ plan Z is unlikely to be extremely harmful.

I wanted to play around with the models, reimplement them in Squiggle or Causal, and understand their sensitivity to the inputs better, but I never got around to it.

It’s been too long that I read this article, but it also seems very relevant!

This is another article (section) I referenced.

Another important input is the so-called “value drift,” which, in my experience, has nothing to do with value drift and is mostly people running out of financial runway and ending up in dead-end industry jobs that eat up all of their time until they burn out. (Sorry for the hyperbole, but I dislike the term a lot.)

More recent research indicates that it’s surprisingly low compared to what I would’ve expected. But I haven’t checked whether I trust the data to be untainted by such things as survival bias.

Comment by Dawn Drescher (Telofy) on Book a chat with an EA professional · 2022-07-25T23:36:58.656Z · EA · GW

Seeing your post on impostor syndrome in hiring, I’m wondering whether you’re maybe well-positioned to start a think tank to solve hiring among altruists. How to invest our time seems to be about as big of a question as how to invest our money, so it may also warrant a similar amount of effort to optimize.

There has been a lot of research on the money side, but on the time side, there are only 80k and Probably Good. It seems to me that both give you good answers if you’re a particular type of person and wonder what, very broadly, you should do with your life. But they address questions such as efficient moral trade between orgs with different goals or optimal cooperation between orgs with similar goals only very superficially by comparison. Maybe that problem warrants a new dedicated think tank.

I don’t know if that’ll end up requiring software to solve though.

Comment by Dawn Drescher (Telofy) on Hiring? Avoid the Candidate Bystander Effect · 2022-07-25T22:32:25.248Z · EA · GW

These considerations have certainly kept me from applying for any EA jobs for many years!

(I have my own EA startup now, which is probably the best of both worlds anyway, but that just means that this topic will become important for me from the other perspective.)

I’ve written about my worries here.

Basically, I feel like we’re back in 2006 before there was any EA or GiveWell, and someone gives me the advice that World Vision does good stuff and I should donate to them. It has about zero information content for me and leaves me just as ignorant about the best use of my resources as it found me. What are they trying to achieve and how? What are the alternatives? How do I compare them to each other? What criteria are important or irrelevant? How fungible are my contributions and what other activities am I leveraging?

Likewise with jobs I’m at a complete loss. How reliable are our interview processes? What are the probability distributions around the projected-performance scores that they produce? How can a top candidate know how much their distribution overlaps with that of the runner-up and by what factor they are ahead? Maybe even more importantly: How can a top candidate know what other options the runner-up will have so that the top candidate can decline the offer if the runner-up would otherwise go into AI capabilities (or is only an expected 2.5 rejections away from the AI capabilities fallback) or if the runner-up would otherwise have to give up on altruistic things because they’d run out of financial runway?

80,000 Hours has several insightful posts on the differences between the best and the second best candidate, reliability of interview processes, the trajectory of the expected added value of additional applicants, etc. Those are super interesting and (too) often reassuring, but a decision where I want to work for the next 5+ years is a decision that is about as big for me as a decision where to donate half a million or so. So these blog posts don’t quite cut it for me.  Nor do I typically know enough about an organization to be sure that I’m applying the insights correctly.

What I would find more reassuring are adversarial collaborations, research from parties that don’t have any particular stake in the hiring situation, attempts to red-team the “Hiring is largely solved” kind of view, and really strong coordination between orgs. (Here’s a starting point.)

Questionnaires tell me that I have a serious case of impostor syndrome, so I don’t trust my intuitions on these things and don’t want to write a red-teaming attempt for fear it might be infohazardous if I’m wrong on balance. Then again I thought I must be somehow wrong about optimizing for impartial impact rather than warm-fuzzies in my charity activism before I found out about EA, and now I regret not being open about that earlier.

One thing that I have going for myself is that I’m not particularly charismatic, so that if I did end up as the supposed top candidate, I could be fairly sure that I could only have gotten there by skill, by chance, or by some unknown factor. So I feel like the riskiness of jobs for me forms a Laffer curve where jobs with no other applicants are trivially safe and jobs with hundreds of good applicants are safe again because the chance factor is really unlikely.  In between be dragons.

Imma suggested reserving a large pot of donation money (and time for volunteering and coaching, and a promise to keep applying for jobs for a year, not work on AI capabilities, etc.) and then signaling that I’ll donate this pot according to the preferences of the organizations that I’m applying to if they all reject me. I can’t make the pot large enough to be really meaningful, but maybe it can serve as a tie breaker.

Comment by Dawn Drescher (Telofy) on Book a chat with an EA professional · 2022-07-25T20:38:51.726Z · EA · GW

My top list contains:

  1. Impact markets obviously. :-3
  2. Creating a world-modeling ecosystem to quantify the impact of even highly uncertain interventions. Squiggle is the main effort in this space that comes to mind, but Aryeh Englander is also working on a unified Bayesian network approach to it.

Outside EA:

  1. I feel like it should be possible to improve note-taking in verbal conversations a lot with the sort of technologies we have now. The most effective note-takers I know still do it fully manually using magical multitasking skills. It feels like there’s a decent chance that this could be improved with speech-to-text + GPT-3-like summarization. Plus the user could be instructed to ask follow-up questions in a way that repeats something the other person said, to add some redundancy.
  2. Less serious: Swiping keyboards seem oddly broken. Unless I don’t know of the best one. “Thank yurt”? Seriously? What sort of crazy swiping mistake of mine could’ve possibly overridden the >> 20k times more likely collocational prior that “Thank y-” ought to be completed to “Thank you”? I didn’t even know what a yurt is! xD
Comment by Dawn Drescher (Telofy) on Book a chat with an EA professional · 2022-07-25T20:16:07.493Z · EA · GW

Whee! Thanks!

Comment by Dawn Drescher (Telofy) on Book a chat with an EA professional · 2022-07-20T13:38:03.350Z · EA · GW

Cool! I’d love to join in! I’ve created a “conversation menu” (h/t to a friend for the idea!) on my profile. Here’s a copy-paste that I probably won’t update when I update the original on the profile.

I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.

Some topics for potential calls:

  1. Impact markets
  2. Other market mechanisms for public and common goods
  3. AI safety, especially to avert s-risks
  4. AI timelines and whether to still buy NMN
  5. Evidential cooperation in large worlds
  6. Any tensions or gaps in our world models that you’ve been thinking about
  7. MIRI-esque decision theory
  8. Moral cooperation and trade
  9. Any events that have been on your mind lately
  10. Proportionality of safety mechanisms
  11. Phonetics and phonology
  12. Improving collective decision-making
  13. Creating powerful quantitative world models for priorities research
  14. Software engineering – Python, TypeScript, Solidity, etc.
  15. Why I’m always so impressed by mathematicians
  16. Autism, anxiety, worry, guilt OCD, depression, fatigue
  17. Cultural effects on mental health
  18. Understanding people
  19. English-language literature
  20. Impostor syndrome, or how incompetent exactly am I
  21. Anything you would like me or others to understand about you
  22. Well-being of farmed and wild animals today and within the next 1m years
  23. Including invertebrates and micro-organisms of course
  24. Well-being of organisms other than animals
  25. Life optimizations
  26. Bouldering, Trackmania, Othello, and lockpicking
  27. World and space governance
  28. Automated, decentralized governance
  29. Trees of nested simulations and incentives to create them
  30. Emulated minds
  31. Intersectionality, genderlessness, and social anxiety
  32. Acausal trade across levels and with other branches in the simulation tree
  33. Incubators for EA and especially longtermist projects
  34. Trade-offs between entrepreneurialness and safety
  35. Turning the Long Reflection into a Hasty Reflection
  36. EA community strategy and health
  37. Longterm prediction markets
  38. Cute animals and plushies
  39. Vertical agriculture to reduce insect suffering
  40. Anything else you’re interested in discussing
  41. What you’ve been up to the past few days
  42. Dummy item to reach 42

Also note that I’m not necessarily an expert in all of this stuff! I suck at Trackmania, for example.

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-17T07:41:39.573Z · EA · GW

Thanks! I think those are important especially in situations where organizations more or less explicitly collaborate on something but then fail to include the other as an owner of the certificates that they issue. That could cause the other to respond by ending the collaboration.

I’ve written about similar problems in The Bulk of the Impact Iceberg and The Attribution Moloch.

Comment by Dawn Drescher (Telofy) on Impact Markets: The Annoying Details · 2022-07-15T16:32:15.709Z · EA · GW

Thanks for this incredibly comprehensive overview!

I’d like to use this chance to position our GoodX project “Impact Markets” in this context. (Thanks for linking to our MVP post!) I’m trying to write this in a way that is also understandable to people who haven’t thought about impact markets much, so sorry in advance if some of it is unnecessarily verbose.

Our main selling point is simply that altruists are more likely to have some sort of investor in their network (because there are countless investors) than a funder whom we would trust (because there are only half a dozen or so – Open Phil, the Future Fund, the EA Funds, the SFF, et al.). Hence, to a funder, the altruist will succeed with the baseline probability of some suitably chosen reference class – say 10%. But to the investor, who knows their altruistic friend better, they are more likely to succeed – say 20%. There are also reasons to think that the investor can help the altruist succeed at a lower cost than the funder could. The result is that the funder and the investor can find a reward size that saves the funder money and time (they don’t need to negotiate it in each case) and yet earns the investor a profit.
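As a hedged sketch of this funder/investor wedge (all numbers illustrative, not from any real grant): suppose a project costs $10k to fund, the funder’s reference class gives it a 10% chance of success, and the better-informed investor assigns 20%. Then any prize between the investor’s break-even point and the funder’s prospective cost per success leaves both sides better off:

```python
# Illustrative numbers only: project cost, funder's reference-class
# success probability, and the investor's better-informed estimate.
cost = 10_000
p_funder, p_investor = 0.10, 0.20

# Prospective funding: the funder pays for ~10 projects per success.
funder_cost_per_success = cost / p_funder

# Retro funding: the investor fronts the cost and breaks even when
# prize * p_investor >= cost.
investor_breakeven_prize = cost / p_investor

# Any prize strictly between the two saves the funder money and
# earns the investor an expected profit.
print(investor_breakeven_prize, funder_cost_per_success)
```

So with these toy numbers, a prize anywhere between $50k and $100k per success works for both sides.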

An added benefit is that funders can hire a wider range of staff. These staff still need to be excellent at priorities research or at applying priorities research, but they don’t need to be excellent at startup picking anymore.

What Is The Basic Format Of The Market?

We want to implement these in a sensible order, from simple to complex. We haven’t considered “block” impact certificates outside an early experiment, so in our case they’re all fractional.

If “block” certs are legally easier, that could be a reason to rethink that! Our vision for where impact markets might go is that they integrate with the classic financial market: Think tanks (incorporated as, say, public benefit corporations) issue impact certificates, generate a profit from their sale, and control their stock price with the profits, e.g., by paying dividends or doing buybacks. All the investment happens on the level of the classic stock of the think tanks. The impact certificates are just extremely simple products, consumables, that are so simple that it is dead obvious to the SEC that they are not securities. Our (Owen’s) metaphor is (consumable) bottles of wine.

This seems to us like it would make impact markets hard to use for independent researchers (the main audience we have in mind for issuers) because they would have to first start a corporation for their blogs. But maybe there are ways around that. (Maybe accredited investors can invest into the stock of unincorporated associations.)

Note that this consideration (the previous two paragraphs) is much more fundamental than the question of whether certs should be fractional; it only bears on whether non-fractional certs are legally easier. But if the investment happens at the level of classic stock anyway, then impact certs could also be issued retroactively, which makes them legally unproblematic according to the lawyers we’ve talked to.

Note also that one of our cofounders, Dony, has thought a lot about dominant assurance contracts in the past. I remember him thinking about how they might combine with impact markets. That’s an interesting angle (dominant or not), but not one we’ve prioritized yet.

What Is Being Sold?

We want to keep it simple and so only focus on the core idea of making the “startup picking” part of funding delegatable. For that, the finer points of the metaphysical significance of impact purchases don’t matter. It just matters that there is a trusted retro funder that provides a prize pool and makes good decisions on what projects to reward. We want to leave the philosophy to philosophers or culture to figure out over time. (I can see that these are interesting questions, but I don’t think that that’s where the big counterfactual impact is.)

This sounds similar to what you write under “Credit for funding the project.” But I at least have moved away from the term “moral credit” for this because while some people interpret it simply as “I can get money for holding this contract,” others read metaphysical significance into it (and in various conflicting ways too).

The funders that we envision serving with our solution are funders like the EA Funds, Open Phil, the Future Fund, et al., who are only interested in the counterfactual impact and not in being seen as virtuous by others or by God or some other variation.

We call what our certs certify a “right to retroactive funding,” so a right that is completely limited to only the entitlement to some share of the prize pool to the extent to which it is awarded to the project. (We want to eventually make it possible for people to attach other rights to their certificates, such as “bragging rights” if they so choose, but it’ll be up to them to define how that should work. This agnosticism about the rights that people attach to certs is just an implication of our using the Hypercertificate standard. The author of the standard, David Dalrymple, and Protocol Labs are working on a solid, standards-based implementation of impact markets that we want to be compatible with.)

Another way to think about it is that if currently someone enters an article that they’ve written into a prize contest, then they can win a prize. If they’ve collaborated with five others on the article, they can agree to split the prize among themselves if they win it. They can (if they all so choose) make that agreement without ever clarifying the metaphysical significance of their agreement.

How Should The Market Handle Projects With Variable Funding Needs?

We’re going for the second solution from your list here because we put a strong emphasis on the precision and verifiability of impact certificates. There are various reasons for this:

  1. we want impact certificates to be like products that charities can sell and that investors can preorder,
  2. we want to make it easy for buyers to verify that the impact is not doubly sold under different framings, and
  3. we want to make it easy for buyers to verify that the set of issuers is complete according to strong cultural norms.

So “cure malaria in Senegal” is not a definition of an impact cert that would work on our market. It doesn’t have enough detail about the concrete actions that’ll be taken, it doesn’t make clear that the issuer really owns the cert to the extent that they claim, etc.

Instead something like this could work: “We want to collaborate with Concern Universal on a distribution of 5 million long-lasting insecticide-treated bednets in the Matam region of Senegal between Jan. 1, 2023 and Apr. 1, 2023. We currently own 60% of the certificate; Concern Universal owns 40%. The census, the distribution, and the follow-up surveys are conducted by paid staff of Concern Universal who have forfeited any claim to the certificate in their work contracts. No one else, to our knowledge, can make a legitimate claim to the certificate.”

This clarifies ownership, timeframe, location, scope, etc. The organization would sell scores or hundreds of such certificates that can all follow a standard template.

Unrelatedly, I don’t think this impact certificate would work: LLIN distributions are an intervention that is highly likely to work. Funders may assign a 90% chance to their success; a highly involved investor might assign a 95% chance. That gives the investor a very minimal edge over the funder. The funder would likely consider it unnecessary overhead to use impact markets for this, or, if they do end up using them, they’ll pay very little over the counterfactual prospective funding for the very slightly reduced risk. That won’t be enough for the investor to beat some counterfactual investment like a standard ETF. That’s in contrast to hits-based giving, where funders often assign a 10% chance of success, which leaves a lot of (multiplicative) room for highly involved investors to be more optimistic.
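To make the multiplicative-room point concrete (the probabilities are taken from the paragraph above; the helper function is just illustrative):

```python
def edge(p_funder: float, p_investor: float) -> float:
    """Ratio of the funder's cost per success (cost / p_funder) to the
    investor's break-even prize (cost / p_investor): the multiplicative
    headroom for the investor's profit. The project cost cancels out."""
    return p_investor / p_funder

print(edge(0.90, 0.95))  # ~1.06: barely covers any overhead
print(edge(0.10, 0.20))  # 2.0: room for a real profit
```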

Should Founders Be Getting Rich Off Impact Certificates?

We haven’t thought about this much and would love to be convinced one way or the other.

But if there are projects that everyone recognizes as good, then an impact marketplace will shift the surplus from final oracular funders (who are forced to pay prices more commensurate with their benefits) to someone else. If we let founders get rich, it shifts the surplus to founders. If we don’t let founders get rich, it shifts the surplus to fast investors who may not have added any value - that is, to the first people who snapped up the obviously-underpriced certificates. See Section 7 for more on this problem.

I don’t have a good solution to this latter issue other than either not using impact markets, or allowing founders to get rich. I sketch a hack-ish solution in Section 11C.

My take is basically the first, i.e. that impact markets won’t be used for projects where it doesn’t make sense for both sides (investors/founders and funders) to use them. These will just continue to use prospective funding. This just seems like a brute fact of the preferences of the market actors and not like something that is under the control of the marketplace at all.

Some more elaboration in case it’s not clear what I mean (but it probably is): Funders will pay as little as they can get away with paying. They may start by offering prizes that are just slightly more than what they would’ve paid prospectively for a single project (with, say, a 10% chance of success). Maybe no investor will be interested. Then they’ll raise their bid slightly again and again until investors start to be interested and the first few projects actually get funded from investor money. But this is still strictly less than what the funders would’ve otherwise paid in total for prospective funding. Sure, some of that money goes to investors who can buy yachts from it, but the counterfactual is not that that money would’ve gone to beneficiaries but that it (and more) would’ve gone to failed projects.

Or in made-up numbers: In the prospective-funding world, a funder pays out a total of $1m to 100 projects ($10k each), 10 of which succeed. In the retrospective world, the funder pays out $800k to 10 projects ($80k each), all of which have succeeded. These $80k are maybe split $40k to the founders and $40k to the investors, so that one could say that $40k or more have gone to waste in some sense. But that’s compared to a counterfactual that was never attainable in the first place. More realistically, the funder achieved the same impact while saving $200k that can now go to more projects to help beneficiaries.
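The made-up numbers above can be checked in a few lines (all figures are the illustrative ones from the paragraph itself):

```python
# Prospective world: 100 projects funded at $10k each, 10 succeed.
prospective_total = 100 * 10_000       # $1m spent for 10 successes
cost_per_success = prospective_total // 10

# Retrospective world: only the 10 successes are rewarded, $80k each,
# split evenly between founders and investors.
retro_total = 10 * 80_000
founder_share = investor_share = 40_000

# The funder achieves the same 10 successes while spending less.
savings = prospective_total - retro_total
print(cost_per_success, savings)  # 100000 200000
```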

Unrelatedly, established charities (as opposed to independent researchers or startup charities) often have track records that make impact markets redundant (see above) or have 18+ months of runway that retro funding would funge with, so that, one way or another, funders are probably not very interested in using impact markets for them.

If impact markets were “a success”, they might have to more than dectuple the amount they pay this person, with no change to their output.

Realistically, I think, a funder wouldn’t just dectuple the investment but would try to pay as little as possible – just enough to incentivize the creation of this researcher. By construction, the six-figure salary (say, $200k) is not enough to create an additional researcher, but maybe $300k is, in which case the funder would bid $300k, not $10m. Of course it could just so happen that the only price that incentivizes the creation of another researcher is no less than $10m, but then the counterfactual is not another researcher for $200k but no researcher at all.

Again the funder is well-advised to keep their true valuation of the impact (if they can come up with such a thing) a secret and just bid as low as they can get away with. (Just like when you buy a second-hand laptop. You may privately decide that you’ll bid up to $1k for it and yet start by bidding $100 because it might just be enough to convince the seller to part with the laptop.)

Admittedly, the current AI safety researchers may get greedy and try to use whatever leverage they have to get more money, but they could do that already.

How Do We Kickstart The Existence Of An Impact Market?

The Committed Pot of Money is the main or most fundamental solution we have in mind. The lower-down solutions are more powerful and they may be the desirable end states but they also come with risks that we’re wary of.

If I put myself into the shoes of an early investor on the Committed Pot of Money impact market, I don’t model myself as judging my chances to break even as a random draw. Rather, by buying in early I can buy more at a lower price, so that I later have an outsized share in the expected returns compared to later investors. But I also have an opinion on what the retro funder will value, and that model has a much stronger effect on my subjective expected returns than the amount that has been invested in total in all projects.

(If I buy SOL tokens at $5 because I think Solana is a great technology with a high success chance, I don’t start to think that Solana is going to be much less successful only because there is heavy investment into Solana and many other technologies. I’ll Aumann-update a bit on all the investments into Avalanche, Cardano, etc., but not overwhelmingly much.)

Should The Market Use Cryptocurrency?

My thoughts: Crypto-literate reviewers say that crypto exchanges take fiat and handle transactions within themselves in a non-crypto way - but then the crypto exists in case someone wants to bring it to a different exchange. This seems like a best of both worlds scenario.

Agreed. Plus we can just start with simple non-crypto solutions and keep all other options open in case there are eventually strong arguments to transition to something more like FTX (centralized) or Serum (decentralized).

How Should The Market Price IPOs?

We have an auction process that A Donor developed for us (see A Donor’s comment) that, we think, solves B neatly. We haven’t implemented it yet. (It is based on the view that it is desirable to reward people for aiding fast price discovery.)

How Should The Oracular Funder Buy Successful Projects?

We’ve considered both – buying shares immediately if possible or creating the funding floor of limit orders. For now we’re going with the first for simplicity’s sake, but I definitely see the appeal of the second.

What Should The Final Oracular Funder’s Decision Process Be?

My thoughts: Guess this one is up to the final oracular funders.

Agreed. And by implication I think it’ll be an auction where the funders bid as little as they can get away with paying and only bid more if they don’t observe enough projects getting started or invested in.

Who Are We Expecting To Have As Investors?

My thoughts: Institutional types probably won’t bite for a little while, so we'll need to be prepared for ordinary people.

Agreed. Plus, institutional types are again fewer in number and have to optimize more for scale, which causes them to suffer from some of the same problems that funders currently suffer from. Investors should instead be the sort of people who are sufficiently numerous and sufficiently unconstrained by scale that every independent researcher or startup entrepreneur happens to know one or two of them quite well.

It’s apparently now possible to become an accredited investor by passing an exam, which lowers the bar. But impact certs as we conceive of them can be consumed (or “dedicated” or “burned”) by altruists, which plausibly turns them into consumables. Consumables, such as wine, can also be preordered and resold. I find it plausible that impact certs are so much more like wine and so much less like stock that they are not securities in the eye of the law or the SEC. Plus, all of this is easier in many countries other than the US and UK.

Conclusion: What Kind Of Impact Market Should We Have?

I think the above has elucidated the design that we’re interested in. It’s similar to the “Maximum Capitalism” design but without the metaphysical bits about “losing credit” and such. The founders have done their work, and if they (maybe unwisely) elected to sell 100% of their impact cert, they’ve still done their work and can get their recognition and kudos for it. (For comparison, if I start a successful startup and make an exit, I can still claim to have started a successful startup and get the reputational benefits.)

Issues Around Unregistered Securities

We have talked with lawyers about this and people with more experience with quasi impact markets. The devil is in the details. There are various ways to avoid the impression that impact certs are securities – as mentioned above, we conceive of them like bottles of wine that can be preordered, stored, resold, and consumed – but we don’t yet have full legal clarity on the issue.

Issues Around Tax-Deductibility

In our view, investing in impact certs should not be tax-deductible. It’s done with a profit motive. But consuming the certificate is something that is altruistic, and it would make sense to see to it that that is tax deductible. We envision one potential system where you have the option to do all your cert trades through a nonprofit. The nonprofit just executes all your deals on your behalf. So long as you haven’t consumed your certs, the nonprofit does nothing else. But once you consume a cert, that nonprofit (not the issuer of the cert) writes the donation receipt for you.

Issues Around Oracular Funders’ Nonprofit Status

There may be larger prize contests (such as the XPrize) organized by nonprofits. One could research those to look for legal precedents.

Issues Around Governance And Principal-Agent Problems


Issues Around Middlemen Holding Money

We’re currently planning to let people transact directly with each other, to avoid these questions for the time being and stay lean.

Accidentally Encouraging High-Risk Negative-In-Expectation Projects

This is getting at some of the big crucial problems for us that we’ve been mulling over for a year or so. Here is a list of all the current risks and mitigation mechanisms that we see.

Attributed Impact (mentioned in the linked document) also addresses moral trade, the lack of which I consider an even greater problem, and a superset of the distribution-mismatch problem.

Using Impact Certificates For Evil

The above document also addresses this.

Incentivizing Reward-Hacking

That is an interesting failure mode to look out for.

People Could Lose A Lot Of Money

I think we should promote a norm that people shouldn’t invest any money in impact certificates that they don’t want to lose. Otherwise, I’m not sure this is any worse than eg crypto, which already lets small investors lose all their money quickly.

Agreed. A marketplace could also set a default limit on investments and only raise it for investors who have somehow proven that they are financially savvy. Only allowing accredited investors to invest would have that effect, though it may be a bit too exclusive.

Comment by Dawn Drescher (Telofy) on A tale of 2.75 orthogonality theses · 2022-07-08T23:10:55.698Z · EA · GW

Thanks for the thorough argumentation!

I’m unsure whether this is just a nitpick or too much of a personal take, but which precise version of the orthogonality thesis goes through has little effect on how worried I am about AGI, and I’m worried that the nonexpert reader this article is meant for will come away thinking that it does.

The argument for worry about AGI is, in my mind, carried by:

  1. Cooperation failures in multipolar AGI takeoffs, and
  2. Instrumental convergence around self-preservation (and maybe resource acquisition but not sure),

combined with the consideration that it might all go well a thousand times, but when the 1042nd AGI is started, it might not. And I imagine that once AGIs are useful, lots of them will be started by many different actors.

Conversely, I filed the orthogonality thesis away as a counterargument to an argument that I’ve heard a few times, something like, “Smarter people tend to be nicer, so we shouldn’t worry about superintelligence, because it’ll just be super nice.” A weak orthogonality thesis, a counterargument that just shows that that is not necessarily the case, is enough to defend the case for worry.

I think I subscribe to a stronger formulation of the orthogonality thesis, but I'd have to think long and hard to come up with ways in which that would matter. (I’m sure it does in some subtle ways.)

Comment by Dawn Drescher (Telofy) on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-07-08T16:58:33.538Z · EA · GW

I draw two more conclusions from this excellent post:

  1. That we should avoid strategizing around “pivotal acts.”
  2. That AGI labs should avoid excusing their research into AGI by just assuming that others are probably even closer to AGI so that it doesn’t make a difference that they are racing toward AGI too.
Comment by Dawn Drescher (Telofy) on Notes on quantifying the impact of hiring/funding EAs · 2022-06-29T17:36:01.176Z · EA · GW

Very cool formalization! What do you think of the following way of applying it:

  1. Hiring managers that are looking for similar candidates meet (e.g., online) to hash out a single standardized application process for all the similar open positions.
  2. When the applications are in and they have narrowed them down to the set of candidates any of them find at all interesting, they start the process.
  3. Is there a strong reason why they would need to agree on relative impact scores for their organizations? I imagine they’ll find it hard to agree on those. Maybe they can just assume a vector of [1, 1, 1, …] for all the impact scores?
  4. They add more pseudo-organizations to the mix, which could have an impact of -100 in spaces like AGI (representing the average non-safety AGI lab) or 0 in most other spaces. (They don’t have control over the choice between other orgs, so I don’t think it makes sense to add several different values, but there need to be as many pseudo-organizations as there are candidates, in case all organizations decide not to hire.)
  5. They generate the candidate-organization matrix but also include columns for “no candidate at org n,” because not hiring is also an option. (“No candidate” can “work” for multiple or all organizations in parallel, so this gets a bit complicated.)
  6. I think in most cases they can assume that everyone will be full time, which could simplify the process in those cases.
  7. Then they pick the row that maximizes the impact.

Step 5 seems like the one that’ll require a lot more work in practice. There could be an independent team of forecasters that does the estimation, or all hiring managers estimate this for all orgs, or the hiring managers of org A make the case for/against each candidate at org A, and then all other hiring managers estimate the impact.

There is also a risk that if one org chose wrong and its pick soon quits or underperforms, the remaining allocation is no longer optimal, because that person would have been a good fit at another org that no longer has the capacity to hire them.
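For what it’s worth, the allocation step (picking the impact-maximizing assignment, step 7) can be brute-forced for small candidate pools. This is only a hypothetical sketch – the org names, candidates, and fit scores are all made up:

```python
from itertools import permutations

# Hypothetical fit scores: fit[candidate][org] = estimated impact of
# that candidate working at that org; not hiring contributes 0.
orgs = ["OrgA", "OrgB"]
candidates = ["Ana", "Ben", "Cara"]
fit = {
    "Ana":  {"OrgA": 5, "OrgB": 2},
    "Ben":  {"OrgA": 4, "OrgB": 4},
    "Cara": {"OrgA": 1, "OrgB": 3},
}

def best_allocation(orgs, candidates, fit):
    # Pad the pool with None so every org can also choose "no hire".
    pool = candidates + [None] * len(orgs)
    best_score, best = float("-inf"), None
    for assignment in set(permutations(pool, len(orgs))):
        score = sum(fit[c][o] for c, o in zip(assignment, orgs)
                    if c is not None)
        if score > best_score:
            best_score, best = score, dict(zip(orgs, assignment))
    return best_score, best

score, alloc = best_allocation(orgs, candidates, fit)
print(score, alloc)  # 9 {'OrgA': 'Ana', 'OrgB': 'Ben'}
```

With many candidates and orgs one would swap the brute force for a proper assignment-problem solver, but the shape of the computation stays the same.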

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-26T17:31:38.263Z · EA · GW

First of all, what we’ve summarized as “curation” so far could really be distinguished as follows:

  1. Making access for issuers invite-only, maybe keeping the whole marketplace secret (in combination with #2) until we find someone who produces cool papers/articles and who we trust and then invite them.
  2. Making access for investors/retro funders invite-only, maybe keeping the whole marketplace secret (in combination with #1) until we find an impact investor or a retro funder who we trust and then invite them.
  3. Read every certificate either before or shortly after it is published. (In combination with exposé certificates in case we make a mistake.)

Let’s say #3 is a given. Do you think the marketplace would fulfill your safety requirements if only #1, only #2, or both were added to it?

Does your current plan not involve explaining to all the retro funders that they should consider the ex-ante EV as an upper bound?

It involves explaining that. What we wrote was to argue that Attributed Impact is not as complicated as it may sound but rather quite intuitive. 

How does the potential harm that other people can cause via Twitter etc. make launching a certain impact market be a better idea than it would otherwise be?

If you want to open a bazaar, one of your worries could be that people will use it to sell stolen goods. Currently these people sell the stolen goods online or on other bazaars, and the experience may be a bit clunky. By default these people will be happy to use your bazaar for their illegal trade because it makes life slightly easier for them. Slightly easier could mean that they get to sell a bit more quickly and create a bit more capacity for more stealing.

But if you enact some security measures to keep them out, you quickly reach the point where the bazaar is less attractive than the alternatives. At that point you no longer have any effect on how much theft goes on in the world in aggregate.

So the trick is to tune the security measures just right, so that they make the place less attractive to thieves than the alternatives and yet don’t impose prohibitively high costs on legitimate sellers.

Do you intend to allow people to profit from outreach interventions that attract new retro funders? (I.e., by allowing people to sell certificates of such outreach interventions.)

My intent so far was to focus on text that is accessible online, e.g., articles, papers, some books. There may be other classes of things that are similarly strong candidates. Outreach seems like a bad fit to me. I’ve so far only considered it once when someone (maybe you) brought it up as something that’d be a bad fit for an impact market and I agreed.

I disagree. I think this risk can easily materialize if the description of the certificate is not very specific (and in particular if it's about starting an organization, without listing specific interventions.)

Also a very bad fit for an impact market as we envision it. To be a good fit, the object needs some cultural rights along the lines of ownership or copyright associated with it, so market participants can agree on an owner. It needs a start and an end in time. It needs to generate a verifiable artifact. Finally, one shouldn’t try super hard to fit something into that mold that doesn’t fit. There are a bunch of examples in my big post. So a paper, article, book, etc. (a particular version of it) is great. Something ongoing like starting an org is not a good fit. Something where you influence others and most of your impact leverages behavior change in others is really awkward because you can’t credibly assign an owner.

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-26T09:38:55.550Z · EA · GW

Okay, but to keep the two points separate:

  1. Allowing people to make backups: You’d rather make it as hard as possible to make backups, e.g., by using anti-screenscraping tools and maybe hiding some information about the ledger in the first place so people can’t easily back it up.
  2. Web3: Seems about as bad as any web2 solution that allows people to easily back up their data.

Is that about right?

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-25T23:45:06.622Z · EA · GW

Shutting down an impact market, if successful, functionally means burning all the certificates that are owned by the market participants, who may have already spent a lot of resources and time in the hope to profit from selling their certificates in the future.

It could be done a bit more smoothly by (1) accepting no new issues, (2) completing all running prize rounds, and (3) declaring the impact certificates not burned and allowing people some time to export their data. (I don’t think it would be credible for the marketplace to declare the certs burned since it doesn’t own them.)

Also, my understanding is that there was (and perhaps still is) an intention to launch a decentralized impact market (i.e. Web3 based), which can be impossible to shut down.

My original idea from summer 2021 was to use blockchain technology simply for technical ease of implementation (I wouldn’t have had to write any code). That would’ve made the certs random tokens among millions of others on the blockchain. But then to set up a centralized, curated marketplace for them with a smart and EA curation team.

We’ve moved away from that idea. Our current market is fully web2 with no bit of blockchain anywhere. Safety was a core reason for the update. (But the ease-of-implementation reasons to prefer blockchain also didn’t apply so much anymore. We have a doc somewhere with all the pros and cons.)

For our favored auction mechanisms, it would be handy to be able to split transactions easily, so we have thought about (maybe, at some point) allowing users to connect a wallet to improve the user experience, but that would be only for sending and receiving payments. The certs would still be rows in a Postgres database in this hypothetical model. Sort of like how Rethink Priorities accepts crypto donations or a bit like a centralized crypto exchange (but that sounds a bit pompous).

But what do you think about the original idea? I don’t think it's so different from a fully centralized solution where you allow people to export their data or at least not prevent them from copy-pasting their certs and ledgers to back them up.

My greatest worries about crypto stem less from the technology itself (which, for all I know, could be made safe) than from the general spirit in the community that decentralization, democratization, ungatedness, etc. are highly desirable values to strive for. I don’t want to have to fight against the dominant paradigms, so doing it on my own server was more convenient. But then again, big players in the Ethereum space have implemented very much expert-run systems with no permissionless governance tokens and such. So I hope (and think) that there are groups that can be convinced that an impact market should be gated and curated by trusted experts only.

But even so, I consider a solution that is crypto-based beyond making payments easier more in the context of joining existing efforts to make them safer than as an action that would influence whether those efforts exist.

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-25T22:51:57.284Z · EA · GW

I love the insurance idea: compared to our previous ideas around shorting with hedge tokens that compound automatically to maintain a -1x leverage, collateral, etc. (see Toward Impact Markets), it also has the potential to solve the incentive problems that we face around setting up our network of certificate auditors! (Strong upvotes to both of you!)

(The insurance would function a bit like the insurance in Robin Hanson’s proposed tort law reform.)

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-25T22:36:38.499Z · EA · GW

Dawn’s (Denis’s) Intellectual Turing Test Red-Teaming Impact Markets

[Edit: Before you read this, note that I failed. See the comment below.]

I want to check how well I understand Ofer’s position against impact markets. The “Imagined Ofer” below is how I imagine Ofer to respond (minus language – I’m not trying to imitate his writing style though our styles seem similar to me). I would like to ask the real Ofer to correct me wherever I’m misunderstanding his true position.

I currently favor using the language of prize contests to explain impact markets unless I talk to someone intimately familiar with for-profit startups. People seem to understand it more easily that way.

My model of Ofer is informed by (at least) these posts/comment threads.

Dawn: I’m doing these prize contests now where I encourage people to help each other (monetarily and otherwise) to produce awesome work to reduce x-risks, and finally I reward the participants in the best of the projects. I’m writing software to facilitate this. I will only reward them in proportion to the gains from moral trade that they’ve generated, and I’ll use my estimate of their ex ante EV as a ceiling for my overall evaluation of a project.

This has all sorts of benefits! It’s basically a wide-open regrantor program where the quasi-regrantors (the investors) absorb most of the risk. It scales grantmaking up and down – grantmakers have ~10x less work and can thus scale their operations up by 10x, and the investors can be anyone around the world, so they can draw on their existing networks for their investments and consider many more, much smaller investments, or investments that require very niche knowledge or access. Many more ideas will get tried, and it’ll be easier for people to start projects even when they still lack personal contacts with the right grantmakers.

Imagined Ofer: That seems very dangerous to me. What if someone else also offers a reward and also encourages people to help each other with the projects but does not apply your complicated ex ante EV ceiling? Someone may create a flashy but extremely risky project and attract a lot of investors for it.

Dawn: But they can do that already? All sorts of science prizes, all the other EA-related prizes, Bountied Rationality, new prizes they promote on Twitter, etc.

Imagined Ofer: Okay, but you’re building software to make it easier, so presumably you’ll thereby increase the number of people who will offer such prizes and the number of people who will attract investments in advance, because the user experience and the networking with investors are smoother and because they’re encouraged to do so.

Dawn: That’s true. We should make our software relatively unattractive to such prize offerers and their audiences, for example by curating the projects on it such that only the ones that are deemed to be robustly positive in impact are displayed (something I proposed from the start, in Aug. 2021). I could put together a team of experts for this.

Imagined Ofer: That’s not enough. What if you or your panel of experts overlook that a project was actually ex ante net-negative in EV, for example because it has already matured and so happened to turn out good? You’d be biased in a predictably upward direction in your assessment of the ex ante EV. In fact, people could do a lot of risky projects and then only ever submit the ones that worked out fine.

Dawn: Well, we can try really hard… Pay bounties for spotting projects that were negative in ex ante EV but slipped through; set up a network of auditors; make it really easy and effortless to hold compounding short positions on projects that manage their -1x leverage automatically; recruit firms like Hindenburg Research (or individuals with similar missions) to short projects and publish exposés on them; require issuers to post collateral; set up a mechanism whereby it becomes unlikely that there’ll be other prizes with any but a small market share (such as the “pot”); maybe even require preregistration of projects to avoid the tricks you mention; etc. (All the various fixes I propose in Toward Impact Markets.)

Imagined Ofer: Those are only unreliable patches for a big fundamental problem. None of them is going to be enough, not even in combination. They are slow and incomplete. Ex ante negative projects can slip through the cracks or remain undetected for long enough to cause harm in this world or a likely counterfactual world.

Dawn: Okay, so one slips through, attracts a lot of investment, gets big, maybe even manages to fool us into awarding it prize money. It or new projects in the same reference class have some positive per-year probability of being found out due to all the safety mechanisms. Eventually a short-seller or an exposé-bounty poster will spot them and make a lot of money for doing so. We will react and make it super-duper clear going forward that we will not reward projects in that reference class ever again. Anyone who wants to get investments will need to make the case that their project is not in that reference class.

Imagined Ofer: But by that time the harm is done, even if only to a counterfactual world. Next time the harm will be done to the factual world. Besides, regardless of how safe you actually make the system, what’s important is that there can always be issuers and investors who believe (be it wrongly) that they can get their risky project retro-funded. You can’t prevent that no matter how safe you make the system.

Dawn: But that seems overly risk averse to me because prospective funders can also make mistakes, and current prizes – including prizes in EA – are nowhere near as safe. Once our system is safer than any other existing methods, the bad actors will prefer the existing methods.

Imagined Ofer: The existing methods are much safer. Prospective funding is as safe as it gets, and current prizes have a time window of months or so, so by the time the prizes are awarded, the projects that they are awarded to are still very young, so the prizes are awarded on the basis of something that is still very close to ex ante EV.

Dawn: But retroactive funders can decide when to award prizes. In fact, we have gone with a month in our experiment. But admittedly, in the end I imagine that cycles of a year or two are more realistic. That is still not that much more. (See this draft FAQ for some calculations. Retro funders will pay out prizes of up to 1000% in the success case, but outside the success case investors will lose all or most of their principal. They are hits-based investors, so their riskless benchmark profit is probably much higher than 5% per year. They’ll probably not want to stay in certificates for more than a few years even at 1000% return in the success case.)
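A minimal sketch of the holding-period intuition in that parenthesis (the 50%/year benchmark below is a made-up figure for illustration, not from the draft FAQ):

```python
import math

# Rough sketch: an investor holding a certificate that pays 10x (1000%)
# in the success case will not want to hold it past the point where their
# benchmark compounding return catches up with that multiple.
# The 50%/year benchmark is an assumed example figure.

def max_holding_years(payout_multiple: float, benchmark_rate: float) -> float:
    """Years t until (1 + benchmark_rate)**t equals payout_multiple."""
    return math.log(payout_multiple) / math.log(1 + benchmark_rate)

# Even if success were certain, a hits-based investor with a 50%/year
# benchmark would not want to stay in a 10x certificate much beyond ~5-6 years.
years = max_holding_years(10.0, 0.50)
assert 5 < years < 6
```

With success far from certain, the attractive holding period shrinks further, which is why cycles of a year or two seem realistic.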

Imagined Ofer: A lot more can happen in a year or two than in a month. EA, for example, looked very different in 2013 compared to 2015, but it looked about the same in January vs. February 2015. But more importantly, you write about tying the windfall clauses of AGI companies to retro funding with enormous budgets, budgets that surely offset even the 20 years that it may take to get to that point and the very low probability.

Dawn: The plan I wrote about has these windfalls reward projects that were previously already rewarded by our regular retro funders, no more.

Imagined Ofer: But what keeps a random, unaligned AGI company from just using the mechanism to reward anyone they like?

Dawn: True. Nothing. Let’s keep this idea private. I can unpublish my EA Forum post too, but maybe that’s the audience that should know about it if anyone should. As an additional safeguard against uncontrolled speculation, how about we require people to always select one or several actual present prize rounds when they submit a project?

Imagined Ofer: That might help, but people could just churn out small projects and select whatever prize happens to be offered at the time, when in actuality they’re hoping that one of these prizes will eventually be mentioned in a windfall clause, or that their project will otherwise be retro funded through a windfall clause or some other future funder who ignores the setting.

Dawn: Okay, but consider how far down the rabbit hole we’ve gone now: We have a platform that is moderated; we have relatively short cycles for the prize contest (currently just one month); we explicitly offer prizes for exposés; we limit our prizes to stuff that is, by dint of its format, unlikely to be very harmful; we even started with EA Forum posts, a forum that has another highly qualified moderation team. Further, we want to institute more mechanisms – besides exposés – that make it easy to short certificates to encourage people to red-team them; mechanisms to retain control of the market norms even if many new retro funders enter; even stricter moderation; etc. We’re even considering requiring preregistration, mandatory selection of present prize rounds (even though it runs counter to how I feel impact markets should work), and very narrow targets set by retro funders (like my list of research questions in our present contest). Compare that to other EA prize contests. Meanwhile, the status quo is that anyone with some money and a Twitter following can run a prize contest, and anyone can make a contract with a rich friend to secure a seed investment that they’ll repay if they win. All of our countless safeguards should make it vastly easier for unaligned retro funders and unaligned project founders to do anything other than use our platform. All that remains is that maybe we’re spreading the meme that you can seed-invest in potential prize winners, but that’s also something that is already happening around the world with countless science prizes. What more can we do?

Imagined Ofer: This is not an accusation – we’re all human – but money and the sunk-cost fallacy corrupt. For all I know this could be a motte-and-bailey type of situation: The moment a big crypto funder offers you a $1m grant, you might throw caution to the wind and write a wide-open, ungated blockchain implementation of an impact market.

Dawn: I hope I’ve made clear in my 20,000+ words of writing on impact market safety that were unprompted by your comments (other than the first one in 2021) that my personal prioritization has long rested on robustness over mere positive EV. I’ve just quit my well-paid ETG job as a software engineer in Switzerland to work on this. If I were in it for the money (beyond what I need for my financial safety), I wouldn’t be. Our organization is also set up with a very general purview so that we can pivot easily. So if I should start work on a more open version of the currently fully moderated, centralized implementation, it’ll be because I’ve come to believe that it’s more robustly positive than I currently think it is. (Or it may well be possible to find a synthesis of permissionlessness and curation.) The only things that can convince me otherwise are evidence and arguments.

Imagined Ofer: I think that most interventions that have a substantial chance to prevent an existential catastrophe also have a substantial chance to cause an existential catastrophe, such that it’s very hard to judge whether they are net-positive or net-negative (due to complex cluelessness dynamics that are caused by many known and unknown crucial considerations). So the typical EA Forum post with sufficient leverage over our future to make a difference at all is about equally likely to increase or to decrease x-risk.

Dawn: I find that to be an unusual opinion. CEA and others try to encourage people to post on the EA Forum rather than discourage them. That was also the point of the CEA-run EA Forum contest. Personally, I also find it unintuitive that this should be the case: For any given post, I try to think of pathways along which it could be beneficial and detrimental. Usually there are few detrimental pathways, and if there are any, strong social norms against malice and government institutions such as the police stand in the way of pursuing those paths. A few posts come to mind that are rare, unusual exceptions to this theme, but it’s been several years since I read one of those. Complex cluelessness also doesn’t seem to make a difference here because it applies equally to any prospective funding, to prizes after one month, and to prizes after one year. Do you think that writing on high-leverage topics such as x-risks should generally be discouraged rather than encouraged on the EA Forum?

Imagined Ofer: Even if you create a very controlled impact market that is safer than the average EA prize contest, you are still creating a culture and a meme around retroactive funding. You could inspire someone to post on Twitter, “The current impact markets are too curated. I’m offering a $10m retro prize for dumping 500 tons of iron sulfate into the ocean to solve climate change.” If someone posted this now, no one would take them seriously. If you create an impact market with tens of millions of dollars flowing through it and many market actors, it will become believable to some rogue players that this payout is likely real.

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-23T16:23:48.109Z · EA · GW

We’ve considered a wide range of mechanisms and ended up most optimistic about this one.

When it comes to prediction markets on funding decisions, I’ve thought about this in two contexts in the past:

  1. During the ideation phase, I found that it was already being done (by Metaculus?) and not as helpful because it doesn’t provide seed funding.
  2. In Toward Impact Markets, I describe the “pot” safety mechanism that, I surmised, could be implemented with a set of prediction markets. The prediction-market implementation that I have in mind has important gaps, and I don’t think it’s the right time to set up the pot yet. But the basic idea was to have prediction markets whose payouts are tied to the decisions of retro funders to buy a particular certificate. That action resolves the respective market. But the yes votes on the market can only be bought with shares in the respective cert, or by people who also hold shares in the respective cert and in proportion to them. (In Toward Impact Markets I favor the product of the value they hold in either as the determinant of the payout.)
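A minimal sketch of that product-based payout rule (the function name and the exact normalization are my assumptions for illustration; Toward Impact Markets has the authoritative definition):

```python
# Illustrative sketch of the "pot" payout idea: when a retro funder buys a
# certificate, the prediction market resolves, and each participant's payout
# weight is the product of the value they hold in the market and the value
# they hold in the certificate itself. Names and normalization are assumed.

def payout_shares(holdings: dict[str, tuple[float, float]]) -> dict[str, float]:
    """holdings maps participant -> (value in yes votes, value in cert shares).
    Returns each participant's fraction of the pot, proportional to the
    product of the two positions."""
    weights = {p: yes * cert for p, (yes, cert) in holdings.items()}
    total = sum(weights.values())
    if total == 0:
        return {p: 0.0 for p in holdings}
    return {p: w / total for p, w in weights.items()}

# A participant with yes votes but no cert shares gets nothing, which matches
# the constraint that only cert holders can profitably bet on the retro
# funder's decision:
shares = payout_shares({"alice": (100, 50), "bob": (100, 0)})
assert shares["bob"] == 0.0
assert shares["alice"] == 1.0
```

The product rule ties speculation on the funding decision to actual skin in the game on the certificate itself.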

But maybe you’re thinking of yet another setup: Investors buy yes votes on a prediction market (e.g. Polymarket, with real money) about whether a particular project will be funded. Funders watch those prediction markets, and participants are encouraged to pitch their purchases to funders. Funders then resolve the markets with their actual grants, doing minimal research and mostly trusting the markets. Is that what you envisioned?

I see some weaknesses in that model. I feel like it’s only a bit over 10x as good as the status quo, whereas I think our model is over 100x as good. But it is an interesting mechanism that I’ll bear in mind as a fallback!

Comment by Dawn Drescher (Telofy) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T20:50:59.304Z · EA · GW

Going Forward

  1. We will convene a regular working group to more proactively iterate and improve the mechanism design focused on risk mitigation. We intend for this group to function for the foreseeable future. Anyone is welcome to join this group via our Discord.
  2. We will attempt to consult with community figures who have expressed interest in impact markets (Paul Christiano, Robin Hanson, Scott Alexander, Eliezer Yudkowsky, Vitalik Buterin). This should move the needle towards more community consensus.
  3. We will continue our current EA Forum contest. We will not run another contest in July.
  4. We will do more outreach to other projects interested in this space (Gitcoin, Protocol Labs, Optimism, etc.) to make sure they are aware of these issues as well and we can come up with solutions together.

Do we think that impact markets are net-negative?

We – the Impact Markets team of Denis, Dony, and Matt – have been active EAs for almost a combined 20 years. In the past years we’ve individually gone through a prioritization process in which we’ve weighed importance, tractability, neglectedness, and personal fit for various projects that are close to the work of QURI, CLR, ACE, REG, CE, and others. (The examples are mostly taken from my, Denis’s, life because I’m drafting this.) We not only found that impact markets were net-positive but have become increasingly convinced (before we started working on them) that they are the most (positively!) impactful thing in expectation that we can do. 

We have started our work on impact markets because we found that it was the best thing we could do. We’ve more or less dedicated our lives to maximizing our altruistic impact – already a decade ago. We did not get nerdsniped into it and then adjust our prioritization to fit.

We’re not launching impact certificates to make ourselves personally wealthy. We want to be able to pay the rent, but once we’re financially safe, that’s enough. Some of us have previously moved countries for earning to give.

Why do we think impact markets are so good?

Impact markets reduce the work of funders – if a (hits-based) funder hopes for 10% of their grantees to succeed, then the market cuts down on the funder’s work by 10x. The funders pay out correspondingly higher rewards, which incentivize seed investors to pick up the slack. This pool of seed investors can be orders of magnitude larger than current grant evaluators and would be made up of individuals from different cultures, with different backgrounds and different networks. They have access to funding opportunities that the funders would not have learned of, they can be confident in these opportunities because they come out of their existing networks, and they can make use of economies of scale if the projects they fund have similar needs. These opportunities can also be more numerous and smaller than the opportunities that it would’ve been cost-effective for a generalist funder to evaluate.
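The break-even arithmetic behind the “correspondingly higher rewards” can be sketched as follows (the 10% hit rate is the figure from the text; the rest is an assumed illustration):

```python
# Illustrative only: a hits-based seed investor funds many projects, of which
# only a fraction succeed and earn a retroactive prize.

def break_even_multiple(success_rate: float) -> float:
    """Prize multiple (on the seed investment) at which the investor
    breaks even in expectation, ignoring time discounting."""
    return 1 / success_rate

# If 1 in 10 funded projects is eventually bought by a retro funder,
# the prize must repay at least 10x the seed investment per success.
assert break_even_multiple(0.10) == 10.0

def expected_profit(success_rate: float, prize_multiple: float) -> float:
    """Expected profit per dollar invested at a given prize multiple."""
    return success_rate * prize_multiple - 1

assert expected_profit(0.10, 10.0) == 0.0  # break-even
assert expected_profit(0.10, 11.0) > 0     # retro funder pays a premium
```

This is why a funder who only wants to reward the ~10% of successes ends up paying roughly 10x prizes: anything less and seed investors lose money in expectation.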

Thus impact markets solve the scaling problem of grantmaking. We envision that the result will be an even more vibrant and entrepreneurial EA space that makes maximal use of the available talent and attracts more talent as EA expands.

What do we think about the risks?

The risks are real – we’ve spent June 2021 to March 2022 almost exclusively thinking about the downsides, however remote, to position us well to prevent them. But abandoning the project of impact markets because of the downsides seems about as misguided to us as abandoning self-driving cars because of adversarial-example attacks on street signs.

A wide range of distribution mismatches can already happen through classic financial markets. Where an activity is not currently profitable, those markets don’t work, but there have been prize contests for otherwise unprofitable outcomes for a long time. We see an impact market as a type of prize contest.

Other things being equal, simpler approaches are easier to communicate …

Attributed Impact may look complicated but we’ve just operationalized something that is intuitively obvious to most EAs – expectational consequentialism. (And moral trade and something broadly akin to UDT.) We may sometimes have to explain why it sets bad incentives to fund projects that were net-negative in ex ante expectation to start, but the more sophisticated the funder is, the less likely it is that we need to expound on this. There’s also probably a simple version of the definition that can be easily understood. Something like: “Your impact must be seen as morally good, positive-sum, and non-risky before the action takes place.”

If there is no way to prevent anyone from becoming a retro funder …

We already can’t prevent anyone from becoming a retro funder. Anyone with money and a sizable Twitter following can reward people for any contributions that they so happen to want to reward them for – be it AI safety papers or how-tos for growing viruses.

Even if we hone Attributed Impact to be perfectly smooth to communicate and improve it to the point where it is very hard to misapply it, that hypothetical person on Twitter can just ignore it. Chances are they’ll never hear of it in the first place.

The price of a certificate tracks the maximum amount of money that any future retro funder will be willing to pay for it …

The previous point applies here too. Anyone on Twitter with some money can already outbid others when it comes to rewarding actions.

An additional observation is that the threshold for people to seed-invest in projects seems to be high. We think that very few investors will put significant money into a project that is not clearly in line with what major retro funders already explicitly profess to want to retro-fund, merely because there may later be someone who does.

Suppose that a risky project that is ex-ante net-negative ends up being beneficial …

There are already long-running prize contests where the ex ante and the ex post evaluations of the expected impact can deviate. These don’t routinely seem to cause catastrophes. If they are research prizes outside EA, it’s also unlikely that the prize committees will always be sophisticated enough for contenders to trust them to evaluate projects according to their ex ante impact. Even the misperception that a prize committee would reward a risky project is enough to create an incentive to start the project.

And yet we very much do not want our marketplace to be used for ex ante net-negative activities. We are eager to put safeguards in place above and beyond what any other prize contest in EA has done. As soon as any risks appear to emerge, we are ready to curate the marketplace with an iron fist, to limit the length of resell chains, to cap the value of certificates, to consume the impact we’re buying, and much more.

What are we actually doing?

  1. We are not currently working on a decentralized impact marketplace. (Though various groups in the Ethereum space are, and there is sporadic interest in the EA community as well.)
    1. This is our marketplace. It is a React app hosted on an Afterburst server with a Postgres database. We can pull the plug at any time.
    2. We can hide or delete individual certificates. We’re ready to make certificates hidden by default until we approve them.
    3. You can review the actual submissions that we’ve received to decide how risky the average actual submission is.
    4. We would be happy to form a curation committee and include Ofer and Owen now or when the market grows past the toy EA Forum experiment we have launched so far.
  2. This is our current prize round.
    1. We have allowed submissions that are directly related to impact markets (and received some so that we don’t want to back down from our commitment now), but we’re ready to exclude them in future prize rounds.
    2. We would never submit our own certificates to a prize contest that we are judging, but we’d also be open to not submitting any of our impact market–related work to any other prize contests if that’s what consensus comes to.
    3. An important safety mechanism that we have already started implementing is to reward solutions to problems with impact markets. A general ban on using such rewards would remove this promising mechanism.
    4. We don’t know how weak consensus should be operationalized. Since we’ve already launched the marketplace, it seems to us that we’ve violated this requirement before it was put into place. We would welcome a process by which we can obtain a weak consensus, however measured, before our next prize round.

Miscellaneous notes

  1. Attributed Impact also addresses moral trade.
  2. “A naive implementation of this idea would incentivize people to launch a safe project and later expand it to include high-risk high-reward interventions” – That would have to be a very naive implementation because if the actual project is different from the project certified in the certificate, then the certificate does not describe it. It’s a certificate for a different project that failed to happen.

Comment by Dawn Drescher (Telofy) on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T13:49:28.534Z · EA · GW

Yeah, that sounds perfectly plausible to me.

“A bit confused” wasn’t meant to be any sort of rhetorical pretend understatement or something. I really just felt a slight surprise that caused me to check whether the forum rules contain something about ad hom, and found that they don’t. It may well be the right call on balance. I trust the forum team on that.

Comment by Dawn Drescher (Telofy) on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T13:01:10.413Z · EA · GW

Maybe, but I find it important to maintain the sort of culture where one can be confidently wrong about something without fear that it’ll cause people to interpret all future arguments only in light of that mistake instead of taking them at face value and evaluating them for their own merit.

The sort of entrepreneurialness that I still feel is somewhat lacking in EA requires committing a lot of time to a speculative idea on the off-chance that it is correct. If it is not, the entrepreneur has wasted a lot of time and usually money. If it additionally carries the social cost that they can’t try again because people will dismiss them over that past failure, it becomes that much less likely that anyone will try in the first place.

Of course that’s not the status quo. I just really don’t want EA to move in that direction.

Comment by Dawn Drescher (Telofy) on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T11:29:00.001Z · EA · GW

I've nevertheless downvoted this post because it seems like it's making claims that are significantly too strong, based on a methodology that I strongly disendorse.


I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form. There is a version of the post – rephrased and reframed – that I think would be perfectly fine even though I would still disagree with it.

And I say that as someone who loved Paul’s response to Eliezer’s list!

Separately, my takeaway from Ben’s 80k interview has been that I think that Eliezer’s take on AI risk is much more truth-tracking than Ben’s. To improve my understanding, I would turn to Paul and ARC’s writings rather than Eliezer and MIRI’s, but Eliezer’s takes are still up there among the most plausible ones in my mind.

I suspect that the motivation for this post comes from a place that I would find epistemically untenable and that bears little resemblance to the sophisticated disagreement between Eliezer and Paul. But I’m worried that a reader may come away with the impression that Ben and Paul fall into one camp and Eliezer into another on AI risk, when really Paul agrees with Eliezer on many points when it comes to the importance and urgency of AI safety (see the list of agreements at the top of Paul’s post).

Comment by Dawn Drescher (Telofy) on Critiques of EA that I want to read · 2022-06-20T20:49:32.607Z · EA · GW

A bit of a tangent, but:

Sometimes funders try to play 5d chess with each other to avoid funging each other’s donations, and this results in the charity not getting enough funding.

That seems like it could be a defection in a moral trade, which is likely to burn gains from trade. Often you can just talk to the other funder and split 50:50 or use something awesome like the S-Process.

But I’ve been in the situation where I wanted to make a grant/donation (I was doing ETG), knew of the other donor, but couldn’t communicate with them because they were anonymous to me. Hence I resorted to a bit of proto-ECL: There are two obvious Schelling points: (1) both parties each fill half of the funding gap, or (2) both parties each put half of their pre-update budget into the funding gap. Point 1 is inferior because the other party knows, without even knowing me, that more likely than not my donation budget is much smaller than half the funding gap, and because the concept of the funding gap is subjective and unhelpful anyway. Point 2 should thus be the compromise point of which it is relatively obvious to both parties that it should be obvious to both parties. Hence I donated half my pre-update budget.

There’s probably a lot more game theory that can be done on refining this acausal moral trade strategy, but I think it’s pretty good already, probably better than the status quo without communication.

Comment by Dawn Drescher (Telofy) on Critiques of EA that I want to read · 2022-06-20T20:38:53.219Z · EA · GW

Maybe something along the lines of: Thinking in terms of individual geniuses, heroes, Leviathans, top charities implementing vertical health interventions, central charity evaluators, etc. might go well for a while but is a ticking time bomb, because these powerful positions will attract newcomers with narcissistic traits who will usurp power over the whole system that the previous, well-intentioned generation has built up.

The only remedy is to radically democratize any sort of power, make sure that the demos in question is as close as possible to everyone who is affected by the system, and build in structural and cultural safeguards against any later attempts of individuals to try to usurp absolute power over the systems.

But I think that's better characterized as a libertarian critique, left or right. I can’t think of an authoritarian-left critique. I wouldn’t pass an authoritarian-left intellectual Turing test, but I have thought of myself as a libertarian socialist at one point in my life.

Comment by Dawn Drescher (Telofy) on Critiques of EA that I want to read · 2022-06-20T20:28:45.049Z · EA · GW

Strong upvote. My personal intuitions are suffering focused, but I’m currently convinced that I ought to do whatever evidential cooperation in large worlds (ECL) implies. I don’t know exactly what that is, but I find it eminently plausible that it’ll imply that extinction and suffering are both really, really bad, and s-risks, especially according to some of the newer, more extreme definitions, even more so.

Before ECL, my thinking was basically: “I know of dozens of plausible models of ethics. They contradict each other in many ways. But none of them is in favor of suffering. In fact, a disapproval of many forms of suffering seems to be an unusually consistent theme in all of them, more consistent than any other theme that I can identify.[1] Methods to quantify tradeoffs between the models are imprecise (e.g., moral parliaments). Hence I should, for now, focus on alleviating the forms of suffering of which this is true.”

Reducing suffering – in all the many cases where doing so is unambiguously good across a wide range of ethical systems – still strikes me as at least as robust as reducing extinction risk.

  1. ^

    Some variation on universalizability, broadly construed, may be a contender.

Comment by Dawn Drescher (Telofy) on Critiques of EA that I want to read · 2022-06-20T20:15:52.553Z · EA · GW

Here are four more things that I’m somewhat skeptical of and would like someone with more time on their hands and the right brain for the topic to see whether they hold water:

  1. Evidential cooperation in large worlds is ridiculously underexplored considering that it might “solve ethics” as I like to epitomize it. AI safety is arguably more urgent, but maybe it can even inform that discipline in some ways. I have spent about a quarter of a year thinking about ECL, and have come away with the impression that I can almost ignore my own moral intuitions in favor of what little I think I can infer about the compromise utility function. More research is needed.
  2. There is a tension between (1) the rather centralized approach that the EA community has traditionally taken (and that is still popular, especially outside key organizations like CEA) and the pervasive historical failures of planned economies, and between (2) the much greater success of Hayekian approaches and the coordination that is necessary to avert catastrophic coordination failures that could end our civilization. My cofounders and I have started an EA org to experiment with market mechanisms for the provision of public and common goods, so we are quite desperate for more thinking on how we, and EAs in general, should resolve those tensions.
  3. 80k and others have amassed evidence that it’s best for hundreds or thousands of people to apply for each EA job, e.g., because the difference between the best and second-best candidate is arguably large. I find this counterintuitive. Counterintuitive conclusions are interesting and the ones we’re likely to learn most from, but they are also more often than not wrong. In particular, my intuition is that, as a shallow heuristic, people will do more good if they focus on what is most neglected, all else equal. It seems suspicious that EA jobs should be an exception to this rule. I wonder whether it’s possible to make a case against it along the lines of this argument, quantitatively trading off the expected difference between the top and the second-best candidate against the risk of pushing someone (the second-best candidate, zero to several hops removed) out of the EA community and into AI capabilities research (e.g., because they run out of financial runway), or simply by scrutinizing the studies that 80k’s research is based on.
  4. I think some EAs are getting moral cooperation wrong. I’ve very often heard about instances of this but I can’t readily cite any. A fictional example is, “We can’t attend this workshop on inclusive workplace culture because it delays our work by one hour, which will cause us to lose out on converting 10^13 galaxies into hedonium because of the expansion of space.” This is, in my opinion, what it looks like to get moral cooperation wrong. Obviously, all real examples will be less exaggerated, more subtle, and more defensible too.
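For what it’s worth, the quantitative trade-off in point 3 could start from a toy expected-value comparison like this. All names and numbers here are invented for illustration; they are not actual 80k estimates:

```python
# Toy model (invented numbers) of the trade-off sketched in point 3:
# the extra impact from hiring the best rather than the second-best
# candidate, weighed against the risk of pushing the rejected candidate
# out of the community.
def net_value(gap_value: float, p_second_leaves: float, leave_harm: float) -> float:
    # gap_value: extra impact (arbitrary units) from hiring the best
    #   rather than the second-best candidate
    # p_second_leaves: chance the rejected candidate drifts out of the
    #   community, e.g., because they run out of financial runway
    # leave_harm: impact lost if they do
    return gap_value - p_second_leaves * leave_harm

print(net_value(gap_value=100.0, p_second_leaves=0.25, leave_harm=200.0))  # 50.0
```

Under these made-up numbers the hiring-gap argument still wins (net 50 units), but the sign flips once the exit risk or the harm grows a bit, which is the kind of sensitivity a real analysis would have to probe.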
Comment by Dawn Drescher (Telofy) on Critiques of EA that I want to read · 2022-06-20T19:41:42.855Z · EA · GW

Alternative models for distributing funding are probably better and are definitely under-explored in EA

I’m particularly optimistic about “impact markets” here, where you get:

  1. countless mostly profit-oriented investors that use their various kinds of localized knowledge (language, domain expertise, connections, flat mates) to fund promising projects and compete with each other on making the best predictions, and
  2. retroactive funders who reward investors and projects that, after some time (say, one or two years), look successful. 

That model promises to greatly cut down on the work that funders have to do, and it separates the concern of “priorities research” (What does success look like?) from the concern of “due diligence” (What sort of author or entrepreneur is likely to achieve such success, and how do we find them?).

The SFF is using a similar system internally.

Fittingly, we, Good Exchange, have received funding through the FTX Regrantor Program and are running our first MVP here.

Note that we started Good Exchange because we were already optimistic about this approach, and that it’s likely the most impactful thing that we can do with our time.

Some other solution concepts that come to mind:

  1. Retrox – an experiment in “democratizing” retroactive funding, where the electorate is a group of select experts.
  2. Manifund – an impact market similar to ours but based on Manifold dollars.
  3. Quadratic funding without a matching pool.
  4. Using delegated voting and PageRank to determine the weights of experts in votes on funding decisions (Matt Goldenberg and Justin Shovelain have thought more about this).
Comment by Dawn Drescher (Telofy) on How to allocate impact shares of a past project · 2022-06-12T15:54:02.852Z · EA · GW

For example, if funders hire a part-time teacher to develop a curriculum, then they attribute 100% of the development.


Do you mean that if someone is already hired and paid to do a job, that job should not, by default, be considered additionally rewarded through an impact allocation? Or at least if someone is paid at market rate?

Anyone can suggest a non-claimant, who then can choose to be notified.

That seems useful. I would like to err on the side of making sure that more rather than fewer people need to be explicitly asked for their consent for this. Otherwise it’s too easy for someone to miss the call. That’s sort of like the opening pages of the Hitchhiker’s Guide to the Galaxy. xD

Comment by Dawn Drescher (Telofy) on Retroactive funding impact quantification and maximization · 2022-06-12T14:44:01.537Z · EA · GW

Maybe, people could select which ones work best for them or form teams considering all others' expertise.


Oh yes, defs! :-)

But yes, for the total impact that an issuer makes even the non-purchased certificates' counterfactuals should be used.

Agreed. Preregistration also solves a lot of problems with our version of p-hacking (i-hacking ^.^).

Hmm... yes, but maybe actually some organizers would not take the risk, so some projects would get unfunded or delayed and many more projects could be run but eventually not purchased (which could reduce some persons' 'risk or learning' budgets or be counterfactually beneficial).

I think we’d need more assumptions for this to happen. When it comes to business ventures, some are better funded from investments and some from loans. But if we imagine a world without investments, with only loans, and we then offer investments, it’s not straightforward to me that that would harm the businesses that are better suited for loans. Similarly, I’d think (at first approximation at least) that the projects that are not suited for retro funding will still apply for and receive prospective funding at almost the same rate as today. The bottleneck is currently more the deployment of funding than its availability.

So, (1)*(2)+(3)*[4]

Right, those are multiplicative too… That makes them a bit non-robust (volatile?). Would it also be valid to conceive of each of those as a multiplier on the current counterfactual impact, e.g., a 10x increase in seed funding, a 10x improvement in the allocation of seed funding, 10x through economies of scale, etc.? Here’s a sample Guesstimate. But it feels to me like we’re much too likely to err in the upward direction this way, even if it’s just an estimate of the success (non–black swan) case. Just as the multi-stage fallacy yields too-low estimates if you multiply many probabilities, multiplying all these multipliers probably also ignores lots of bottlenecks.

But that said, almost all the factors are multiplicative here except for two components of the allocation – better allocation thanks to knowledge of more languages and cultures and thanks to being embedded in more communities. (I suppose a person is in about as many communities regardless of which language(s) they speak.) The language component may seem unusually low, which is because so much of the world speaks English and because the US is such a big part of the world economy.
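To illustrate the worry about multiplying optimistic multipliers, here is a toy Monte Carlo (made-up lognormal factors, not the linked Guesstimate model): three independent “10x” multipliers multiply to a naive 1,000x, but the heavy right tail dominates the mean, while a single hypothetical bottleneck cap pulls the estimate far down again:

```python
import math
import random

random.seed(0)
N = 100_000

def product_of_multipliers() -> float:
    # Three independent lognormal multipliers, each with median 10x.
    return math.prod(random.lognormvariate(math.log(10), 1.0) for _ in range(3))

samples = sorted(product_of_multipliers() for _ in range(N))
median = samples[N // 2]
mean = sum(samples) / N
# A hypothetical bottleneck that caps the total gain at 500x, standing in
# for the constraints that naively multiplying multipliers ignores.
capped_mean = sum(min(s, 500.0) for s in samples) / N

print(f"median ≈ {median:,.0f}x, mean ≈ {mean:,.0f}x, capped mean ≈ {capped_mean:,.0f}x")
```

The median stays near the naive 1,000x, the mean comes out several times higher, and the capped mean well below both – so which of these numbers gets quoted matters a lot.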

I’m assuming that more entrepreneurial people will join the fray who are not sufficiently altruistic to already be motivated to start EA charities but who will do a good job if they can get at least somewhat rich from it. 

Finally, I’m assuming that retro funders reinvest all their savings into the retro funding and priorities research, and that priorities research still has < 10x room for improvement, which seems modest to me (given evidential cooperation in large worlds for example).

I’d be interested in what you think of this version – any other factors that need to be considered, or anything that should be additive instead of multiplicative? But we can discuss all that in our call if you like. No need to reply here.

1. surveys make sense but why anonymous - you could get greater sincerity in a ('safe' group) conversation, perhaps.

Perhaps, but I’m worried that they may be incentivized to fib about it because it makes their impact seem more valuable and because it makes it more likely that we’ll continue the program (from which they want to benefit again), since we might discontinue it if we don’t think it’s sufficiently impactful.

Hm, not really, maybe like 126 for an RCT if you observe a greater than 50% difference

Thanks! Still a lot. I’ve updated away from this being an important factor to measure… I’d rather find a way to compare the success rates of investors vs. current prospective funders. If none of them are consistently better, they may lose interest in the system.

Ask funders: Estimate the impact of our framework compared to its non-existence. Explain your reasoning. Then, summarize quotes and see what people critique.

Yeah, that sounds good in both cases. It’s probably not easy to talk to them but it would be valuable.

But why would you weigh by the extent of profit-orientation?

If we’re just reallocating EA money, we’re not adding more. A big source of the impact stems from attracting for-profit money.

Looking forward to our call! Feel free to just respond verbally then. No need to type it all out. :-D 

Comment by Dawn Drescher (Telofy) on How to allocate impact shares of a past project · 2022-06-11T20:49:07.888Z · EA · GW

Thank you for engaging with this question as well!

So, to terribly oversimplify, your idea is to form an independent committee of sorts that has some voting procedure (and maybe a veto period?) and all sorts of useful features like that, and that votes or otherwise agrees on how the impact of a project should be allocated?

That seems feasible for really big impacts. Dony imagines Stanislav Petrov’s impact certificate for saving the world from WW3. It seems sensible to form a committee that is trusted by everyone who was around Stanislav Petrov at the time and might’ve also had an influence and that then allocates the impact among all these “participants.”

In many other cases it’s going to be both easier and harder: An archetypal case may be that of a workshop. There is no strong cultural norm around how impact should be allocated among organizers, funders, and maybe some pro-bono instructors, but still the set of people who need to be considered is fairly limited. Typically, for example, the parents of an instructor won’t make a claim. In that sense it’s easier, but it’s also harder in that a single workshop (well, the median workshop) won’t be significant enough that we can assemble a trusted committee to make the decision.

Also, typically, the reason for the autonomous allocation is that the person who wants to know their share doesn’t want to impose on the time of the others to settle the matter. So if they would otherwise have had a 30-minute call with the other n-1 participants, and scheduling the call takes 5 minutes for each of them, then you get a time cost of 35*(n-1) minutes for everyone else. So an alternative method needs to cost less than 35*(n-1) minutes to be worth it. (I’ll assume for simplicity that the person’s own time doesn’t matter because they’re probably ready to invest > 10x the amount of time, but of course there are hypothetical methods so complex that even that person would not deem them worth their time.)
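As a sanity check on that budget, a tiny sketch (the helper name is mine; the numbers are the ones from the text):

```python
def allocation_time_budget_minutes(n: int, call_minutes: int = 30,
                                   scheduling_minutes: int = 5) -> int:
    # Total time imposed on the other n - 1 participants: each of them
    # sits through the call plus their own scheduling overhead.
    return (call_minutes + scheduling_minutes) * (n - 1)

for n in (2, 5, 10):
    print(n, allocation_time_budget_minutes(n))  # 35, 140, 315 minutes
```

So for a ten-person workshop, an alternative allocation method only pays off if it costs the others less than about five hours in total.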

That’s why I was thinking that there may be some norms that we could set that would establish, for some subset of projects (say, all since the norm was established), that impact can be assumed to be allocated according to some simple rule unless otherwise specified… But I haven’t thought of anything.

I’d be delighted to know if you have any more ideas that could help us here! :-D

Comment by Dawn Drescher (Telofy) on Retroactive funding impact quantification and maximization · 2022-06-11T20:26:04.849Z · EA · GW

Oh, thank you for engaging with our question! We were delighted to see and read your post! (Have you joined our Discord already, or would you be interested in having a call with us? Your input would be great to have, for example when we’re brainstorming something!)

When I’m considering how I would apply your methodology in practice, I mostly run into the hurdle that it’s hard to assess counterfactuals. In our competition we’ve come up with the trick that we suggest particular topics that we think it would be very impactful for someone to work on, a topic so specific that we can be fairly sure that the person wouldn’t otherwise have picked it.

That allows us to assess, for example, that you would most likely have investigated something else in the time that you spent to write this post. But (also per your model) that could still result in a negative impact if what you would’ve otherwise investigated would’ve been more important to investigate. But that depends on lots of open questions from priorities research and on your personal fit for the investigations, so it’s hard for us to take into account.

There is also the problem that issuers may be biased about their actual vs. counterfactual impact because they don’t want to believe that they’ve made a mistake and they may be afraid that their certificate price will be lower.

One simplification that may be appropriate in some cases is to assume that the same n projects are (1) funded prospectively or (2) funded retroactively. That way, we can ignore the counterfactual of the funders’ spending. Since retro funding will hopefully be used for the sorts of projects where it is appropriate, it’s probably by and large a valid approximation that the same projects get funded, just differently (and hopefully faster and smarter).

But a factor that makes it more complicated again is, I think, that it’ll be important to make explicit the number of investors. In the prospective case, the investors are just the founders who invest time and maybe a bit of money. But in the retrospective case, you get profit-seeking investors who would otherwise invest into pure for-profit ventures. The degree to which we’re able to attract these is an important part of the success of impact markets.

So ideally, the procedure should be one where we can measure something that is a good proxy for (1) additional risk capital that flows into the space, (2) effort that would not otherwise have been expended on (at all) impactful things, and (3) time saved for retro funders.

Our current metric can capture 2 to a very modest extent, but our market doesn’t yet support 1, and we’re the only EA retro funders at the moment, and we wouldn’t otherwise have done prospective funding.

Some random ideas that come to mind (and sorry for rambling! ^.^'):

  1. Anonymous surveys asking people what they would’ve done without impact markets – where they would’ve invested, what they would’ve investigated, etc. (Sadly, we can’t ask them what they actually did, or it might deanonymize them. But we can probably still average over the whole group.)
  2. Recruit a group of people, ask them what they’re planning to do, then offer them the promise of retro funding, and then observe what they actually end up doing.
  3. Tell people that they’ll randomly get either the small prospective or the large, conditional retrospective funding if they preregister their projects. See in which group there is more follow-through.

These all have weaknesses – the first relies on people’s memory and honesty, the second is costly and we can’t really prevent people from self-selecting into the group if they prefer retro funding, and the last one is similar. I’m also worried that small methodological mistakes can ruin the analysis for us. And a power analysis will probably show that we’d need to recruit hundreds of participants to be able to learn anything from the results. 

So yeah, I’d love some sort of quick and dirty metric that is still informative and not biased to an unknowable extent.

What do you think? Do any solutions come to mind?

Oh, in mature markets retro funders will try to estimate at what “age” an investment into impact might break even with some counterfactual financial market investment of an investor. I have some sample calculations here. Retro funders can then announce that they’ll buy-or-not-buy at a point in time that’ll allow the investors to still make a big profit. But randomly, they can announce much later times. The investors who invest in the short term but less so in the longer term can be assumed to be mostly profit-oriented. If we can identify investors on this market, we can sum up the invested amounts weighted by the degree to which they are profit-oriented. But none of that is viable yet…
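A minimal sketch of that break-even estimate (my own made-up function and numbers, not the linked sample calculations):

```python
def break_even_payout(principal: float, annual_market_return: float,
                      years_held: float) -> float:
    # Minimum retro purchase price at which holding the certificate breaks
    # even for the investor against a counterfactual financial market
    # investment compounding at annual_market_return.
    return principal * (1.0 + annual_market_return) ** years_held

# e.g., $1,000 invested, 7% counterfactual return, certificate bought after 2 years:
print(round(break_even_payout(1000.0, 0.07, 2.0), 2))  # 1144.9
```

Announcing a later purchase date raises this bar, which is what would screen out the mostly profit-oriented investors in the scheme above.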

Well, that’s all my thoughts. xD I’d be very curious what you think!

Comment by Dawn Drescher (Telofy) on Optimizing Public Goods Funding with blockchain tech and clever incentive design (RETROX) · 2022-06-11T16:29:18.591Z · EA · GW

That’s such a cool project! My cofounder Dony just linked your post to me. I’m glad I didn’t miss it. Have you considered marrying the badgeholder voting with the S-Process? At least insofar as it is prospective (the future funding stream)? The optimism ecosystem seems like a great fit for this project. Are there plans for Optimism to use it for their retro funding?

Dony, Matt, and I have founded Good Exchange, funded from a grant through the Future Fund Regranting Program. We want to develop what we’ve termed “impact markets,” markets that replicate the incentive structures of the for-profit world for nonexcludable goods. We’ve had talks at Funding the Commons, you can read more about our plans in our forum sequence, and we’re currently running a contest that is an early implementation of our system.

We’re not currently building this on a blockchain, but maybe there are other synergies between our projects! We’d be delighted to see you on Discord or to have a call!

(Note that in the first link to your app the scheme is missing, and that I can’t access the whitepaper. Maybe a private repository?)

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T23:16:11.800Z · EA · GW

I think the signalling benefit from providing ultimate consumers is more important than failing to signal there are speculators. I think speculators are logically downstream of consumers, and impact markets are bottlenecked by lack of clarity about whether there will be consumers.

That sounds sensible to me. Two considerations that push a bit against in my mind are:

  1. I want to make a binding commitment to a particular consumption schedule, and that burns option value. So if trust in the consumption is the bottleneck and if it fluctuates, I would like to still have the option to increase the consumption rate when the trust drops. It feels like it’s a bit too early to think about the mechanics here since the market is still so illiquid that we can’t easily measure such fluctuations in the first place.
  2. A source of trust in impact markets could also stem from particular long-term commitments such as windfall clauses. In this case the consumption schedule would have to be tuned such that the windfall funder can still buy and consume the certificates, and it’s usually unclear when the windfall will happen, if it happens. So maybe the consumption schedule should always be something asymptotic along the lines of consuming half the remaining certificates by some date, and then half again, and so on.
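The asymptotic schedule in point 2 is just geometric decay; a tiny sketch (hypothetical helper name):

```python
def remaining_certificates_fraction(half_lives_elapsed: float) -> float:
    # "Consume half the remaining certificates by some date, then half
    # again": after k half-lives, (1/2)**k of the stock remains, so some
    # certificates are always left for a late windfall funder.
    return 0.5 ** half_lives_elapsed

print([remaining_certificates_fraction(k) for k in range(4)])  # [1.0, 0.5, 0.25, 0.125]
```

The remaining fraction never reaches zero, which keeps the schedule compatible with an uncertain windfall date.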

I guess you'd want to handle this by saying that people shouldn't buy impact for any work trying to establish them at the moment, since it's ex ante risky?

Hmm, I don’t understand this? Can you clarify what you’re referring to?

Our strategy mostly rests on Attributed Impact, an operationalization of how we value the impact of an action that someone performs. (This is a short summary.)

Its key features include that it addresses moral trade (including the distribution mismatch problem) by making sure that the impact of actions that are negative in ex ante expectation is worthless regardless of how great they turn out or how great they are for some group of moral patients. (In fact it uses the minimum of ex ante and current expected value, so it can go negative, but we don’t have a legal handle on issuers to make them pay up unless we can push for mandatory insurance or staking.)

Another key feature is that it requires issuers to justify that their certificate has positive Attributed Impact. It also has a feature that is meant to prevent threats against retro funders. Those, in combination with our commitment to buy according to Attributed Impact, will (or so we hope) start a feedback cycle: issuers are vocal about Attributed Impact to sell their certs to the retro funders; possible investors scrutinize the certs to see whether they think we will be happy with the justification; and generally everyone uses it by default, just as people now use the keyword “longtermism” to appeal to funders. (Just kidding.) That’ll hopefully make Attributed Impact the de facto standard for valuing impact, so that even less aligned new retro funders will find it easier to go along with the existing norms than to try to change them, especially since they probably also appreciate the antithreat feature.

But we have a few more lines of defense against Attributed Impact drift (as it were), such as “the pot.” It’s currently too early in my view to try to implement them.

I’ve recently been wondering though: Many of these risks apply to all prize contests, not only certificate-based ones. Also anyone out there, any unaligned millionaire, is free to announce big prizes for things we would disapprove of. Our goal has so far been to build an ecosystem that is so hard to abuse that these unaligned millionaires will choose to stay away and do their prize contests elsewhere. But that only shifts around where the bad stuff happens.

Perhaps there are even mechanisms that could attract the unaligned millionaires and ever so slightly improve the outcomes of their prize contests. But I haven’t thought about how that might be achieved.

Conversely, the right to retro funding could be tied to a particular first retro funder to eliminate the risk of other retro funders joining later. But that probably also just shifts where the bad stuff happens, so I’m not convinced.

I’d be curious if you have any thoughts on this!

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T19:14:32.641Z · EA · GW

So you’re saying it’s fine for them not to make the distinction because they’re so quick that it hardly matters, but that it’s important for us? That makes sense. I suppose that circles back to my earlier comment that I think that our wording is pretty clear about the ex ante nature of the riskiness, but that we can make it even more clear by inserting a few more sentences into the post that make the ex ante part very explicit. 

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T17:08:36.808Z · EA · GW

I can see the appeal in the commitment to consumption. We might just do that if it inspires trust in the market. Then again it sends a weird signal if not even we want to use our own system to sustain our operation. “Dogfooding” would also allow us to experience the system from the user side and notice problems with it even when no one reports them to us.

Also people are routinely trusted not to make callous decisions even if it’d be to their benefit. For example, charities are trusted to make themselves obsolete if at all possible. The existence of the Against Malaria Foundation hinges on there being malaria. Yet we trust them to do their best to eliminate malaria.

Charities often receive exploratory grants to allow them to run RCTs and such. They’re still trusted to conduct a high-quality RCT and not manipulate the results even though their own jobs and the ex post value of years of their work hinge on the results. 

I myself used to run a charity that was very dear to me, but when we became convinced that the program was nonoptimal and found that we couldn’t change the bylaws of the association to accommodate a more optimal program, we shut it down.

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T16:53:46.014Z · EA · GW

I don’t know if they were, so either way it was probably also not obvious to some post authors that they’d be judged by ex ante EV, and it’s enough for one of them to only think that they’ll be judged by ex post value to run into the distribution mismatch.

At least to the same extent – whatever it may be – as our contest. Expectational consequentialism seems to me like the norm (though that may be just my bubble), so I would judge both contests to be benign and net positive, because I would expect most people not to want to gamble with everyone’s lives, not to think that a contest is trying to encourage them to gamble with everyone’s lives, and not to want to simply disguise their gamble from the prize committee.

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T15:30:16.814Z · EA · GW

We didn’t think about this because we’re not planning this at all. But we’re in the process of forming a public benefit corporation. Our benefit statement is “Increase contributions to public and common goods by developing and deploying innovative market mechanisms.” The PBC will be the one doing the purchases, so if we ever sell the certs again, the returns will flow back to the PBC account and will be used in line with the benefit statement.

That’s sort of like when a grant recipient buys furniture for an office but then, a few years later, moves to a group office with existing furniture and sells their own (now redundant) furniture on eBay. Those funds then also flow back to the account of the grant recipient unless they have some nonstandard agreements around their furniture.

But of course we can run this by FTX if it ever becomes an action-relevant question. 

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T15:13:16.688Z · EA · GW

Hmm, I love writing high-fidelity content. Just thinking “how can I express what I mean as clearly as I can?” rather than “how can I simplify what I mean to maximize the fidelity/complexity ratio?” is a lot easier for me. But a lot of smart people disagree and point out that shallow heuristics and layered didactic approaches are essential for bridging inferential gaps under time constraints.

So I would like to pose the question to anyone else reading this: If you read “Toward Impact Markets” and you read the above post, do you think we should’ve gone for the same level of fidelity above? Or not? Or something in between?

EA Forum posts can disseminate info hazards that can be extremely harmful. (And this does not seem very unlikely, considering that the ideas that are discussed on the EA Forum are often related to anthropogenic x-risks.)

Excluding whole categories of usually valuable content from contests, though, seems like a very uncommon level of caution. I’m not saying that I *know* that it’s exaggerated caution, but there have been many prize contests for content on the EA Forum, and none of them were so concerned about info hazards. Some of them have had bigger prize pools too. And in addition the EA Forum is moderated, and the moderators probably have a protocol for how to respond to info hazards.

I’ve long pushed for something like the “EA Criticism and Red Teaming” contest (though I usually had more specific spins on the idea in mind), I’m delighted it exists, and I think it’ll be good. But it is a lot more risky than ours: it has a greater prize pool; the most important red-teaming should focus on topics that are important to EA at the moment, so “longtermism” (i.e., “how do we survive the next 20 years”) topics like biosecurity and AI safety; and the whole notion of red-teaming is conceptually close to info hazards too. (E.g., some people claim that some others invoke “info hazard” as a way to silence epistemic threats to their power. I mostly disagree, but my point is about how close the concepts are to each other.)

The original EA Forum Prize referred readers to the About page at the time (note that they, too, opted to put the details on a separate linked page), which explicitly discourages info hazards, rudeness, illegal activities, etc., but spends about a dozen words on fleshing this out precisely as opposed to our 10k+ words. Of course if you can communicate the same thing in a dozen and in 10k+ words, then a dozen is better, but if you think that “non-risky” is not clear about whether it refers to actions that are risky while they’re being performed or only to actions whose results remain risky indefinitely, then “What we discourage (and may delete) … Information hazards that concern us” is also unclear like that. Maybe someone is aware of an info hazard so dangerous that the moment they post it they can see from their own state of existence or nonexistence whether they got lucky or not. I think that both framings clearly discourage such sharing, but regardless, the contests are parallel in this regard. (Or, if anything, ours is safer because we are very, very explicit about the ex ante ceiling in our detailed explainer, with definitions, examples, diagrams, etc.)

But I don’t want to just throw this out there as an argument from authority: “If the EA Forum gods do it, it’s got to be okay.” It’s just that there is a precedent (over the course of four years or so) for lower levels of caution than ours and nothing terrible happening. That is valuable information for us when we try to make our own trade-off between risks and opportunity costs. (But of course all the badness could be contained in one black swan event that is yet to come, so there’s no certainty.)

Comment by Dawn Drescher (Telofy) on Experiment in Retroactive Funding: An EA Forum Prize Contest · 2022-06-02T08:58:05.089Z · EA · GW

We’ve gone through countless iterations with this announcement post that usually took the shape of one of us drafting something, us then wondering whether it’s too complicated and will cause people to tune out and ignore the contest, and us then trying to greatly shorten and simplify it.

There’s a difficult trade-off between the high-fidelity communication of our long explainer posts and the concision that is necessary to get people to actually read a post when it comes to participating in a contest. Our explainer posts get very little engagement. To participate in the contest it’s not necessary to understand exactly how our mechanisms work, so we hope to reach more people by explaining things in simpler terms without words like “ex ante” and comparisons to constructed counterfactual world histories.

Like, grocery shopping would be a terrible experience if every customer had to understand all the scheduling around harvests, the stocks and flows between warehouses, just-in-time delivery, the pricing in of some expected amount of produce that expires before it’s bought, etc. If anyone who wants to use impact markets had to spend more time up front learning about them than the markets are worth to them, that would be a failure.

This is exacerbated in this case where a submitter has a < 100% chance of getting a reward of a few hundred dollars. That comes down to rather little money in expectation, so we’ve been trying hard to keep the experience as light on time commitment as possible while linking our full explainer posts at every turn, to make sure that people cannot miss the high-fidelity version if they’re looking for it. Once we have bigger budgets, we can also ask people to engage more with our processes upfront.

That said, we’ve thought a lot about the bolded key sentence “morally good, positive-sum, and non-risky.” We hope that everyone who submits will read it. By “non-risky” we mean “ex ante non-risky.” We hoped that the term captured that as it’s not common to talk about “risks” ex post. Even in sentences like “the Cuban Missile Crisis was risky,” the sentence doesn’t say that the event is a risk for us today after the fact but that, at the time when it was happening, it was risky.

But I’ll ask Dony to go over the post again and see if we can clarify this in a place where it doesn’t cause more confusion than it resolves. Maybe my bolded text below can be inserted below the first sentence that you cited.

For now, let me reiterate for every potential submitter reading this:

We will value impact according to Attributed Impact in its latest version at the time, so if writing your post would’ve been net negative in expectation before you wrote it (ex ante), it cannot be valued positively at any later time! The ex ante expected value is the ceiling of any potential future valuation of the impact, regardless of how great it happens to turn out.
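To make the ceiling rule concrete, here is a minimal toy sketch. The function name and the `min`-based formulation are my own illustration of the rule as stated above, not code we actually run; see the Attributed Impact explainer for the full operationalization.

```python
def attributed_impact_ceiling(ex_ante_ev: float, realized_value: float) -> float:
    """Toy sketch of the Attributed Impact ceiling rule.

    The valuation of a certificate can never exceed the ex ante expected
    value of the action. In particular, an action that was net negative in
    expectation (ex_ante_ev <= 0) can never be valued positively, however
    well it happens to turn out.
    """
    return min(ex_ante_ev, realized_value)

# An action expected to be worth 100 that luckily produced 1,000 of value
# is still valued at no more than 100:
print(attributed_impact_ceiling(100.0, 1000.0))  # 100.0

# An ex ante net-negative action stays net negative, whatever the outcome:
print(attributed_impact_ceiling(-5.0, 1000.0))   # -5.0
```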

Every submitter also has to answer questions like “What positive impact did you expect before you started the project? What were unusually good and unusually bad possible outcomes? (Please avoid hindsight bias and take the interests of all sentient beings into account.)” before we will buy any of the impact. (I should reword that a bit, maybe, “What positive impact was to be expected …,” to make it fit with Attributed Impact.)

Here is a section (verbatim) that I originally wrote for the post that we cut entirely for length:


Most of the problems that impact markets might cause are detailed in Toward Impact Markets.

We are particularly concerned with the following:

  1. Issuers might be incentivized to:
    1. try many candidate interventions, some of which might backfire terribly, but then issue certificates only for the rare interventions that succeeded,
    2. try many candidate interventions, some of which might be terrible for some moral systems, but issue the certificates under different aliases and sell them to different retro funders,
    3. try an intervention many times, usually with disastrous results, but issue an impact certificate only for the rare iteration of the intervention that succeeded,
    4. do something good once but then reframe it slightly to sell the impact from it multiple times to different people on different marketplaces,
    5. compete with other issuers for funding by badmouthing them or withholding resources from them when otherwise they would’ve collaborated,
    6. pander to the perceived preferences of the retro funders even in cases where the issuers have a clearer picture of what is impactful,
    7. generate externalities for individuals that are not themselves represented on the market and who the retro funders are not aware of,
    8. use the markets to issue disguised threats against retro funders.
  2. Investors might be incentivized to:
    1. do little research and just invest large sums into a wide range of projects regardless of whether they’re likely to backfire on the off-chance that (1) one of them actually turns out good or (2) at some point in the future there will be a very rich retro funder that will think that a project turned out good,
    2. invest mostly in things that are highly verifiable to avoid the ambiguity about the purview of certificates that comes with lower levels of verifiability, thereby disadvantaging some interventions for reasons unrelated to their impact,
    3. actively trade certificates to the point of creating a lot of noise that distracts issuers from their object-level work,
    4. do 1.5 and 1.6 above.
  3. Retro funders might:
    1. get scammed by some of the above tricks,
    2. abuse their power by incentivizing projects that are disastrous for some moral systems.

We are optimistic that the bulk of these are solvable in a mature impact market. We don’t have a fully general mechanism but a range of incremental ones. Most of them can be summarized as an attempt to facilitate moral trade on a financial market:

  1. Issuers:
    1. commit to and justify their actions under an operationalization of impact called Attributed Impact, according to which an action that is net negative in ex ante expectation can never be valued positively even if it so happens to turn out well,
    2. can sell only impact in classes of actions that are very unlikely to be extremely harmful, namely articles on the EA Forum (and at a later stage maybe other similar artifacts),
    3. can sell only impact in classes of actions that have passed multiple rounds of vetting – for example in this case because the moderators of the EA Forum allowed the post and because we allowed its certificate to be issued on our platform,
    4. can, conversely, sell impact from exposés of other certificates where the issuers cheated in some fashion to hide negative externalities, actual or probabilistic,
    5. can, conversely, sell impact from articles that change the evaluation of the impact of other certificates,
    6. can, conversely, sell impact from articles detailing new problems of or attack vectors against impact markets.
  2. Investors:
    1. are incentivized by retro funders just enough that those who add information to the market by making good predictions are profitable.
  3. Retro funders:
    1. should commit to Attributed Impact to push issuers and investors to commit to Attributed Impact too, thereby averting negative externalities and threats,
    2. have the option to delegate the decision-making or the prefiltering of funding opportunities to us,
    3. have the option to pivot entirely to retro funding, which should free up so much staff time that they can build expertise in recognizing exploits,
    4. will at some point have the support of “the pot,” an investment mechanism that acts as a semi-automated retro funder and reinforces the Schelling point of Attributed Impact.

The remaining problems are mostly related to (1) imperfections in the implementation of these solutions and (2) flaws in the retro funder alignment. If a really generous retro funder who is unconcerned with moral cooperation or cheating and has enough capital to spend joins the market, then impact investors may be willing to stay invested in countless projects for decades until that funder arrives. Such a retro funder can have a bad influence on the market even when people merely expect them to join and it hasn’t yet happened.

We don’t think that there is a mechanism that can prevent this from happening because anyone is already free to retroactively reward whoever they like. But we recognize that by writing about impact markets and by running contests like these, we’re making the option more salient.

We want to hit the right balance between minimizing the opportunity costs from delaying the implementation of impact markets and minimizing the direct costs from harm that impact markets might cause. There are those who think that we have an “extreme focus on risks” and those who think that we’re rash for wanting to realize impact markets at all. We would love to get your opinion on where we stand on this balance and how we can improve!

Comment by Dawn Drescher (Telofy) on Being an individual alignment grantmaker · 2022-06-01T09:57:58.146Z · EA · GW

Yes, that’ll be important!