Posts

OpenBook: New EA Grants Database 2023-01-28T17:42:59.919Z
Reminder: you can donate your mana to charity! 2022-11-29T18:30:27.458Z
In Defense of SBF 2022-11-14T16:10:33.183Z
Predict which posts will win the Criticism and Red Teaming Contest! 2022-09-27T22:46:39.806Z
What We Owe the Past 2022-05-05T12:06:27.282Z
Manifold for Good: Bet on the future, for charity 2022-05-02T18:06:43.565Z
Predicting for Good: Charity Prediction Markets 2022-03-22T17:44:29.507Z
akrolsmir's Shortform 2022-03-03T09:49:35.628Z
Create a prediction market in two minutes on Manifold Markets 2022-02-09T17:37:46.684Z

Comments

Comment by Austin (akrolsmir) on Some intuitions about fellowship programs · 2023-01-17T01:01:23.776Z · EA · GW

The Manifold Markets team participated in the program Joel ran; it was trajectory-changing. It felt more like YCombinator than YCombinator itself. We met a bunch of other teams working on things adjacent to ours, collaborated on ideas and code, and formed actual friendships - the kind I still keep up with, more than half a year later. Joel was awesome; I would highly encourage anyone thinking of running a fellowship to heed his advice.

I was inspired afterwards to run a mini (2 week) program for our team + community in Mexico City. Beyond the points mentioned above, I would throw in:

  • Think very carefully about who comes; peer effects are the most important aspect of a fellowship program. Consider reaching out to people who you think would be a good fit, instead of just waiting for people to apply.
  • The best conversations happen during downtime, eg the 30-minute bus ride between the office and the hotel, or late at night after a kickback is officially over.
  • Casual repeated interactions lead to friendships; plan your events and spaces so that people run into people again and again.
  • Start off as a dictator when eg picking places to get dinner, rather than polling everyone and trying to get consensus. In the beginning, people just need a single Schelling point; as they get to know each other better they'll naturally start forming their own plans.
  • Perhaps obvious, but maintain a shared group chat; have at least one for official announcements, and a lounge for more casual chatting. Slack or Discord are good for this.
Comment by Austin (akrolsmir) on EA could use better internal communications infrastructure · 2023-01-12T17:47:39.830Z · EA · GW

Consider Google internal communications (I used to work there). Google has ~100k fulltimers, far more than the total number of EA fulltime professionals. And internal communications can leak (eg the Damore memo). But only a small fraction of these internal messages actually get leaked; and the feeling of posting there is much less like posting on Twitter and more like posting in a private group chat.

Being able to cold-message almost anyone in the company, with the expectation that they will see your message and respond, also leads to a norm of shared trust that communication will actually happen instead of getting ghosted.

Comment by Austin (akrolsmir) on EA could use better internal communications infrastructure · 2023-01-12T17:37:55.693Z · EA · GW

I think this is a straightforwardly good idea; I would pay a $5k bounty to someone who makes "EA comms" as good as e.g. internal Google comms, which is IMO not an extremely high bar.

I think an important point (that Ozzie does identify) is that it's not as simple as just setting up a couple of systems; rather, it's doing all of the work that goes into shepherding a community and making it feel alive. Especially in the early days, there's a difference between a Slack that feels "alive" and one that feels "dead", and a single good moderator/poster who commits to posting daily can make the difference. I don't know that this needs to be a fulltime person; my happy price for doing this myself would be like $20k/year?

Regarding leaks: I don't think the value of better internal comms is in "guaranteed privacy of info". It's more in "reducing friction to communicate across orgs" and in "increasing the chance that your message is actually read by the people". And there's a big difference between "an ill-intentioned insider has the ability to screenshot and repost your message to Twitter" and "by default, every muckraker can scroll through your entire posting history".

Public venues like the EA Forum and Facebook are a firehose that is very difficult for busy people to stay on top of; private venues like chat groups are too chaotically organized and give me kind of an ugh-field feeling.

Some random ideas:

  • Create the "One EA Slack/Discord to rule them all". Or extend an existing one, eg the Constellation chat.
  • Ask EAG attendees to use that instead of Swapcard messaging, so that all EAG attendees are thrown into one long-lived messaging system
  • Integrate chat into EA Forum (DMs feel too much like email at the moment)
  • Integrate chat into Manifold (though Manifold is much less of a Schelling point for EA than EAF)
  • Start lists of Google Groups (though this competes a bit against the EAF's subforums)
Comment by Austin (akrolsmir) on Against philanthropic diversification · 2022-12-22T23:12:02.062Z · EA · GW

In past years, I believed that donating to many causes was suboptimal, and was happy to just send money to GiveWell's Top Charities fund. But I've diversified my donations this year, partly due to points 2, 3, and 4. Some other considerations:

7. From the charity's perspective, a diversified donor base might provide more year-over-year stability. A charity should be happier to have 100 donors paying $1k a year than 1 donor paying $100k, in terms of how beholden it is to its donors.

8. Relatedly, a small charity might have an easier time fundraising if it can use a broad donor base as evidence to larger funders about the impact of its work.

9. Wisdom of the crowds/why capitalism is so good: there's a lot of knowledge held in individual donors' heads about which charities are doing the best work; diversifying allows more granular feedback/bits of information to flow through the overall charitable system.

Comment by Austin (akrolsmir) on wayne's Shortform · 2022-12-01T02:45:55.038Z · EA · GW

Haha, I wrote a similarly titled article sharing the premise that Sam's actions seem more indicative of a mistake than a fraud: https://forum.effectivealtruism.org/posts/w6aLsNppuwnqccHmC/in-defense-of-sbf

I appreciated the personal notes about SBF's interactions with the animal welfare community. I do think the EA tribalism element is very real as well. Also appreciate the point about trying to work on something intrinsically motivating - I'm not sure that's possible for every individual, but I do feel like my own intrinsic love of work helps a lot with putting in a lot of time and effort!

Comment by Austin (akrolsmir) on Reminder: you can donate your mana to charity! · 2022-11-30T15:16:17.340Z · EA · GW

Thanks for asking! Manifold has received a grant to promote charitable prediction markets, which we can regrant from. But otherwise, we could also fund these donations via mana purchases (some of our users buy more mana if they run out, or want to support Manifold.markets).

Comment by Austin (akrolsmir) on In Defense of SBF · 2022-11-15T15:34:59.539Z · EA · GW

Thank you - I think you did a good job of capturing what I was trying to say. We shouldn't go full fluffy rainbows, and we should directionally update against SBF compared to before FTX imploded; but what I'm seeing is a big overcorrection, and I'm trying to express why.

Comment by Austin (akrolsmir) on In Defense of SBF · 2022-11-15T15:33:14.260Z · EA · GW

Yeah, perhaps I could have been clearer in my argumentation structure. Point 1 is a consideration on the object level: was it willful? But points 2 and 3 assume that even if it was willful, the community response goes too far in condemnation, and condemnation without regard for loyalty/ambition might hurt the community's ability to actually do good in the world.

Comment by Austin (akrolsmir) on In Defense of SBF · 2022-11-14T19:48:21.604Z · EA · GW

Yeah, idk, it's actually less of a personal note than a comment on decision theory among future and current billionaires. I guess the "personal" side is where I can confidently say "this set of actions feels very distasteful to me" because I get to make claims about my own sense of taste; and I'm trying to extrapolate that to other people who might become meaningful in the future.

Or maybe: This is a specific rant to the "EA community" separate from "EA principles". I hold my association with the "EA community" quite loosely; I only actually met people in this space like this year as a result of Manifold, whereas I've been donating/reading EA for 6ish years. The EA principles broadly make sense to me either way; and I guess I'm trying to figure out whether the EA community is composed of people I'm happy to associate with.

Comment by Austin (akrolsmir) on Tracking the money flows in forecasting · 2022-11-09T17:35:02.072Z · EA · GW

Thanks for the shoutout! Super cool to have all this info collated in one place (it'll be fun to set up certs on Manifold so people can invest in how they feel about different platforms 😉)

In case it matters, Manifold last raised at a $22m post-money valuation, with a total of about $2.4m in investment and $0.6m in grants: https://manifoldmarkets.notion.site/Manifold-Finances-0f9a14a16afe4375b67e21471ce456b0

Also, I think our growth has gotten significantly better in the last couple months - and would be extremely interested in estimates on DAU on other platforms haha.

Comment by Austin (akrolsmir) on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-09T15:29:38.622Z · EA · GW

I think Manifold was experiencing an (unrelated) database outage when you posted this comment, and the markets should be up again; please let me know if this isn't the case!

Comment by Austin (akrolsmir) on Changing Licences on the EA Forum · 2022-10-08T00:18:28.600Z · EA · GW

Definitely appreciate the clarity provided here; I'm a huge fan of the Creative Commons licenses.

I'd put in my vote for dropping the NonCommercial clause; very biased, of course, but at Manifold we've really enjoyed pulling EA Forum content (such as the Criticism and Red Teaming Contest: https://manifold.markets/CARTBot) and setting up tournaments for it. We didn't charge anyone to participate (and we're actually paying out a bit for tournament prizes), but all the same Manifold is a commercial venture and we're benefiting from the content -- a noncommercial license might make us more reluctant to try cool things like this.

Comment by Austin (akrolsmir) on Why don't all EA-suggested organizations disclose salary in job descriptions? · 2022-09-29T21:00:44.824Z · EA · GW

I'm also not sure how much work is being done by the word "range" here - it's true that eg Google would hire one engineer for $200k and another for $800k, but the roles, responsibilities, and day-to-day work of the two would look completely different.

Comment by Austin (akrolsmir) on Why don't all EA-suggested organizations disclose salary in job descriptions? · 2022-09-29T20:58:10.460Z · EA · GW

Hm, so the subject of salary ranges is actually quite different from transparency - I mostly think large ranges would be good because they allow EA to pay market rates to highly skilled/in-demand workers. Imo the current ethos of artificial/sacrificial wages in EA is penny-wise, pound-foolish, and leads to the problem of having very mission-aligned but not extremely competent people in EA orgs. I think it's a major reason EA struggles to attract mid-to-late-career talent, especially leadership/managers/mentors.

Re: adversarial, I don't have the sense either that 1) employers care about having their pay ranges publicized on levels.fyi or similar services, or that 2) companies have a right to keep such information private.

Comment by Austin (akrolsmir) on Why don't all EA-suggested organizations disclose salary in job descriptions? · 2022-09-29T16:47:35.310Z · EA · GW

Thanks for adding the context! I think your specific points are factually correct regarding the wide variation in pay in the software industry, though I don't think that actually refutes the point that salary ranges and comp expectations are well-known and easy to look up -- certainly much more so than in EA or most other for-profit sectors. (Government/academic work is the exception here, where you can often find eg specific teacher salaries posted publicly.)

If you look at https://www.levels.fyi/ for Google, for example, you can directly see what an L3 (fresh grad), an L4 (1-2 years of experience), or an L5 ("Senior", typically 3-4 years) makes in the industry. RSU/equity/vesting schedules do complicate things, but they are likewise shared on the site; here's Stripe's (note that their L3 corresponds to Google's L5)

I acknowledge the point made in Morrison's comment, but just think that it's a bad norm that favors employers, who tend to have an informational advantage over employees in the first place, and am unsure why EA orgs and especially EA individuals/employees should want to perpetuate this norm.

On an optics level, I think you should just be up front and confident in your valuations. If someone asks why, you can mention something like "this ML researcher makes $1m in salary because they would otherwise make $2m at DeepMind".

Comment by Austin (akrolsmir) on Why don't all EA-suggested organizations disclose salary in job descriptions? · 2022-09-29T16:31:38.035Z · EA · GW

Link updated, sorry about that!

Comment by Austin (akrolsmir) on Why don't all EA-suggested organizations disclose salary in job descriptions? · 2022-09-29T03:35:58.798Z · EA · GW

Inside the tech world, there's a norm of fairly transparent salaries driven by levels.fyi (and Glassdoor, to a lesser extent). I think this significantly reduces pay gaps caused by eg differential negotiating inclinations, and a similar gathering place for public EA salary metrics is one of my pet project proposals.

Manifold Markets takes the somewhat unusual step of just making all of our salaries public: https://manifoldmarkets.notion.site/Manifold-Finances-0f9a14a16afe4375b67e21471ce456b0

Comment by Austin (akrolsmir) on Cause Exploration Prizes: Announcing our prizes · 2022-09-10T02:51:56.626Z · EA · GW

Manifold Markets ran a prediction tournament to see whether forecasters would be able to predict the winners! For each Cause Exploration Prize entry, we had a market on "Will this entry win first or second place?". Check out the tournament rules and view all predictions here.

I think overall, the markets did okay -- they managed to get the first-place entry ("Organophosphate pesticides and other neurotoxicants") as the highest % to win, and one of the other winners was ranked 4th ("Violence against women and girls"). However, they did miss the two dark horse winners ("Sickle cell disease" and "shareholder activism"); catching those could have been one way for markets to outperform karma. Specifically, none of the Manifold forecasters placed a positive YES bet on either of the dark horse candidates.

 

I'm not sure that the markets were much better predictors than just EA Forum Karma -- and it's possible that most of the signal from the markets was just forecasters incorporating EA Forum Karma into their predictions. The top 10 entries by Karma also had 2 of the 1st/2nd place winners.

And if you include honorable mentions in the analysis, EA Forum Karma actually did somewhat better. Manifold Markets had 7/10 "winners" (first/second/honorable), while EA Forum Karma had 9/10.

Thanks again to the team at OpenPhil (especially Chris and Aaron) for hosting these prizes and thereby sponsoring so many great essays! Would love to see that writeup about learnings; I'm especially curious what decision process led to these winners and honorable mentions.

Comment by Austin (akrolsmir) on Open EA Global · 2022-09-02T17:29:07.163Z · EA · GW

I think anime/gaming expos/conventions might actually be a good example - at those events, the density of high-quality people is less important than just being "open for anyone who's interested to come". Like, organizers will try to have established/legit speakers and guests lined up, but 98% of the people visiting are just fans of anime who want to talk to other fans.

Notably, it's not where industry experts converge to do productive work on creating things, or do 1:1s; but they sure do take advantage of cons and expos to market their new work to audiences. By analogy, a much larger EA Expo would have the advantage of promoting the newest ideas to a wider subset of the movement.

Plus, you get really cool emergent dynamics when the audience size is 10x'd. For example, if there are 1-2 people in 1,000 who enjoy creating EA art, then at 10,000 people you can have 10-20 of them get together, meet up, and talk to each other.

Comment by Austin (akrolsmir) on Can You Predict Who Will Win OpenPhil's Cause Exploration Prize? Bet on it! · 2022-09-02T01:12:32.803Z · EA · GW

Haha thanks for the shoutout, Nathan! Our writeup and tournament announcement is now up at https://forum.effectivealtruism.org/posts/ktZCeDaMZgr9dCjsX/prediction-tournament-who-will-win-the-cause-exploration

Comment by Austin (akrolsmir) on Prediction Markets are Somewhat Overrated Within EA · 2022-09-01T07:46:42.714Z · EA · GW

Also re: funding -- obviously, super super biased here, but I think something like "experimentation is good", "the amount of EA money that's been spent on prediction markets is quite low overall, in the single-digit millions", and "it's especially unclear where the money would be better spent".

Prediction pools (like Metaculus-style systems) are maybe the solution I'm most aware of in this space, and I think executing on these could also be quite valuable; if you have good proposals on how to get better forecasts about the future, I think a lot of people would happily fund those~

Comment by Austin (akrolsmir) on Prediction Markets are Somewhat Overrated Within EA · 2022-09-01T07:42:40.560Z · EA · GW

FWIW my strongest criticism of prediction markets might look something like "Prediction Markets are very general purpose tools, and there's been a lot of excitement about them from a technocratic perspective, but much less success at integrating them into making better decisions or providing novel information, especially relative to the counterfactual of eg paying forecasters or just making random guesses"

Comment by Austin (akrolsmir) on Prediction Markets are Somewhat Overrated Within EA · 2022-09-01T07:39:20.641Z · EA · GW

Austin from Manifold here - thanks for this writeup! I do actually agree with your core thesis (something like "prediction markets get a lot of hype relative to how much value they've added"), though I think your specific points are not as convincing?

  • 1. Re: long-run reliability - this is actually a thing we think a lot about. Manifold has introduced different loan schemes such that the cost of capital of investing in a long-term market is lower, but I could imagine better market structures or derivatives that correctly get people to bet on important long-term questions
  • 2. The existence of free money is worth noting, and points out some limits of prediction markets:
    • Markets which don't get enough attention will be less accurate
    • Markets without enough liquidity (fewer traders, less $ traded) will be less accurate
    • The Efficient Market Hypothesis isn't universally true - markets exist on a spectrum of efficiency, and simply setting up a "market" doesn't magically make the prices/predictions good
  • That said, "hey look, these markets are clearly wrong" is painting prediction markets with an overly broad brush, and might lead you to miss out on markets that are actually valuable. By analogy, you wouldn't hold up a random antivaxxer's tweet as proof that all of Twitter is worthless; rather, you should think that the context and the people producing the tweet or market actually make a difference
  • 3. The asymmetric payout for being right in doom scenarios has been discussed, eg in this bet between Yudkowsky and Caplan. I think this is simultaneously true and also not super relevant in practice, since it turns out the motivation (at least in Manifold markets) is often closer to "I want to make this number correct" - either for altruistic info-providing reasons or for egotistical "show people I was right" reasons - than to completely rational bankroll-maximizing cost-benefit analysis
Comment by Austin (akrolsmir) on Open EA Global · 2022-09-01T06:51:07.122Z · EA · GW

+100 on this. I think the screening processes for these conferences overweight legible, in-groupy accomplishments like organizing an EA group in your local town/college, and underweight regular impressive people like startup founders who are EA-curious -- and this is really, really bad for movement diversity.

Yes, I might be salty because I was rejected from both EAG London and Future Forum this year... 

But I also think the bar for introducing EA-curious friends is higher, because there isn't a cool thing I can invite them to. Anime conventions such as Anime Expo or Crunchyroll Expo are the opposite of this - everyone is welcome, bring your friends, have a good time -- and it works out quite well for keeping people interested in the subject.

Comment by Austin (akrolsmir) on Rethink Priorities 2022 Mid-Year Update: Progress, Plans, Funding · 2022-07-26T17:55:49.806Z · EA · GW

Thanks for writing this up! Really appreciate the clear and transparent writeup across hiring, output, and financial numbers, and think that more orgs (including Manifold!) should strive for this level of clarity. One thing I would have been curious to see is how much money came in from each funding source, haha.

I set up a prediction market to see how RP will do against its funding goals.

Comment by Austin (akrolsmir) on EAs should use Signal instead of Facebook Messenger · 2022-07-21T23:05:42.974Z · EA · GW

Half-baked draft that has been sitting around for a while: https://blog.austn.io/posts/why-you-should-switch-from-google-docs-to-notion

I would spend more time on this cause if I felt that EA orgs would actually listen to me on this point. And I actually do pitch this quite often when I talk to leaders at EA orgs, haha.

Comment by Austin (akrolsmir) on EAs should use Signal instead of Facebook Messenger · 2022-07-21T23:04:22.734Z · EA · GW

Yes, Notion has been around for 6 years and has raised hundreds of millions in funding: https://www.crunchbase.com/organization/notion-so

Comment by Austin (akrolsmir) on EAs should use Signal instead of Facebook Messenger · 2022-07-21T08:42:47.242Z · EA · GW

Hrm, I strongly disagree with this post.

  • I don't see that security/privacy is especially important as a feature of a messaging system, when compared to something like "easy to use" or "my friends are already on it"
  • Basically all sensitive/important EA communication already happens over Slack or Gmail. This means that the considerations around switching aren't especially relevant to "EA" specifically, vs just regular consumers.
  • This post reads as fairly alarmist against FB messenger, but doesn't do a good job explaining or quantifying what the harms of a possible security breach are, nor how likely such a breach might be
  • I don't think EA wants to be spending weirdness points convincing people to use a less-good system - switching costs are quite high!

Fwiw, I do agree that choosing good software is quite important - for example, I think EA orgs are way overindexed on Google Docs, and a switch to Notion would make any one org something like 10% more productive within 3 months.

Comment by Austin (akrolsmir) on arxiv.org - I might work there soon · 2022-07-18T20:46:55.924Z · EA · GW

This is really awesome! Along the lines of what Hauke mentioned around scientometrics, I'd love to figure out a native integration for predicting different kinds of metrics for new research papers. Then other scientists browsing Arxiv could quickly submit their own thoughts on the quality and accuracy of different aspects of each paper, as a more quantitative and public way of delivering feedback to the authors.

A quick sketch: on every new paper submission, we automatically create markets for:

  • "How many citations will this paper have?"
  • "Will this paper have successfully replicated in 1 year?" 
  • "Will this paper be retracted in the next 6 months?"
  • Along with letting the author set up markets on any key claims made within the paper, or the sources the paper depends on

Manifold would be happy to provide the technical expertise/integration for this; we've previously explored this space with the folks behind Research.bet, which I would highly encourage reaching out to as well.

Comment by Austin (akrolsmir) on Retroactive funding with subsidy markets · 2022-07-17T16:31:39.388Z · EA · GW

None of the examples illustrate the investors making positive returns. The scheme is deliberately set up so that, in the limit of ideal markets, the investors make nothing.

 

Ah, I wasn't sure whether this was a core principle of the proposal or not. In that case: why do the investors bother to participate at all? What incentivizes them to do a good job?

This is the problem you're pointing to under "Silly Money", I think -- that investors have no skin in the game.

Comment by Austin (akrolsmir) on Retroactive funding with subsidy markets · 2022-07-17T03:45:33.096Z · EA · GW

Hey! Thanks for writing this up, I'm a huge fan of weird funding proposals haha. Let me try and summarize the proposal to see if I understand it. I found some of the terms to be confusing, so I'll refer to Donald's terminology in "quotes" and give my own interpretation in (parentheses).

  1. Set aside a "subsidy market" (aka funding pool) to match "investor" (allocator) money
  2. Each "venture" (project) starts with a "cost bar" (funding target)
  3. Investors buy into each venture via Dutch auction; call the total raised T.
  4. The subsidy market scales up the T by a multiplicative factor R; eg if R = 2.5, then the subsidy market provides a 150% match.
  5. If R*T > cost bar, the venture is good to go; excess is returned to the subsidy market.
  6. Later, once the venture is complete, "funders" (final oracular funders) decide how much good the venture achieved, and pay the total to the investors.
  7. Excess funds are also returned to the investors.

 

None of the examples seem to illustrate the investors actually earning a positive return, so I'll draw one up:

  • Project with cost bar $2m
  • Raises a total of 10k shares * $100 per share = $1m
  • Subsidy market scales this up to $2.5m (then takes back $0.5m since that's above the bar)
  • Project ends up spending $1.5m and generating 3m utils; the unspent $0.5m is scaled down (divided by R) to $0.2m and sent to the investors ($20 per share)
  • Funder pays $3m for the project ($300 per share)

So in total: 

  • Investors paid $1m and got $3.2m = +$2.2m
  • Project gained $1.5m to spend = +$1.5m
  • Funders spent $3m = -$3m
  • Subsidy market spent $1.5m, took back $0.5m, and kept the $0.3m left over from scaling down the refund = -$0.7m

which all balances out.
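
To sanity-check that these flows really do net to zero, here's a minimal sketch in Python (the function and variable names are mine, not part of the proposal) that just recomputes my example above with R = 2.5:

```python
def subsidy_market_example(raised=1_000_000, cost_bar=2_000_000, r=2.5,
                           spent=1_500_000, funder_payment=3_000_000):
    """Recompute the worked example: investors raise T, the subsidy market
    scales it to R*T (capped at the cost bar), the unspent remainder is scaled
    back down by R when refunded to investors (matching the $0.5m -> $0.2m step
    above), and the final funder pays investors for the utils produced."""
    scaled = raised * r                               # $2.5m
    excess_to_pool = max(scaled - cost_bar, 0)        # $0.5m above the bar, returned
    unspent = (scaled - excess_to_pool) - spent       # $0.5m left unspent
    refund_to_investors = unspent / r                 # $0.2m, i.e. $20/share
    refund_to_pool = unspent - refund_to_investors    # $0.3m

    investors = -raised + funder_payment + refund_to_investors   # +$2.2m
    project = spent                                               # +$1.5m
    funders = -funder_payment                                     # -$3.0m
    pool = -(scaled - raised) + excess_to_pool + refund_to_pool   # -$0.7m
    assert abs(investors + project + funders + pool) < 1e-6      # balances out
    return investors, project, funders, pool

print(subsidy_market_example())  # (2200000.0, 1500000, -3000000, -700000.0)
```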

The main new thing in this proposal seems to be the "subsidy market", which 1) pays out as a matching pool for projects which counterfactually wouldn't have been funded, and 2) absorbs surplus when a project is overfunded? And 2) is an attempt to solve Scott's question of "who gets the surplus from a profitable venture"? It's this part I'm most confused about.

It's not clear to me that this subsidy market leads to better outcomes -- specifically, it seems to mess with the incentives such that the people running the venture don't care about spending the money well? Your first counterexample with the 20m utils seems to address this, but it's not very reassuring - the case where "the fact that $10m and $1m buy the same thing is known up front" is a pretty big ask, IMO.

Also, with the way the system is set up, the subsidy market seems to earn money when projects don't actually need its funding (in the High Returns example), and lose money when its funds are actually useful (in my example). This is deeply weird to me -- if I were viewing the subsidy market as a lender, it would seem "fair" somehow to pay it back extra if its funds were actually used, rather than when it sits by twiddling its thumbs.

One adjustment/framing that makes more intuitive sense to me is to treat the subsidy market as just another shareholder; e.g. if it scales up T to 2.5T and is thus bankrolling 1.5T/2.5T = 60% of the operation, it should just get 60% of the total profit among all investors.

Comment by Austin (akrolsmir) on Announcing Future Forum - Apply Now · 2022-07-06T19:18:33.880Z · EA · GW

I've had the pleasure of meeting Isaak in person, and it's clear that thoughtfulness and agency are both values that he not only espouses, but also embodies. (Ask him sometime about his experience starting a utilitarian student movement -- before ever having heard of "effective altruism"!)

The Future Forum looks incredibly exciting and I would highly encourage you to apply~

Comment by Austin (akrolsmir) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T22:13:30.022Z · EA · GW

I think impact markets should be viewed through that experimental lens, for what it's worth (they've barely been tested outside of a few experiments on the Optimism blockchain). I'm not sure we disagree much!

Curious to hear what experiments and better funding mechanisms you're excited about~

Comment by Austin (akrolsmir) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T22:09:54.198Z · EA · GW

Thanks for your responses!

I'm not sure that "uniqueness" is the right thing to look at.

Mostly, I meant: the for-profit world already incentivizes people to take high amounts of risk for financial gain. In addition, there are no special mechanisms to prevent for-profit entities from producing large net-negative harms. So asking that some special mechanism be introduced for impact-focused entities is an isolated demand for rigor.

There are mechanisms like pollution regulation, labor laws, etc which apply to for-profit entities - but these would apply equally to impact-focused entities too.

We should be cautious about pushing the world (and EA especially) further towards the "big things happen due to individuals following their local financial incentives" dynamics.

I think I disagree with this? I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.

Agree that xrisk/catastrophe can happen via eg AI researchers following local financial incentives to make a lot of money - but unless your proposal is to overhaul the capitalist market system somehow, I think building a better competing alternative is the correct path forward.

Comment by Austin (akrolsmir) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T15:33:39.026Z · EA · GW

Hm, naively - is this any different from the risks of net-negative projects in the for-profit startup funding markets? If not, I don't think this is a unique reason to avoid impact markets.

My very rough guess is that impact markets should at a bare minimum be better than the for-profit landscape, which already makes them a worthwhile intervention. People participating as final buyers of impact will at least be looking to do good rather than generate additional profits; it would be very surprising to me if the net impact of that were worse than "the thing that happens in regular markets already".

Additionally - I think the negative externalities may be addressed with additional impact projects, further funded through other impact markets?

Finally: on a meta level, the amount of risk you're willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. Basically, if you think existing funding mechanisms are doing a good job, and we're likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That's not my current read of our xrisk situation, but would love to be convinced otherwise!)

Comment by Austin (akrolsmir) on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-17T16:47:29.287Z · EA · GW

In July 2022, there still aren’t great forecasting systems that could deal with this problem. The closest might be Manifold Markets, which allows for the fast creation of different markets and the transfer of funds to charities, which gives some monetary value to their tokens. In any case, because setting up such a system might be laborious, one could instead just offer to set such a system up only upon request.

Manifold would be enthusiastic about setting up such a system for improving grant quality, through either internal or public prediction markets! Reach out (austin@manifold.markets) if you participate in any kind of grantmaking and would like to collaborate.

The primary blocker, in my view, is a lack of understanding on our team of how the grantmaking process operates - meaning we have less understanding of the use case (and thus how to structure our product) than of, eg, our normal consumer product. A few members of OpenPhil have previously spoken with us; we'd love to work more to understand how we could integrate.

A lesser blocker is the institutional lack of will to switch over to a mostly untested system. I think here "Prediction Markets in the Corporate Setting" is a little pessimistic wrt motives; my sense is that decisionmakers would happily delegate decisions, if the product felt "good enough" - so this kind of goes back to the above point.

Comment by Austin (akrolsmir) on New cooperation mechanism - quadratic funding without a matching pool · 2022-06-13T18:01:06.385Z · EA · GW

Hey Filip! I've been working on an implementation of QF for Manifold! Preview: https://prod-git-quadfund-mantic.vercel.app/charity

Specifically, we actually DO have a matching pool, but there are some properties of fixed-matching-pool QF that are not super desirable; namely, it turns into a zero-sum competition for the fixed pool. We're trying to address this with a growing matching pool; would love to see if your mechanism here is the right fix. More discussion: https://github.com/manifoldmarkets/manifold/pull/486#issuecomment-1154217092
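
For concreteness, here's a minimal sketch of the standard fixed-pool QF match calculation (my own illustration, not Manifold's actual implementation) showing why a fixed pool turns matching into a zero-sum competition: every project's match gets rescaled by the same factor so the matches exactly exhaust the pool, so one project's gain is another's loss.

```python
from math import sqrt

def qf_matches(contributions, pool):
    """Standard quadratic funding: a project's ideal funding is
    (sum of sqrt(individual contributions))^2, and its match is that ideal
    minus what was contributed directly. With a fixed matching pool, all
    matches are then scaled proportionally to fit the pool."""
    raw = {project: sum(sqrt(c) for c in donors) ** 2 - sum(donors)
           for project, donors in contributions.items()}
    total = sum(raw.values())
    scale = pool / total if total > 0 else 0
    return {project: match * scale for project, match in raw.items()}

# Ten $10 donors beat one $100 donor, even though the direct totals are equal:
print(qf_matches({"A": [10] * 10, "B": [100]}, pool=500))
# -> roughly {'A': 500.0, 'B': 0.0}
```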

Comment by Austin (akrolsmir) on New cooperation mechanism - quadratic funding without a matching pool · 2022-06-05T22:35:24.646Z · EA · GW

Very cool! Manifold has been considering quadratic funding for a couple of situations:

And in the latter scenario, we had been thinking of a matching-pool-less approach of redistributing contributions according to the quadratic funding equation. But of course, the downside of "I wanted to tip X but the commenter is getting less!" is always kind of weird. I like this idea of proportionally increasing commitments up to a particular limit; it seems like a much easier psychological sell.

Really appreciate the animations btw - super helpful for giving a visual intuition for how this works!

Comment by Austin (akrolsmir) on akrolsmir's Shortform · 2022-05-31T15:01:59.729Z · EA · GW

For reference - malaria kills 600k a year. Covid has killed 6m to date.

If you believe creating an extra life is worth about the same as preventing an extra death (very controversial, but I hold something like this) then increasing fertility is an excellent cause area.

Comment by Austin (akrolsmir) on akrolsmir's Shortform · 2022-05-31T14:56:50.099Z · EA · GW

Missing-but-wanted children now substantially outnumber unwanted births. Missing kids are a global phenomenon, not just a rich-world problem. Multiplying out each country’s fertility gap by its population of reproductive age women reveals that, for women entering their reproductive years in 2010 in the countries in my sample, there are likely to be a net 270 million missing births—if fertility ideals and birth rates hold stable. Put another way, over the 30 to 40 years these women would potentially be having children, that’s about 6 to 10 million missing babies per year thanks to the global undershooting of fertility.

https://ifstudies.org/blog/the-global-fertility-gap

Comment by Austin (akrolsmir) on Impact is very complicated · 2022-05-22T19:27:56.531Z · EA · GW

Haha thanks for pointing this out! I'm glad this isn't an original idea; you might say robustness itself is pretty robust ;)

Comment by Austin (akrolsmir) on Impact is very complicated · 2022-05-22T11:56:24.420Z · EA · GW

It becomes clear that there's a lot of value in really nailing down your intervention the best you can. Having tons of different reasons to think something will work. In this case, we've got:

  1. It's common sense that not being bit by mosquitos is nice, all else equal.
  2. The global public health community has clearly accomplished lots of good for many decades, so their recommendation is worth a lot.
  3. Lots of smart people recommend this intervention.
  4. There are strong counterarguments to all the relevant objections, and these objections are mostly shaped like "what about this edge case" rather than taking issue with the central premise.

Even if one of these fails, there are still the others. You're very likely to be doing some good, both probabilistically and in a more fuzzy, hard-to-pin-down sense.

 

I really liked this framing, and think it could be a post on its own! It points at something fundamental and important, like "Prefer robust arguments".

You might visualize an argument as a toy structure built out of building blocks. Some kinds of arguments are structured as towers: one conclusion piled on top of another, capable of reaching tremendous heights. But: take out any one block and the whole thing comes crumbling down.

Other arguments are like those Greek temples with multiple supporting columns. They take a bit more time to build, and might not go quite as high; but are less reliant on one particular column to hold its entire weight. I call such arguments "robust".

One example of a robust argument that I particularly liked: the case for cutting meat out of your diet. You can make a pretty good argument for it from a bunch of different angles:

  • Animal suffering
  • Climate/reducing emissions
  • Health and longevity
  • Financial cost (price of food)

By preferring robustness, you are more likely to avoid Pascalian muggings, more likely to work on true and important areas, more likely to have your epistemic failures be graceful.

Some signs that an argument is robust:

  • Many people who think hard about this issue agree
  • People with very different backgrounds agree
  • The argument does a good job predicting past results across a lot of different areas

Robustness isn't the only, or even main, quality of an argument; there are some conclusions you can only reach by standing atop a tall tower! Longtermism feels shaped this way to me. But also, this suggests that you can do valuable work by shoring up the foundations and assumptions that are implicit in a tower-like argument, eg by red-teaming the assumption that future people are likely to exist conditional on us doing a good job.

Comment by Austin (akrolsmir) on What share of British adults are vegetarian, vegan, or flexitarian? · 2022-05-16T14:24:20.347Z · EA · GW

Thanks, this was really interesting; I love the visualization of how diets are changing over time!

I was inspired to start a prediction market on how my own diet will change (I'm currently pescatarian): https://manifold.markets/Austin/what-will-my-diet-look-like-over-th

Comment by Austin (akrolsmir) on Norms and features for the Forum · 2022-05-16T14:03:59.401Z · EA · GW

Sinclair has been working on allowing authors to embed Manifold prediction markets inside of a LessWrong/EA Forum post! See: https://github.com/ForumMagnum/ForumMagnum/pull/4907

So ideally, you could set up a prediction market for each of these things, eg:

  • "How many  epistemic corrections will the author issue in the next week?"
  • "Will this post win an EA Forum prize?"
  • "Will this post receive >50 karma?"
  • "Will a significant critique of this post receive >50 karma?"
  • "Will this post receive a citation in major news media?"

And then bet on these directly from within the Forum!

Comment by Austin (akrolsmir) on Why Helping the Flynn Campaign is especially useful right now · 2022-05-10T22:27:00.289Z · EA · GW

I actually do think that getting Flynn elected would be quite good, and would be open to other ways to contribute. eg if phonebanking seems to be the bottleneck, could I pay for my friends to phonebank, or is there some rule about needing to be "volunteers"?

Comment by Austin (akrolsmir) on Why Helping the Flynn Campaign is especially useful right now · 2022-05-10T22:22:16.899Z · EA · GW

I have donated $2900, and I'm on the fence about donating another $2900. Primarily, I'm not sure what a marginal dollar to the campaign will accomplish -- is the campaign still cash-constrained?

My very vague outsider sense (eg from local coverage by a somewhat hostile source) was that the Flynn campaign had already blanketed the area with TV ads, so additional funding might not do that much.

Comment by Austin (akrolsmir) on What We Owe the Past · 2022-05-06T12:47:16.359Z · EA · GW

Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer among my non-EA acquaintances, defending the importance of (1), (2), or (3) - great breakdown, btw. I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your thoughts!

I actually do have some amount of confidence in this view, and do think we should think about fulfilling past preferences - but totally agree that I have not made those counterpoints, alternatives, or further questions available. Some of this is: I still just don't know - and to that end your review is very enlightening! And some is: there's a tradeoff between post length and clarity of argument. On a meta level, EA Forum posts have been ballooning to somewhat hard-to-digest lengths as people try to anticipate every possible counterargument; I'd push for a return to more of Sequences-style shorter chunks.


I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can't change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary. 

I still believe in (2), but I'm not confident I can articulate why (and I might be wrong!). Once again, I'd draw upon the framing of deceptive or counterfeit utility. For example, I feel that involuntary wireheading or being tricked into staying in a simulation machine is wrong, because the utility provided is not a true utility. The person would not actually realize that utility if they were cognizant that this was a lie. So too would the conservationist laboring to preserve biodiversity feel deceived/not gain utility if they were aware of the future supplanting their wishes.

Can we change the past? I feel like the answer is not 100% obviously "no" -- I think this post by Joe Carlsmith lays out some arguments for why:

Overall, rejecting the common-sense comforts of CDT, and accepting the possibility of some kind of “acausal control,” leaves us in strange and uncertain territory. I think we should do it anyway. But we should also tread carefully.

 (but it's also super technical and I'm at risk of having misunderstood his post to service my own arguments.)


In terms of one specific claim: Large EA Funders (OpenPhil, FTX FF) should consider funding public goods retroactively instead of prospectively. More bounties and more "this was a good idea, here's your prize", and less "here's some money to go do X".

I'm not entirely sure what % of my belief in this comes from "this is a morally just way of paying out to the past" vs "this will be effective at producing better future outcomes"; maybe 20% compared to 80%? But I feel like many people would only state 10% or even less belief in the first.

To this end, I've been working on a proposal for equity for charities -- still at a very early stage, but since you work as a fund manager, I'd love to hear your thoughts (especially your criticism!)

Finally (and to put my money where my mouth is): would you accept a $100 bounty for your comment, paid in Manifold Dollars aka a donation to the charity of your choice? If so, DM me!

Comment by Austin (akrolsmir) on What We Owe the Past · 2022-05-06T01:27:26.729Z · EA · GW

I deeply do not share the intuition that younger versions of me are dumber and/or less ethical. Not sure how to express this but:

  • 17!Austin had much better focus/less ADHD (possibly as a result of not having a smartphone all the time), and more ability to work through hard problems
  • 17!Austin read a lot more books
  • 17!Austin was quite good at math
  • 17!Austin picked up new concepts much more quickly, had more fluid intelligence
  • 17!Austin had more slack, ability to try out new things
  • 17!Austin had better empathy for the struggles of young people

This last point is a theme in my all-time favorite book, Ender's Game - that the lives of children and teenagers are real lives, but society kind of systematically underweights their preferences and desires. We stick them into compulsory schooling, deny them the right to vote and the right to work, and prevent them from making their own choices.

Comment by Austin (akrolsmir) on What We Owe the Past · 2022-05-06T01:17:11.301Z · EA · GW

Thanks - this comparison was clarifying to me! The point about past people being poorer was quite novel to me.

Intuitively for me, the strongest weights are for "it's easier to help the future than the past" followed by "there are a lot of possible people in the future", so on balance longtermism is more important than "pasttermism" (?). But I'd also intuit that pasttermism is under-discussed compared to long/neartermism on the margin - basically the reason I wrote this post at all.

Comment by Austin (akrolsmir) on Market Design Meets Effective Altruism · 2022-05-01T19:09:00.705Z · EA · GW

Yup, I think that should be possible. Here's a (very wip) writeup of how this could work: https://manifoldmarkets.notion.site/Charity-Equity-2bc1c7a411b9460b9b7a5707f3667db8