david_reinstein's Shortform

post by david_reinstein · 2021-05-31T14:43:29.796Z · EA · GW · 30 comments


Comments sorted by top scores.

comment by david_reinstein · 2022-05-14T17:48:11.245Z · EA(p) · GW(p)

Modest proposal on a donation mechanism for people doing direct work? 


Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always exactly 1-1 deductible from income taxes.  E.g., in the USA it’s only deductible if you forgo the standard deduction and ‘itemize your deductions’, and in many countries in the EU there is very limited tax deductibility. 

So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, etc., the charity may only end up with about $0.65 on the margin. There are ways to do better (set up a DAF, bunch your donations…) but they are costly (a DAF takes fees) and imperfect (whenever you itemize you lose the standard deduction, if I understand correctly).
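To make the arithmetic concrete, here's a minimal sketch; the 35% marginal rate is just an illustrative assumption, not a claim about any particular person's taxes:

```python
# Hypothetical sketch: how much reaches the charity per extra $1 of
# pre-tax pay, under different deductibility assumptions.

def charity_receives_per_dollar(marginal_tax_rate: float,
                                deductible: bool) -> float:
    """Amount a charity gets if you donate $1 of extra pre-tax pay."""
    if deductible:
        # A deductible donation of $1 reduces taxable income by $1,
        # so donating your full $1 of pre-tax pay delivers $1.
        return 1.0
    # Non-deductible: you first pay tax, then donate what's left.
    return 1.0 - marginal_tax_rate

print(charity_receives_per_dollar(0.35, deductible=True))   # 1.0
print(charity_receives_per_dollar(0.35, deductible=False))  # 0.65
```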



Funders/orgs (e.g., Open Phil, RP) could agree that employees are allowed to relinquish some share of their paycheck into some sort of general fund. The employees who do so are allowed to determine the use of these funds (or 'advise on' it, with the advice generally followed).

Key anticipated concerns and responses

Concern: This will lead to a ‘pressure to donate/relinquish’ if the employers, managers, funders are aware of it

Response: This process could be managed by ops and by someone at arms-length who will not share the data with the employers/managers/funders. (Details need working out, obviously, unless something like this already exists)


Concern - Legal issues: Is this feasible? Would these relinquishments be seen by governments as actually income?

Response: ??


Concern - crowding out: If the funder knows that the people/orgs it funds give back to charities, it may shift its funding away from these charities, nullifying the employees' counterfactual impact

Response: This is hardly a new issue, and hardly unique to this context; it's a major question for donors in general, across all modes, so maybe not so important to consider here. … To the extent it is important, it could be reduced if we can keep the exact target and amount of the donations unknown to the funders

Concern - “Org reputation … why not give back to the org?”

Maybe a stretch, but I could imagine someone arguing “If your (e.g., RP) employees ask you to redirect paychecks to a fund, which largely goes to the Humane League, Malaria Consortium, … does this indicate your employees don’t think RP is the best use of funds”?

Responses: Unlikely to be a concern. Employees may want to ‘hedge their bets’ because of moral uncertainty, and because of the good feeling they get from direct impact of donations.

Responses: Keep the recipient of these funds hidden from outsiders

comment by david_reinstein · 2021-05-31T14:43:30.072Z · EA(p) · GW(p)

ImpactMatters acquired by CharityNavigator; but is it being incorporated/presented/used in a good way?

Note: moved to 'regular' post here [EA · GW] ...

Replies from: aarongertler, David_Moss
comment by Aaron Gertler (aarongertler) · 2021-06-22T06:35:32.709Z · EA(p) · GW(p)

I spent a few minutes looking at the impact feature, and I... will also go with "not satisfied". 

From their review of Village Enterprise:

Impact & Results scores of livelihood support programs are based on income generated relative to cost. Programs receive an Impact & Results score of 100 if they increase income for a beneficiary by more than $1.50 for every $1 spent and a score of 75 if income increases by more than $0.85 for every $1 spent. If a nonprofit reports impact but doesn't meet the threshold for cost-effectiveness, it earns a score of 50.
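The quoted thresholds amount to a simple step function; here's a sketch of my reading of the rules, not CN's actual implementation:

```python
def impact_score(income_per_dollar: float,
                 reports_impact: bool = True) -> int:
    """Charity Navigator-style Impact & Results score for livelihood
    programs, per the quoted thresholds (my reading, illustrative)."""
    if not reports_impact:
        return 0   # not enough information to estimate impact at all
    if income_per_dollar > 1.50:
        return 100
    if income_per_dollar > 0.85:
        return 75
    return 50      # reports impact but misses cost-effectiveness bar

print(impact_score(1.60))  # 100
print(impact_score(1.00))  # 75
print(impact_score(0.50))  # 50
```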

My charitable interpretation is that the "$0.85" number is meant to represent one year's income, and to imply a higher number over time (e.g. you have new skills or a new business that boosts your income for years to come).

But I also think it's plausible that "$0.85" is meant to refer to the total increase, such that you could score "75" by running a program that, in your own estimation, helps people less than just giving them money. 

(The "lowest score is 50" element puzzled me at first, but this page clarifies that you score "0" if CN can't find enough information to estimate your impact in the first place.)


Still, this is much better than the original CN setup, and I hope this is an early beta version with many improvements on the way.

comment by David_Moss · 2021-06-02T13:09:17.039Z · EA(p) · GW(p)

There was some discussion of the original acquisition here [EA · GW].

Historically, Charity Navigator has been extremely hostile to effective altruism, as you probably know, so perhaps this isn't surprising. 

Replies from: david_reinstein
comment by david_reinstein · 2021-06-02T18:04:57.351Z · EA(p) · GW(p)

Thank you, I had not seen Luke Freeman @givingwhatwecan's earlier post [EA · GW]

That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.

I doubt CN would have acquired IM just to bury it; there might be some room for positive suasion here.

comment by david_reinstein · 2022-04-13T14:38:48.612Z · EA(p) · GW(p)

Imposter (syndrome) ?

Building on my  response to the Don't think just apply [EA(p) · GW(p)] thread.

... It reminded me of my thoughts on the ‘imposter syndrome’.

I think there are many people who are under-confident in their abilities, both overall and in relation to other people. Perhaps this disproportionately affects people in the EA and rationalist community, because we are more introspective and skeptical.

But there are also people in this world who are in some way 'imposters', in the sense that they don't have the training for their position, or they (or their organization) are claiming much more than they are actually doing. In some cases it is useful for these people and orgs to consider "how can we level up our abilities and accomplishments, and moderate our claims?"[1]

This is also real, and we don't want to convey that "everyone who thinks they are in over their heads/over-claiming is merely suffering from imposter syndrome". Maybe some have IS, but some are actually having a meaningful and useful insight that they can benefit from… if they are not paralyzed by shame and inaction.

This 'not everything is IS' also applies when considering individuals and companies making big claims, or modest ones. I don't think we should always judge these in the light of "these people/orgs are all probably better than they say, because everyone has IS these days."

I also think that so-called IS may often reflect 'a whole sector being under-trained and overclaiming'. E.g., if 'everyone doing [machine learning, economic analysis, whatever] doesn't understand the principles, is doing a lot of guesswork, and writes things up as if they are clear and certain'… this is a problem. If you are particularly concerned that you are doing the above, you may not be an imposter 'relative to others in the sector', but it still seems like a good insight to have. And perhaps more people 'revealing that they are not wearing imperial clothes' could help change the dynamic.

  1. In my own case, for example, I think I was underprepared for certain aspects of my PhD program. As an undergraduate I jumped right into Calculus 1 without taking pre-calculus. Here I struggled desperately and barely passed … and I lost out on learning some fundamentals and deep mathematical insights. ↩︎

Replies from: Lukas_Gloor, DaveC
comment by Lukas_Gloor · 2022-04-13T19:09:45.104Z · EA(p) · GW(p)

I really like those points! 

In some cases it is useful for these people and orgs to consider “how can we level up our abilities and accomplishments, and moderate our claims?"

I agree this is an important message for some people and circumstances. For instance, it would probably have been a good message for me when I started doing research on longtermist strategy (from an s-risk perspective) in 2014-2017. I mostly pushed through impostor syndrome because there weren't many other people doing similar things, so it felt like "I know it's bad but, looking around, it may just be good enough to be useful." In hindsight, I think the feeling was telling me that I should have focused less on searching for conclusions (by "winging it") and more on  improving my understanding and skill building. (That said, "searching for conclusions" is a crucial habit and people should be trying it with some amount of their attention from the very start, otherwise it's difficult to acquire it later.) 

comment by Dave Cortright (DaveC) · 2022-04-13T18:43:04.170Z · EA(p) · GW(p)

The Dunning-Kruger effect is real. But with a few basic sanity checks, I believe any thoughtful EA can determine whether it's imposter syndrome vs actual under-qualification. 

If you have evidence to support your non-trivial investment in the area—classes, degrees, self-directed learning, projects, jobs—you are probably at least qualified for an entry-level position in a given area.

Probably the easiest way to check is by asking an impartial 3rd party, like an 80kH Advisor, or even just someone who already has experience working in that field.

Replies from: Linch
comment by Linch · 2022-04-13T22:55:05.069Z · EA(p) · GW(p)

The Dunning-Kruger effect is real.

Note that this is heavily contested. Much of the observed phenomenon in the studies (qualitatively: incompetent people thinking they're average, great people thinking they're only good) can be explained by the "better than average" effect + metrics not being perfect + natural regression to the mean.

And of course pop-science accounts of Dunning-Kruger are even more unhinged than what D-K claimed.

My own best guess is that the claimed effect is real but small. 

Replies from: DaveC, david_reinstein
comment by Dave Cortright (DaveC) · 2022-04-18T08:27:54.720Z · EA(p) · GW(p)

Me: “The Dunning-Kruger effect is real.”
Linch: “…the claimed effect is real…”

Great to know that we are in agreement, Linch! The logical follow-up question is what other factor(s) has (have) a higher impact on the effect?

comment by david_reinstein · 2022-04-14T00:14:11.503Z · EA(p) · GW(p)

Interesting. Of course my point is independent of the D-K effect, although that would enhance it.

I'm not saying worse people are more overconfident. I'm just saying 'some people are overconfident or overstating'.

I'm also suggesting that there may be a secular overstatement of abilities and accomplishments in some fields. Less so among EAs, I suspect.

comment by david_reinstein · 2022-02-06T05:05:05.452Z · EA(p) · GW(p)

Sports betting promotion capture for charity

I pledge to donate 70% of the net gain to effective charities immediately or within the year 2022. - David Reinstein

I’ll try to follow the guidelines given in the post here: EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) - EA Forum [EA · GW]

5 Feb 2022

I'm driving up to upstate New York today to visit my dad and take advantage of the promotions, staying through the weekend. I live about 1–2 hours from the NYS border so I can return for a followup if necessary.

… in order to donate to highly effective charities.

Steps taken (some in error)

Caesars – soon after Robi's earlier post … (25 Jan)

  • Started a Caesars account
  • Deposited and withdrew $50 while out of state (I had thought it meant 'deposit at least $50 and you will get the full credit'... didn't realize the first deposit was the determining factor)
  • Requested that they let me redo this to qualify for the maximum bonus (no response)


First bet up to 1000 will be refunded in betting credit if lost.
Terms and conditions HERE

Jan 25 – drove to Connecticut, started an account

  • Tried to fund the account:
  • With bank transfer (my internet bank doesn't seem eligible)
  • With PayPal (unsuccessful, I forget why)
  • With UK card (foreign cards not accepted)
  • With my bank's debit card – declined, but I've since asked for the account to be unblocked

5 Feb 2022 : Trying this from New York State

Needed to download the ‘verify location app’ and install it. Even after this, bank transfer didn’t work, but Paypal funded by my bank did work, deposited 1k.

Next I tried to look at bets, but it again said I was out of state and ineligible. I'm in upstate NY so that must be wrong. I tried clearing the cache; that didn't work. So I downloaded the app on my phone, and this did work.

Ok, now to bet – I want a long-odds bet but maybe not too long… because I want to demonstrate that this works OK (and I also have a psychological barrier to betting on long shots, I think). And I want a decent-odds bet with no big house 'vig', so I signed up for the free OddsJam trial. But this was scary-ish because the default was the 1-year subscription ($999!), which seems a big risk if I forget to cancel. I finally figured out how to do 'monthly', which only has an $89 forgetfulness risk.

Within OddsJam it doesn't tell you which ones are the highest EV unless you sign up for the non-free-trial premium arbitrage version or something. But you can go to 'odds' and find some things that look decent and resolve soon (important in my case, because I have limited time). I selected a few sports – nothing resolving while I'm sleeping, because then I can't sleep!

Basketball seems maybe the best … low vig, frequent stuff happening now:

Georgetown-Providence seems OK: low vig, For Providence FanDuel offers the best odds of all the casinos. Wait – I got it backwards: Providence is favored here … the ‘negative sign’ means ‘less return’ I guess. Opposite of what you want for the first bet. OK, trying again.

OK, maybe Milwaukee is the one. Let me see what “+440” means.

Confirming it on Fanduel

It’s fairly long odds, ok:

$100 bet pays $440 (not including the stake) … maybe that’s what the “+440” meant.

But I need to bet $1000 to take advantage here, and the Max wager is $455 for this bet!

I’ll check again tomorrow if that’s upped. Maybe they need more people on both sides. If not maybe I go with a slightly shorter odds bet on the NBA “Brooklyn Nets” (TIL Brooklyn has a basketball team)

Feb 6, morning

“Best odds” on Oddsjam change from day-to-day. Of course, this doesn’t mean that Fanduel is offering poor odds, but still seems like a decent heuristic to choose one where ‘Fanduel offers the best’. I ended up going with a 5-1 (long odds) bet on Maryland over Ohio State (consulting OddsJam on this of course)...

When I made the bet it did not give me any indication that the ‘credit refund’ would happen! I just have to have faith that I’ve complied with the offer!? Wait until 1pm to see if I'm very lucky, and if not, whether they refund me with $1000 in credit as promised. (I followed all the rules as far as I can tell).

Updates: credit was refunded. I then ended up making a bunch of diverse bets and more or less got the money back

I also gained $250 from the BetRivers promotion, which I was able to 'play through' – netting $250 for a lot of work and stress.

Update: earned $280 in the form of seven free $40 bets for betting $5 on the Rams in the Super Bowl. But I now have to go back to New York State to place the bets before they expire.

comment by david_reinstein · 2022-05-05T14:37:43.490Z · EA(p) · GW(p)

"Room for more funding": A critique/explanation

Status: WIP, rough, needs consolidating

David Reinstein: I have argued against this idea of 'room for more funding' as a binary thing. I generally imagine that in these areas there is always room for more funding, at least over a horizon of a year or more.

It's just a combination of

  • diminishing returns, perhaps past a threshold of 'these interventions are better than alternatives'
  • limited capacity because of short-run constraints that take some time to adjust (hire more staff, negotiate more vaccine access, assess new areas to administer vaccines)

Almost no cost function should have an 'infinite slope' past a certain output, particularly not in the non-very-short run. Similarly here.
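To illustrate (with made-up numbers and an arbitrary functional form), marginal cost-effectiveness plausibly declines smoothly with scale rather than dropping to zero at some threshold:

```python
# Illustrative only: a smooth diminishing-returns curve for an
# intervention, versus the binary "room for more funding" picture.
# The functional form and numbers are invented for the sketch.

def marginal_impact(dollars_spent: float) -> float:
    """Impact per extra dollar: declines smoothly with scale,
    never dropping to exactly zero (no 'infinite slope' in costs)."""
    base_effectiveness = 0.01   # impact per $ at tiny scale (made up)
    scale = 5_000_000           # spending at which impact halves (made up)
    return base_effectiveness * scale / (scale + dollars_spent)

for spent in (0, 5_000_000, 50_000_000):
    # Impact per dollar keeps falling but stays positive.
    print(spent, round(marginal_impact(spent), 5))
```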


comment by david_reinstein · 2022-04-06T04:59:41.749Z · EA(p) · GW(p)

Skills/training/EA jobs, 'Should you do an Economics PhD (or masters… see later)'; what do you need to learn to work at an EA org? How to level up on this stuff and prove value

These were the most frequent questions I got and discussions I had at the conference, mainly with UG students, but also with people at career pivot points. (Maybe the second-biggest was from PhD students and academics looking to learn more about EA and RP, and to have more impact)

I'm working on an essay/post/resource answering these questions and giving my opinions and experience  in a Google Doc here. I'd love your feedback. 

comment by david_reinstein · 2022-05-02T22:14:57.298Z · EA(p) · GW(p)

EA aligned version of "Oxfam stores" ... in the USA+

I used to buy a lot and give away a lot of stuff at Oxfam stores in the UK. I don’t agree with all of the approaches and campaigns but I think that they do a great deal of good. I think that before their prostitution scandal broke the stores were earning about £20 million per year.

Do we have anything like that in the US? We have Goodwill and the Salvation Army but those are doing domestic charity only and thus an order of magnitude less effective, I suspect.

This made me think: would there be any potential value in a store like this, especially in the USA, supporting a variety of EA causes (global health, animal welfare, reducing existential risk…)? If done right it might raise at least a few tens of millions of dollars per year. (Maybe much more – I'm seeing very inconsistent numbers for Goodwill's and Salvation Army's revenues, for example.) [1]

I suspect that much of this would be counterfactual because people in the USA tend to give domestically only, and the people using this store would otherwise be going to Goodwill or Salvation Army.

My impression is that donations themselves might be the minority of the benefit. The presence of the Oxfam stores in the UK also had big community building and public awareness benefits (for Oxfam). Nearly every moderate sized city/town had an Oxfam store which was pretty stylish and had lots of volunteers and maybe some activities around it. It was also not just students but a lot of other people were involved with it.

Any thoughts on whether this idea might have legs?

  1. Annual report reports ~$47 million in goods sales, but Forbes reports $5.8 billion in revenue, mostly 'other income' ↩︎

Replies from: Amber
comment by Amber Dawn (Amber) · 2022-06-18T18:23:10.607Z · EA(p) · GW(p)

This is a really interesting idea! I'm very fond of charity shops so I love the idea of making ones for EA charities. I have no idea how easy or hard it is to do and how it compares to other fundraising tactics, but it seems like it could have a big impact both from profits and from raising awareness. It could be a good thing to do for people with experience starting or running shops. 

comment by david_reinstein · 2022-02-08T01:25:16.771Z · EA(p) · GW(p)

General notes on "Sports Betting for EA" … and ‘ways to (not) screw it up’

Some lessons from my experiment and understanding (writing up my experience HERE [EA(p) · GW(p)], when I get a chance).

As written elsewhere [EA · GW], there are basically 2-3 types of rewards.  

  1. The "deposit match" rewards give you some house money ("bonus") when you sign up and make your first deposit. The ones I've seen will give you house money equal to the amount of that first deposit.
  2. Risk-free bets:  When you start an account and make a deposit,  some online casinos give you your first bet "risk-free".  What this means is that if you place an eligible bet and lose you will be refunded the amount you bet –  not in cash but in what I'm calling house money.
  3. Rewards for taking particular actions, making particular bets, winning certain bets, etc. For example, DraftKings is offering a bonus prize of a few hundred dollars (house money, I presume) if you win your first bet of $5 or more within a certain category.


What is this "house money"? The rewards and bonuses cannot be withdrawn immediately; there are certain "playthrough requirements". From what I'm seeing, there are "1X playthrough requirements" and "20X (or something)" playthrough requirements.^[Don't bother with bonuses involving the latter; in the process of playing through them you will give the casino back a lot of money, as they take some cut with every bet.]

But even with 1X requirement there are some caveats, and some bets do not count as playing through. [See below ‘Check that your bets…’]

Ways to (not) screw it up (or inconvenience yourself)

Don’t miss an opportunity when signing up for an account or depositing money.  

  • It's not clear to me to what extent the "promotion codes” are necessary to get the promotion, but they might be in some cases.
  • If there is a deposit match, when you make your first deposit,  make sure that very first deposit is the amount that can achieve the maximum deposit match 


Make sure you are in the right state and can verify this. Some sites have specific tracking software you need to download; others seem to just use something within your browser. However, in my experience it sometimes gets it wrong and says you are not in the state when actually you are. It seems to work better when you are closer to multiple wifi hotspots. Sometimes clearing your cache or using an incognito browser might help, but I didn't have consistent results with that. Note also that your browser should be set to allow location access. If you download the casino's geo-tracking software, you also need to give that software location permission.

But in my experience, where my computer failed to demonstrate I was in the state I was in, my phone (iPhone) almost always worked, particularly once I downloaded the casinos' apps.

You will need to upload/share some photo ID  such as a driver's license, and at least in one case (Caesars)  a utility bill as well. I don't think you need to show residence in the state, just that you are really who you say you are. If you are not comfortable doing this (I do think it's pretty secure), don't bother.

Make sure you have access to your phone for two-factor identification. The sites and apps continually ask you for this, and you often get logged out and have to log yourself in again with this two-factor process.


When you make a ‘risk-free bet’, make sure the terms apply. I didn't actually have any issues with this, but there are so many terms and conditions I would make sure before you bet.  Also, the general advice is "for risk-free bets, you should bet on something at least somewhat risky, otherwise you are wasting the reward.” 

Check that your bets are actually with the ‘house money’ (bonus/reward) and are eligible for playthrough requirements.  

Not all bets ‘count equally’.     You might accidentally bet your cash and not the house money.  You might bet with the house money in a way that doesn’t qualify as ‘play through.’

Not all bets allow you to use the house money, or qualify as 'playing through' that money. In some cases if you bet it on a "nearly sure thing" (e.g., what they call -200 or shorter odds) this does not count towards your playthrough! They may not let you use the house money for this, or if they do, even if you win you will not be able to withdraw it without playing more. Be sure you know the rules, and….

When in doubt go to their chat helpline and ask directly. They were often helpful. But even there, I'm not sure all of their help team necessarily gets it right, in at least one case they didn’t seem aware of the caveat above. …  But at least you will have a record of the chat you can show to complain.


Make sure you have time (in the state) to use the house money, and don’t wait too long:  The rewards seem to expire after a period that is sometimes a day or week or something.  

How to make reasonably good and safe bets/betting portfolios (ideally with house money)

You need to bet on something.  As noted above, for the risk-free bets you want something fairly risky, something with “+200 odds or higher” perhaps. (I think +200 means that if you bet $1000 and win you get $2000 plus your initial $1000 stake, and can withdraw $3000).  
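My understanding of how American ("moneyline") odds map to payouts, as a sketch matching the +200 example above, not an authoritative sportsbook formula:

```python
# Sketch of American odds -> payout; positive odds are profit per
# $100 staked, negative odds are the stake needed per $100 profit.

def total_return(american_odds: int, stake: float) -> float:
    """Stake plus profit if the bet wins."""
    if american_odds > 0:
        profit = stake * american_odds / 100
    else:
        profit = stake * 100 / abs(american_odds)
    return stake + profit

print(total_return(+200, 1000))  # 3000.0 (win $2000 plus $1000 stake)
print(total_return(+440, 100))   # 540.0  ($440 profit on $100)
print(total_return(-200, 100))   # 150.0  (short odds: only $50 profit)
```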

Finding OK bets, OddsJam

As stated elsewhere, OddsJam is a site that  provides a range of information. You can sign up for a trial account but….

Don’t forget to cancel an OddsJam trial, and probably best to trial a ‘monthly membership’ rather than yearly. If you trial a yearly  and you forget to cancel in a week you could be out $1,000.  


There's a premium version of OddsJam with no free trial that claims to find the highest expected value bets. But the regular one does have a list of sports and sporting events, and tells you which casinos offer the best odds, and what the house 'vig' is (the amount of all bets the house gets on average, I guess, because of the spread between the odds on each team… e.g., one may be -100 and the other only +50).

I wouldn't bet on anything with a “vig” of 5% or more.  My impression was that US basketball games had pretty low vigs but (Euro?) soccer games had ridiculously high ones.
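One rough way to estimate the vig yourself is via implied probabilities: convert each side's odds to a probability; the amount by which the two sum to more than 100% is the house edge. A sketch (illustrative, not OddsJam's actual calculation):

```python
# Rough sketch of estimating the house's cut ("vig") in a two-way
# market from the American odds on each side.

def implied_prob(american_odds: int) -> float:
    """Probability implied by American odds (ignoring the vig)."""
    if american_odds > 0:
        return 100 / (american_odds + 100)
    return abs(american_odds) / (abs(american_odds) + 100)

def overround(odds_a: int, odds_b: int) -> float:
    """Implied probabilities sum to >1; the excess is the house edge."""
    return implied_prob(odds_a) + implied_prob(odds_b) - 1.0

# A fair coin-flip market would be +100 / +100 (zero vig):
print(round(overround(+100, +100), 4))   # 0.0
# A typical juiced line, e.g. -110 on both sides of a spread:
print(round(overround(-110, -110), 4))   # ~0.0476, i.e. ~4.8% edge
```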

Next,  you  might look for a game where one side has odds somewhat close to the level you're looking for, the amount of risk you want to take (unless you are hedging, in which case that might not matter, see below).  On OddsJam  you can browse for that while also  comparing whether  your casino  is offering something close to the best odds among the casinos. (Larger “+” numbers and “-” numbers closer to zero are better).  

Low risk bets and portfolios 

So now you have a 'house money' reward (perhaps just the return of the money you lost on the risk-free bet) and you want to get it out. If you have accounts with multiple sites and a similar amount you want to get out of each, there's a pretty easy way to do this: you can bet on opposite teams on opposite sites, i.e., fully hedge your bet. Check how much each side will pay if it wins on each site, and bet amounts that make those payouts roughly equal. You can do this in combination with OddsJam to try to guarantee as close to 100% of your money back as possible (over 100% is also conceivable but would seem to be a rare 'arbitrage opportunity').
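A sketch of the stake-sizing arithmetic, with hypothetical odds and amounts (the function names are mine, just for illustration):

```python
# Sketch of sizing a hedge across two sites so the total payout is
# (roughly) equal whichever team wins. Odds/amounts are hypothetical.

def decimal_odds(american_odds: int) -> float:
    """Total return per $1 staked (stake included)."""
    if american_odds > 0:
        return 1 + american_odds / 100
    return 1 + 100 / abs(american_odds)

def hedge_stake(stake_a: float, odds_a: int, odds_b: int) -> float:
    """Stake on team B (other site) that equalizes the two payouts."""
    payout_if_a_wins = stake_a * decimal_odds(odds_a)
    return payout_if_a_wins / decimal_odds(odds_b)

# E.g., $1000 on team A at +150; team B offered at -160 elsewhere:
stake_b = hedge_stake(1000, +150, -160)
print(round(stake_b, 2))            # stake needed on team B
total_staked = 1000 + stake_b
# Fraction of total stake returned regardless of who wins:
print(round(1000 * decimal_odds(+150) / total_staked, 4))
```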


If you only have an account with one site (or more money you are trying to get out of one site than another), I think you should:

  • look for eligible bets with odds as low as possible (very likely to win, low payoff – but remember these still need to be risky enough to qualify for the playthrough requirement), and
  • make several small bets rather than one large bet to lower the overall variance of your outcome (see the 'law of large numbers', 'portfolio diversification', etc.).
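A quick simulation (with a made-up win probability and odds) of why splitting the stake lowers the variance of the outcome:

```python
# Illustration: several small bets have a lower outcome spread than
# one big bet with the same total stake, odds, and win probability.
import random

random.seed(0)

def outcome_stddev(n_bets: int, total_stake: float, win_prob: float,
                   dec_odds: float, trials: int = 20000) -> float:
    """Std-dev of total return when the stake is split over n_bets
    independent bets at the same odds."""
    stake = total_stake / n_bets
    returns = []
    for _ in range(trials):
        total = sum(stake * dec_odds
                    for _ in range(n_bets)
                    if random.random() < win_prob)
        returns.append(total)
    mean = sum(returns) / trials
    var = sum((r - mean) ** 2 for r in returns) / trials
    return var ** 0.5

one_big = outcome_stddev(1, 1000, win_prob=0.8, dec_odds=1.2)
ten_small = outcome_stddev(10, 1000, win_prob=0.8, dec_odds=1.2)
print(round(one_big), round(ten_small))  # split is much less spread out
```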


Warning: this can be stressful and distracting to work

The return seems potentially pretty good, but (particularly for the casinos with "risk-free bets" rather than deposit matching) there are ways you can lose money if you are not careful. It can be pretty stressful and you have to concentrate on what you're doing. Even though doing it correctly only takes a few hours, if you are anticipating bets, ruminating over your bad choices, or losing sleep anticipating upcoming games, this can take up a lot more of your life. For some people this process might be fun, for others traumatic; some might really like the excitement and adrenaline. But it may take you away from other things you are trying to focus on. This was the case for me: so far, it's been interesting and sometimes fun and rewarding to see when 'I got it right.' But overall it was not relaxing – rather stressful – and pulled me away from other important things.


Replies from: Sam Anschell
comment by Sam Anschell · 2022-02-08T08:32:26.214Z · EA(p) · GW(p)

Thanks so much for sharing these tips! With regards to the info on OddsJam:

  • I use the app "Todoreminder" to remind myself to cancel subscriptions like this.
  • You can always buy a $10 visa gift card online and register with that instead of your credit card if you're worried about forgetting to cancel your subscription.
  • If you're using OddsJam premium, I wouldn't worry about betting on specific sports. OddsJam will show you where the best bets are for the sports book you're betting with accounting for the vig. Generally speaking though, you're right that two-way markets (meaning only two outcomes can happen) take less vig than futures markets (e.g., the winner of this year's NBA championship) or three-way markets (like soccer where a game can end in a tie).
comment by david_reinstein · 2022-05-18T20:11:31.146Z · EA(p) · GW(p)

Are you engaging in motivated reasoning ... or committing other reasoning fallacies?

I propose the following epistemic check, using Elicit.org's "reason from one claim to another" tool.

Whenever you have a theory that one claim supports another:

Feed this tool your theory, negating one side or the other[1],

and see if any of the arguments it presents seem as plausible as your own arguments for the original claim.

If so, believe your arguments and conclusion less.

Caveat: the tool is not working great yet, and often requires a few rounds of iteration – selecting the better arguments and telling it "show me more like this", or feeding it some arguments.


  1. ^

    Or the contrapositives of either

comment by david_reinstein · 2021-06-29T20:53:43.395Z · EA(p) · GW(p)


I've been recording myself:
  • reading EA forum posts
  • and comments
  • and some links
  • and adding some comments/thoughts of my own

HERE (podcast 'found in the struce' available on all platforms).

I think this will help people who have limited screen time get more from the EA Forum.

I’d like to encourage others to also narrate/record forum posts. I would love to listen to this too on those long drives/walks.

comment by david_reinstein · 2022-03-06T15:48:52.452Z · EA(p) · GW(p)

Facebook ads: can you really do A/B testing on a comparable audience?

On Facebook ‘Lift testing’

… can you really compare ‘ad A vs ad B’ to see which works better on a comparable audience?

Braun and Schwartz (Hat tip, Josh Lewis)… seem to think this is NOT possible in the current FB setup (and maybe not on most platforms either). ... Because of the way each ad design is separately targeted/optimized to its ‘best audience’.

Smartly seems to imply that multi-cell lift tests do not suffer from this problem.

... But it's unclear if this really implements 'target then randomize'.

comment by david_reinstein · 2022-03-06T14:00:50.374Z · EA(p) · GW(p)

What Economics research is EA-relevant?

Most economic research can be deemed EA-relevant in a general sense, in that it usually focuses on the welfare properties (of equilibria)…

But sometimes it's the 'potential Pareto improvement/2nd welfare theorem' stuff… it could 'make the pie higher' and achieve any improved outcome you like, if the gains were redistributed.

E.g., one could claim (loosely)

A. … “efficient antitrust regulation is an EA cause because it aims to achieve the greatest level of Consumer + Producer surplus”

B. “…which could then yield the greatest social gains if we redistributed it to help the extreme global poor/animal welfare/existential risk reduction”

But you might ask: For A: “Is this the most important/easiest/biggest way to ‘make the pie higher’“? For B: “How likely is it that any gains could/would actually be redistributed to then ‘do the most good’”

comment by david_reinstein · 2022-02-02T20:10:09.551Z · EA(p) · GW(p)

On music streaming services

(Note: I am not making a claim that this is an EA cause candidate.)

Music is an "information good". It is "nonrival" and infinitely and freely sharable. Any positive price leads to "allocative inefficiency".

But a zero price obviously gives no incentive to produce and share music. The best solution is to separate what consumers pay from what musicians receive. The music (and all media and info) would be free to access, but the creators would get a payment equal to the value the listeners and users took from it.

But it's hard to

  1. Know what that VALUE is and

  2. Coordinate a way to get the FUNDS and compensate the creators.

For point 1 (measuring VALUE), the number of plays or amount of time spent listening seems like a possibly OK, but imperfect, measure. E.g., 'listening in the background' has less value than 'listening and really getting into it'. Still, 'compensation per stream' seems like the best feasible measure.

However, perhaps because of the market structure and lack of competition, creators seem to be paid very little per stream, not enough to motivate the right amount of content to be produced.

For point 2 (getting the FUNDS), an international, government-mandated tax and funding scheme would be efficient, but there are all sorts of difficulties there (compulsion, coordination across governments, fairness to non-listeners, etc.).

Private streaming services seem like a good second-best, but the inefficiency comes when the streaming service charges customers too much, so not everyone joins. (Why is this inefficient? Society could give these people access to ad-free music at no extra cost, but they don't get it.) Perhaps the best solution would be some sort of streaming service that pays creators more (should that be subsidized?) and offers more differentiated prices to capture the true value consumers are getting. Hard to do, though.
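The allocative-inefficiency point can be made concrete with toy numbers (entirely hypothetical): since the marginal cost of streaming to one more listener is roughly zero, anyone who values the service above zero but below the subscription price represents pure deadweight loss when excluded.

```python
# Toy illustration with hypothetical numbers.
price = 10.0          # flat monthly subscription, $/month
marginal_cost = 0.0   # serving one extra listener is essentially free

# Hypothetical valuations ($/month) of five potential listeners
valuations = [15, 12, 8, 5, 2]

subscribers = [v for v in valuations if v >= price]
excluded = [v for v in valuations if v < price]

# Surplus actually realized vs. the surplus a zero price (or perfect
# price discrimination) would realize.
realized = sum(v - marginal_cost for v in subscribers)
potential = sum(v - marginal_cost for v in valuations)
deadweight_loss = sum(v - marginal_cost for v in excluded)

print(realized, potential, deadweight_loss)  # 27.0 42.0 15.0
```

The three listeners priced out would each enjoy the service at no cost to anyone, which is the sense in which any positive flat price is allocatively inefficient for an information good.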

The main point, I think, is that the 'classical economic model' really doesn't work well for information goods, which are becoming more and more of the economy.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2022-02-02T20:50:03.955Z · EA(p) · GW(p)

As one data point, I'd gladly pay an extra $5-15/month for a "tier" of Spotify that passed along, say, 90% of that extra money to artists. Spotify being mostly private makes it hard to get good digital bling from a higher-tier option, but maybe artists could offer extra rewards to people in that tier?

Much more simply, I'd love to have a "tip the artist" option next to any song, so that when I was especially appreciating something, I could tip the artist a dollar. I'd probably use that option 100-200 times/year.

This seems like it should be a win for Spotify — I see few people angry about Spotify making/keeping too much money, lots of people angry about artists being underpaid. And I think it should be possible to design a tier/tip option that sends the message "you're funding artists" without "we're not".

From some brief research, Spotify paid out over $5 billion to "rights holders"* in 2020 and grossed about $9 billion (they claim to pay out 70% of all revenue). And they have 6500 employees. All of these seem like reasonable numbers, and even boosting artist revenue by 20% would probably feel tiny to critics — now it's half a cent per stream instead of 0.4 cents, hooray — while being a pretty sharp cut for their staff/technical infrastructure.

*Note that this includes record labels; for many artists, Spotify's rate isn't nearly as problematic as the % their record labels take.
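As a rough back-of-the-envelope check on the figures cited in the comment above (the 0.4-cents-per-stream rate is illustrative, not an official Spotify number):

```python
# Figures cited above for 2020: ~$9B gross revenue, ~70% paid out
# to rights holders.
gross_revenue = 9e9
payout_share = 0.70
payout = gross_revenue * payout_share   # ~$6.3B, consistent with the
                                        # ">$5B to rights holders" figure

# Illustrative per-stream rate: a 20% boost to 0.4 cents/stream
rate = 0.004                            # dollars per stream (assumed)
boosted = rate * 1.20                   # ~0.48 cents, i.e. "about half a cent"

print(round(payout / 1e9, 2), round(boosted * 100, 2))
```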

comment by david_reinstein · 2021-07-28T01:44:28.039Z · EA(p) · GW(p)

AI consciousness and valenced sensations: unknowability?

Variant of the Chinese room argument? This seems ironclad to me; what am I missing?

My claims:

Claim: AI feelings are unknowable. Maybe an advanced AI can have positive and negative sensations. But how would we ever know which ones are which (or how extreme they are)?

Corollary: If we cannot know which are which, we can do nothing that we know will improve/worsen the “AI feelings”; so it’s not decision-relevant

Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they are happy/suffering. But for non-bio things, this analogy seems highly flawed. If a dust cloud converges on a ‘smiling face’, we should not think it is happy.

Justification II. (Related) AI, as I understand it, is coded to learn to solve problems: to maximise things, optimize certain outcomes, or do things it “thinks” will yield positive feedback.

We might think then, that the AI ‘wants’ to solve these problems, and things that bring it closer to the solution make it ‘happier’. But why should we think this? For all we know, it may feel pain when it gets closer to the objective, and pleasure when it avoids this.

Does it tell us it makes it happy to come closer to the solution? That may merely be because we programmed it to learn how to come to a solution, and one thing it ‘thinks’ will help is telling us it gets pleasure from doing so, even though it actually feels pain.

A colleague responded:

If we get the AI through a search process (like training a neural network) then there's a reason to believe that AI would feel positive sensations (if any sensations at all) from achieving its objective since an AI that feels positive sensations would perform better at its objective than an AI that feels negative sensations. So, the AI that better optimizes for the objective would be more likely to result from the search process. This feels analogous to how we judge bio-based living things in that we assume that humans/animals/others seek to do those things that make them feel good, and we find that the positive sensations of humans are tied closely to those things that evolution would have been optimizing for. A version of a human that felt pain instead of pleasure from eating sugary food would not have performed as well on evolution's optimization criteria.

OK, but this seems right only if we:

  1. Knew how to induce or identify "good feelings"
  2. Decided to induce these and tie them in as a reward for getting close to the optimum.

But how on earth would we know how to do 1 (without biology, at least), and why would we bother doing so? Couldn't the machine be just as good an optimizer without getting a 'feeling' reward from optimizing?

Please tell me why I'm wrong.

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2021-12-31T16:40:27.063Z · EA(p) · GW(p)

When you say "feeling", are you referring to conscious experience of the AI, or mechanistic positive and negative signals?

 - If consciousness, super-high uncertainty on what consciousness even is, what the correct ontology for it is. But can be discussed.

 - If positive and negative reward signals, then AI today already runs based on positive and negative reward signals as you mention.

Also source of consciousness ("which configurations of matter are conscious?") seems a bit different from moral status ("which configurations of matter do we care about?").

A paperclip maximiser could have consciousness, that doesn't have to mean we care too much about it or are willing to sacrifice our lives to ensure its survival.

Basically I think humans just care about anything that looks similar to human beings. (Which makes sense evolutionarily.)

Replies from: david_reinstein, david_reinstein
comment by david_reinstein · 2022-01-01T18:13:58.358Z · EA(p) · GW(p)

Also source of consciousness ("which configurations of matter are conscious?") seems a bit different from moral status ("which configurations of matter do we care about?").

A paperclip maximiser could have consciousness, that doesn't have to mean we care too much about it or are willing to sacrifice our lives to ensure its survival.

But why not? How do we justify that?

Basically I think humans just care about anything that looks similar to human beings. (Which makes sense evolutionarily.)

That may be what we do care about, but how can we justify that in terms of what we should care about?

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-01-02T06:23:42.029Z · EA(p) · GW(p)

How do we justify that?

Just first-hand experience honestly, I don't feel as much empathy for a paperclip maximiser as I do for human suffering. I doubt you can ground ethics in anything other than our experiences. (Like you might find intermediate things to ground ethical theories in, but those things will just further ground themselves in actual experiences.)

That may be what we do care about, but how can we justify that in terms of what we should care about?

I feel like asking "what we should care about" is just our System-2 trying to make sense of the fact that our System-1 cares about a lot of inconsistent things - I'm currently skeptical you can find an objective answer to that question.

I also recently wrote this reply [EA(p) · GW(p)] about this, it's where my intuition currently points and I'm super happy to get any feedback (positive or negative).

comment by david_reinstein · 2022-01-01T18:12:42.666Z · EA(p) · GW(p)

When you say "feeling", are you referring to conscious experience of the AI, or mechanistic positive and negative signals?

The former. The latter has no moral patienthood I guess

  • If consciousness, super-high uncertainty on what consciousness even is, what the correct ontology for it is. But can be discussed.

I've been reading more about this and I realize there is great disagreement

If positive and negative reward signals, then AI today already runs based on positive and negative reward signals as you mention.

Of course, but their 'conscious experience' of these signals need not agree with how they are coded in the algorithm. They could 'feel pain from maximizing' ... we just don't know.