Posts

[Stats4EA] Uncertain Probabilities 2020-05-26T21:40:59.254Z · score: 22 (10 votes)
[Stats4EA] Expectations are not Outcomes 2020-05-18T22:59:03.754Z · score: 32 (15 votes)
On Waiting to Invest 2020-04-09T20:05:29.151Z · score: 11 (8 votes)
[Link] Freedom Week 2019-04-25T17:30:44.324Z · score: 2 (1 votes)
Tech volunteering: market failure? 2019-02-17T16:51:33.851Z · score: 18 (12 votes)

Comments

Comment by matthewp on [Stats4EA] Uncertain Probabilities · 2020-05-29T08:46:08.629Z · score: 2 (2 votes) · EA · GW

One of the topics I hope to return to here is the importance of histograms. They're not a universal solvent. However they are easily accessible without background knowledge. And as a summary of results, they require fewer parametric assumptions.

I very much agree about the reporting of means and standard deviations, and how much a paper can sweep under the rug by that method.

Comment by matthewp on [Stats4EA] Uncertain Probabilities · 2020-05-29T08:36:35.478Z · score: 1 (3 votes) · EA · GW

Nice example, I see where you're going with that.

I share the intuition that the second case would be easier to get people motivated for, as it represents more of a confirmed loss.

However, as your example shows, the first case could actually lead to an 'in it together' effect on co-ordination, assuming the information is taken seriously. That's hard because, in advance, this kind of situation could encourage a 'roll the dice' mentality.

Comment by matthewp on [Stats4EA] Expectations are not Outcomes · 2020-05-19T18:28:13.847Z · score: 3 (3 votes) · EA · GW
I also think it would be a lot more helpful to walk through how this mistake could happen in some real scenarios in the context of EA

Hopefully, we'll get there! It'll be mostly Bayesian though :)

Comment by matthewp on [Stats4EA] Expectations are not Outcomes · 2020-05-19T18:20:33.616Z · score: 3 (2 votes) · EA · GW

Thanks - that last link was one I'd come across and liked when looking for previous coverage. My sole previous blog post was about Pascal's Wager. I'd found though when speaking about it that I was assuming too much for some of the audience I wanted to bring along; notwithstanding my sloppy writing :D So, I'm going to attempt to stay focused and incremental.

Comment by matthewp on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-15T13:56:00.703Z · score: 5 (4 votes) · EA · GW
As long as the core focuses on unusual priorities – which using neglectedness as a heuristic for prioritization will mean is likely – there’s a risk that new members get surprised when they find out about these unusual priorities

Perhaps there are also some good reasons that people with different life experience both a) don't make it to 'core' and b) prioritize more near term issues.

There's an assumption here that weirdness alone is off-putting. But, for example, technologists are used to seeing weird startup ideas and considering the contents.

This suggests a next thing to find out is: who disengages and why.

Comment by matthewp on EA Forum: Data analysis and deep learning · 2020-05-12T18:26:10.400Z · score: 28 (19 votes) · EA · GW
TL;DR's for the EA Forum/Welcome: ”Effective altruists are trying to figure out how to build a more effective AI, using paperclips, but we're not really sure how it's possible to do so.

Ouch.

Comment by matthewp on A cause can be too neglected · 2020-05-09T14:48:06.449Z · score: 1 (1 votes) · EA · GW

Perhaps EA's roots in philosophy lead it more readily to this failure mode?

Take the diminishing marginal returns framework above. Total benefit is not likely to be a function of a single variable 'invested resources'. If we break 'invested resources' out into constituent parts we'll hit the buffers OP identifies.

Breaking into constituent parts would mean envisaging the scenario in which the intervention was effective and adding up the concrete things one spent money on to get there: does it need new PhDs minted? There's a related operational analysis about timelines: how many years for the message to sink in?

Also, for concrete functions, it is entirely possible that the sigmoid curve is almost flat up to an extraordinarily large total investment (and regardless of any subsequent heights it may reach). This is related to why ReLUs displaced sigmoids as activations in neural networks: the near-zero gradients in a sigmoid's flat regions prevent learning.
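To make the gradient point concrete, a toy sketch (standard textbook definitions, nothing specific to this post):

```python
import math

# Sigmoid saturates: its gradient s'(x) = s(x) * (1 - s(x)) vanishes
# in both tails, so learning stalls once a unit saturates.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# ReLU does not saturate on the positive side: gradient stays at 1.
def relu_grad(x):
    return 1.0 if x > 0 else 0.0

for x in (-10.0, 0.0, 10.0):
    print(f"x={x:+.0f}  sigmoid'={sigmoid_grad(x):.2e}  relu'={relu_grad(x):.0f}")
```

The analogy to interventions: if returns sit on the flat early part of a sigmoid, small investments produce no visible signal at all, just as a saturated unit produces no gradient.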

Comment by matthewp on If you had $10-100mm and a skilled team to improve the COVID response to minimize economic and human damage, what would you do? Or, how would you decide what to do? · 2020-04-26T11:19:22.319Z · score: 1 (1 votes) · EA · GW

I would spend every penny unblocking the pathway to a vaccine.

  • Multithreading stages of the clinical trials. E.g. have combination vaccine safety trials started ahead of proof of efficacy for individual trials?
  • Reviewing what the target for effectiveness really is. E.g. would a vaccine which tips the population reproduction rate below 1, without providing individual-level guarantees, be enough? How long would we wait to see if anything better would become available? Would we be prepared to risk needing two phases of vaccination? Would we make an earlier safe vaccine with low efficacy available optionally?
  • Cohort recruitment should be easy; volunteerism is at a maximum right now. If it is not, the problem must be logistical. What support is needed (coach services, apps etc)?
  • Streamlining regulatory hurdles. Assume tests go well, is there any blocking legislation which does not make sense? What bills could we predict are necessary now to pre-empt the blockage?
  • Preparing for mass production of different kinds of vaccines. Assume each system works and ask what would be needed to scale it out. Make bets on any cheap elements of any relevant systems. Use the time before production starts to duplicate existing production facilities.
  • Readying the logistics for administration. E.g. given PPE shenanigans, it wouldn't surprise me if we had a lack of disposable syringes etc. Most of this could be prepared in advance of actually having a vaccine.

The basic ideas and test candidates are already known. The lag between now and mass roll out is therefore (mostly) dependent on our organizational skills.

<waving hands> UK GDP is ~£2.9 trillion. The recession will shave at least 10% off that. The government takes ~30% of GDP in tax. If bringing forward mass vaccination could shave a quarter off an 18 month recession, it would be revenue neutral to pay £100 billion to do it. So, if some of the above sounds too expensive, it's because a bigger budget is necessary and likely justified. It would take a correction of order 1000x to change this reasoning. </waving hands>
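For what it's worth, the hand-waving arithmetic as stated, using the rough figures above:

```python
gdp = 2.9e12             # UK GDP in pounds (rough figure from the comment)
recession_hit = 0.10     # at least 10% shaved off GDP
recession_years = 1.5    # an 18-month recession
fraction_avoided = 0.25  # shave a quarter off its length

# GDP loss avoided by bringing mass vaccination forward
gdp_saved = gdp * recession_hit * recession_years * fraction_avoided
print(f"GDP loss avoided: ~£{gdp_saved / 1e9:.0f}bn")  # ~£109bn
```

That lands in the £100 billion ballpark quoted, which is why only a correction of several orders of magnitude would change the conclusion.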

Comment by matthewp on If you value future people, why do you consider near term effects? · 2020-04-19T21:22:41.112Z · score: 1 (3 votes) · EA · GW

"Our actions have dominating long-term effects that we cannot ignore."

To me, this is a strange intuition. Most actions by most people most of the time disappear like ripples in a stream.

If this were not the case, reality would tear under the weight of schemes past people had for the present. Perhaps it is actually hard to change the course of history?

Comment by matthewp on Discontinuous progress in history: an update · 2020-04-18T15:33:37.135Z · score: 8 (7 votes) · EA · GW

This is a nice piece of accessible scholarship. It would perhaps benefit from an explicit note on why the question is interesting in this context and to this audience.

Comment by matthewp on On Waiting to Invest · 2020-04-11T19:31:30.732Z · score: 1 (1 votes) · EA · GW

Ah, that's interesting and the nub of a difference.

The way I see it, a 'good' impact function would upweight the impact of low probability downside events and, perhaps, downweight low probability upside events. Maximising the expectation of such a function would push one toward policies which more reliably produce good outcomes.
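A toy sketch of that idea (the loss weight of 3 and the two outcome distributions are invented purely for illustration):

```python
# Hypothetical loss-averse "impact" function: downside outcomes count
# three times their face value. The factor 3 is arbitrary.
def impact(x, loss_weight=3.0):
    return x if x >= 0 else loss_weight * x

# Expectation of f(X) over a discrete distribution [(outcome, prob), ...]
def expectation(dist, f=lambda x: x):
    return sum(p * f(x) for x, p in dist)

risky  = [(-10, 0.2), (0, 0.5), (30, 0.3)]  # higher mean, fat left tail
steady = [(2, 0.25), (5, 0.5), (8, 0.25)]   # lower mean, no downside

print(expectation(risky))            # raw expectation prefers risky
print(expectation(steady))
print(expectation(risky, impact))    # weighted expectation flips the ranking
print(expectation(steady, impact))
```

Maximising the raw expectation picks the risky policy; maximising the weighted expectation picks the steady one, which is the sense in which such a function pushes toward policies that more reliably produce good outcomes.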

Comment by matthewp on On Waiting to Invest · 2020-04-11T13:34:58.940Z · score: 4 (3 votes) · EA · GW

So, what do you think of the idea that aiming for high expected returns in long term investments might not be the best thing to do, given the skewed distribution? That is, we want to ensure that most futures are 'good'; not just a few that are 'excellent' lost in a mass of 'meh' or worse.

BTW, I did like the podcast - it does take something to make me tap out forum posts :)

Comment by matthewp on On Waiting to Invest · 2020-04-10T14:37:23.325Z · score: 4 (3 votes) · EA · GW

Thanks for the response. To clarify: in the second model both the drift and the diffusion terms affect the expected return. If you substitute in a model return e^{q + sz}, with z a standard normal:

E[V(1)] = E[e^{q + s z}] = E[e^{sz}]e^q = e^{s^2/2} e^q > e^q

So, if we have fixed from some source that E[V(1)] = 1.07 = e^r, then we cannot set q = r in the model with randomness while maintaining the equality, where the equality cashes out as 'the expected rate of return a year from now is 7%'.

Empirically estimated long run rates already take into account the effects of randomness since they are typically some sort of mean of observed returns. If this were not the case one would always have to, at least, quote the parameters in pairs (drift=such and such, vol=such and such) and perform a calculation in order to get out the expected returns.
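A quick simulation illustrates the gap (the values q = 0.05 and s = 0.2 are arbitrary):

```python
import math
import random

# Monte Carlo check that E[e^{q + s z}] = e^{q + s^2/2} for z ~ N(0, 1),
# i.e. the log-drift q alone understates the expected return when s > 0.
random.seed(0)
q, s = 0.05, 0.2
n = 200_000
mc = sum(math.exp(q + s * random.gauss(0.0, 1.0)) for _ in range(n)) / n

print(f"Monte Carlo E[V(1)]:        {mc:.4f}")
print(f"Closed form e^(q + s^2/2):  {math.exp(q + s ** 2 / 2):.4f}")
print(f"e^q alone:                  {math.exp(q):.4f}")
```

The simulated mean matches e^{q + s^2/2}, not e^q, which is exactly why fitting the drift to an empirically observed mean return already bakes in the volatility term.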

Comment by matthewp on Why is the EA Hotel having trouble fundraising? · 2019-04-29T20:57:29.621Z · score: 1 (1 votes) · EA · GW

"People don’t want to be associated with something low status and are likely to subject anything they perceive as low status to a lot of scrutiny."

Ouch! Alas, it is true in general. However, I think it's a dangerous heuristic when not backed by the kinds of substantive comments made in 1-6.

I do think toning down 5 might foster a better culture. Perhaps there is more information here I don't know. But this kinda sounds like someone tried something, it didn't work out, and they don't get a second chance. That's not a great rubric to establish if you want people to take risks.

Comment by matthewp on Reasons to eat meat · 2019-04-29T05:51:40.880Z · score: 15 (5 votes) · EA · GW

Ego depletion is quite a narrow psychological effect. If the idea that people's moment to moment fatigue saps moment to moment willpower is debunked, that's far from showing that akrasia isn't a thing in general.

In a world where general-sense akrasia was not a thing, there would be a far higher rate of people being ripped like movie stars, a far lower rate of smoking, a much higher rate of personal savings etc. than there is in the world we inhabit.

Comment by matthewp on Reasons to eat meat · 2019-04-24T06:45:30.061Z · score: 7 (2 votes) · EA · GW

The willpower argument is actually quite good. There are ways to reduce the amount of willpower required, but the kernel of the argument applies.

My prediction for people who constantly feel bad for not living up to an exacting standard is that a majority will fall off the boat entirely.

Comment by matthewp on Ben Garfinkel: How sure are we about this AI stuff? · 2019-03-06T22:24:55.058Z · score: 1 (1 votes) · EA · GW

Maximising paperclips is a misunderstood human value. Some lazy factory owner says, gee, wouldn't it be great if I could get an AI to make my paperclips for me? He then builds an AGI and asks it to make paperclips, and it makes everything into paperclips, its utility function being unreflective of its owner's true desire to also have a world.

If there is a flaw here it's probably somewhere in thinking that AGI will get built as some sort of intermediate tool and that it will be easy to rub the lamp and ask the genie to do something in easy to misunderstand natural language.

Comment by matthewp on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T19:43:57.616Z · score: 16 (11 votes) · EA · GW

Nice point.

'I also wish we didn't accidentally make donating to AMF or GiveDirectly so uncool.'

This reminds me of the pattern where we want to do something original, so we don't take the obvious solution.

Comment by matthewp on List of possible EA meta-charities and projects · 2019-02-10T21:23:23.539Z · score: 1 (1 votes) · EA · GW

"Making rationality more accessible."

Sounds great, and I've thought about this too. But what does it look like?

  • Seminar series. Probably in the workplace - this would not be so scalable but for me would be highly targeted.
  • Video lectures. Costly, probably get wide reach though. Maybe better done in short form, slick and well marketed.
  • Podcast. IMHO hard to beat Rationally Speaking. However, this content should be more introductory so perhaps more of an audio series than a podcast.

How to assess what the main topics should be though? I feel the pedagogy for rationality is lacking, because for many people who are interested they picked up the basics by osmosis before getting into it in a more organised way. I.e. what is the first thing someone should learn, the second etc. For me, everything revolves around an understanding of probability - but that's a long and somewhat indirect road to walk.