A technical note: Bayesianism is not logic, statistics is not rationality

post by jonathanstray · 2016-09-06T15:47:03.793Z · EA · GW · Legacy · 7 comments

Perhaps I am beating a dead horse for this community, but this is a very lucid explanation of what probabilistic/statistical reasoning cannot do. Namely: first-order logic. There is really no way to encode relations or quantifiers in purely Bayesian inference, which makes it quite weak for model building.
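To make the gap concrete, here is a minimal sketch (a toy model I made up, not from the linked essay): a propositional Bayesian calculation can only handle a quantified statement like "all ravens are black" by *grounding* it into a finite conjunction over a fixed, known domain. For an unbounded or unknown domain there is no such finite propositional encoding, which is exactly where first-order quantifiers outrun Bayesian machinery.

```python
import itertools

# Hypothetical toy world: a fixed, finite domain of three birds.
birds = ["a", "b", "c"]

# Prior: each bird is independently black with probability 0.9.
P_BLACK = 0.9

def prob_all_black(domain):
    """P(forall x: black(x)), encoded propositionally by grounding the
    quantifier into black(a) AND black(b) AND ... over the known domain."""
    total = 0.0
    # Enumerate every possible world (truth assignment to each ground atom).
    for world in itertools.product([True, False], repeat=len(domain)):
        p = 1.0
        for is_black in world:
            p *= P_BLACK if is_black else (1 - P_BLACK)
        if all(world):  # the grounded conjunction holds in this world
            total += p
    return total

print(round(prob_all_black(birds), 3))  # 0.9 ** 3 = 0.729
```

The grounding trick works only because the domain is finite and known in advance; the number of worlds also grows as 2^n, and "for all x" over an open-ended domain has no counterpart at all in this setup. That is the limitation the post is pointing at.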

Further, integrating probability and logic is a huge unsolved problem! We actually have very little idea how to combine our two greatest successes in formalizing rationality. 

I found this tremendously clarifying, though not immediately useful. But it has definitely broadened my thinking.

https://meaningness.com/probability-and-logic
7 comments

Comments sorted by top scores.

comment by kierangreig · 2016-09-06T19:00:08.022Z · EA(p) · GW(p)

Further, integrating probability and logic is a huge unsolved problem! We actually have very little idea how to combine our two greatest successes in formalizing rationality.

I think MIRI reported making a big breakthrough on this.

comment by MichaelDickens · 2016-09-07T01:39:04.572Z · EA(p) · GW(p)

I don't think this sort of post is particularly relevant to the EA forum. It's about probability and logic, not altruism.

comment by John_Maxwell (John_Maxwell_IV) · 2016-09-08T06:25:07.675Z · EA(p) · GW(p)

It feels to me like inclusion should be based on plausible impact, whether direct or indirect, rather than immediate apparent relevance to effective altruism. If this essay improves our thinking, and makes the effective altruist movement better at a rate that's comparable to the other stuff posted here, then it's a valuable post.

* I might be a little biased because I think EA should be prioritizing epistemic rationality much more highly.

comment by RyanCarey · 2016-09-08T06:36:39.327Z · EA(p) · GW(p)

I agree with this pretty strongly. But I also think authors have to make an effort to bridge the gap with intermediate steps in their reasoning, rather than pouring unexplained insights, however genius they may be, onto a bewildered reader.

comment by JesseClifton · 2016-09-08T01:46:33.261Z · EA(p) · GW(p)

Examining the foundations of the practical reasoning used (and seemingly taken for granted) by many EAs seems highly relevant. I wish we saw more of this kind of thing.

comment by Owen_Cotton-Barratt · 2016-09-07T02:04:10.338Z · EA(p) · GW(p)

It's a little indirect, but it's a link to a nice essay on a topic which is relevant when we get to the "working stuff out" side of effective altruism.