Comment by halffull on Identifying Talent without Credentialing In EA · 2019-03-15T12:57:32.084Z · score: 1 (1 votes) · EA · GW
Fair – an implicit assumption of my post is that markets are efficient. If you don't think so, then what I had to say is probably not very relevant.

I assume you're not arguing for the strong EMH here (that markets are maximally efficient), so the difference seems to me to be one of degree rather than kind (you think hiring markets are more efficient than Peter does, and vice versa).

If you are arguing for the strong version of the EMH here, I'd be curious about your reasoning, as I can't think of any credible economist who thinks real-world markets have no inefficiencies.

If you're arguing for a weaker version, I think it's worth digging into cruxes... Why do you think the hiring market is more efficient than Peter does?

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-15T08:56:17.271Z · score: 1 (1 votes) · EA · GW

Interesting, thanks!

Edit: I've now updated the post.

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-15T08:55:30.405Z · score: 1 (1 votes) · EA · GW

This is great, thanks for sharing!

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-14T20:13:17.682Z · score: 1 (1 votes) · EA · GW

I actually looked for standard categories, but AFAICT there is no single standard. Knightian uncertainty and statistical uncertainty are one pairing that is almost synonymous with epistemic and aleatory uncertainty, which in turn are fairly synonymous with model uncertainty and base uncertainty; hence my use of "Knightian". However, those definitions don't capture the difference between transparent and opaque risk mentioned above.

Risk and uncertainty are defined in many different places, and the definitions are frequently swapped between sources. Ignorance and uncertainty are sometimes treated as synonymous, sometimes not.

Basically, I created my own terms because I thought the current terms were muddied enough that using them would create more confusion than clarity. I used existing terms for solutions to different types of risk because the opposite was true.

One place where I didn't look as hard for existing categories is the "Types of knightian risk" section. I couldn't find any existing breakdowns, and as far as I know it's original, but there may be an existing list and I was simply using the wrong search terms.

How to Understand and Mitigate Risk (Crosspost from LessWrong)

2019-03-12T10:24:06.352Z · score: 7 (5 votes)
Comment by halffull on EA is vetting-constrained · 2019-03-09T17:17:23.537Z · score: 5 (6 votes) · EA · GW

I worked on this problem for a few years and agree that it's a bottleneck not just in EA, but globally. I do think the work on prediction is one potential "solution", but there are additional problems with getting people to actually adopt solutions: people in power have little incentive to switch to a solution that gives them less power, and there are lots of evolutionary pressures that lead to the current vetting procedures. I'd love to talk more with you about this, as I'm working on similar things, although I've moved away from this exact problem.

Comment by halffull on How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? · 2019-02-13T18:53:45.339Z · score: 1 (1 votes) · EA · GW

One option we were looking to use at Verity is the "contest" model, in which an interested party subsidizes a particular question and then splits the pool between forecasters based on their reputation/score after the outcome has come to pass. This helps to subsidize specific predictions, rather than subsidizing general forecasting ability by paying people for their overall score. It has similarities to the subsidized prediction market model as well.
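For concreteness, here is a minimal sketch of the payout step under that model, assuming a single binary question and a Brier-based weighting. The function names and the specific weighting are illustrative assumptions for this comment, not Verity's actual mechanism:

```python
def brier_score(prob: float, outcome: int) -> float:
    """Brier score for a binary forecast (lower is better)."""
    return (prob - outcome) ** 2

def split_pool(pool: float, forecasts: dict[str, float], outcome: int) -> dict[str, float]:
    """Split a subsidy pool among forecasters in proportion to accuracy.

    Accuracy is taken here as (1 - Brier score) so that better forecasts
    receive a larger share; any monotone transform of the score would work.
    """
    weights = {name: 1.0 - brier_score(p, outcome) for name, p in forecasts.items()}
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

# Example: a $1,000 pool on a question that resolved "yes" (outcome = 1).
payouts = split_pool(1000.0, {"alice": 0.9, "bob": 0.6, "carol": 0.3}, outcome=1)
print(payouts)  # alice receives the largest share, carol the smallest
```

In practice the weighting would also fold in a forecaster's track record (the "reputation" part above), but the proportional split after resolution is the core of the contest idea.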

Comment by halffull on Against Modest Epistemology · 2017-11-18T00:11:17.857Z · score: 0 (2 votes) · EA · GW

Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They both see the same number of flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weights the other person's estimate as equally informative as their own, will now offer a number quite a bit higher than 0.7, taking into account the equal information both of them have pulling them away from their prior.
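A minimal Beta-binomial sketch of that aggregation point; the prior strength and flip counts are made up purely so each peer's individual estimate lands near 0.7:

```python
# Each peer starts from the same prior, bunched around 0.5, and independently
# sees equally informative flips. Pooling both datasets pushes the estimate
# further from the prior than either peer's individual estimate.

PRIOR_HEADS, PRIOR_TAILS = 10, 10  # prior centred on 0.5 ("probably close to fair")

def posterior_mean(heads: int, tails: int) -> float:
    """Posterior mean of the coin's heads-probability under the Beta prior."""
    return (PRIOR_HEADS + heads) / (PRIOR_HEADS + PRIOR_TAILS + heads + tails)

individual = posterior_mean(25, 5)          # each peer sees 30 flips, 25 heads -> 0.70
combined = posterior_mean(25 + 25, 5 + 5)   # pooling both peers' flips        -> 0.75

print(individual, combined)
```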

This is what I'm talking about when I say "just-so stories" about the data from the GJP. One explanation is that superforecasters are going through this thought process; another would be that they discard non-superforecasters' knowledge and therefore end up more extreme without explicitly running the extremizing algorithm on their own forecasts.

Similarly, the existence of superforecasters themselves argues for a non-modest epistemology, while the fact that the extremized aggregation beats the superforecasters may argue for a somewhat more modest epistemology. To my mind, saying that the data here points one way or the other is cherry-picking.

Comment by halffull on Against Modest Epistemology · 2017-11-17T01:20:31.443Z · score: 1 (1 votes) · EA · GW

How is that in conflict with my point? As superforecasters spend more time talking and sharing information with one another, maybe they have already incorporated extremising into their own forecasts.

Doesn't this clearly demonstrate that the superforecasters are not using modest epistemology? At best, it shows that you can improve upon a "non-modest" epistemology by aggregating individual forecasts, but it does not argue against the original post.

Comment by halffull on Against Modest Epistemology · 2017-11-16T22:26:19.150Z · score: -2 (4 votes) · EA · GW

It's an interesting just-so story about what the IARPA tournament has to say about epistemology, but the actual story is much more complicated. For instance, "extremizing" improves the calibration of aggregated forecasts in general, but extremizing superforecasters' predictions makes them worse.
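For reference, one common form of the extremizing transform is a power transform that pushes an aggregated probability away from 0.5; the exponent below is an illustrative assumption, not the value the GJP actually used:

```python
def extremize(p: float, a: float = 2.0) -> float:
    """Push an aggregate probability away from 0.5; a > 1 extremizes, a = 1 is a no-op."""
    return p ** a / (p ** a + (1 - p) ** a)

print(extremize(0.7))   # ~0.84: a middling crowd average gets sharpened
print(extremize(0.95))  # ~0.997: an already-confident (e.g. superforecaster)
                        # aggregate gets pushed near certainty, which can overshoot
```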

Furthermore, contrary to what you seem to be claiming about people not being able to outperform others, there are in fact "superforecasters" who outperform the average participant year after year, even if they can't outperform the aggregate once their own forecasts are factored in.