Comment by halffull on The Athena Rationality Workshop - June 7th-10th at EA Hotel · 2019-05-13T07:19:23.319Z · score: 1 (1 votes) · EA · GW

Thought about this for the last couple of days, and I'd recommend against it. The workshop is set up to be a complete, contained experience, and isn't really designed to be consumed only partially.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-14T21:46:35.802Z · score: 5 (3 votes) · EA · GW
There is an opportunity cost in not having a better backdrop.

Seems plausible.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-14T13:11:30.367Z · score: -2 (3 votes) · EA · GW

It's possible I'm wrong. I find it unlikely that veganism wasn't influenced by existing political arguments for veganism. I find it unlikely that a focus on institutional decision-making wasn't influenced by the existing political zeitgeist around the problems with democracy and capitalism. I find it unlikely that the global poverty focus wasn't influenced by the existing political zeitgeist around inequality.

All this stuff is in the water supply; the arguments and positions have been refined by different political parties' moral intuitions and battles with the opposition. This causes problems when there's opposition to EA values, sure, but it also provides the backdrop that EAs are reasoning from.

It may be that EAs have somehow thrown off all of the existing arguments, cultural milieu, and basic stances and assumptions that have been honed over the past few generations, but if true, that to me represents more of a failure of EA than anything else.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-13T09:54:53.124Z · score: 1 (4 votes) · EA · GW
I haven't seen any examples of cause areas or conclusions that were discovered because of political antipathy towards EA.

Veganism is probably a good example here. Institutional decision-making might be another. I don't think that political antipathy is the right way to view this, though; it's more the general political climate shaping the thinking of EAs. Political antipathy is a consequence of the same system that produces both positive effects on EA thought and political antipathy towards certain aspects of EA.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-12T17:36:12.277Z · score: 4 (3 votes) · EA · GW
Internal debate within the EA community is far better at reaching truthful conclusions than whatever this sort of external pressure can accomplish. Empirically, it has not been the case that such external pressure has yielded benefits for EAs' understanding of the world.

It can be the case that external pressure is helpful in shaping directions EVEN if EA has to reach conclusions internally. I would put forward that this pressure has already helped EA reach conclusions and find new cause areas, and will continue to be helpful in the future.

Comment by halffull on Who is working on finding "Cause X"? · 2019-04-12T12:53:41.089Z · score: 6 (6 votes) · EA · GW

Rethink Priorities seems to be the obvious organization focused on this.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-12T09:54:52.728Z · score: 14 (14 votes) · EA · GW

An implicit problem with this sort of analysis is that it assumes the critiques are wrong, and that the current views of Effective Altruism are correct.

For instance, if we assume that systemic change towards anti-capitalist ideals actually is correct, or that taking refugees does actually have long-run bad effects on culture, then the criticism and the pressure on the community from political groups to adopt these views is actually a good thing, and provides a net-positive benefit for EA in the long term by creating incentives to adopt the correct views.

Comment by halffull on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T19:09:15.545Z · score: 13 (9 votes) · EA · GW
I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think there are two things going on here.

The first is that weirdness and the outside view are often deeply correlated, although they're not the same thing. In many ways the feeling of weirdness is a Schelling fence: it protects people from sociopaths, from joining cults, and from other things that are bad ideas even when they can't quite articulate in words WHY they're bad ideas.

I think you're right that the best interventions will often be weird, so in this case it's a Schelling fence you have to ignore if you want to make any progress from an inside view... but it's still worth noting that the weirdness is there, and that it's good data.

The second thing going on is that it seems like many EA institutions have adopted the neoliberal strategy of gaining high status, infiltrating academia, and using that to advance EA values. From this perspective, it's very important to avoid an aura of weirdness for the movement as a whole, even if any given individual weird intervention might have high impact. This is hard to talk about, because being too loud about the strategy makes it less effective, which means that sometimes people have to say things like "outside view" when what they really mean is "you're threatening our long-term strategy, but we can't talk about it." Although obviously, in this particular case, the positive impact outweighs the potential negative impact of the weirdness aura on that strategy.

I feel comfortable stating this because it's a random EA forum post and I'm not in a position of power at an EA org, but were I in that position, I'd feel much less comfortable posting this.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-10T08:57:15.291Z · score: 4 (3 votes) · EA · GW

You can often get the timing to work late in the game by stalling the company that gave you the offer, and telling other companies that you already have an offer so you need an accelerated process.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-10T08:56:03.535Z · score: 9 (3 votes) · EA · GW

It matters less if you time your offers so you have multiple at the same time.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-09T19:31:15.284Z · score: 1 (1 votes) · EA · GW

Last I looked at the data for job negotiations, the rescission rate is actually much higher for jobs, around 10%.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-07T08:40:17.765Z · score: 2 (2 votes) · EA · GW

There's another good post on salary negotiation in an ETG context here: https://www.lesswrong.com/posts/Z6dmoLyfBdmo6HEss/maximizing-your-donations-via-a-job

Back when I was doing career coaching, I used to run a popular workshop on salary negotiation for the local job search meetup. It broke salary negotiation down into a set of six skills you could practice, such as timing your offers, deferring salary negotiation, overcoming objections, etc.

The great thing about this was that after the initial presentation, job seekers could practice the skills with each other, meaning I wasn't the bottleneck. Presentation is here: https://www.evernote.com/shard/s8/sh/d45a92cc-a8d0-4b3b-adf5-53b53e1efbc6/a423ecc18c4d0a386b5fe1d2284680f7

I could see a similar idea of a practice group working for EAs.

Comment by halffull on How x-risk projects are different from startups · 2019-04-05T21:41:21.404Z · score: 3 (4 votes) · EA · GW

I wasn't asking for examples from EA, just the type of projects we'd expect from EAs.

Do you think Intentional Insights did a lot of damage? I'd say it was recognized by the community and handled pretty well while doing almost no damage.

Comment by halffull on How x-risk projects are different from startups · 2019-04-05T09:12:55.182Z · score: 6 (10 votes) · EA · GW

Do we have examples of this? I mean, there are obvious examples of things going wrong, like socialist countries, but I'm more interested in examples of the types of EA projects we would expect to see causing harm. I tend to think the risk of this type of harm is given too much weight.

Comment by halffull on The Case for the EA Hotel · 2019-04-03T16:45:20.060Z · score: 2 (2 votes) · EA · GW
My main point is that, even if the EA hotel is the best way of supporting/incenting productive EAs in the beginning of their careers, it doesn't solve the problem of selecting the best projects

What do you think about the argument of using the processes in the hotel to filter projects? I tend to think that one way to cross the chasm is "just try as many projects as possible, but have tight feedback loops so you don't waste too many resources."

Comment by halffull on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-02T11:41:11.446Z · score: 6 (3 votes) · EA · GW
Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don't get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I'm willing or able to evaluate, but it's not obvious that they're being less selective than I am about which ones they fund.

I think the EA hotel is trying to do something different from Y Combinator, which is much more like EA Grants. Y Combinator basically plays the game of gaining status and connections, increasing deal flow, and then choosing from the cream of the crop.

It's useful to have something like that, but a game of "use tight feedback loops to find diamonds in the rough" seems to be useful as well. Using both strategies is more effective than just one.

Comment by halffull on Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions · 2019-04-01T13:00:12.203Z · score: 4 (3 votes) · EA · GW

I agree with Robin that this is a criminally neglected cause area. Especially for people who put strong probability on AGI, bioweapons, and other technological risks, more research into institutions that can make better decisions and outcompete our current institutions seems important.

Comment by halffull on The Case for the EA Hotel · 2019-04-01T10:57:05.893Z · score: 5 (3 votes) · EA · GW
Has anyone been asked to leave the EA Hotel because they weren't making enough progress, or because their project didn't turn out very well?

Not yet, I don't think (maybe Toon or Greg can chime in here), but the hotel has noticed this and is working on procedures for better feedback loops.

If not, do you think the people responsible for making that decision have some idea of when doing so would be correct?

As I understand it, the trustees are currently working to develop standards for this.

Comment by halffull on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T10:47:46.709Z · score: 2 (2 votes) · EA · GW
but I find it somewhat odd that he starts with arguments that seem weak to me, and only in the middle did he get around to claims that are relevant to whether the hotel is better than a random group of EAs.

The post is organized by dependency, not by strength of argument. First people have to be convinced that funding projects makes sense at all (given that there's so much grant money already in EA) before we can talk about the way in which to fund them.

Comment by halffull on Altruistic action is dispassionate · 2019-03-31T16:17:29.157Z · score: 2 (1 votes) · EA · GW

I think my crux is something like "this is a question to be dissolved, rather than answered".

To me, trying to figure out whether a goal is egoistic or altruistic is like trying to figure out whether a whale is a mammal or a fish - it depends heavily on my framing and why I'm asking the question, and points to two different useful maps that are both correct in different situations, rather than something in the territory.

Another useful map might be something like "is this eudaimonic or hedonic egoism", which I think can get less squirrelly answers than the "egoistic or altruistic" frame. Another useful one might be the "Rational Compassion" frame of "Am I working to rationally optimize the intuitions that my feelings give me?"

Comment by halffull on Altruistic action is dispassionate · 2019-03-31T16:06:51.672Z · score: 1 (1 votes) · EA · GW

Sure, but if one has the value of actually helping other people, that distinction disappears, yes?

To take a famous egoist as an example: I think someone like Ayn Rand would say that fooling yourself about your values is doing it wrong.

Comment by halffull on Altruistic action is dispassionate · 2019-03-31T15:44:05.458Z · score: 1 (1 votes) · EA · GW

If my values say "I should help lots of people", and I work to maximize my values (which makes my life meaningful), which category does that fit into? Does it matter if I'm doing it "because" it makes my life meaningful, or because it helps other people?

To me that last distinction doesn't even make a lot of sense - I try to maximize my values BECAUSE they're my values. Sometimes I think the egoists are just saying "maximize your values" and the altruists are just saying "my values are helping others" and the whole thing is just a framing argument.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T15:20:26.145Z · score: 7 (3 votes) · EA · GW

Right... or even worse, you're simply paying someone's rent and not working towards ownership at all.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T15:00:15.601Z · score: 14 (6 votes) · EA · GW

I've seen people need breaks and time off. I think the "Culture of care" goes a long way towards making sure this isn't the norm though.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T14:55:28.131Z · score: 3 (2 votes) · EA · GW

Good point. Either route seems viable. The EA hotel route might be slightly higher EV, as the way you're gaining skills and status has the potential to do a lot of good, but I think many ways to cross/fill the chasm are probably needed.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T14:53:00.584Z · score: 7 (4 votes) · EA · GW

Because buying the hotel was a fixed cost, whereas rent is an ongoing one.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T14:52:08.898Z · score: 6 (4 votes) · EA · GW

I'm wary of making this particular post about my project, but happy to talk through private message, on a call, or as comments on the linked document. I'll probably make a post about Project Metis on the EA Forum at some point to solicit feedback, but not quite yet.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T13:50:35.860Z · score: 11 (7 votes) · EA · GW

Blogging/research counts as a project as well, and one that could ultimately do a lot of good. Trying to understand insect sentience is definitely a project, even if it's not billed as one. As is writing a thesis on the philosophical underpinnings of effective altruism. There's a wider span of "projects that can do good" than "startups that can make money", so the former might not always look like the latter.

Studying I would not consider a project, but I still consider the EA hotel a great place for the few who are doing it outside of a project. This sentence explains why:

For EAs who are still looking for projects, it provides a bridge to focus on gaining skills and knowledge while getting chances to join new projects as they circulate through the hotel.

ETA: As the EA hotel gets more applicants, I suspect the distribution will shift towards more things that look like traditional projects, but I still think foundational research and other "weird" projects should be a large consideration.

The Case for the EA Hotel

2019-03-31T12:34:14.781Z · score: 63 (35 votes)
Comment by halffull on Identifying Talent without Credentialing In EA · 2019-03-15T12:57:32.084Z · score: 1 (1 votes) · EA · GW
Fair – an implicit assumption of my post is that markets are efficient. If you don't think so, then what I had to say is probably not very relevant.

I assume you're not arguing for the strong EMH here (markets are maximally efficient), so the difference to me seems to be one of degree rather than kind (you think hiring markets are more efficient than Peter does; Peter thinks they're less efficient than you do).

If you are arguing for the strong version of the EMH here, I'd be curious as to your reasoning, as I can't think of any credible economists who think that real-world markets don't have any inefficiencies.

If you're arguing for a weaker version, I think it's worth digging into cruxes... Why do you think the hiring market is more efficient than Peter does?

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-15T08:56:17.271Z · score: 1 (1 votes) · EA · GW

Interesting, thanks!

Edit: I've now updated the post.

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-15T08:55:30.405Z · score: 1 (1 votes) · EA · GW

This is great, thanks for sharing!

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-14T20:13:17.682Z · score: 1 (1 votes) · EA · GW

I actually looked at standard categories, but AFAICT there is no single standard. Knightian uncertainty and statistical uncertainty are one pairing that is almost synonymous with epistemic and aleatory uncertainty, which are in turn fairly synonymous with model uncertainty and base uncertainty; hence my use of "Knightian". However, those definitions don't include the difference between transparent and opaque risk mentioned above.

Risk and uncertainty are defined in many different places, and the definitions are frequently swapped between sources. Ignorance and uncertainty are sometimes seen as synonymous, sometimes not.

Basically, I created my own terms because I thought the current terms were muddied enough that using them would create more confusion than clarity. I used existing terms for solutions to different types of risk because the opposite was true.

One place where I didn't look as hard for existing categories is the "Types of Knightian risk" part. I couldn't find any existing breakdowns of this, and as far as I know it's original, but there may be an existing list and I was simply using the wrong search terms.

How to Understand and Mitigate Risk (Crosspost from LessWrong)

2019-03-12T10:24:06.352Z · score: 12 (8 votes)
Comment by halffull on EA is vetting-constrained · 2019-03-09T17:17:23.537Z · score: 3 (6 votes) · EA · GW

I worked on this problem for a few years and agree that it's a bottleneck, not just in EA but globally. I do think that the work on prediction is one potential "solution", but there are additional problems with getting people to actually adopt solutions. The incentive for people in power to switch to a solution that gives them less power is low, and there are lots of evolutionary pressures that lead to the current vetting procedures. I'd love to talk more to you about this as I'm working on similar things, although I've moved away from this exact problem.

Comment by halffull on How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? · 2019-02-13T18:53:45.339Z · score: 1 (1 votes) · EA · GW

One option we were looking to use at Verity is the "contest" model, in which an interested party can subsidize a particular question and then split the pool between forecasters based on their reputation/score after the outcome has come to pass. This helps to subsidize specific predictions, rather than subsidizing more general predictions by paying people for their overall score. It has similarities to the subsidized prediction market model as well.
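Concretely, here's a minimal sketch of how such a payout split could work. The scoring rule (1 minus Brier score), the proportional split, and all names and numbers are my own illustrative assumptions, not Verity's actual mechanism:

```python
# Minimal sketch (illustrative only): forecasters give probabilities for a
# binary question; once it resolves, the subsidy pool is split in proportion
# to each forecaster's accuracy, here measured as 1 minus their Brier score.

def brier(prob: float, outcome: int) -> float:
    """Brier score for a binary forecast: 0 is perfect, 1 is worst."""
    return (prob - outcome) ** 2

def split_pool(forecasts: dict[str, float], outcome: int, pool: float) -> dict[str, float]:
    weights = {name: 1.0 - brier(p, outcome) for name, p in forecasts.items()}
    total = sum(weights.values())
    if total == 0:  # everyone was maximally wrong; pay out nothing
        return {name: 0.0 for name in forecasts}
    return {name: pool * w / total for name, w in weights.items()}

# Hypothetical example: a $1,000 subsidy on a question that resolves "yes" (1).
payouts = split_pool({"alice": 0.9, "bob": 0.6, "carol": 0.2}, outcome=1, pool=1000)
print(payouts)  # alice gets the largest share, carol the smallest
```

The exact scoring rule and any reputation weighting would change the incentives, but the basic shape is the same: money flows to whoever was accurate on that specific question.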

Comment by halffull on Against Modest Epistemology · 2017-11-18T00:11:17.857Z · score: 0 (2 votes) · EA · GW

Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They both see the same number of flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weights the other person's estimates as equally as informative as their own, will now offer a number quite a bit higher than 0.7, which takes into account the equal information both of them has to pull them away from their prior.

This is what I'm talking about when I say "just-so stories" about the data from the GJP. One explanation is that superforecasters are going through this thought process; another would be that they discard non-superforecasters' knowledge, and therefore end up more extreme without explicitly running the extremizing algorithm on their own forecasts.

Similarly, the existence of superforecasters themselves argues for a non-modest epistemology, while the fact that the extremized aggregation beats the superforecasters may argue for a somewhat more modest epistemology. Saying that the data here points one way or the other is, to my mind, cherry-picking.
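To make the quoted coin example concrete, here's a minimal worked sketch with numbers of my own choosing (a Beta(10, 10) prior bunched around 0.5, and each peer independently observing 30 flips), showing why pooling both peers' evidence lands above either individual report of 0.7:

```python
# Minimal sketch (my own numbers): two peers share a Beta(10, 10) prior
# centered on 0.5 and each independently observe 30 flips. The observations
# are chosen so each peer's posterior mean is exactly 0.7; pooling both
# datasets then lands above 0.7.

prior_a, prior_b = 10, 10  # prior pseudo-counts, bunched around 0.5

heads, tails = 25, 5  # each peer sees 25 heads in 30 flips

peer_mean = (prior_a + heads) / (prior_a + prior_b + heads + tails)
print(peer_mean)  # 0.7 -- each peer reports this

# An aggregator who treats both datasets as equally informative pools them,
# counting the shared prior only once:
pooled_mean = (prior_a + 2 * heads) / (prior_a + prior_b + 2 * (heads + tails))
print(pooled_mean)  # 0.75 -- more extreme than either individual report
```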

Comment by halffull on Against Modest Epistemology · 2017-11-17T01:20:31.443Z · score: 1 (1 votes) · EA · GW

How is that in conflict with my point? As superforecasters spend more time talking and sharing information with one another, maybe they have already incorporated extremising into their own forecasts.

Doesn't this clearly demonstrate that the superforecasters are not using modest epistemology? At best, this shows that you can improve upon a "non-modest" epistemology by aggregating them together, but it does not argue against the original post.

Comment by halffull on Against Modest Epistemology · 2017-11-16T22:26:19.150Z · score: -2 (4 votes) · EA · GW

It's an interesting just-so story about what IARPA has to say about epistemology, but the actual story is much more complicated. For instance, consider the fact that "extremizing" works to better calibrate general forecasts, but that extremizing superforecasters' predictions makes them worse.
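Concretely, one common form of the extremizing transform in the forecast-aggregation literature (a sketch for illustration; not necessarily the exact algorithm the GJP used) raises the odds of the aggregated probability to a power a > 1, pushing it away from 0.5:

```python
# One common extremizing transform (illustrative; the GJP's exact method may
# differ): raise the odds of an aggregated probability to a power a > 1.

def extremize(p: float, a: float = 2.5) -> float:
    return p ** a / (p ** a + (1 - p) ** a)

print(extremize(0.7))  # ~0.89: an aggregate of 0.7 gets pushed toward 1
print(extremize(0.5))  # 0.5: a maximally uncertain aggregate is unchanged
```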

Furthermore, contrary to what you seem to be claiming about people not being able to outperform others, there are in fact "superforecasters" who outperform the average participant year after year, even if they can't outperform the aggregate when their forecasts are factored in.