Posts

The Case for the EA Hotel 2019-03-31T12:34:14.781Z · score: 64 (36 votes)
How to Understand and Mitigate Risk (Crosspost from LessWrong) 2019-03-12T10:24:06.352Z · score: 12 (8 votes)

Comments

Comment by halffull on What posts you are planning on writing? · 2019-07-26T17:25:03.073Z · score: 2 (2 votes) · EA · GW
Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceability" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon).

I'd be highly interested in this, and in a case study showing how to rigorously think about systemic change using systems modeling, root cause analysis, and the like.

Comment by halffull on Why the EA Forum? · 2019-06-20T19:08:25.044Z · score: 3 (2 votes) · EA · GW

Yes, this is more an argument for "don't have downvotes at all", like Hacker News or a traditional forum.

Note that I think your team has made the correct tradeoffs so far; this was more playing devil's advocate.

Comment by halffull on Why the EA Forum? · 2019-06-20T16:49:09.178Z · score: 1 (1 votes) · EA · GW

Of course there's a reverse incentive here: getting downvoted feels bad, so you may be even less likely to want to post unfinished thoughts than if they were simply displayed in chronological order.

Comment by halffull on Raemon's EA Shortform Feed · 2019-06-19T22:39:59.097Z · score: 3 (2 votes) · EA · GW

I won't be at EAG but I'm in Berkeley for a week or so and would love to chat about this.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:03:54.305Z · score: 1 (1 votes) · EA · GW

Do you think that Guesstimate has not yet created $200,000 worth of value for the world? I'm legitimately unsure about this point, but my priors say it's at least possible that it's added that much value in time saved and better estimates. I think that systems modelling could have a similar impact.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:00:54.968Z · score: 1 (1 votes) · EA · GW

I have the reverse intuition here. I think that while optimizing purely for profit doesn't make sense, creating sustainable business models that fund their own growth provides many opportunities for impact that simply taking other people's money doesn't.

Comment by halffull on Overview of projects at EA NTNU · 2019-06-17T15:51:39.113Z · score: 2 (2 votes) · EA · GW

This is great! One thing I wanted was a short retrospective on each event, covering lessons learned and whether it was an effective use of time in retrospect. As is, this list is great for brainstorming, but not so much for prioritization. I don't want to disincentivize further publishing of lists like this (because this was great); I just wanted to offer a suggestion for possible improvement in the future.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-14T19:01:09.839Z · score: 5 (3 votes) · EA · GW

Incubators usually take founders and ideas together, whereas I like the Charity Entrepreneurship approach of splitting those up, and I think it would fit the EA community well.

I think there are opportunities for lots of high-expected-value startups when taking the approach that the goal is to do as much good as possible, for instance:

1. Proving out a market for things that are good for the world, as with Tesla's strategy.

2. Identifying startups that could have high negative value if externalities are ignored, and trying to have an EA-aligned startup be a winner in that space.

3. Finding opportunities that may be small or medium in terms of profitability, but have high positive externalities.

The difference between this and any other incubator is that it would not use profitability as its main measure; it would also work to measure the companies' externalities, aiming to create a portfolio that does the most good for the world.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:47:54.617Z · score: 8 (5 votes) · EA · GW

Note that this is quite easy to do. Give me or someone else who's competent access to the server for a few hours, and we can install Yourls or another existing URL-shortening tool.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:46:15.212Z · score: 1 (1 votes) · EA · GW

Impact assessments. I think our ability to do impact assessments is bounded by our tools (for instance, they were on average much worse before Guesstimate). If EAs started regularly modelling complex feedback loops because a tool for it was readily available, I think the quality of thinking and estimates would go up by quite a bit.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:26:26.228Z · score: 5 (2 votes) · EA · GW

A tool that makes systems modelling (with stocks, flows, and feedback and feedforward loops) as easy as Guesstimate made Monte Carlo modelling.
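
To make this concrete, here's a minimal sketch of the kind of stock-and-flow model with a reinforcing feedback loop that such a tool would ideally make easy to build; the model structure and parameters are invented purely for illustration:

```python
# A minimal stock-and-flow model with a reinforcing feedback loop:
# the inflow of new adopters depends on the current stock of adopters,
# so growth compounds. All names and numbers are illustrative only.

def simulate(steps=50, population=10_000, contact_rate=0.03, adopters=100.0):
    history = []
    for _ in range(steps):
        potential = population - adopters  # remaining non-adopters
        # flow: adoptions per step, driven by word of mouth (the feedback)
        new_adoptions = contact_rate * adopters * potential / population
        adopters += new_adoptions  # update the stock
        history.append(adopters)
    return history

print(f"Adopters after 50 steps: {simulate()[-1]:,.0f}")
```

Even this tiny model produces S-curve dynamics that a one-shot estimation tool has no way to express.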

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:25:24.692Z · score: 6 (5 votes) · EA · GW

Charity Entrepreneurship, but for for-profits.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:24:36.332Z · score: 13 (8 votes) · EA · GW

An organization dedicated to studying how to make other organizations more effective, that runs small scale experiments based on the existing literature, then helps EA orgs adopt best practices.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:23:15.827Z · score: 19 (9 votes) · EA · GW

An early stage incubator that can provide guidance and funding for very small projects, like Charity Entrepreneurship but on a much more experimental scale.

Comment by halffull on Is trauma a potential EA cause area? · 2019-06-05T13:12:55.455Z · score: 2 (2 votes) · EA · GW
Are you aware of any extremely efficient ways to reduce trauma?

There are several promising candidates that show high enough efficacy to warrant more research. Drug therapies such as MDMA show promise, as do therapeutic techniques like RTM. (RTM is particularly promising because it appears to be quick, cheap, and highly effective.)

Is trauma something that can easily be measured?

Of course. Like most established constructs in psychology, there are both diagnostic criteria for assessment by trained professionals and self-report indexes. Most of these tend to show fairly high agreement between different measures, as well as good test-retest reliability.

Comment by halffull on Considering people’s hidden motives in EA outreach · 2019-06-01T15:20:50.036Z · score: 11 (5 votes) · EA · GW

One consistent frame I've seen with EAs is a much higher emphasis on "How can I frame this to avoid looking bad to as many people as possible?" than on "How can I frame this to look good and interesting to as many people as possible?"

Something "the cold hard truth about the ice bucket challenge" did (correctly, I think) was be willing to be deliberately controversial and polarizing. This is something that EAs generally seem to avoid, and there's a general sense that these sorts of marketing framings are the "dark arts" that one should not touch.

On one hand, I see the argument for how framing the facts in the most positive light is obviously bad for an epistemic culture and could hurt EA's reputation; on the other hand, I think EA is so allergic to this that the allergy itself hurts it. I see this as risk aversion bias applied to both public perception and epistemic climate, and I think EA errs irrationally far towards caution.

Another frequent mistake I see in this same vein (although less rare among the higher-status people in the movement) is confusing epistemic and emotional confidence. People often think that if they're unsure about an opinion, they need to appear unsure of themselves when stating it.

The problem with this in the context of the above post is that appearing unsure of yourself signals low status. The antidote to this is to detach your sure-o-meter from your feeling of confidence, and be able to verbally state your confidence levels without being unsure of yourself. If you do this currently in the EA community, there can be a stigma about epistemic overconfidence that's difficult to overcome, even though this is the correct way to maximize both epistemic modesty and outside perception.

So, to sum up my suggestions for concrete ways that people in organizations could start taking status effects more into account:

  • Shift more from "How can I frame the truth to avoid looking bad?" to "How can I frame the truth to look good?"
  • Work to detach your emotional confidence from your epistemic confidence, especially in public settings.

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-30T09:33:40.252Z · score: 1 (1 votes) · EA · GW

I'll note that I notice I'm feeling very adversarial in this conversation, rather than truth-seeking. For that reason I'm not going to participate further.

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-30T08:59:14.039Z · score: 1 (1 votes) · EA · GW
If you just look backwards from EAs' priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better.

Maybe, but I didn't say that I'd expect to see lots of projects trying to fix these issues, just that I'd expect to see more research into them, which is obviously the first step in determining the correct interventions.

Arguments like this don't really go anywhere. Especially if you are talking about "thoughts not thinked", then this is just useless speculation.

What would count as useful speculation, if you think that EA's cause prioritization mechanisms are biased?

What's systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies for instance.

Voting mechanisms can be systemic if they're approached that way: for instance, working backwards from the two-party system in the US, figuring out what causes it, and recommending mechanisms that fix that.

are human enhancement to eliminate suffering

This is another great example of EA bucking the trend, but I don't see it as a mainstream EA cause.

functional decision theory to enable agents to cooperate without having to communicate, moral uncertainty to enable different moral theories to cooperate

These are certainly examples of root cause thinking, but to count as true systems thinking they'd have to take the next step and ask how we can shift the current system onto these new foundations.

You can probably say that I happen to underestimate or overestimate their importance but the idea that it's inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean it's pretty easy to just come up with guesstimates if nothing else.

The EA methodology systematically underestimates systemic changes and handwaves away modelling them. Consider, for instance, how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flow-through effects; your response here didn't even mention those as problems.
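
As a hedged illustration of the difficulty (the model and parameter ranges below are invented): a Guesstimate model is a one-shot graph of distributions, whereas a feedback loop forces you to wrap a time-stepped simulation inside the Monte Carlo sampling, something like:

```python
# Monte Carlo over a model with a feedback loop: each sample draws its
# uncertain parameters, then iterates the loop through time. The model
# and parameter ranges are made up purely for illustration.
import random

def run_once(steps=20):
    impact = 1.0
    growth = random.uniform(0.05, 0.25)  # uncertain growth rate
    for _ in range(steps):
        impact += growth * impact  # feedback: impact compounds on itself
    return impact

samples = sorted(run_once() for _ in range(10_000))
print(f"median: {samples[5_000]:.1f}, 90th percentile: {samples[9_000]:.1f}")
```

A static graph of distributions has nowhere to put that loop; you need explicit time steps, which is exactly what spreadsheet-style tools make hard.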

What would a "systemic solution" look like?

Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.

Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.

I feel like you are implicitly including "big" as part of your definition of "systemic"

I'm including systems thinking as part of my definition. This often leads to "big" interventions, because systems are resilient and often sit in local attractors, though oftentimes the interventions can be small but targeted, causing large feedback loops and flow-through effects. However, the latter is only possible through either dumb luck or skillful systems thinking.

Well they're not going to change all of it. They're going to have to try something small, and hopefully get it to catch on elsewhere.

They "have to" do that? Why? Certainly that's one way to intervene in the system. There are many others as well.

"Hopefully" getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence and power structures, etc, and use systems thinking to maximize their chances of getting it to catch on elsewhere.

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-29T22:48:00.588Z · score: 3 (6 votes) · EA · GW

It's hard to point to thoughts not thinked :). A few lines of research and intervention that I would expect to be pursued more in the EA community if this bias weren't present:

1. More research and experimentation with new types of governance (on a systemic level, not just the limited research funding that goes into different ways of counting votes).

2. More research and funding into what creates paradigm shifts in science, changes in governance structures, etc.

3. More research into power and influence, and how they can effect large changes.

4. Much much more looking at trust and coordination failures, and how to handle them.

5. A research program around the problem of externalities and potential approaches to it.

Basically, I'd expect much more of a "5 whys" approach that looks into the root causes of suffering in the world, rather than trying to fix individual instances of it.

An interesting counterexample might be CFAR and the rationality focus in the community, but this seems to be a rare instance, and at any rate it tries to fix a systemic problem with a decidedly non-systemic solution. (There are a few others that OpenPhil has led, such as looking into changing academic research, but again, the mainstream EA community mostly just doesn't know how to think this way.)

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-28T23:05:16.525Z · score: 13 (5 votes) · EA · GW

As someone who agrees EAs aren't focused enough on systemic change, I don't see a single "system" that EAs are ignoring. Rather, I see a failure to use systems thinking to tackle important but hard-to-measure opportunities for intervention in general. That is, I have particular ideas for systemic change in particular systems (academia and research, capitalism, societal trust) that I'm working on or have worked on, but my critique is simply that EAs (at least in the mainstream movement) tend to ignore this type of thinking altogether, when historically the biggest changes in quality of life seem to have come from systemic change and the resulting feedback loops.

Comment by halffull on The Athena Rationality Workshop - June 7th-10th at EA Hotel · 2019-05-13T07:19:23.319Z · score: 1 (1 votes) · EA · GW

I've thought about this for the last couple of days, and I'd recommend against it. The workshop is set up to be a complete, contained experience, and isn't really designed to be consumed only partially.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-14T21:46:35.802Z · score: 5 (3 votes) · EA · GW
There is an opportunity cost in not having a better backdrop.

Seems plausible.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-14T13:11:30.367Z · score: -2 (3 votes) · EA · GW

It's possible I'm wrong. I find it unlikely that veganism wasn't influenced by existing political arguments for veganism. I find it unlikely that a focus on institutional decision-making wasn't influenced by the existing political zeitgeist around the problems with democracy and capitalism. I find it unlikely that the global poverty focus wasn't influenced by the existing political zeitgeist around inequality.

All this stuff is in the water supply; the arguments and positions have been refined by different political parties' moral intuitions and battles with the opposition. This causes problems when there's opposition to EA values, sure, but it also provides the backdrop from which EAs reason.

It may be that EAs have somehow thrown off all of the existing arguments, cultural milieu, and basic stances and assumptions that have been honed over the past few generations, but if true, that to me represents a failure of EA more than anything else.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-13T09:54:53.124Z · score: 1 (4 votes) · EA · GW
I haven't seen any examples of cause areas or conclusions that were discovered because of political antipathy towards EA.

Veganism is probably a good example here. Institutional decision-making might be another. I don't think political antipathy is the right way to view this, though; rather, it's the general political climate shaping the thinking of EAs. Political antipathy is a consequence of the same general system that produces both positive effects on EA thought and political antipathy towards certain aspects of EA.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-12T17:36:12.277Z · score: 4 (3 votes) · EA · GW
Internal debate within the EA community is far better at reaching truthful conclusions than whatever this sort of external pressure can accomplish. Empirically, it has not been the case that such external pressure has yielded benefits for EAs' understanding of the world.

It can be the case that external pressure is helpful in shaping directions EVEN if EA has to reach its conclusions internally. I would put forward that this pressure has already been helpful to EA in reaching conclusions and finding new cause areas, and will continue to be helpful in the future.

Comment by halffull on Who is working on finding "Cause X"? · 2019-04-12T12:53:41.089Z · score: 6 (6 votes) · EA · GW

Rethink Priorities seems to be the obvious organization focused on this.

Comment by halffull on Political culture at the edges of Effective Altruism · 2019-04-12T09:54:52.728Z · score: 15 (15 votes) · EA · GW

An implicit problem with this sort of analysis is that it assumes the critiques are wrong, and that the current views of Effective Altruism are correct.

For instance, if we assume that systemic change towards anti-capitalist ideals actually is correct, or that taking refugees actually does have long-run bad effects on culture, then the criticism of EA's current views, and the pressure on the community from political groups to adopt these views, is actually a good thing, providing a net-positive benefit for EA in the long term by creating incentives to adopt the correct views.

Comment by halffull on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T19:09:15.545Z · score: 14 (10 votes) · EA · GW
I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think there are two things going on here.

The first is that weirdness and the outside view are often deeply correlated, although they're not the same thing. In many ways the feeling of weirdness is a Schelling fence: it protects people from sociopaths, from joining cults, and from other things that are a bad idea even when they can't quite articulate in words WHY they're a bad idea.

I think you're right that the best interventions will often be weird, so in this case it's a Schelling fence that you have to ignore if you want to make any progress from an inside view... but it's still worth noting that the weirdness is there, and that it's good data.

The second thing going on is that it seems like many EA institutions have adopted the neoliberal strategy of gaining high status, infiltrating academia, and using that to advance EA values. From this perspective, it's very important to avoid an aura of weirdness for the movement as a whole, even if any given individual weird intervention might have high impact. This is hard to talk about, because being too loud about the strategy makes it less effective, which means that sometimes people have to say things like "outside view" when what they really mean is "you're threatening our long-term strategy, but we can't talk about it." Although obviously in this particular case the positive impact on this strategy outweighs the potential negative impact of the weirdness aura.

I feel comfortable stating this because it's a random EA forum post and I'm not in a position of power at an EA org, but were I in that position, I'd feel much less comfortable posting this.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-10T08:57:15.291Z · score: 4 (3 votes) · EA · GW

You can often get the timing to work late in the game by stalling the company that gave you the offer, and telling other companies that you already have an offer so you need an accelerated process.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-10T08:56:03.535Z · score: 9 (3 votes) · EA · GW

It matters less if you time your offers so you have multiple at the same time.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-09T19:31:15.284Z · score: 1 (1 votes) · EA · GW

Last I looked at the data for job negotiations, the rescission rate is actually much higher for jobs, around 10%.

Comment by halffull on Salary Negotiation for Earning to Give · 2019-04-07T08:40:17.765Z · score: 2 (2 votes) · EA · GW

There's another good post on salary negotiation in an ETG context here: https://www.lesswrong.com/posts/Z6dmoLyfBdmo6HEss/maximizing-your-donations-via-a-job

Back when I was doing career coaching, I used to run a popular workshop on salary negotiation for the local job-search meetup. It broke salary negotiation down into a set of six skills you could practice, such as timing your offers, deferring salary negotiation, overcoming objections, etc.

The great thing about this was that after the initial presentation, job seekers could practice the skills with each other, meaning I wasn't the bottleneck. The presentation is here: https://www.evernote.com/shard/s8/sh/d45a92cc-a8d0-4b3b-adf5-53b53e1efbc6/a423ecc18c4d0a386b5fe1d2284680f7

I could see a similar idea of a practice group working for EAs.

Comment by halffull on How x-risk projects are different from startups · 2019-04-05T21:41:21.404Z · score: 3 (4 votes) · EA · GW

I wasn't asking for examples from EA, just the type of projects we'd expect from EAs.

Do you think Intentional Insights did a lot of damage? I'd say it was recognized by the community and handled pretty well, while doing almost no damage.

Comment by halffull on How x-risk projects are different from startups · 2019-04-05T09:12:55.182Z · score: 6 (10 votes) · EA · GW

Do we have examples of this? I mean, there are examples where it obviously went wrong, like socialist countries, but I'm more interested in examples of the types of EA projects we would expect to see causing harm. I tend to think the risk of this type of harm is given too much weight.

Comment by halffull on The Case for the EA Hotel · 2019-04-03T16:45:20.060Z · score: 2 (2 votes) · EA · GW
My main point is that, even if the EA hotel is the best way of supporting/incenting productive EAs in the beginning of their careers, it doesn't solve the problem of selecting the best projects

What do you think about the argument of using the processes in the hotel to filter projects? I tend to think that one way to cross the chasm is "just try as many projects as possible, but have tight feedback loops so you don't waste too many resources."

Comment by halffull on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-02T11:41:11.446Z · score: 6 (3 votes) · EA · GW
Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don't get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I'm willing or able to evaluate, but it's not obvious that they're being less selective than I am about which ones they fund.

I think the EA Hotel is trying to do something different from Y Combinator, which is much more like EA Grants. Y Combinator basically plays the game of gaining status and connections, increasing deal-flow, and then choosing from the cream of the crop.

It's useful to have something like that, but a game of "use tight feedback loops to find diamonds in the rough" seems to be useful as well. Using both strategies is more effective than just one.

Comment by halffull on Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions · 2019-04-01T13:00:12.203Z · score: 4 (3 votes) · EA · GW

I agree with Robin that this is a criminally neglected cause area. Especially for people who put strong probability on AGI, bioweapons, and other technological risks, more research into institutions that can make better decisions and outcompete our current institutions seems important.

Comment by halffull on The Case for the EA Hotel · 2019-04-01T10:57:05.893Z · score: 5 (3 votes) · EA · GW
Has anyone been asked to leave the EA Hotel because they weren't making enough progress, or because their project didn't turn out very well?

Not yet, I don't think (maybe Toon or Greg can chime in here), but the hotel has noticed this and is working on procedures for better feedback loops.

If not, do you think the people responsible for making that decision have some idea of when doing so would be correct?

As I understand it, the trustees are currently working to develop standards for this.

Comment by halffull on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T10:47:46.709Z · score: 2 (2 votes) · EA · GW
but I find it somewhat odd that he starts with arguments that seem weak to me, and only in the middle did he get around to claims that are relevant to whether the hotel is better than a random group of EAs.

The post is organized by dependency, not by strength of argument. First, people have to be convinced that funding projects makes sense at all (given that there's already so much grant money in EA) before we can talk about the ways in which to fund them.

Comment by halffull on Altruistic action is dispassionate · 2019-03-31T16:17:29.157Z · score: 2 (1 votes) · EA · GW

I think my crux is something like "this is a question to be dissolved, rather than answered"

To me, trying to figure out whether a goal is egoistic or altruistic is like trying to figure out whether a whale is a mammal or a fish - it depends heavily on my framing and why I'm asking the question, and points to two different useful maps that are both correct in different situations, rather than something in the territory.

Another useful map might be something like "is this eudaimonic or hedonic egoism?", which I think can yield less squirrelly answers than the "egoistic or altruistic" frame. Another useful one might be the "rational compassion" frame of "Am I working to rationally optimize the intuitions that my feelings give me?"

Comment by halffull on Altruistic action is dispassionate · 2019-03-31T16:06:51.672Z · score: 1 (1 votes) · EA · GW

Sure, but if one has the value of actually helping other people, that distinction disappears, yes?

As an example of a famous egoist, I think someone like Ayn Rand would say that fooling yourself about your values is doing it wrong.

Comment by halffull on Altruistic action is dispassionate · 2019-03-31T15:44:05.458Z · score: 1 (1 votes) · EA · GW

If my values say "I should help lots of people", and I work to maximize my values (which makes my life meaningful) which category does that fit into? Does it matter if I'm doing it "because" it makes my life meaningful, or because it helps other people?

To me that last distinction doesn't even make a lot of sense - I try to maximize my values BECAUSE they're my values. Sometimes I think the egoists are just saying "maximize your values" and the altruists are just saying "my values are helping others" and the whole thing is just a framing argument.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T15:20:26.145Z · score: 7 (3 votes) · EA · GW

Right... or even worse, you're simply paying someone's rent and not working towards ownership at all.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T15:00:15.601Z · score: 14 (6 votes) · EA · GW

I've seen people need breaks and time off. I think the "Culture of care" goes a long way towards making sure this isn't the norm though.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T14:55:28.131Z · score: 3 (2 votes) · EA · GW

Good point. Either route seems viable. The EA Hotel route might be slightly higher-EV, as the way you're gaining skills and status has the potential to do a lot of good, but I think many ways to cross/fill the chasm are probably needed.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T14:53:00.584Z · score: 7 (4 votes) · EA · GW

Because buying the hotel was a fixed cost, whereas rent is an ongoing one.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T14:52:08.898Z · score: 6 (4 votes) · EA · GW

I'm wary of making this particular post about my project, but happy to talk through private message, on a call, or as comments on the linked document. I'll probably make a post about project metis on the EA forum at some point to solicit feedback, but not quite yet.

Comment by halffull on The Case for the EA Hotel · 2019-03-31T13:50:35.860Z · score: 11 (7 votes) · EA · GW

Blogging/research counts as a project as well, and one that could ultimately do a lot of good. Trying to understand insect sentience is definitely a project, even if it's not billed as one. As is writing a thesis on the philosophical underpinnings of effective altruism. There's a wider span of "projects that can do good" than "startups that can make money", so the former might not always look like the latter.

Studying I would not consider a project, but I still consider the EA hotel a great place for the few who are doing it outside of a project. This sentence explains why:

For EAs who are still looking for projects, it provides a bridge to focus on gaining skills and knowledge while getting chances to join new projects as they circulate through the hotel.

ETA: As the EA Hotel gets more applicants, I suspect the distribution will shift towards more things that look like traditional projects, but I still think foundational research and other "weird" projects should be a large consideration.

Comment by halffull on Identifying Talent without Credentialing In EA · 2019-03-15T12:57:32.084Z · score: 1 (1 votes) · EA · GW
Fair – an implicit assumption of my post is that markets are efficient. If you don't think so, then what I had to say is probably not very relevant.

I assume you're not arguing for the strong EMH here (that markets are maximally efficient), so the difference seems to me to be one of degree rather than kind (you think hiring markets are more efficient than Peter does; Peter thinks they're less efficient than you do).

If you are arguing for the strong version of EMH here I'd be curious as to your reasoning, as I can't think of any credible economists who think that real world markets don't have any inefficiencies.

If you're arguing for a weaker version, I think it's worth digging into cruxes... Why do you think the hiring market is more efficient than Peter does?

Comment by halffull on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-15T08:56:17.271Z · score: 1 (1 votes) · EA · GW

Interesting, thanks!

Edit: I've now updated the post.