Posts

Halffull's Shortform 2020-03-15T18:04:00.123Z · score: 4 (1 votes)
The Case for the EA Hotel 2019-03-31T12:34:14.781Z · score: 65 (37 votes)
How to Understand and Mitigate Risk (Crosspost from LessWrong) 2019-03-12T10:24:06.352Z · score: 13 (9 votes)

Comments

Comment by halffull on New Top EA Causes for 2020? · 2020-04-02T21:54:55.080Z · score: 3 (2 votes) · EA · GW

Perhaps Dereke Bruce had the right of it here:

"In order to keep a true perspective of one's importance, everyone should have a dog that will worship him and a cat that will ignore him."

Comment by halffull on New Top EA Causes for 2020? · 2020-04-01T14:48:40.179Z · score: 23 (13 votes) · EA · GW

I propose that the best thing we can do for the long term future is to create positive flow-through effects now. Specifically, if we increase people's overall sense of well-being and altruistic tendencies, this will lead to more altruistic policies and organizations, which will lead to a better future.

Therefore, I propose a new top EA cause for 2020: Distributing Puppies

  • Puppies decrease individual loneliness, allowing a more global worldview.
  • Puppies model unconditional love and altruism, creating a flowthrough to their owners.
  • Puppies with good owners are, on their own, sources of positive utility, increasing global welfare.
Comment by halffull on What are EA project ideas you have? · 2020-03-30T21:01:21.983Z · score: 5 (2 votes) · EA · GW

You might be interested in this same question that was asked last June:


https://forum.effectivealtruism.org/posts/NQR5x3rEQrgQHeevm/what-new-ea-project-or-org-would-you-like-to-see-created-in

Comment by halffull on Halffull's Shortform · 2020-03-28T01:48:23.847Z · score: 4 (1 votes) · EA · GW

Something else in the vein of "things EAs and rationalists should be paying attention to with regard to Corona."

There's a common failure mode in large human systems where one outlier causes us to create a rule that leads to a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company's "buy any book you want" perk - so the company makes it so that no one can get free books anymore.

This same pattern played out in the US after 9/11 - we created a whole bunch of security theater that caused more suffering for everyone, and gave the government far more power and far less oversight than is safe, because we overreacted to prevent one bad event without considering the invisible counterfactual things we would be losing.

This will happen again with Corona: measures will be put in place that are maybe good at preventing pandemics (or worse, at making people think they're safe from pandemics), but that create a million trivial inconveniences every day, adding up to more strife than they're worth.

These types of rules are very hard to repeal after the fact because of absence blindness - someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, and then build a convincing enough narrative to push back against what seem like obvious, common-sense measures given the climate of devastation.

Comment by halffull on What posts do you want someone to write? · 2020-03-26T15:51:04.051Z · score: 1 (1 votes) · EA · GW
Curious about what you think is weird in the framing?

The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.

Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision-making, like improving calibration.

Comment by halffull on Growth and the case against randomista development · 2020-03-26T01:05:00.552Z · score: 2 (2 votes) · EA · GW

No, I actually think the post is ignoring x-risk as a cause area to focus on now. That makes sense under certain assumptions and heuristics (e.g., if you think near-term x-risk is highly unlikely, or if you're using an absurdity heuristic); I was more giving my argument for how this post could be compatible with Bostrom.

Comment by halffull on Growth and the case against randomista development · 2020-03-26T00:43:11.448Z · score: 1 (1 votes) · EA · GW
the post focuses on human welfare,

It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

I'm also very interested in how increased economic growth impacts existential risk.

At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this).

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

Comment by halffull on Growth and the case against randomista development · 2020-03-26T00:26:30.396Z · score: 1 (1 votes) · EA · GW

Let's say you believe two things:

1. Growth will have flowthrough effects on existential risk.

2. You have a comparative advantage in working on growth rather than on x-risk.

You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I'm working on.

Comment by halffull on What posts do you want someone to write? · 2020-03-25T20:58:04.145Z · score: 3 (2 votes) · EA · GW

I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.

Comment by halffull on Halffull's Shortform · 2020-03-25T17:42:22.314Z · score: 4 (3 votes) · EA · GW

It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.

Comment by halffull on Halffull's Shortform · 2020-03-25T16:14:40.258Z · score: 5 (2 votes) · EA · GW

This argument has the same problem as recommending that people not wear masks, though: if you go from "save lives, save lives, don't worry about economic impacts" to "worry about economic impacts, it's as important as quarantine," you lose credibility.

You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.

This was the source of my "two curves" narrative, and I assume it would be the approach others would take if that were the reason for their reticence to discuss it.

Comment by halffull on What posts do you want someone to write? · 2020-03-25T15:18:24.150Z · score: 4 (1 votes) · EA · GW

Here's an analysis by 80k. https://80000hours.org/problem-profiles/improving-institutional-decision-making/

Comment by halffull on Halffull's Shortform · 2020-03-25T15:01:48.420Z · score: 6 (4 votes) · EA · GW

Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like the coronavirus does.

Was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a "flattening the curve" graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.
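To make the idea concrete, here's a rough sketch of what such a graphic might look like. Every number is a placeholder I made up for illustration - the real version would need an actual estimate of the depression-mortality link:

```python
# Illustrative sketch only: every number here is a placeholder, not a real estimate.
import numpy as np
import matplotlib.pyplot as plt

weeks_of_lockdown = np.linspace(0, 20, 200)

# Hypothetical: pandemic deaths fall as lockdown lengthens (medical system less overloaded).
pandemic_deaths = 500_000 * np.exp(-0.25 * weeks_of_lockdown)

# Hypothetical: deaths attributable to economic freefall rise with the length of the shutdown.
recession_deaths = 20_000 * weeks_of_lockdown

plt.plot(weeks_of_lockdown, pandemic_deaths, label="deaths from overloaded medical system")
plt.plot(weeks_of_lockdown, recession_deaths, label="deaths from economic freefall")
plt.plot(weeks_of_lockdown, pandemic_deaths + recession_deaths, "--", label="total")
plt.xlabel("weeks of lockdown")
plt.ylabel("hypothetical deaths")
plt.legend()
plt.show()
```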

On the other hand, I think it's quite plausible that this particular problem will take care of itself. When people begin to feel the economic depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we'll simply become stratified for a while: young, healthy people break quarantine, while the older and immunocompromised stay home.

But getting the timing of this right is everything. Striking the right balance of "deaths from economic freefall" and "deaths from an overloaded medical system" is a balancing act; going too far in either direction results in hundreds of thousands of unnecessary deaths.

Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is

1. Mostly in non-profits

and

2. Orders of magnitude smaller than funding for AI capabilities

It's quite likely that the funding for AI safety is more inelastic in depressions than the funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren't speaking cogently about the tradeoffs between depression and lives saved from Corona - they have gone through this same train of thought and decided that preventing a depression is an information hazard.

Comment by halffull on Why not give 90%? · 2020-03-24T19:38:22.971Z · score: 14 (9 votes) · EA · GW

I think this is actually quite a complex question. I think it's clear that there's always a chance of value drift, so you can never put the chance of "giving up" at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance). There are many reasons this might be, such as consistency and justification effects, but the point is that the object-level question is complicated :).
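As a toy illustration of how front-loading interacts with drift, here's a back-of-the-envelope calculation using the drift figures above plus some made-up assumptions (a 40-year giving career, with drift - if it happens - cutting giving off halfway through on average; neither assumption comes from the linked data):

```python
# Toy comparison: expected years-of-income donated under different pledge levels,
# using the drift probabilities quoted above and made-up career assumptions.
def expected_giving(rate, p_drift, years=40):
    # If drift occurs, assume giving stops halfway through the career on average.
    return rate * (p_drift * years / 2 + (1 - p_drift) * years)

print("10% pledge:", expected_giving(0.10, 0.6364))  # ~2.7 years of income
print("50% pledge:", expected_giving(0.50, 0.4375))  # ~15.6 years of income
```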

Comment by halffull on Halffull's Shortform · 2020-03-15T18:04:00.296Z · score: 8 (3 votes) · EA · GW

I've had a sense for a while that EA is too risk-averse, and should be focused more on a broader class of projects, most of which it expects to fail. As part of that, I've been trying to collect existing arguments on either side of this debate (in a broader sense, but especially within the EA community), both to update my own views and to make sure I address any important arguments on either side.

I would appreciate it if people could link me to other important sources. I'm especially interested in people making arguments for more experimentation, as I mostly found the opposite.

1. 80k's piece on accidental harm: https://80000hours.org/articles/accidental-harm/#you-take-on-a-challenging-project-and-make-a-mistake-through-lack-of-experience-or-poor-judgment

2. How to avoid accidentally having a negative impact with your project, by Max Dalton and Jonas Volmer: https://www.youtube.com/watch?v=RU168E9fLIM&t=519s

3. Steelmanning the case against unquantifiable interventions, By David Manheim: https://forum.effectivealtruism.org/posts/cyj8f5mWbF3hqGKjd/steelmanning-the-case-against-unquantifiable-interventions

4. EA is Vetting Constrained: https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained

5. How X-Risk Projects are different from Startups, by Jan Kulveit: https://forum.effectivealtruism.org/posts/wHyy9fuATeFPkHSDk/how-x-risk-projects-are-different-from-startups

Comment by halffull on Growth and the case against randomista development · 2020-01-17T22:17:22.057Z · score: 9 (4 votes) · EA · GW
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc.

I'm curious about the intuitions behind this. I think developing countries with fast growth have historically had quite high pollution and carbon output. I also think that more countries joining the "developed" category could quite possibly make coordination around technological risks harder.

I think what you're saying is plausible but I don't know of the arguments for that case.

Comment by halffull on Growth and the case against randomista development · 2020-01-17T18:02:41.835Z · score: 23 (11 votes) · EA · GW

I'm quite excited to see an impassioned case for more of a focus on systemic change in EA.

I used to be quite excited about interventions targeting growth or innovation, but I've recently become more worried about accelerating technological risks. Specific things that I expect accelerated growth to affect negatively include:

  • Climate Change
  • AGI Risk
  • Nuclear and Biological Weapons Research
  • Cheaper weapons in general

Curious about your thoughts on the potential harm that could come if the growth interventions are indeed successful.

Comment by halffull on [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration · 2019-12-23T03:20:29.458Z · score: 3 (3 votes) · EA · GW

This work is excellent and highly important.

I would love to see this same setup experimented with for Grant giving.

Comment by halffull on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-17T17:47:55.395Z · score: 12 (5 votes) · EA · GW

Found elsewhere on the thread, a list of weird beliefs that Buck holds: http://shlegeris.com/2018/10/23/weirdest

Comment by halffull on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-14T02:34:33.359Z · score: 6 (3 votes) · EA · GW

I'd be curious about your own view on unquantifiable interventions, rather than just the Steelman of this particular view.

Comment by halffull on EA Hotel Fundraiser 5: Out of runway! · 2019-10-31T15:32:02.764Z · score: 12 (7 votes) · EA · GW

I think there's a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see, for instance, investing in a startup vs. buying stock in an established business) - the very fact that opportunities are easy to measure and obvious makes them less likely to be neglected.

The proper way to evaluate new and emerging projects is to understand the landscape, and do a systems-level analysis of the product, process, and team to see if you think the ROI will be high compared to other hard-to-measure projects. This is what I attempted to do with the EA Hotel here: https://www.lesswrong.com/posts/tCHsm5ZyAca8HfJSG/the-case-for-the-ea-hotel

Comment by halffull on Effective Pro Bono Projects · 2019-09-12T03:24:05.510Z · score: 5 (2 votes) · EA · GW

Tobacco taxes are Pigouvian under state-sponsored healthcare.

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T22:33:09.267Z · score: 1 (1 votes) · EA · GW

Hmm that's odd, I tested both in incognito mode and they seemed to work.

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T21:37:55.874Z · score: 1 (1 votes) · EA · GW

You shouldn't - it's an Evernote public sharing link that doesn't require sign-in. Note also that I tried to embed the image directly in my comment, but apparently the markdown for images doesn't work in comments?

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T20:16:13.305Z · score: 4 (3 votes) · EA · GW

I timeboxed 30 minutes to manually transfer this to yEd. I'm fairly certain there are one or two missing edges; here's what I got:

https://www.evernote.com/shard/s8/sh/1c1071e2-1ab0-47e8-a4f8-6560139e9cac/a31dd7a67e852f1b/res/8c016672-4d26-4bb9-b85d-a99f381ab1e6/FundingImage.png

Here's the yEd file, if anyone wants to try their hand at other layout algorithms:

https://www.evernote.com/shard/s8/client/snv?noteGuid=1c1071e2-1ab0-47e8-a4f8-6560139e9cac&noteKey=a31dd7a67e852f1b&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs8%2Fsh%2F1c1071e2-1ab0-47e8-a4f8-6560139e9cac%2Fa31dd7a67e852f1b&title=FundingImage.png

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T16:01:39.139Z · score: 2 (2 votes) · EA · GW

Small suggestion for future projects like this: I used to use Graphviz for diagramming, but have since found yEd and never looked back. Its edge-routing and placement algorithms are much better, and they can be tweaked with WYSIWYG editing after the fact.

Comment by halffull on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-27T19:32:33.182Z · score: 2 (2 votes) · EA · GW

I tend to think this is also true of any analysis that includes only one-way interactions or one-way causal mechanisms and ignores feedback loops and complex-systems analysis. This is true even if each of the parameters is estimated using probability distributions.
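As a toy illustration (all numbers invented), compare a one-way Monte Carlo estimate with the same quantities once a single feedback loop between impact and uptake is added - putting distributions on the parameters doesn't rescue the one-way model:

```python
# Toy illustration with made-up numbers: one-way cost-effectiveness model vs.
# the same parameters with a single feedback loop from impact back to uptake.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One-way model: impact = uptake * effect, parameters sampled independently.
uptake = rng.normal(1000, 200, n)      # people reached per year (hypothetical)
effect = rng.normal(0.5, 0.1, n)       # benefit per person (hypothetical)
one_way_annual = uptake * effect

def simulate_with_feedback(years=5):
    # Each year's impact attracts attention/funding, raising next year's uptake.
    u, total = rng.normal(1000, 200), 0.0
    for _ in range(years):
        impact = u * rng.normal(0.5, 0.1)
        total += impact
        u *= 1 + 0.0002 * impact       # invented feedback strength
    return total

with_feedback = np.array([simulate_with_feedback() for _ in range(n)])

print("one-way model, 5 years (naive x5):", round(5 * one_way_annual.mean()))
print("with feedback loop, 5 years:      ", round(with_feedback.mean()))
```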

Comment by halffull on How do you decide between upvoting and strong upvoting? · 2019-08-27T19:28:25.784Z · score: 6 (3 votes) · EA · GW

I upvote if I think the post is contributing to the current conversation, and strong upvote if I think the post will contribute to future and ongoing conversations (i.e., it's a comment or post that people should see when browsing the site - stock vs. flow).

Occasionally, I'll strong upvote/downvote strategically to get a comment more in line with what I think it "deserves", trying to correct a perceived bias of other votes.



Comment by halffull on EAGxNordics 2019 Postmortem · 2019-08-27T19:21:24.214Z · score: 3 (3 votes) · EA · GW

I'm sad because I really enjoyed EAGx Nordics :). In my view the main benefits of conferences are the networks and idea-sex that come out of them, and I think it did a great job at both of those. I'm curious whether you think the conference "made back its money" in terms of value to participants, which is separate from the question of counterfactual value you pose here.

Comment by halffull on What posts you are planning on writing? · 2019-07-26T17:25:03.073Z · score: 2 (2 votes) · EA · GW
Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceabilty" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon).

Would be highly interested in this, and in a case study showing how to rigorously think about systemic change using systems modeling, root cause analysis, and the like.

Comment by halffull on Why the EA Forum? · 2019-06-20T19:08:25.044Z · score: 3 (2 votes) · EA · GW

Yes, this is more an argument for "don't have downvotes at all," like Hacker News or a traditional forum.

Note that I think your team has made the correct tradeoffs so far; this was more playing devil's advocate.

Comment by halffull on Why the EA Forum? · 2019-06-20T16:49:09.178Z · score: 1 (1 votes) · EA · GW

Of course there's a reverse incentive here, where getting downvoted feelsbadman, and therefore you may be even less likely to want to post up unfinished thoughts, as compared to them simply getting displayed in chronological order.

Comment by halffull on Raemon's EA Shortform Feed · 2019-06-19T22:39:59.097Z · score: 3 (2 votes) · EA · GW

I won't be at EAG but I'm in Berkeley for a week or so and would love to chat about this.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:03:54.305Z · score: 1 (1 votes) · EA · GW

Do you think that Guesstimate has not yet made $200,000 worth of value for the world? I'm legitimately unsure about this point, but my priors say it's at least possible that it's added that much value in time saved and better estimates. I think that systems modelling could have similar impacts.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:00:54.968Z · score: 1 (1 votes) · EA · GW

I have the reverse intuition here. I think that, in general, while optimizing for profit doesn't make sense, creating sustainable business models that fund their own growth provides many opportunities for impact that simply taking other people's money doesn't.

Comment by halffull on Overview of projects at EA NTNU · 2019-06-17T15:51:39.113Z · score: 2 (2 votes) · EA · GW

This is great! One thing I wanted was a short retrospective on each of the events, to capture lessons and whether they were effective uses of time in retrospect. As is, this list is great for brainstorming, but not so much for prioritization. I don't want to disincentivize further publishing of lists like this (because this was great), just wanted to give a suggestion for possible improvement in the future.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-14T19:01:09.839Z · score: 5 (3 votes) · EA · GW

Incubators usually take founders and ideas together, whereas I like the Charity Entrepreneurship approach of splitting up those tasks, and I think it would fit the EA community well.

I think there are opportunities for lots of high expected value startups, when taking the approach that the goal is to do as much good as possible, for instance:

1. Proving a market for things that are good for the world, like Tesla's strategy.

2. Identifying startups that could have high negative value if externalities are ignored, and trying to have an EA aligned startup be a winner in that space.

3. Finding opportunities that may be small or medium in terms of profitability, but have high positive externalities.

The difference between this and any other incubator is that it would not use profitability as its main measure, but would also work to measure the externalities of the companies, aiming to create a portfolio that does the most good for the world.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:47:54.617Z · score: 8 (5 votes) · EA · GW

Note that this is quite easy to do. Give me or someone else who's competent access to the server for a few hours, and we can install YOURLS or another existing URL-shortening tool.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:46:15.212Z · score: 1 (1 votes) · EA · GW

Impact assessments. I think our ability to do impact assessments is bounded by our tools (for instance, they were on average much worse before Guesstimate). If EAs started regularly modelling complex feedback loops because there was a readily available tool for it, I think the quality of thinking and estimates would go up by quite a bit.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:26:26.228Z · score: 5 (2 votes) · EA · GW

A tool that makes systems modelling (with sinks, flows, and feedback and feedforward loops) as easy as Guesstimate made Monte Carlo modelling.
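For concreteness, here's a minimal sketch of the kind of model such a tool would make easy - a single stock with an inflow, an outflow, and one reinforcing feedback loop, stepped forward in time. The names and numbers are placeholders, not a proposal for the tool's interface:

```python
# Minimal stock-and-flow sketch with one reinforcing feedback loop.
# All names and numbers are placeholders for illustration.
def simulate(steps=50, dt=1.0):
    stock = 100.0                      # e.g. number of active contributors
    history = []
    for _ in range(steps):
        inflow = 5.0 + 0.02 * stock    # reinforcing loop: a bigger stock recruits faster
        outflow = 0.05 * stock         # constant fractional attrition
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

print(round(simulate()[-1], 1))
```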

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:25:24.692Z · score: 6 (5 votes) · EA · GW

Charity Entrepreneurship, but for for-profits.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:24:36.332Z · score: 14 (9 votes) · EA · GW

An organization dedicated to studying how to make other organizations more effective, that runs small scale experiments based on the existing literature, then helps EA orgs adopt best practices.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:23:15.827Z · score: 19 (9 votes) · EA · GW

An early stage incubator that can provide guidance and funding for very small projects, like Charity Entrepreneurship but on a much more experimental scale.

Comment by halffull on Is trauma a potential EA cause area? · 2019-06-05T13:12:55.455Z · score: 2 (2 votes) · EA · GW
Are you aware of any extremely efficient ways to reduce trauma?

There are several promising candidates that show high enough efficacy to warrant more research. Drug therapies such as MDMA show promise, as do therapeutic techniques like RTM. (RTM is particularly promising because it appears to be quick, cheap, and highly effective.)


Is trauma something that can easily be measured?

Of course. Like most established constructs in psychology, there are both diagnostic criteria for assessment by trained professionals and self-report indexes. Most of these tend to show fairly high agreement between different measures as well as good test-retest reliability.

Comment by halffull on Considering people’s hidden motives in EA outreach · 2019-06-01T15:20:50.036Z · score: 11 (5 votes) · EA · GW

One consistent frame I've seen with EAs is a much higher emphasis on "How can I frame this to avoid looking bad to as many people as possible?" rather than "How can I frame this to look good and interesting to as many people as possible?"

Something the "cold hard truth about the icebucket challenge" did (correctly I think), is be willing to be controversial and polarizing deliberately. This is something that in general EAs seem to avoid, and there's a general sense that these sorts of marketing framings are the "dark arts" that one should not touch.

On one hand, I see the argument for how framing the facts in the most positive light is obviously bad for an epistemic culture, and could hurt EA's reputation; on the other hand, I think EA is so allergic to this that it hurts it. I do think this is a risk aversion bias when it comes to both public perception and epistemic climate, and that EA is irrationally too far towards being cautious.

Another frequent mistake I see along this same vein (although less rare with the higher status people in the movement) is to confuse epistemic and emotional confidence. People often think that if they're unsure about an opinion, they need to appear unsure of themselves when stating an opinion.

The problem with this in the context of the above post is that appearing unsure of yourself signals low status. The antidote to this is to detach your sure-o-meter from your feeling of confidence, and be able to verbally state your confidence levels without being unsure of yourself. If you do this currently in the EA community, there can be a stigma about epistemic overconfidence that's difficult to overcome, even though this is the correct way to maximize both epistemic modesty and outside perception.

So, to sum up my suggestions for concrete ways that people in organizations could start taking status effects more into account:

  • Shift more from "How can I frame the truth to avoid looking bad?" to "How can I frame the truth to look good?"
  • Work to decouple your emotional confidence from your epistemic confidence, especially in public settings.
Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-30T09:33:40.252Z · score: 1 (1 votes) · EA · GW

I will note that I notice that I'm feeling very adversarial in this conversation, rather than truth seeking. For that reason I'm not going to participate further.

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-30T08:59:14.039Z · score: 1 (1 votes) · EA · GW
If you just look backwards from EAs' priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better.

Maybe, but I didn't say that I'd expect to see lots of projects trying to fix these issues, just that I'd expect to see more research into them, which is obviously the first step in determining correct interventions.

Arguments like this don't really go anywhere. Especially if you are talking about "thoughts not thinked", then this is just useless speculation.

What would count as useful speculation if you think that EAs' cause prioritization mechanisms are biased?

What's systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies for instance.

Voting mechanisms can be systemic if they're approached that way. For instance, working backwards from a two party system in the US, figuring out what causes this to happen, and recommending mechanisms that fix that.

are human enhancement to eliminate suffering

This is another great example of EA bucking the trend, but I don't see it as a mainstream EA cause.

functional decision theory to enable agents to cooperate without having to communicate, moral uncertainty to enable different moral theories to cooperate

These are certainly examples of root-cause thinking, but to count as true systems thinking they have to take the next step and ask how we can shift the current system onto these new foundations.

You can probably say that I happen to underestimate or overestimate their importance but the idea that it's inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean it's pretty easy to just come up with guesstimates if nothing else.

The EA methodology systematically underestimates systemic changes and hand-waves away modelling of them. Consider, for instance, how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flowthrough effects - and your response here didn't even mention those as problems.

What would a "systemic solution" look like?

Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.

Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.

I feel like you are implicitly including "big" as part of your definition of "systemic"

I'm including systems thinking as part of my definition. This often leads to "big" interventions, because systems are resilient and often sit in local attractors, but sometimes the interventions can be small yet targeted to cause large feedback loops and flowthrough effects. However, the latter is only possible through either dumb luck or skillful systems thinking.

Well they're not going to change all of it. They're going to have to try something small, and hopefully get it to catch on elsewhere.

They "have to" do that? Why? Certainly that's one way to intervene in the system. There are many others as well.

"Hopefully" getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence and power structures, etc, and use systems thinking to maximize their chances of getting it to catch on elsewhere.

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-29T22:48:00.588Z · score: 3 (6 votes) · EA · GW

It's hard to point to thoughts not thinked :). A few lines of research and intervention that I would expect to be pursued more in the EA community if this bias weren't present:

1. More research and experimentation with new types of governance (on a systemic level, not just including the limited research funding into different ways to count votes).

2. More research and funding into what creates paradigm shifts in science, changes in governance structures, etc.

3. More research into power and influence, and how they can effect large changes.

4. Much much more looking at trust and coordination failures, and how to handle them.

5. A research program around the problem of externalities and potential approaches to it.

Basically, I'd expect much more of a "5 Whys" approach that looks into the root causes of suffering in the world, rather than trying to fix individual instances of it.


An interesting counterexample might be CFAR and the rationality focus in the community, but this seems to be a rare instance, and at any rate it tries to fix a systemic problem with a decidedly non-systemic solution. (There are a few others that Open Phil has led, such as looking into changing academic research, but again, the mainstream EA community mostly just doesn't know how to think this way.)

Comment by halffull on What exactly is the system EA's critics are seeking to change? · 2019-05-28T23:05:16.525Z · score: 13 (5 votes) · EA · GW

As someone who agrees EAs aren't focused enough on systemic change, I don't see a single "system" that EAs are ignoring. Rather, I see a failure to use systems thinking to tackle important but hard-to-measure opportunities for intervention in general. That is, I may have my own ideas for systemic change of particular systems (academia and research, capitalism, societal trust) that I'm working on or have worked on, but my critique is simply that EAs (at least in the mainstream movement) tend to ignore this type of thinking altogether, when historically the biggest changes in quality of life seem to have come from systemic change and the resulting feedback loops.

Comment by halffull on The Athena Rationality Workshop - June 7th-10th at EA Hotel · 2019-05-13T07:19:23.319Z · score: 1 (1 votes) · EA · GW

Thought about this for the last couple of days, and I'd recommend against it. The workshop is set up to be a complete, contained experience, and isn't really designed to be consumed only partially.