Posts

Halffull's Shortform 2020-03-15T18:04:00.123Z · score: 4 (1 votes)
The Case for the EA Hotel 2019-03-31T12:34:14.781Z · score: 65 (37 votes)
How to Understand and Mitigate Risk (Crosspost from LessWrong) 2019-03-12T10:24:06.352Z · score: 17 (10 votes)

Comments

Comment by halffull on Halffull's Shortform · 2020-09-24T01:17:43.857Z · score: 1 (1 votes) · EA · GW

Yeah, I'd expect it to be a global catastrophic risk rather than existential risk.

Comment by halffull on Halffull's Shortform · 2020-09-23T22:37:29.994Z · score: 4 (4 votes) · EA · GW

Is there much EA work on tail risk from GMOs ruining crops or ecosystems?

If not, why not?

Comment by halffull on Delegate a forecast · 2020-07-29T14:52:29.980Z · score: 1 (1 votes) · EA · GW
Yeah, I mostly focused on the Q1 question so didn't have time to do a proper growth analysis across 2021

Yeah, I was talking about the Q1 model when I was trying to puzzle out what your growth model was.

There isn't a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?

A lot of the value of a business's potential growth vectors comes in the tails. For this particular forecast it doesn't really matter, because the distribution is roughly bell-shaped, but if I were using this as, for instance, a decision-making tool to decide what actions to take, I'd really want to look at which ideas had a small chance of being runaway successes, and how valuable that makes them compared to other options that are surefire but don't have that chance of tail success. Choosing those ideas isn't likely to pay off on any single idea, but is likely to pay off over the course of a business's lifetime.
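As a concrete illustration (purely made-up numbers, not my actual forecast; just a quick numpy sketch), a small tail probability barely moves the median but can move the expected value a lot:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical Q1 revenue outcomes: usually a modest, roughly bell-shaped result.
base = rng.normal(25_000, 8_000, n).clip(min=0)

# Plus an assumed 2% chance that one growth channel runs away and adds a large tail outcome.
runaway = (rng.random(n) < 0.02) * rng.lognormal(mean=12.5, sigma=0.5, size=n)

outcomes = base + runaway

print(f"median:         ${np.median(outcomes):,.0f}")
print(f"expected value: ${outcomes.mean():,.0f}")
# The median barely moves when the tail is added, but the expected value does,
# which is why EV matters when comparing options whose upside lives in the tail.
```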

Comment by halffull on Delegate a forecast · 2020-07-29T02:06:33.743Z · score: 4 (3 votes) · EA · GW

Thanks, this was great!

The estimates seem fair. Honestly, they're much better than I would expect given the limited info you had and the assumptions you made (the biggest one that's off is that I don't have any plans to market only to EAs).

Since I know our market is much larger, I use a different forecasting methodology internally which looks at potential marketing channels and growth rates.

I didn't really understand how you were working the growth rate into your calculations in the spreadsheet; maybe you were just eyeballing what made sense based on the current numbers and the total addressable market?

One other question I have about your platform is that I don't see any way to get the expected value of the density function, which is honestly the number I care most about. Am I missing something obvious?

Comment by halffull on Delegate a forecast · 2020-07-28T20:42:03.772Z · score: 3 (3 votes) · EA · GW

Hey, I run a business teaching people how to overcome procrastination (procrastinationplaybook.net is our not yet fully fleshed out web presence).

I ran a pilot program that made roughly $8,000 in revenue by charging 10 people for a premium interactive course. Most of these users came from a couple of webinars that my friends hosted; a couple came from finding my website through the CFAR mailing list and webinars I hosted for my Twitter friends.

The course is ending soon, and I'll spend a couple of months working on marketing and updating the course before the next launch, as well as:

1. Launching a podcast that breaks down skills and models, and selling short $10 lessons teaching how to acquire each skill.

2. Creating a sales funnel for my pre-course, a do-it-yourself planning course for creating the "perfect procrastination plan", selling for probably $197.

3. Creating the "post-graduate" continuity program after people have gone through the course, allowing people to have a community interested in growth and development, priced from $17/month for basic access to $197 with coaching.

Given those plans for launch in early 2021:

1. What will be my company's revenue in Q1 2021?

2. What will be the total revenue for this company in 2021?

Comment by halffull on Putting People First in a Culture of Dehumanization · 2020-07-22T12:11:06.773Z · score: 3 (2 votes) · EA · GW

I recommend Made to Stick by Chip and Dan Heath.

Comment by halffull on What skill-building activities have helped your personal and professional development? · 2020-07-21T17:56:37.568Z · score: 12 (6 votes) · EA · GW

Going through several startup weekends showed me what works and what doesn't when trying to de-risk new projects.

Comment by halffull on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T02:27:19.228Z · score: 6 (3 votes) · EA · GW

This is great! I was trying to think through some of my own projects with this framework, and I realized that I think there's half of the equation missing, related to the memetic qualities of the tool:

1. How "symmetric" is the thing I'm trying to spread? How easy is it to use for a benevolent purpose compared to a malevolent one?

2. How memetic is the idea? How likely is it to spread from a benevolent actor to a malevolent one?

3. How contained is the group with which I'm sharing? Outside of the memetic factors of the idea itself, is the person or group I'm sharing it with likely to spread it, or to keep it contained?

Comment by halffull on Concern, and hope · 2020-07-20T18:25:57.858Z · score: 5 (3 votes) · EA · GW

Here's Raymond Arnold on this strategy:

https://www.lesswrong.com/posts/LxrpCKQPbdpSsitBy/short-circuiting-demon-threads-working-example

Comment by halffull on A Step-by-Step Guide to Running Independent Projects · 2020-07-17T00:36:06.257Z · score: 4 (3 votes) · EA · GW

This is great!

I'd love to be able to provide an alternative model that can work as well, based on Saras Sarasvathy's work on Effectuation.

In the effectuation model (which came from studying the processes of expert entrepreneurs), you don't start with a project idea up front. Instead, you start with your resources, and the project evolves based on demand at any given time. I think this model is especially good for independent projects, where much of the goal is to get credibility, resources, and experience.

Instead of starting with the goal, you start with your resources: What are my skills, interests, connections, and resources? What can I do with them?

And then, instead of doing something like Murphyjitsu to check for risks, you might reach out to a few of the people in your network, tell them about your vague ideas, and chat with them about it. They may help crystallize what the project is, and you can get them on board to help. Now you've effectively de-risked by splitting up the work of the MVP among multiple people. If you can't get anyone to commit, that may be enough validation by itself to not continue.

Then you may launch your MVP - not to validate doing the full project, but to figure out what the full project even is. Getting feedback on your MVP and seeing what people are and aren't excited about can help crystallize the project even further.

Once you've gotten some momentum, you can write up, rather than a project proposal, a "project summary" showing your momentum so far, and use that to get things like funding and to clarify to yourself what has happened. Then you can evaluate next steps.

Comment by halffull on The Case for Impact Purchase | Part 1 · 2020-04-23T14:07:11.451Z · score: 1 (1 votes) · EA · GW

I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This comes from intuitions from the startup world.

However, it's important to note that I also have developed a motivation system that allows me to not find this discouraging! Once I started thinking of opportunities for doing good in expected value terms, and concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.

Some relevant articles:

https://forum.effectivealtruism.org/posts/2cWEWqkECHnqzsjDH/doing-good-is-as-good-as-it-ever-was

https://www.independent.co.uk/news/business/analysis-and-features/nassim-taleb-the-black-swan-author-in-praise-of-the-risk-takers-8672186.html

https://foreverjobless.com/ev-millionaires-math/

https://www.facebook.com/yudkowsky/posts/10155299391129228

Comment by halffull on The Case for Impact Purchase | Part 1 · 2020-04-23T02:01:04.135Z · score: 4 (1 votes) · EA · GW
But if it took on average 50 000 events for one such a key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone's time.

But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is to hopefully be one of the 1/50,000 that has a ridiculously outsized impact - it's high expected value even if, comparatively, all the other events have low impact. And again, that's comparative. Compared to, say, most other events, an event on AI safety is ridiculously high impact.

It can't take more that ~50 events for every AI Safety researcher to get to know each other.

This is true; much of the networking impact of events is frontloaded.

Comment by halffull on The Case for Impact Purchase | Part 1 · 2020-04-23T01:20:59.635Z · score: 1 (1 votes) · EA · GW

Nope, 1/50,000 seems like a realistic ratio for very high impact events to normal impact events.

Comment by halffull on The Case for Impact Purchase | Part 1 · 2020-04-20T21:05:45.120Z · score: 1 (1 votes) · EA · GW
Would you say that events are low impact?

I think most events will be low impact compared to the highest-impact events. Let's say you have 100,000 AI safety events. I think most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, and another ends up introducing a key pair of researchers who go on to do great things together.

Now, if I want to pay those two highest-impact events in proportion to their impact relative to all the other events, I have a few options (a toy sketch after this list illustrates the asymmetry):

1. Pay all of the events based on their expected impact prior to the events, so the money evens out.

2. Pay a very small amount of money to the other events, so I can afford to pay the two events that had many orders of magnitude higher impact.

3. Only buy a small fraction of the impact of the very high impact events, so I have money left over to pay the small events and can reward them all on impact equally.
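Here's that toy sketch (all numbers invented for illustration, not claims about real events):

```python
# Invented numbers: 100,000 events, one of which turns out to have vastly outsized impact.
N_EVENTS = 100_000
BUDGET = 10_000_000            # hypothetical total budget for impact purchases, in dollars

normal_impact = 1.0            # arbitrary impact units for a typical event
tail_impact = 1_000_000.0      # the one runaway event, orders of magnitude larger

total_impact = (N_EVENTS - 1) * normal_impact + tail_impact

# Option 2: pay strictly in proportion to realized impact.
pay_normal = BUDGET * normal_impact / total_impact
pay_tail = BUDGET * tail_impact / total_impact
print(f"ex-post proportional: ${pay_normal:.2f} per normal event, ${pay_tail:,.0f} for the tail event")

# Option 1: pay on expected impact. Ex ante every event looked the same,
# so the budget is split evenly and the money "evens out".
pay_expected = BUDGET / N_EVENTS
print(f"ex-ante expected impact: ${pay_expected:,.0f} per event")
```

With these made-up numbers, the proportional scheme pays a typical event roughly $9 and the tail event roughly $9 million, while the ex-ante scheme pays everyone $100; that's the tension between the options.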

Comment by halffull on The Case for Impact Purchase | Part 1 · 2020-04-18T23:09:18.144Z · score: 2 (2 votes) · EA · GW
Since there will limited amount of money, what is your motivation for giving the low impact projects anything at all?

I'm not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?

I think the high impact projects are often very risky, and will most likely have low impact. Perhaps it makes sense to compensate people for taking the hit for society so that 1/1,000,000 of the people who start such projects can have high impact?

Comment by halffull on The Case for Impact Purchase | Part 1 · 2020-04-16T22:08:17.376Z · score: 1 (1 votes) · EA · GW
For an impact purchase the amount of money is decided based on how good impact of the project was

I'm curious about how exactly this would work. My prior is that impact is clustered at the tails.

This means that there will frequently be small impact projects, and very occasionally large impact projects. My guess is that if you want to be able to incentivize the frequent small impact projects at all, you won't be able to afford the large impact projects, because they are many orders of magnitude larger in impact. You could just purchase part of their impact, but in practice this means that there's a cap on how much you can receive from an impact purchase.

Maybe a cap is fine, and you know that all you'll ever get from an impact purchase is, for instance, $50,000, and the prestige comes from what % of the impact they bought at that price.

Comment by halffull on New Top EA Causes for 2020? · 2020-04-02T21:54:55.080Z · score: 3 (2 votes) · EA · GW

Perhaps Dereke Bruce had the right of it here:

"In order to keep a true perspective of one's importance, everyone should have a dog that will worship him and a cat that will ignore him."

Comment by halffull on New Top EA Causes for 2020? · 2020-04-01T14:48:40.179Z · score: 23 (13 votes) · EA · GW

I propose that the best thing we can do for the long term future is to create positive flow-through effects now. Specifically, if we increase people's overall sense of well-being and altruistic tendencies, this will lead to more altruistic policies and organizations, which will lead to a better future.

Therefore, I propose a new top EA cause for 2020: Distributing Puppies

  • Puppies decrease individual loneliness, allowing a more global worldview.
  • Puppies model unconditional love and altruism, creating a flowthrough to their owners.
  • Puppies with good owners are, on their own, just sources of positive utility, increasing global welfare.

Comment by halffull on What are EA project ideas you have? · 2020-03-30T21:01:21.983Z · score: 5 (2 votes) · EA · GW

You might be interested in this same question that was asked last June:


https://forum.effectivealtruism.org/posts/NQR5x3rEQrgQHeevm/what-new-ea-project-or-org-would-you-like-to-see-created-in

Comment by halffull on Halffull's Shortform · 2020-03-28T01:48:23.847Z · score: 3 (2 votes) · EA · GW

Something else in the vein of "things EAs and rationalists should be paying attention to in regards to Corona."

There's a common failure mode in large human systems where one outlier causes us to create a rule that is a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company's "buy any book you want" rule - so the company makes it so that nobody can get free books anymore.

This same pattern has happened before in the US, after 9/11 - we created a whole bunch of security theater that caused more suffering for everyone, and gave the government way more power and way less oversight than is safe, because we over-reacted to prevent one bad event, not considering the counterfactual invisible things we would be losing.

This will happen again with Corona: things will be put in place that are maybe good at preventing pandemics (or worse, at making people think they're safe from pandemics), but that create a million trivial inconveniences every day which add up to more strife than they're worth.

These types of rules are very hard to repeal after the fact because of absence blindness - someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, then build a narrative convincing enough to counter what will seem like obvious, common-sense measures given the climate/devastation.

Comment by halffull on What posts do you want someone to write? · 2020-03-26T15:51:04.051Z · score: 1 (1 votes) · EA · GW
Curious about what you think is weird in the framing?

The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.

Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision-making, like improving calibration.

Comment by halffull on Growth and the case against randomista development · 2020-03-26T01:05:00.552Z · score: 2 (2 votes) · EA · GW

No, I actually think the post is ignoring x-risk as a cause area to focus on now. That makes sense under certain assumptions and heuristics (e.g. if you think near-term x-risk is highly unlikely, or you're using absurdity heuristics). I think I was more giving my argument for how this post could be compatible with Bostrom.

Comment by halffull on Growth and the case against randomista development · 2020-03-26T00:43:11.448Z · score: 1 (1 votes) · EA · GW
the post focuses on human welfare,

It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

I'm also very interested in how increased economic growth impacts existential risk.

At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this).

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

Comment by halffull on Growth and the case against randomista development · 2020-03-26T00:26:30.396Z · score: 1 (1 votes) · EA · GW

Let's say you believe two things:

1. Growth will have flowthrough effects on existential risk.

2. You have a comparative advantage in working on growth over x-risk.

You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I'm working on.

Comment by halffull on What posts do you want someone to write? · 2020-03-25T20:58:04.145Z · score: 3 (2 votes) · EA · GW

I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.

Comment by halffull on Halffull's Shortform · 2020-03-25T17:42:22.314Z · score: 5 (4 votes) · EA · GW

It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.

Comment by halffull on Halffull's Shortform · 2020-03-25T16:14:40.258Z · score: 5 (2 votes) · EA · GW

This argument has the same problem as recommending that people not wear masks, though: if you go from "save lives, save lives, don't worry about economic impacts" to "worry about economic impacts, it's as important as quarantine," you lose credibility.

You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard; otherwise you lose the ability to influence at all.

This was the source of my "two curves" narrative, and I assume it would be the approach that others would take if that were the reason for their reticence to discuss it.

Comment by halffull on What posts do you want someone to write? · 2020-03-25T15:18:24.150Z · score: 4 (1 votes) · EA · GW

Here's an analysis by 80k. https://80000hours.org/problem-profiles/improving-institutional-decision-making/

Comment by halffull on Halffull's Shortform · 2020-03-25T15:01:48.420Z · score: 5 (5 votes) · EA · GW

I was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like the coronavirus does.

I was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a "flattening the curve" graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.

On the other hand, I think it's quite plausible that this particular problem will take care of itself. When people begin to experience the depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we'll simply become stratified for a while, with young, healthy people breaking quarantine while the older and immunocompromised stay home.

But getting the timing of this right is everything. Balancing "deaths from economic freefall" against "deaths from an overloaded medical system" is delicate; going too far in either direction results in hundreds of thousands of unnecessary deaths.

Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is

1. Mostly in non-profits

and

2. Orders of magnitude smaller than funding for AI capabilities

It's quite likely that the funding for AI safety is more inelastic in depressions than the funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren't speaking cogently about the tradeoffs between depression and lives saved from Corona - they have gone through this same train of thought, and decided that preventing a depression is an information hazard.

Comment by halffull on Why not give 90%? · 2020-03-24T19:38:22.971Z · score: 14 (9 votes) · EA · GW

I think this is actually quite a complex question. I think it's clear that there's always a chance of value drift, so you can never put the chance of "giving up" at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is that the object-level question is complicated :).
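As a toy model of the front-loading point (made-up numbers, not the data from the linked post; it assumes a flat annual chance of drifting away entirely):

```python
INCOME = 60_000          # hypothetical annual income, in dollars
ANNUAL_DRIFT = 0.10      # assumed chance per year of "giving up" entirely
YEARS = 20

def expected_donations(rate_by_year, p_drift=ANNUAL_DRIFT):
    """Expected total donations, assuming drift can happen before each year's gift."""
    total, p_still_giving = 0.0, 1.0
    for rate in rate_by_year:
        p_still_giving *= (1 - p_drift)
        total += p_still_giving * rate * INCOME
    return total

steady = [0.10] * YEARS                           # 10% of income every year
front_loaded = [0.50] * 4 + [0.0] * (YEARS - 4)   # 50% for four years, then nothing

# Both schedules give away the same total (two years of income) if no drift ever happens,
# but the front-loaded one captures more value before drift can occur.
print(f"steady 10%/yr:    ${expected_donations(steady):,.0f}")
print(f"front-loaded 50%: ${expected_donations(front_loaded):,.0f}")
```

Of course, the linked data complicates this by suggesting the drift probability itself differs between the 10% and 50% groups, which a flat-rate toy model like this doesn't capture.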

Comment by halffull on Halffull's Shortform · 2020-03-15T18:04:00.296Z · score: 8 (3 votes) · EA · GW

I've had a sense for a while that EA is too risk-averse, and should be focused more on a broader class of projects, most of which it expects to fail. As part of that, I've been trying to collect existing arguments related to either side of this debate (in a broader sense, but especially within the EA community), both to update my own views and to make sure I address any important arguments on either side.

I would appreciate it if people could link me to other sources that are important. I'm especially interested in people making arguments for more experimentation, as I mostly found arguments for the opposite.

1. 80k's piece on accidental harm: https://80000hours.org/articles/accidental-harm/#you-take-on-a-challenging-project-and-make-a-mistake-through-lack-of-experience-or-poor-judgment

2. How to avoid accidentally having a negative impact with your project, by Max Dalton and Jonas Volmer: https://www.youtube.com/watch?v=RU168E9fLIM&t=519s

3. Steelmanning the case against unquantifiable interventions, By David Manheim: https://forum.effectivealtruism.org/posts/cyj8f5mWbF3hqGKjd/steelmanning-the-case-against-unquantifiable-interventions

4. EA is Vetting Constrained: https://forum.effectivealtruism.org/posts/G2Pfpkcwv3bJNF8o9/ea-is-vetting-constrained

5. How X-Risk Projects are different from Startups by Jan Kulveit:

https://forum.effectivealtruism.org/posts/wHyy9fuATeFPkHSDk/how-x-risk-projects-are-different-from-startups

Comment by halffull on Growth and the case against randomista development · 2020-01-17T22:17:22.057Z · score: 9 (4 votes) · EA · GW
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc.

I'm curious about the intuitions behind this. I think developing countries with fast growth have historically had quite high pollution and carbon output. I also think that more countries joining the "developed" category could quite possibly make coordination around technological risks harder.

I think what you're saying is plausible but I don't know of the arguments for that case.

Comment by halffull on Growth and the case against randomista development · 2020-01-17T18:02:41.835Z · score: 24 (12 votes) · EA · GW

I'm quite excited to see an impassioned case for more of a focus on systemic change in EA.

I used to be quite excited about interventions targeting growth or innovation, but I've recently become more worried about accelerating technological risks. Specific things that I expect accelerated growth to affect negatively include:

  • Climate Change
  • AGI Risk
  • Nuclear and Biological Weapons Research
  • Cheaper weapons in general

Curious about your thoughts on the potential harm that could come if the growth interventions are indeed successful.

Comment by halffull on [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration · 2019-12-23T03:20:29.458Z · score: 3 (3 votes) · EA · GW

This work is excellent and highly important.

I would love to see this same setup experimented with for grant-giving.

Comment by halffull on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-17T17:47:55.395Z · score: 12 (5 votes) · EA · GW

Found elsewhere on the thread, a list of weird beliefs that Buck holds: http://shlegeris.com/2018/10/23/weirdest

Comment by halffull on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-14T02:34:33.359Z · score: 6 (3 votes) · EA · GW

I'd be curious about your own view on unquantifiable interventions, rather than just the Steelman of this particular view.

Comment by halffull on EA Hotel Fundraiser 5: Out of runway! · 2019-10-31T15:32:02.764Z · score: 12 (7 votes) · EA · GW

I think there's a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see, for instance, investing in a startup vs. buying stock in an established business) - the very fact that opportunities are easy to measure and obvious makes them less likely to be neglected.

The proper way to evaluate new and emerging projects is to understand the landscape, and do a systems-level analysis of the product, process, and team to see if you think the ROI will be high compared to other hard-to-measure projects. This is what I attempted to do with the EA Hotel here: https://www.lesswrong.com/posts/tCHsm5ZyAca8HfJSG/the-case-for-the-ea-hotel

Comment by halffull on Effective Pro Bono Projects · 2019-09-12T03:24:05.510Z · score: 5 (2 votes) · EA · GW

Tobacco taxes are Pigouvian under state-sponsored healthcare.

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T22:33:09.267Z · score: 1 (1 votes) · EA · GW

Hmm that's odd, I tested both in incognito mode and they seemed to work.

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T21:37:55.874Z · score: 1 (1 votes) · EA · GW

You shouldn't; it's an Evernote public sharing link that doesn't require sign-in. Note also that I tried to embed the image directly in my comment, but apparently the markdown for images doesn't work in comments?

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T20:16:13.305Z · score: 4 (3 votes) · EA · GW

I timeboxed 30 minutes to manually transfer this to yEd. I'm fairly certain there are one or two missing edges; here's what I got:

https://www.evernote.com/shard/s8/sh/1c1071e2-1ab0-47e8-a4f8-6560139e9cac/a31dd7a67e852f1b/res/8c016672-4d26-4bb9-b85d-a99f381ab1e6/FundingImage.png

Here's the yEd file, if anyone wants to try their hand at other layout algorithms:

https://www.evernote.com/shard/s8/client/snv?noteGuid=1c1071e2-1ab0-47e8-a4f8-6560139e9cac&noteKey=a31dd7a67e852f1b&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs8%2Fsh%2F1c1071e2-1ab0-47e8-a4f8-6560139e9cac%2Fa31dd7a67e852f1b&title=FundingImage.png

Comment by halffull on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T16:01:39.139Z · score: 2 (2 votes) · EA · GW

Small suggestion for future projects like this: I used to use Graphviz for diagramming, but have since found yEd and never looked back. Its edge-routing and placement algorithms are much better, and they can be tweaked with WYSIWYG editing after the fact.

Comment by halffull on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-27T19:32:33.182Z · score: 2 (2 votes) · EA · GW

I tend to think this is also true of any analysis that includes only one-way interactions or one-way causal mechanisms, and ignores feedback loops and complex-systems analysis. This is true even if each of the parameters is estimated using probability distributions.

Comment by halffull on How do you decide between upvoting and strong upvoting? · 2019-08-27T19:28:25.784Z · score: 8 (4 votes) · EA · GW

I upvote if I think the post is contributing to the current conversation, and strong upvote if I think the post will contribute to future and ongoing conversations (i.e., it's a comment or post that people should see when browsing the site, aka stock vs. flow).

Occasionally, I'll strong upvote/downvote strategically to get a comment more in line with what I think it "deserves", trying to correct a perceived bias of other votes.



Comment by halffull on EAGxNordics 2019 Postmortem · 2019-08-27T19:21:24.214Z · score: 3 (3 votes) · EA · GW

I'm sad because I really enjoyed EAGx Nordics :). In my view the main benefits of conferences are the networks and idea-sex that come out of them, and I think it did a great job at both of those. I'm curious whether you think the conference "made back its money" in terms of value to participants, which is separate from the question of counterfactual value you pose here.

Comment by halffull on What posts you are planning on writing? · 2019-07-26T17:25:03.073Z · score: 2 (2 votes) · EA · GW
Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceabilty" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon).

I would be highly interested in this, and in a case study showing how to rigorously think about systemic change using systems modeling, root cause analysis, and the like.

Comment by halffull on Why the EA Forum? · 2019-06-20T19:08:25.044Z · score: 3 (2 votes) · EA · GW

Yes, this is more an argument for "don't have downvotes at all", like Hacker News or a traditional forum.

Note that I think your team has made the correct tradeoffs so far; this was more playing devil's advocate.

Comment by halffull on Why the EA Forum? · 2019-06-20T16:49:09.178Z · score: 1 (1 votes) · EA · GW

Of course there's a reverse incentive here, where getting downvoted feelsbadman, and therefore you may be even less likely to want to post unfinished thoughts, as compared to them simply getting displayed in chronological order.

Comment by halffull on Raemon's EA Shortform Feed · 2019-06-19T22:39:59.097Z · score: 3 (2 votes) · EA · GW

I won't be at EAG but I'm in Berkeley for a week or so and would love to chat about this.

Comment by halffull on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T21:03:54.305Z · score: 1 (1 votes) · EA · GW

Do you think that Guesstimate has not yet created $200,000 worth of value for the world? I'm legitimately unsure about this point, but my priors say it's at least possible that it's added that much value in time saved and better estimates. I think that systems modelling could have similar impacts.