Posts

[Creative writing contest] Blue bird and black bird 2021-09-16T23:56:23.202Z
Disentangling "Improving Institutional Decision-Making" 2021-09-13T23:50:16.418Z
Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” 2021-08-25T00:43:20.358Z
Humanities Research Ideas for Longtermists 2021-06-09T04:39:40.873Z

Comments

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-21T08:34:41.584Z · EA · GW

I'm highly enjoying the "death of the author" interpretation (and even just its existence), thanks! :)

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-21T08:32:46.912Z · EA · GW

Fair point, thank you! If I have some time, I might replace the sprout with some other kind of risk (maybe something flammable), but I haven't thought about it very carefully yet, and would definitely take suggestions. 

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-21T08:31:20.841Z · EA · GW

For what it's worth, I highly enjoyed reading this interaction! :) +1 to Dario and everyone else here. 

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-21T08:29:43.838Z · EA · GW

Thanks for the feedback! I definitely dislike propaganda, and would be curious to see which parts felt the most propaganda-y to you. Also, to echo Dario below: I appreciate your very kind delivery of the negative feedback. :)
I don't know if I will ever end up spending much time improving the story, as my life is pretty hectic at the moment, but I would be interested in any specific improvements you suggest. (So far, I haven't really tried much, but I've considered ways of addressing the inadequacy of the oak sprout metaphor by e.g. replacing it with something flammable.) 

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-18T05:13:52.129Z · EA · GW

To be honest, I didn't think very hard about the names. The thought process was roughly: 1) I want to make a story whose characters are birds, and I could have a smart black bird. 2) Incidentally, I like that it doesn't have to be technical or complicated--- there are birds you can call "blackbirds," and there are birds you can call "bluebirds," so 3) I'll call my characters "black bird" and "blue bird." And since I liked the colors this suggested, that consideration didn't veto the decision. :) 

In any case, I'm glad you liked it, thanks! 

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-18T05:08:03.730Z · EA · GW

Thanks for the comments! The urgency argument makes sense. I'm not sure if I'll end up changing things, but I'll consider it, and thanks for pointing this out! 

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-17T13:33:15.959Z · EA · GW

Thanks a bunch--- I'm glad you liked it!

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-17T13:31:17.511Z · EA · GW

Thank you for this comment! 

Comment by Lizka on [Creative writing contest] Blue bird and black bird · 2021-09-17T13:30:47.533Z · EA · GW

Thank you! I'm glad. :)

Comment by Lizka on Disentangling "Improving Institutional Decision-Making" · 2021-09-17T13:29:35.832Z · EA · GW

An update: after a bit of digging, I discovered this post, "Should marginal longtermist donations support fundamental or intervention research?", which discusses a topic quite close to "should EA value foundational (science/decision theory) research" (see the pathway (1) section of my post). The conclusions of the post I found do not fit my vague impression of "the consensus." In particular, that post concludes that longtermist research hours should often be spent on fundamental research (which it defines by its goals). 

I’m moderately confident that, from a longtermist perspective, $1M of additional research funding would be better allocated to fundamental rather than intervention research (unless funders have access to unusually good intervention research opportunities, but not to unusually good fundamental research opportunities)

(Disclaimer: the author, Michael, is employed at Rethink Priorities, where I am interning. I don't know if he still endorses this post or its conclusions, but the post seems relevant here and very valuable as a reference.)

Comment by Lizka on Impact chains · 2021-09-16T07:33:46.219Z · EA · GW

For what it's worth, I've seen "pathway to impact" used in the way you seem to use "impact chain" (e.g. here and here, and I used it a bunch), and it seems somewhat more natural to me. It's possible that "pathway to impact" is just a niche term that clicked with me, though, and I definitely agree that it's a useful concept. 

Comment by Lizka on Disentangling "Improving Institutional Decision-Making" · 2021-09-15T12:16:56.277Z · EA · GW

Thank you for this comment! 

I intuitively would’ve drawn the institution blob in your sketch higher, i.e. I’d have put fewer than (eyeballing) 30% of institutions in the negatively aligned space (maybe 10%?). 

I won't redraw/re-upload this sketch, but I think you are probably right. 

In moments like this, including a quick poll into the forum to get a picture what others think would be really useful.

That's a really good idea, thank you! I'll play around with that.

re: "argument for how an abstract intervention that improves decision-making would also incidentally improve the value-alignment of an institution" etc.

Thank you for the suggestions! I think you raise good points, and I'll try to come back to this.

Comment by Lizka on EA Forum Creative Writing Contest: Submission thread for work first published elsewhere · 2021-09-15T11:15:01.964Z · EA · GW

I think this is a really cool work/parable: "That Alien Message." It's by Eliezer Yudkowsky, so I don't know if it's too well known to count, but it still seems worth collecting in this context. (The topic of the story, or its "relevance" from an EA point of view, is a spoiler, but should be pretty clear.) 

Comment by Lizka on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-15T11:07:33.653Z · EA · GW

Thank you so much! 

Comment by Lizka on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-14T23:25:57.280Z · EA · GW

For what it's worth, I also feel like people might shy away from referring works if every referral has to be a top-level post (rather than a reply, as Linch suggests). In particular, I personally am second-guessing myself and will probably not end up referring anything, but would happily contribute things as comments (I might end up doing that anyway, if I feel like it's relevant enough, and people can repost if they want to). However, this could just be a personal preference rather than a common or shared experience. 

Comment by Lizka on Disentangling "Improving Institutional Decision-Making" · 2021-09-14T08:38:10.034Z · EA · GW

Thank you for this response! I think I largely agree with you, and plan to add some (marked) edits as a result. More specifically, 

On the 80K problem profile: 

  • I think you are right; they are value-oriented in that they implicitly argue for the targeted approach. I do think they could have made it a little clearer, as much (most?) of the actual work they recommend or list as an example seems to be research-style. The key (and important) exception that I ignored in the post is the "3. Fostering adoption of the best proven techniques in high-impact areas" work they recommend, which I should not have overlooked. (I will edit that part of my post, and likely add a new example of research-level value-neutral IIDM work, like a behavioral science research project.)

"I don't think the value-neutral version of IIDM is really much of a thing in the EA community"

  • Once again, I think I agree, although I think there are some rationality/decision-making projects that are popular but not very targeted or value-oriented. Does that seem reasonable? The CES example is quite complicated, but I'm not sure that I think it should be disqualified here. (To be clear, however, I do think CES seems to do very valuable work--- I'm just not exactly sure how to evaluate it.)

Side note, on "a core tenet of democracy is the idea that one citizen's values and policy preferences shouldn't count more than another's"

  • I agree that this is key to democracy. However, I do think it is valid to discuss to what extent voters' values align with actual global good (and I don't think this opinion is very controversial). For instance, voters might be more nationalistic than one might hope, they might undervalue certain groups' rights, or they might not value animal or future lives. So I think that, to understand the actual (welfare) impact of an intervention that improves a government's ability to execute its voters' aims, we would need to consider more than democratic values. (Does that make sense? I feel like I might have misinterpreted what you were trying to say a bit, and am not sure that I am explaining myself properly.) On the other hand, it's possible that good government decision-making is bottlenecked more by its ability to execute its voters' aims than by the ethical alignment of the voters' values-- but I still wish this were more explicitly considered.

"It looks like you're essentially using decision quality as a proxy for institutional power, and then concluding that intentions x capability = outcomes."

  • I think I explained myself poorly in the post, but this is not how I was thinking about it. I agree that the power of an institution is (at least) as important as its decision-making skill (although it does seem likely that these things are quite related), but I viewed IIDM as mostly focused on decision-making, and set power aside. If I were to draw this out, I would add power/scope of institutions as a third axis or dimension (although I would worry about presenting a false picture of orthogonality between power and decision quality). The impact of an institution would then be related to the relevant volume of a rectangular prism, not the relevant area of a rectangle. (Note that the visualizing approach in the "A few overwhelmingly harmful institutions" image is another way of drawing volume or a third dimension, I think.) I might add a note along these lines to the post to clarify things a bit. 
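
For concreteness, here is a toy sketch of that rectangular-prism picture (the simple multiplicative form and all the numbers are my own illustrative assumptions, not anything from the post):

```python
# Toy model (illustrative only): treat an institution's impact as the
# signed "volume" alignment x decision quality x power, rather than the
# "area" alignment x decision quality. All numbers are made up.

def impact(alignment: float, decision_quality: float, power: float) -> float:
    """Signed impact: negative alignment flips the sign of the outcome."""
    return alignment * decision_quality * power

# A small, well-aligned, well-run org vs. a powerful, competent
# institution with mildly misaligned values:
print(impact(alignment=0.9, decision_quality=0.8, power=1.0))   #  0.72
print(impact(alignment=-0.2, decision_quality=0.8, power=50.0)) # -8.0
```

The point of the toy version is just that the third factor can swamp the other two, which is why I would want power drawn as its own axis.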

About "the distinction between stated values and de facto values for institutions" 

  • You're right, I am very unclear about this (and it's probably muddled in my head, too). I am basically always trying to talk about the de facto values. For instance, if a finance company whose only aim is to profit also incidentally brings a bunch of value to the world, then I would view it as value-aligned for the purpose of this post. To answer your question about the typical private health insurance company, "does bringing its (non-altruistic) actions into greater alignment with its (altruistic) goals count as improving decision quality or increasing value alignment under your paradigm" --- it would count as increasing value alignment, not improving decision quality. 
  • Honestly, though, I think this means I should be much more careful about this term, and probably just clearly differentiate between "stated-value-alignment" and "practical-value-alignment." (These are terrible and clunky terms, but I cannot come up with better ones on the spot.) I think also that my own note about "well-meaning [organizations that] have such bad decision quality that they are actively counterproductive to their aims" clashes with the "value-alignment" framework. I think that there is a good chance that it does not work very well for organizations whose main stated aim is to do good (of some form). I'll definitely think more about this and try to come back to it. 

"The professional world is incredibly siloed, and it's not hard at all for me to imagine that ostensibly publicly available resources and tools that anyone could use would, in practice, be distributed through networks that ensure disproportionate adoption by well-intentioned individuals and groups. I believe that something like this is happening with Metaculus, for example."

  • This is a really good point (and something I did not realize, probably in part due to a lack of background). Would you mind if I added an excerpt from this or a summary to the post? 

On your note about "generic-strategy": Apologies for that, and thank you for pointing it out! I'll make some edits. 

Note: I now realize that I have basically inverted normal comment-response formatting in this response, but I'm too tired to fix it right now. I hope that's alright! 

Once again, thank you for this really detailed comment and all the feedback-- I really appreciate it! 

Comment by Lizka on Utilitarianism Symbol Design Competition · 2021-09-09T06:54:04.433Z · EA · GW

Hi! I'm out of the loop, but I'm curious about whether this resolved, and if there is a place to see submissions. The competition was supposed to close at the end of the month (August 2021), and it is now September. 

Comment by Lizka on How to succeed as an early-stage researcher: the “lean startup” approach · 2021-09-08T08:28:09.792Z · EA · GW

Thank you for the post, I found it interesting! [Minor point in response to Linch's comment.]

I generally agree with Linch's surprise, but

When people choose not to work on other people's ideas, it's usually due to a combination of personal fit and arrogance in believing your own ideas are more important (or depending on the relevant incentives, other desiderata like "publishable", "appealing to funders", or "tractable"), not because of a lack of ideas! 

I (weakly) think that another factor here is that people are trained (e.g. in their undergraduate years) to come up with original ideas and work on those, whether or not they are actually useful. This gets people into the habit of over-valuing a form of topic originality. (I.e. it's not just personal fit, arrogance, and external incentives, although those all seem like important factors.)

This is definitely the case in many of the humanities, but probably less true for those who participate in things like scientific research projects, where there are clearly useful lab roles for undergraduates to fill. In my personal experience, all my math work was assigned to me (inside and outside of class), while on the humanities side, I basically never wrote a serious essay whose topic I did not create. (This sometimes led to less-than-sensible papers, especially in areas where I felt that I lacked background and so had to find somewhat bizarre topics that I was confident were "original.") 

My guess is that changing this would be valuable, but might be very hard. Projects like Effective Thesis come to mind. 

Comment by Lizka on elifland's Shortform · 2021-09-06T06:15:23.616Z · EA · GW

I really enjoyed your outline, thank you! I have a few questions/notes: 

  1. [Bottlenecks] You suggest "Organizations and individuals (stakeholders) making important decisions are willing to use crowd forecasting to help inform decision making" as a crucial step in the "story" of crowd forecasting’s success (the "pathway to impact"?) --- this seems very true to me. But then you write "I doubt this is the main bottleneck right now but it may be in the future" (and don't really return to this). 
    1. Could you explain your reasoning here? My intuition was that important decision-makers' willingness (and institutional ability) to use forecasting info would be a major bottleneck. (You listed Rethink Priorities and Open Phil as examples of institutions that "seem excited about using crowd forecasts to inform important decisions," but my understanding was that their behavior was the exception, not the rule.)
    2. If, say, the CDC (or important people there, etc.) were interested in using Metaculus to inform their decision-making, do you think they would be unable to do so due to a lack of interest (among forecasters) and/or a lack of relevant forecasting questions? (But then, could they not suggest questions they felt were relevant to their decisions?) Or do you think that the quality of answers they would get (or the amount of faith they would be able to put into those answers) wouldn't be sufficient? 
  2. [Separate, minor confusion] You say: "Forecasts are impactful to the extent that they affect important decisions," and then you suggest examples a-d ("from an EA perspective") that range from career decisions or what seem like personal donation choices to widely applicable questions like "Should AI alignment researchers be preparing more for a world with shorter or longer timelines?" and "What actions should we recommend the US government take to minimize pandemic risk?" This makes me confused about the space (or range) of decisions and decision-makers that you are considering here. 
    1. Are you viewing group forecasting initiatives as a solution to personal life choices? (Or is the "I" in a/b a very generalized "I" somehow?) (Or even 
    2. I'd guess that an EA perspective on the possible impact of crowd forecasting should focus on decision-makers with large impacts whether or not they are EA-aligned (e.g. governmental institutions), but I may be very wrong. 
  3. [Side note] I loved the section "Idea for question creation process: double crux creation," and in general the number of possible solutions that you list, and really hope that people try these out or study them more. (I also think you identify other really important bottlenecks.)

Please note that I have no real relevant background (and am neither a forecast stakeholder nor a proper forecaster).

Comment by Lizka on Crazy ideas sometimes do work · 2021-09-04T09:33:47.822Z · EA · GW

Thanks for writing this-- I was interested! 

Just for ease of access (because I went looking for more information myself), you can find more info on the Long Term Future Fund and the application at this link (this also includes other funds), and there's an AMA with the Long Term Future Fund team here (the AMA is closed but has a ton of comments). 

Comment by Lizka on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-09-02T01:31:12.046Z · EA · GW

I don't think you are misinterpreting; this issue is applicable when the market is advisory or indirect (not hard-wired to decisions, like futarchy is-- that has its own issues). There's a longer discussion of this issue in the thread that starts with Harrison's comment. 

Comment by Lizka on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-09-02T01:27:45.071Z · EA · GW

I agree that we're not currently good at "maximizing welfare," but I worry that futarchy would lead to issues stemming from over-optimization of a measure that is misaligned with what we actually want. In other words, my worry is that common sense barriers would be removed under futarchy (or we would lose sight of what we actually care about after outlining an explicit welfare measure), and we would over-optimize whatever is outlined in our measure of welfare, which is never going to be perfectly aligned with our actual needs/desires/values. 

This is a version of Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." (Or possibly Campbell's law, which is more specific.) 
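
To make the worry concrete, here is a minimal toy simulation (the setup and all numbers are invented purely for illustration): suppose welfare has a measured and an unmeasured component, and policies can trade the unmeasured part for the measured part.

```python
import random

random.seed(0)

# Toy Goodhart's-Law demo (illustrative assumptions only): "welfare"
# has a measured and an unmeasured component, and boosting the measured
# component costs some unmeasured welfare.

def make_policy():
    measured = random.uniform(0, 1)
    unmeasured = random.uniform(0, 1) - 0.5 * measured  # the trade-off
    return measured, unmeasured

def welfare(policy):
    measured, unmeasured = policy
    return measured + unmeasured

policies = [make_policy() for _ in range(10_000)]

best_by_measure = max(policies, key=lambda p: p[0])  # optimize the metric
best_by_welfare = max(policies, key=welfare)         # optimize true welfare

print(f"welfare of metric-optimal policy:  {welfare(best_by_measure):.2f}")
print(f"welfare of welfare-optimal policy: {welfare(best_by_welfare):.2f}")
# Heavy optimization of the metric reliably selects policies that
# sacrifice the unmeasured component, so the metric-optimal policy
# scores noticeably worse on actual welfare.
```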

Comment by Lizka on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-09-02T01:15:23.716Z · EA · GW

I basically agree with Linch's answer, and just want to add that a futarchy-like system (or even, likely, coherent use of prediction markets) would require a lot of management/organizational support (in addition to subsidization, probably, to push back against thin markets), and management/operations already seems like a current bottleneck in EA. 

(I'm also unconvinced that EA is the best place to kickstart something like using prediction markets, since people in EA are presumably already incentivized to make decisions that are likely to produce good outcomes and to share information they feel is relevant to those decisions. The strength of futarchy is (in theory) channeling private monetary/profit incentives towards common values or a kind of communal good, so it makes more sense outside of communities that are inherently allied under a common project. I might be quite wrong, though, and would be interested in possible counter-arguments.

On a similar note, my understanding is that Hanson considers medium to large and private companies as the ideal place to kickstart the use of prediction markets, with the idea that eventually, the techniques developed as prediction markets are used and improved in that sphere can also be used for direct public benefit.)

Comment by Lizka on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-09-02T01:03:58.631Z · EA · GW

On buying a bunch of shares to get a policy accepted: 

I agree that there would be scenarios in which manipulation by the wealthy is possible (and likely would happen), and you describe them well (thank you!). I mainly wanted to clarify or push back against a misconception I personally had when I initially read the paper, which was that this system basically grants decision-power entirely to those who are rich and motivated enough. The system is less silly than I initially thought, because the manipulation that is possible is much harder and less straightforward than what one might naively think (if one is new to markets, as I was). 

Comment by Lizka on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-09-02T00:59:49.540Z · EA · GW

lockdowns is most likely correlated with increased deaths [since...] decisionmakers will most likely only issue lockdowns if it looks like the number of deaths would be sufficiently high

That is a really interesting illustration of the general causality =/= conditionality issue I mention in the post (and which Harrison elaborates on), thank you!

I agree that the generalization--- the fact that a decision is made reveals otherwise unavailable information--- is the key point here, and Harrison's interpretation seems like a reasonable and strong manifestation of the issue. 

Comment by Lizka on What are some moral catastrophes events in history? · 2021-06-22T07:40:12.883Z · EA · GW

It seems valuable to distinguish between long-standing practices and things that might be called historical "events." For instance, different systems of slavery would be moral catastrophes that were long-standing practices. Wars and genocides might be more like "historical events" (although some happened over rather extended periods of time). 

Some other long-standing practices that seem to qualify, depending on your moral views: 

  1. Factory farming
  2. Forms of (mass) incarceration (and other systems of state-sanctioned punishment)
  3. Certain actions with respect to our environment and non-human life on the planet
  4. Mass mistreatment of certain groups of people (e.g. societies that accept rape)

Possible "events" that seem to qualify: 

  1. Setting up colonies (many instances; this was usually supported by large groups of people, not just by the governments themselves)
  2. Some additional categories of events that are always or often moral catastrophes: wars, genocides, any situation in which large groups of people suffered and the world could have helped more than it did (any famine, refugee crises, etc.), democides (e.g. millions of people killed under Stalin in the USSR, even setting Holodomor aside)
  3. There are resources like this list of wars and anthropogenic disasters by death toll (this also provides ideas for classification)

I'm a little concerned that attempts to determine "how often moral catastrophes happen and the scale of suffering they cause" will be highly definition-dependent. E.g. if you lower your bars for criteria #1 and #2 for what you call a moral catastrophe, you'll get more moral catastrophes. But I personally found thinking about past "moral catastrophes" and the ways in which they were justified in different societies helpful for trying to identify possible current and future moral catastrophes, so the project does seem quite useful. 

(By the way, another possible resource for this could be the list of references in the paper linked in the post you reference--- although I haven't actually read the paper or the references, only the summary.)

(I made this a comment as it doesn't seem specific enough to be an answer, but I'm not sure that that was the right call.)

Comment by Lizka on Humanities Research Ideas for Longtermists · 2021-06-19T23:45:34.739Z · EA · GW

Hi folks! Thank you so much for the warm reception this post has received so far. I'm actively trying to improve my EA-aligned research and writing skills, so I would really appreciate any constructive feedback you might be willing to send as a comment or a private message. (Negative feedback is especially appreciated.) If you are worried about wording criticism in a diplomatic way, Linch (my supervisor) has also offered to perform the role of a middleman. 

Of course, we would also appreciate being informed if any of the proposed research ideas actually change your decisions (e.g. if you end up writing a paper or thesis based on an idea listed here). (And I would be really curious to see where that goes.)

On a different note, there are additional posts that I would have linked to this one if I had published later. In particular, the Vignettes Workshop (AI Impacts), Why EAs researching mainstream topics can be useful (note: Michael and I both work at Rethink Priorities), this post about a game on animal welfare that just came out (I haven't tried the game), and this question about the language Matsés and signaling epistemic certainty.