Posts

Forecasting Through Fiction 2022-07-06T05:23:18.422Z
Sam Bankman-Fried should spend $100M on short-term projects now 2022-05-31T19:17:51.601Z
[Linkpost] The Problem With The Current State of AGI Definitions 2022-05-29T17:01:53.305Z
Should we be hiring more “unqualified” people? 2022-05-15T01:06:55.135Z
If you had an hour with a political leader, what would you focus on? 2022-05-09T01:23:15.870Z
Best person to contact for quick media opportunity? 2022-05-03T17:06:49.570Z
Yitz's Shortform 2022-04-04T22:03:53.299Z

Comments

Comment by Yitz on $20K in Bounties for AI Safety Public Materials · 2022-08-05T16:26:10.622Z · EA · GW

Thanks for the clarification! I might try to do something on the Orthogonality thesis if I get the chance, since I think that tends to be glossed over in a lot of popular introductions.

Comment by Yitz on What reason is there NOT to accept Pascal's Wager? · 2022-08-05T04:24:51.791Z · EA · GW

My perspective on the issue is that by accepting the wager, you are likely to become far less effective at achieving your terminal goals (since even if you can discount higher-probability wagers, there will eventually be a lower-probability one that you won’t be able to think your way out of, and will thus have to entertain on principle), and to become vulnerable to adversarial attacks, leading to actions which in the vast majority of possible universes are losing moves. If your epistemics require that you spend all your money on projects that will, for all intents and purposes, do nothing (and which, if universally followed, would lead to a clearly dystopian world where only muggers get money), then I’d wager that the epistemics are the problem. Rationalists, and EAs, should play to win, and not fall prey to obvious basilisks of our own making.

Comment by Yitz on $20K in Bounties for AI Safety Public Materials · 2022-08-05T03:58:27.088Z · EA · GW

Question—is $20,000 awarded to every entry which qualifies under the rules, or is there one winner selected from the pool of all who submit an entry?

Comment by Yitz on Some updates in EA communications · 2022-08-02T22:53:46.259Z · EA · GW

This is really exciting! I’m glad there are so many talented people on the case, and hope the good news will only grow from here :)

Comment by Yitz on The first AGI will be a buggy mess · 2022-07-31T18:55:02.630Z · EA · GW

I strongly agree with you on points one and two, though I’m not super confident on three. For me, the biggest takeaway is that we should be putting more effort into attempts to instill “false” beliefs which are safety-promoting and self-stable.

Comment by Yitz on Conjecture: Internal Infohazard Policy · 2022-07-29T23:29:22.002Z · EA · GW

Thanks for making this document public; it’s an interesting model! I am slightly concerned that this could lead to reduced effectiveness within the organization due to reduced communication, which could plausibly do more net harm in expectation than the reduced risk of infohazard leakage does good. I assume you’ve already done that cost/benefit analysis, of course, but thought it worth mentioning just in case.

> We are in the process of reaching out to individuals and we will include them after they confirm. If you have suggestions for individuals to include please add a comment here.

It may be worth talking with a trusted PR expert before going public. I’ve done PR work for a tech company in the past, and my experience there was that people can sometimes be clueless about how the average person or the media will receive a story once it leaves their cultural circle. It is not always obvious to insiders that a given action will lead to public blowback (or to the loss of important private allies/investors), so if that’s of potential concern, I highly recommend talking with someone who does good work in public/media relations. If necessary, feel free to tap me, though note that I am closer to hobbyist than expert, so you should find a more experienced go-to PR person if possible.

Comment by Yitz on Is my blog too provocative for a group organizer? · 2022-07-12T11:27:22.563Z · EA · GW

There is a very severe potential downside if many funders think in this manner: it will discourage people from writing about potentially important ideas. I’m strongly in favor of putting more effort and funding into PR (disclaimer that I’ve worked in media relations in the past), but if we refuse to fund people with diverse, potentially provocative takes, that’s not a worthwhile trade-off, imo. I want EA to be capable of supporting an intellectual environment where we can ask about and discuss hard questions publicly without worrying about being excluded as a result. If that means bad-faith journalists have slightly more material to work with, then so be it.

Comment by Yitz on Forecasting Through Fiction · 2022-07-06T16:17:57.694Z · EA · GW

Not a bad idea! I’d love to try to actually test this hypothesis—my hunch is that it will do worse at prediction in most areas, but there may be some scenarios where thinking things through from a narrative perspective could provide otherwise hard-to-reach insight.

Comment by Yitz on The Future Might Not Be So Great · 2022-07-05T23:19:14.935Z · EA · GW

I was personally unaware of the situation until reading this comment thread, so I can confirm.

Comment by Yitz on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T21:50:58.259Z · EA · GW

My brother was recently very freaked out when I asked him to pose a set of questions that he thought an AI wouldn’t be able to answer, and GPT-3 gave excellent-sounding responses to his prompts.

Comment by Yitz on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T21:48:21.343Z · EA · GW

Seconding this—I’m definitely personally curious what such a chart would look like!

Comment by Yitz on Arguments for Why Preventing Human Extinction is Wrong · 2022-06-07T06:35:37.763Z · EA · GW

I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important for ensuring the best possible fate of the universe (that being a quick and painless destruction, or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.

Comment by Yitz on Announcing a contest: EA Criticism and Red Teaming · 2022-06-02T10:25:47.311Z · EA · GW

I’m really excited about this, and look forward to participating! Some questions—how will you determine which submissions count as “winners” vs. “runners-up” vs. “honorable mentions”? I’m confused about what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can make it into each category?

Comment by Yitz on Sam Bankman-Fried should spend $100M on short-term projects now · 2022-06-01T17:31:47.141Z · EA · GW

I didn't focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to do that more easily.

Comment by Yitz on Sam Bankman-Fried should spend $100M on short-term projects now · 2022-06-01T00:27:20.876Z · EA · GW

Fair enough; I didn’t mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near the lower bound of what he would have to spend (on projects with clear, measurable results) if he wants to become known as “that effective altruism guy” rather than “that cryptocurrency guy”.

Comment by Yitz on Sam Bankman-Fried should spend $100M on short-term projects now · 2022-06-01T00:21:58.098Z · EA · GW

Within the domain of politics (and, to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—we should do things for “direct, non-reputational reasons,” and actions done for reputational reasons would undermine our perceived integrity. The thing is, reputation is actually one of the things we already pay a tremendous amount of attention to—in the context of both forecasting and charity evaluation. To explain:

In forecasting, if you want your predictions to be maximally accurate, it is highly worthwhile to see what domain experts and superforecasters are saying, since they either have a confirmed track record of getting predictions right, or a track record of contributing to the relevant field (which means they will likely have a more robust inside view). In charity evaluation, the only thing we usually have to go on to determine the effectiveness of existing charities is what the charities themselves say about their impact and, if we’re very lucky, what outside researchers have evaluated. Ultimately, the only real reason we have to trust some people or organizations more than others is their track record (certifications are merely proxies for that). Organizations like GiveWell partially function as track-record evaluators, doing the hard parts of the work for us to determine whether charities are actually doing what they say they’re doing (comparing effectiveness once that’s done is the other aspect of their job, of course).

When dealing with longtermist charities, things get trickier. It’s impossible to evaluate a direct track record of impact, so the only thing we have to go on is proxies for effectiveness—is the charity structured well, do we trust the people working there, have they been effective at short-termist projects in the past, etc. Evaluation becomes a semi-formalized game of trust.

The outside world cares about track record as much as—if not significantly more than—we do. I do not think it would signal a lack of integrity for SBF to deliberately invest in short-term altruistic projects which can establish a positive track record, showing that not only does he sincerely want to make the world a better place, but he also knows how to actually go about doing it.

Comment by Yitz on Sam Bankman-Fried should spend $100M on short-term projects now · 2022-05-31T23:19:23.831Z · EA · GW

Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly. It also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference, even if one of them does significantly impact the world in the short term.

Comment by Yitz on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-30T19:20:16.441Z · EA · GW

If it were just me (and maybe a few other like-minded people) in the universe, however, and if I were reasonably certain the button would actually do what it said on the label, then I might very well press it. What about you, for the version I presented for your philosophy?

Comment by Yitz on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-30T19:17:59.951Z · EA · GW

Excellent question! I wouldn’t, but only because of epistemic humility—I would probably end up consulting with as many philosophers as possible and see how close we can come to a consensus decision regarding what to practically do with the button.

Comment by Yitz on How can we make Our World in Data more useful to the EA community? · 2022-05-29T17:12:07.600Z · EA · GW

I'm not sure if you're still actively monitoring this post, but the Wikipedia page on the lead–crime hypothesis (https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis) could badly use some infographics!! My favorite graph on the subject is this one (from https://news.sky.com/story/violent-crime-linked-to-levels-of-lead-in-air-10458451; I like it because it shows the effect isn't just localized to one area), but I'm pretty sure it's under copyright, unfortunately.

Comment by Yitz on Monthly Overload of EA - June 2022 · 2022-05-29T04:31:42.129Z · EA · GW

Love this newsletter, thanks for making it :)

Comment by Yitz on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-29T03:20:46.838Z · EA · GW

One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.

Comment by Yitz on My first effective altruism conference: 10 learnings, my 121s and next steps · 2022-05-27T13:24:25.484Z · EA · GW

Thanks for the post—it was really amazing talking with you at the conference :)

Comment by Yitz on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-26T18:08:22.072Z · EA · GW

> We already know that we can create net positive lives for individuals

Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and that it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong).

Comment by Yitz on Arguments for Why Preventing Human Extinction is Wrong · 2022-05-26T18:05:00.629Z · EA · GW

If you could push a button and all life in the universe would immediately, painlessly, and permanently halt, would you push it?

Comment by Yitz on Some unfun lessons I learned as a junior grantmaker · 2022-05-26T17:27:08.567Z · EA · GW

I think it’s okay to come off as a bit insulting in the name of better feedback, especially when you’re unlikely to be working with them long-term.

Comment by Yitz on Some unfun lessons I learned as a junior grantmaker · 2022-05-26T17:21:36.762Z · EA · GW

> my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice

Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend x amount of time researching them further, and calculate the ratio of how often the further research makes you change your mind.
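A minimal sketch of the bookkeeping such a test would involve (Python, with purely illustrative decision labels and data; nothing here comes from an actual grantmaking process):

```python
# Hypothetical experiment: for each grant in the sample, log the decision made
# on first impression and the decision reached after a fixed block of further
# research, then measure how often the extra research flipped the outcome.

grants = [
    {"initial": "fund", "after_research": "fund"},
    {"initial": "reject", "after_research": "fund"},
    {"initial": "fund", "after_research": "fund"},
    {"initial": "reject", "after_research": "reject"},
]

flips = sum(g["initial"] != g["after_research"] for g in grants)
flip_rate = flips / len(grants)
print(f"Further research changed {flips}/{len(grants)} decisions ({flip_rate:.0%})")
```

If the flip rate stays near zero as the research budget grows, that would support the quoted guess; a high rate would suggest the quick calls are leaving value on the table.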

Comment by Yitz on DeepMind’s generalist AI, Gato: A non-technical explainer · 2022-05-17T15:30:46.671Z · EA · GW

Ditto here :)

Comment by Yitz on Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter · 2022-05-16T17:31:32.744Z · EA · GW

I would strongly support doing this—I have strong roots in the artistic world, and there are many extremely talented artists online that I think could potentially be of value to EA.

Comment by Yitz on What are examples where extreme risk policies have been successfully implemented? · 2022-05-16T17:17:50.403Z · EA · GW

Fixing the ozone layer should provide a whole host of important insights here.

Comment by Yitz on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T09:51:23.197Z · EA · GW

I would be in favor of this!

Comment by Yitz on Should we be hiring more “unqualified” people? · 2022-05-15T09:48:36.852Z · EA · GW

> what's stopping random people from just going after the bounties themselves?

Simple answer—they don’t know the bounties exist. Bounties are usually only posted in local EA groups, and if you’re outside of those groups, even if you’re looking for bounties to collect, the amount of effort it would take to find out about our community’s bounty postings would be prohibitively high (and there’s plenty of lower-hanging fruit in search space). Likewise, many large companies hire recruiters to expressly go out and find talent, rather than hoping that talent finds them. The market is efficient, but it is not omniscient.

Comment by Yitz on Should we be hiring more “unqualified” people? · 2022-05-15T09:42:01.155Z · EA · GW

Are you sure that all the problems we’re facing are necessarily difficult in the sort of way a non-expert would be bad at? I don’t have the time right now to search through past bounties, but I remember that a number of them involved fairly simple testable theories which would simply take a lot of time and effort, but not expertise.

Comment by Yitz on EA and the current funding situation · 2022-05-11T13:16:20.078Z · EA · GW

That’s a fair point, thank you for bringing that up :)

Comment by Yitz on EA and the current funding situation · 2022-05-11T09:03:57.593Z · EA · GW

How bad is it to fund someone untrustworthy? Obviously, if they take the money and run, that would be a total loss, but I doubt that’s a particularly common occurrence (you can only do it once, and it would completely shatter your social reputation, so even unethical people don’t tend to do that). A more common failure mode would seem to be apathy, where once funded, not much gets done, because the person doesn’t really care about the problem. However, if something gets done instead of nothing at all, then that would probably be a (fairly weak) net positive. The reason this is normally negative is that the money is then not being used in a more cost-effective manner, but if our primary problem is spending enough money in the first place, that may not be much of an issue at all.

Comment by Yitz on The Mystery of the Cuban missile crisis · 2022-05-06T11:57:27.834Z · EA · GW

Thanks for the excellent analysis! It’s notable that, if your theory is correct, it wasn’t a single person here making an irrational decision, but two entire command structures being so blinded by emotional thinking that nobody even thought to suggest that Cuba wouldn’t change anything in terms of defensive/offensive capabilities.

Comment by Yitz on Has anyone actually talked to conservatives* about EA? · 2022-05-06T11:09:50.388Z · EA · GW

I’m curious if you have any friends who identify as “far right” or “alt-right”—do their views on EA substantially differ?

Comment by Yitz on The AI Messiah · 2022-05-06T10:57:44.081Z · EA · GW

I’m curious where exactly you see your opinions differing here. Is it just how much to trust the inside vs. outside view, or something else?

Comment by Yitz on A tale of 2.75 orthogonality theses · 2022-05-02T08:31:46.808Z · EA · GW

As a singular data point, I’ll submit that until reading this article, I was under the impression that the Orthogonality thesis was the main reason why researchers are concerned.

Comment by Yitz on Joseph Lemien's Shortform · 2022-05-01T21:15:45.173Z · EA · GW

Please do! I'd absolutely love to read that :)

Comment by Yitz on The team at EA for Jews is growing — apply now or refer others! · 2022-05-01T21:09:34.091Z · EA · GW

This is hilarious; I was literally thinking yesterday that we should be reaching out to the Orthodox/Modern Orthodox Jewish community, and was going to write a post on that today! Happy to know this already exists :)

May I ask what your long-term plans are?

Comment by Yitz on Yitz's Shortform · 2022-04-29T22:38:43.810Z · EA · GW

I need to book plane tickets for EAGx Prague before they get prohibitively expensive, but I’ve never done this before and haven’t been able to get myself to actually go through the process for some reason. Any advice for what to do when you get “stuck” on something that you know will be pretty easy once you actually do it?

Comment by Yitz on What would you like to see Giving What We Can write about? · 2022-04-29T05:51:37.708Z · EA · GW

I’d be interested in reading about the impact of artistic careers!

Comment by Yitz on Help us make civilizational refuges happen · 2022-04-13T18:47:54.462Z · EA · GW

Quick note that I misread "refuges" as "refugees," and got really confused. In case anyone else made the same mistake, this post is talking about bunkers, not immigrants ;)

Comment by Yitz on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T18:25:28.653Z · EA · GW

Very strongly agree with you here. I also agree that the positives tend to outweigh the negatives, and I hope that this leads to more careful, but not less, giving.

Comment by Yitz on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T18:24:21.239Z · EA · GW

+1 here as well; a frugality option would be an amazing thing to normalize, especially if we can get it going as a thing beyond the world of EA (which may be possible if we get some good reporting on it).

Comment by Yitz on Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it? · 2022-04-11T11:19:45.573Z · EA · GW

I think “unfunded ideas” would be a great title for a tag!

Comment by Yitz on Liars · 2022-04-06T19:34:11.594Z · EA · GW

I think I would actually be for this, as long as the resolution criteria can be made clear and, at least in the beginning, it is only open to people who already have a large online presence.

One potential issue is that if the resolution criteria are worded the wrong way (perhaps something like "there will be at least one news article which mentions negative allegations against person X"), it may encourage unethical people to purposely spread false negative allegations in order to game the market. The resolution criteria would therefore have to be thought through very carefully so that sort of thing doesn't happen.

Comment by Yitz on Issues with centralised grantmaking · 2022-04-04T22:06:53.282Z · EA · GW

Posted on my shortform, but thought it’s worth putting here as well, given that I was inspired by this post to write it:

Thinking about what I’d do as a grantmaker that others wouldn’t do. One course of action I’d strongly consider is to reach out to my non-EA friends—most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high-value, and live around the world—and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently hold. I’d expect some of them to be interested (though some would decline), and they’d likely be coming from a very different angle than most people in this space. This may not be the most efficient use of money, but making use of my peculiar/unique network of friends is something only I can do, and it may be of value.

Comment by Yitz on Yitz's Shortform · 2022-04-04T22:03:53.617Z · EA · GW

Thinking about what I’d do as a grantmaker that others wouldn’t do (inspired by https://forum.effectivealtruism.org/posts/AvwgADnkdxynknYRR/issues-with-centralised-grantmaking). One course of action I’d strongly consider is to reach out to my non-EA friends—most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high-value, and live around the world—and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently hold. I’d expect some of them to be interested (though some would decline), and they’d likely be coming from a very different angle than most people in this space. This may not be the most efficient use of money, but making use of my peculiar/unique network of friends is something only I can do, and it may be of value.