Posts

What is estimational programming? Squiggle in context 2022-08-12T18:01:43.662Z
Projects for EA historians 2022-06-07T14:48:55.367Z
Abundance and scarcity; working forwards and working backwards 2022-02-18T19:38:19.653Z
Movie review: Don't Look Up 2021-12-25T06:57:44.821Z
Linkpost for "Organizations vs. Getting Stuff Done" and discussion of Zvi's post about SFF and the S-process (or; Doing Actual Thing) 2021-12-15T14:16:34.448Z
quinn's Shortform 2021-08-03T19:40:34.156Z
Cliffnotes to Craft of Research parts I, II, and III 2021-06-26T14:04:35.916Z
High Impact Careers in Formal Verification: Artificial Intelligence 2021-06-05T14:45:20.343Z
EA Philly's Infodemics Event Part 2: Aviv Ovadya 2021-05-22T18:29:51.953Z
EA Philly's Infodemics Event Part 1: Jeremy Blackburn 2021-01-28T13:08:03.733Z
What would an EA do in the french revolution? 2021-01-07T13:10:45.794Z
What would an EA do in the american revolution? 2021-01-07T12:58:48.993Z
A small pile of thoughts on psychology of entrepreneurship 2020-12-23T18:33:19.173Z
My upcoming CEEALAR stay 2020-12-14T06:23:04.709Z

Comments

Comment by quinn on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T01:38:04.067Z · EA · GW

There's another whole category of discussion that you didn't mention, one that strongly touches on the class and opulence topics even though those come up later in the EA onboarding funnel than intro fellowships: lefties are quite likely to experience cognitive dissonance at conferences with catering staff, coworking spaces with staff, the however-many-star hotel in the bahamas, etc., because lefties tend to see the world through the lens of the staff and not through the lens of the customers. This seems to have a lot to do with lefties having worked those jobs themselves.

Heck, I can't go into a restaurant I used to run deliveries for (now that I'm in IT and am a customer) without at least a quarter of a moral crisis, without obsessing over trying to be the least bad customer in the joint, etc. I know the sexy thing in EA right now is emphatically not people who see the world that way; we're going after optimal careerists from the upper classes and so on. But in the case of lefty retention in particular (insofar as that even matters), this is a critical consideration.

Comment by quinn on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T01:25:21.616Z · EA · GW

Thank you for writing this post that I disliked. It is, for the record, well written. 

I'm sure I'll fail to dig up links, but I've seen intralefty dissent about the "bodies" thing, so I'm really not inclined to try to be more appealing to someone who really likes the word "bodies". 

You wrote

go nuts in the comments

And the nuts I'd like to go today consist of my pet theory that language is the number one ailment of the left, and the very last lever we should try in increasing retention (which, by the way, I don't inherently endorse. I'm not google or amazon; I don't think volume of customers is the most important thing for EA, and it certainly may not be the best thing for EA, but that is a totally different rant).

The crux about the "ways of knowing/being" and "lived experience" is actually standpoint epistemology (SE). It seems like not a small literature; I've just figured EA has enough philosophy majors that one of them can handle it, which is one reason I haven't gotten around to doing a writeup on it for the forum myself (the enterprising reader will notice that there's still a couple months left of the redteaming contest, nudge nudge). The way I tend to break it down when I'm in your shoes is to literally ask "when do you think SE would outperform expertise from someone who's trying really hard but doesn't have skin in the game?", then use cancer to show SE's limitations (we do not trust cancer patients more than cancer specialists when we're reasoning about what causes tumors to grow or shrink), then just say "I believe that such n such cause area is a region of the world where SE can be outperformed by so n so experts". And I'd rather identify cruxes, agree to disagree, and go our separate ways than hand out speech welfare to SE. A rather strong version of the claim is that even racism, a domain that tends to make SE look its best, doesn't inherently require you to have skin in the game if you want to come up with ways to fix it, but I don't think a properly EA rejection of SE is necessarily an endorsement of this strong version.

A rather uncharitable take is that after several years in the left I decided that they were mostly defined by an arms race to see who could complain most poetically, and had very little to do with fixing anything; there's an associated claim that memeplexes are upstream of behavior, that you can detect this tendency to merely complain in the vocabulistics. Now you might point out the good works in the community -- people earnestly looking for levers in the CJR space, or food not bombs volunteers -- but I would claim that if you look at their information diets, there's a great deal of zines and artworks that have this poeticism problem. My pet theory is that this information diet contributes to burnout, because it's all very "capitalism is unbelievably intractable => it's most likely better to minimize your impact on the world and not be categorical-imperativey about the petty crimes you commit, because such n such poem made the crime sexy". It's not conducive to any movement with any useful properties at all.

So no, my friend, it's going to be a hard pass from me on language as a lever to increase our lefty retention. They have systematic ambition-killing agitants in their memeplex, and the best thing we can do for them is help them out of that. 

Comment by quinn on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T00:47:04.990Z · EA · GW

The goal with roping in the anti-imperialist crowd should be "come for the deontological thing (i.e. justice or reparations), stay for the consequentialist thing (i.e. independent of history, looking forward, the next right thing to do is help)." The first part (where we essentially platform deontology) doesn't actually seem super dangerous to me, in the way that compromising on epistemics is dangerous. 

Comment by quinn on Effective Altruism for Muslims · 2022-08-12T18:37:44.330Z · EA · GW

To be fair, there are lots of types of people who reject comparing value of human life with value of animal life, spanning secular and religious backgrounds. I wouldn't have guessed there was a correlation between religiosity and that particular rejection. 

Comment by quinn on EA in the mainstream media: if you're not at the table, you're on the menu · 2022-07-31T16:28:42.781Z · EA · GW

(to save some folks some time clicking or reading) The article makes a case that EA goals are subject to the seductively evil drive to collect taxes for public goods funding, and I personally feel bad that I didn't predict that argument coming from this crowd once I saw that Sam, Dustin, and Cari were making specific political plays.

(excuse my US-centrism) Usually when I think about the EA republicans question, I think about nationalism, religiosity, some core "liberty is welfare and has good externalities" principles around homosexuality and freedom of movement, but this article updated me to also think about taxes (not that I think republicans are actually against taxation in any sense, but just that there's nonzero information in what they choose to write on the tin). 

Comment by quinn on EA in the mainstream media: if you're not at the table, you're on the menu · 2022-07-31T16:17:06.750Z · EA · GW

Thanks. I was inspired yesterday to do a point-by-point response to the piece. It feels a little "when you wrestle with a pig, you get muddy and the pig likes it", but, spoiler alert, I think there's nonzero worthy critique hiding in the bad writing.

Workers will rationalize high-paying jobs by giving most of their income away. Actually, when you work, you already give to society, but that is too complex for some to understand.

I think EAs live in the space between the extreme "capitalism is perfectly inefficient, such that a Wall Street compensation package is irrelevant to the (negligible) social value that a Wall Street worker produces" and the equally extreme "capitalism is perfectly efficient, such that a Wall Street compensation package is in direct proportion to the (evidentially high) social value that a Wall Street worker produces". Also, insofar as capitalism is designed and not emergent, is it really optimized for social value? It seems optimized for metrics which are proxies for social value, and very much subject to goodhart, but I'll stop before I start riots in every last history and economics department. Moreover, how about we want more number go up? If number go up is good, and working some arbitrary gig in fact makes number go up, then donating some of the proceeds will make number go up more, so E2G people are correct to do both!

Animal rights and veganism are big in the movement as well.

Sorry, this reads to me like applause lights for the "I hate those smug virtue signaling vegans because I love bacon" crowd. OP's thesis about EA doesn't really relate to our unusually high vegan population; they might as well have pointed out our unusually high queer or jewish or computer programmer population.

Yes, they direct money toward malaria nets and treatments for parasitic worms, but they also supply supplements for vitamin A deficiency, though genetically modified “golden” rice already provides vitamin A more effectively. Hmmm, seems like a move backward.

Sorry, one sec, I'm laughing a little at this "what have the romans ever done for us?" moment. "Yeah, besides the malaria nets and deworming, which I admit are a plus, what have the EAs ever done for the poor?" It's like monty python! Anyway, my friend, if you think golden rice is a neglected approach to vitamin A deficiency, are you rolling up your sleeves and advancing the argument? Do you even bother to cite evidence that it's more effective? "Hmmm, seems like a move backward" is a completely unjustified and frivolous sentence.

That’s a bit like closing the barn door after the horse has bolted.

EAs do not subscribe to the interpretation of the theory of random variables that you imply! We do not believe that random variables conserve a supply of events out in the universe of potentiality, such that an event of a particular class drains the supply of events of that class from the future. We instead believe that an event of a class occurring does not imply that there's less of that class of event available to occur in the future. In fact, if anything we believe the opposite: observing an event of a class should update us to think events of that class are more likely than we thought before we observed it! Moreover, EAs are widely on record advocating for pandemic preparedness well before covid.
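To make that updating claim concrete, here's a minimal sketch using Laplace's rule of succession (the numbers are toy values I picked, not a pandemic model): observing an occurrence raises, rather than lowers, the estimated probability of the next one.

```python
from fractions import Fraction

# Laplace's rule of succession: with a uniform Beta(1,1) prior over an event's
# per-period probability, seeing k occurrences in n periods gives a posterior
# probability of (k + 1) / (n + 2) for the period after that.
def prob_next(k: int, n: int) -> Fraction:
    return Fraction(k + 1, n + 2)

print(prob_next(0, 10))  # 1/12 -- ten quiet periods: low estimate
print(prob_next(1, 10))  # 1/6 -- one observed event RAISES the estimate; nothing got "used up"
```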

Partly as a result of his and his brother’s efforts, $30 billion for pandemic preparation was written into the Biden administration’s thankfully stalled Build Back Better porkfest.

From a writing style perspective, this is blatant applause lights for the tribe of those who think build back better is bad. 

Catch that? Someone else pays. Effective, but not exactly selfless. It’s the classic progressive playbook: Raise taxes to fund their pet projects but not yours or mine. I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.

Not wrong. Policy efforts inevitably lead to this line (from this crowd at least), unless they're, like, tax-cutting. Policy EAs are advancing a public goods argument. It opens us up to every lowering-my-taxes-is-ITN guy that every single public goods argument in the world is opened up to. I don't need to point out that OP surely has pet projects that they think ought to be funded, by taxes even, and I can omit conjectures about what they are and about how I personally feel about them. But this is a legitimate bit of information about EA policy efforts. (Obviously subject to framing devices: tax increments are sufficiently complex that a hostile reader would call something "increase by 0.75%" while another reader would say "pushing numbers around the page such that the 0.75% came from somewhere else so it's not a real increment" and neither would be strictly lying). 

And “effective” is in the eye of the beholder. Effective altruism proponent Steven Pinker said last year, “I don’t particularly think that combating artificial intelligence risk is an effective form of altruism.”

I'll omit what I actually think about Pinker, but in no world is this settled. Pinker is one guy, whom lots of people disagree with!

There are other critics. Development economist Lant Pritchett finds it “puzzling that people’s [sic] whose private fortunes are generated by non-linearity”—Facebook, Google and FTX can write code that scales to billions of users—“waste their time debating the best (cost-effective) linear way to give away their private fortunes.” He notes that “national development” and “high economic productivity” drive human well-being.

Seems valid to me. Nonlinear returns on philanthropy would be awesome, wouldn't they? It's sort of like how if a non-engineer says "wouldn't a heat-preserving engine be great?" we don't laud them as a visionary inventor: I don't expect OP to roll up their sleeves and start iterating on what that nonlinearly returning mechanism would look like. But that doesn't mean we shouldn't take a look ourselves.

There are only four things you can do with your money: spend it, pay taxes, give it away or invest it. Only the last drives productivity and helps society in the long term.

This should clearly be in our overton window about how to do the most good. It almost alludes to the excellent Hauke Hillebrandt essay, doesn't it?

Eric Hoffer wrote in 1967 of the U.S.: “What starts out here as a mass movement ends up as a racket, a cult, or a corporation.” That’s true even of allegedly altruistic ones.

This seems underjustified and not of much substance. I think what OP has portrayed may qualify as a racket to people of a particular persuasion regarding government spending, or as a cult to the "I intuitively dislike virtue signaling and smugness so I look for logical holes in anyone who tries to do good" crowd, but OP could have been more precise and explicit about which of those they think it is important to end on. But alas, when you're in a given memeplex that you know you share with your audience, you only have to handwave! lol


As Scott Alexander recently addressed, EAs are like a borg: we assimilate critics of any quality bar whatsoever. As much as we respect Zvi "guys, I keep telling you I'm not an EA" Mowshowitz's wishes to not carry a card with a lightbulb heart stamped on it, it's pretty hard not to think of him as an honorary member. My point is we really should consider borg-ing up the "taxation is theft" sort of arguments about public goods and the "investment beats aid" sort of arguments about raising global welfare.

Comment by quinn on What ‘equilibrium shifts’ would you like to see in EA? · 2022-07-26T18:22:56.179Z · EA · GW

the philly value prop

  • 2 hours from new york, a little over 2 from DC, something like 5-7 to boston depending on if you drive or amtrak. 
  • EA Philly's discord has about a hundred people
  • A wework cluster (spearheaded by rethink priorities) has a bunch of empty desks at the time of this writing! 
  • rent under a thousand is quite easy to find for a lot of different types of people and needs (I pay more than anyone in my house and I'm at like 537 lol)
  • Penn has a reasonable EA history, has hard coursework and some cool profs and students.
  • Adequate public transit

Not philly: 

  • Volume of entrepreneurial vibes is small. At a meetup you're more likely to run into a "since there's no free will we might as well all not try / rationality is a social club it's not about kicking ass and winning" guy than a "well I've been working on project xyz" or "my theory of change is abc" guy (by an astounding factor). 
  • Summers are too hot and humid, winters are too cold. lol. The no-complaints sweet spot doesn't actually feel too short, tbh; it's really only the worst part of the summer and the worst part of the winter that I can be caught whining about.
  • You should talk to the people who bailed from Penn EA about what they don't like about Philly

Reach out to me for a couch if you want to visit! 

Comment by quinn on GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly · 2022-07-20T19:24:28.328Z · EA · GW

 to make this no longer a red flag?

Some note to the effect that you've redteamed the strategy, planned for contingencies. I think if I had read a brief comment like 

I think you're following the way I set up the point about EV theories, what I meant really just had to do with risk tolerance, that I think the risk tolerance of the user base implies a more conservative approach. 

Comment by quinn on GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly · 2022-07-19T19:34:56.781Z · EA · GW

Cheers, chaps! Thanks for the update. I hope you're right and I hope you win.

If bonds stop generating yields, we’ll have to rethink our strategy.

Sorry about not sparing effort to sound nicer; I want to write this comment quickly. I think of markets (in the sense of "which things are generating how much yield?") as rather fickle, and if I were involved I wouldn't build any strategy at all around these short-term signals; my assessment of your circumstances is not one that would lead to me getting behind a myopic strategy. And I'll go one further: it's a bit of a red flag that y'all are willing to stake so much on fickle behaviors that you're observing in a notoriously fickle market.

The discussion here is a broad one. On the one hand, I don't have a good track record, in that I've never been super glad about worshiping at the altar of EV theory (the altar hasn't made me an OOM more money than I could make working hourly), which you can interpret as either lack of luck or lack of wisdom, and you don't want to listen to advice from people without a good track record. On the other hand, the nature of your product is specifically more involved in trust and stability, which comes with a kind of responsibility that makes the class of reasoning you want to do decidedly not the "EV theory YOLO" that characterizes, for example, SBF's journey from well-off jane street alum to $30b. The fact is that a person reasoning about the journey from $1 to $100/day has a qualitatively different EV theory than a person reasoning about the journey from $1000 to $10000/day, because logarithms. A claim I'm considering making is that the GLO team ought to be using the former EV theory -- because it is the one used by the users! -- even though most defi projects prefer the latter.
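A minimal sketch of the "because logarithms" point, with made-up numbers (mine, not GLO's): under log utility, a fair additive gamble looks sharply negative near subsistence income but nearly neutral at high income, which is why the user base's EV theory is the conservative one.

```python
import math

# Expected change in log utility from a fair coin flip that adds or
# removes `stake` dollars of daily income. Toy numbers, not a model of GLO.
def expected_log_change(income: float, stake: float = 50.0) -> float:
    up = math.log(income + stake) - math.log(income)
    down = math.log(income - stake) - math.log(income)
    return 0.5 * up + 0.5 * down

print(expected_log_change(100))   # ~ -0.144: strongly negative -- reject the gamble
print(expected_log_change(2000))  # ~ -0.0003: nearly risk-neutral -- "EV theory YOLO" territory
```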

I could be wrong here by misassessing how much the strategies you're building around short-term properties of market behavior really are short-term strategies; maybe they won't shoot you in the foot when the properties change and you have to reassess.

Comment by quinn on How do employers not affiliated with effective altruism regard experience at EA-affiliated organizations? · 2022-07-15T03:04:38.400Z · EA · GW

Epistemic status: pepperoni airplane; includes reasoning about my blase and bohemian risk tolerance and career path, which probably doesn't apply to e.g. people with responsibilities. I think it'd be really hard to proceed with this question in a non-anecdotal way, e.g. employers being cagey about the legal risk of explaining why they decline to hire someone is a barrier to creating a dataset.

I took a 6 month sabbatical at EA Hotel to do some AI safety things smack in the middle of what was supposed to be a burgeoning IT career. I received zero career advice telling me to leave that startup after a mere 8 months, but I'm good at independent study and I was finding that, in my case, the whole "real world jobs teach you more than textbooks" thing was a lie. So off to Blackpool I went. Here's the one consideration: I didn't feel like my AI Safety Camp Five project had freedom to be too mathy; I felt like I needed to make sure it had a github presence with nice-looking pull requests, because while I was earnestly attempting an interesting research problem, I was also partially optimizing for legible portfolio artifacts for when I'd end up back on the job hunt.

When I got home, I had a couple months left of the SERI internship, and toward the end of that I landed an interview at a consultancy for web3 projects (right groupchat, right time) and crushed it using some of my EA Hotel activities (the leader of the consultancy ended up mentioning reinforcement learning on sales calls because of my AI Safety Camp Five project, though no customers took him up on it). I kinda borked my SERI project and took a confidence hit as far as alignment or any kind of direct work was concerned, so retreating into E2G was the move: it was also great brain food and exposed me to generically kickass people. The point is that EA was not a negative signal; even a totally weird-sounding sabbatical at a hotel in a beach town scored no negative points in the eyes of this particular employer. The takeaway about my AI Safety Camp Five project is that you can optimize for things legible to normies while doing direct work.

If you have way less bohemian risk tolerance than me, then your EA activities will be way more legible and respectable than mine were at that time. 

It's kind of like what they tell people trying to break into IT from "nontraditional paths" -- the interview is all about spin, narrative, confidence. IT managers, in my experience (excuse another pepperoni airplane), can get a ton of useful information from stories about problem solving and conflict resolution that take place in restaurants or on film sets! Unless I'm deliberately making the least charitable caricature of HR, I assume that if, in an interview, you talked about some project you tried for a while with this social movement of philosophers trying to fix massive problems, you'd get a great response.

Comment by quinn on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T20:47:51.505Z · EA · GW

I have suggested we stop conflating positive and negative longtermism. I found, for instance, The Precipice hard to read because of the way Ord flipped back and forth between the two.

Comment by quinn on What is the top concept that all EAs should understand? · 2022-07-05T15:01:33.333Z · EA · GW

I've come, through the joking to serious pipeline, to telling people that EAs are just people who are really excited about multiplication, and who think multiplication is epistemically and morally sound. 

Comment by quinn on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-04T02:19:56.904Z · EA · GW

Seems like a win, curious to hear about involvement of people in our networks in making this happen. 

Comment by quinn on Future Fund June 2022 Update · 2022-07-03T13:57:44.600Z · EA · GW

and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)

I guess it's not realistic to litmus test individuals about their cold-emailing practices and their seriousness about the problem area they claim to be working in, before giving them access to the list.  

I would expect the cold emailing advice given by Y Combinator to result in emails that do not frustrate regrantors. 

Comment by quinn on Future Fund June 2022 Update · 2022-07-03T01:24:41.571Z · EA · GW

Is there a way to access a list of regrantors, maybe indexed by problem area? Any reason I can't just query "show me the email address of every FTX regrantor who is interested in epistemic institutions" for instance? 

Comment by quinn on quinn's Shortform · 2022-06-26T16:57:01.888Z · EA · GW

perfect, thanks! 

Comment by quinn on quinn's Shortform · 2022-06-26T16:56:41.851Z · EA · GW

right, yeah, I think it's a fairly common conclusion regarding a reference class like drugs and sex work, but for a reference class like murder and theft it's a much rarer (harder to defend) stance.

I don't know if it's on topic for the forum to dive into all of my credences over all the claims and hypotheses involved here, I just wanted to briefly leak a personal opinion or inclination in OP. 

Comment by quinn on quinn's Shortform · 2022-06-24T21:30:16.109Z · EA · GW

We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning. I expect EA's MCE projects to be less popular than anti-abortion in the US (37% say abortion ought to be illegal in all or most cases, while, for one example, veganism is at 6%). I guess the specifics of how the anti-abortion movement operated may be too in the weeds of contingent and peculiar pseudodemocracy, winning elections with less than half of the votes and securing judges and so on, but it seems like we don't want to miss out on studying this. There may be insights.

While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear out a case for principles over policy preference, keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally, about the playing fair and nice stuff. I guess it's a question of how much republicans expect to suffer from externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity. 

Moreover, I think this "colleagues as MCE activists" stuff is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication as well as we do, and are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed by pregnancies gone wrong, or to unclean black-market abortions, or whathaveyou. I may feel like I oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think reducing the incidence of disvaluable things by outlawing them is a reasonable lever), but things get interesting when I model them as understanding the tradeoffs they're making.

(To be clear, this isn't "EA writer, culturally coded as a democrat for whatever college/lgbt/atheist reasons, is using a derogatory word like 'thuggish' to describe the outgroup", I'm alluding to empirical claims about how the structure of the government interacts with population density to create minority rule, and making a moral judgment about the norm-dissolving they fell back on when obama appointed a judge.) 

Comment by quinn on Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism) · 2022-06-21T21:00:24.822Z · EA · GW

Bostrom's vulnerable world hypothesis paper seems to suggest that existential security (xsec) isn't going to happen, that we need a dual of the yudkowsky-moore law of mad science that raises our vigilance every timestep to keep up with the drops in minimal IQ it costs to destroy the world. A lifestyle of such constant vigilance seems leagues away from the goals that futurists tend to get excited about, like long reflections, spacefaring, or a comprehensive assault on suffering itself. Is xsec (in the sense of freedom from extinction being reliable and permanent enough to permit us to do common futurist goals) the kind of thing you would actually expect to see if you lived till the year 3000, 30000, or do you think the world would be in a state of constant vigilance (fear, paranoia) as a bargain for staying alive? What are the most compelling reasons to think that a strong form of xsec, one that doesn't depend on some positive rate of heightening vigilance in perpetuity, is worth thinking about at all? 

Comment by quinn on Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism) · 2022-06-21T20:45:06.929Z · EA · GW

Previous MCE projects like abolitionism, or liberal projects like extending suffrage to non-landowning non-whitemales, were fighting against the forcible removal of voice from people who had the ability to speak for themselves. Contemporary MCE projects like animals and future people do not share this property; I believe that animals cannot advocate for themselves, and the best proxy for future people's political interests I can think of falls really short. In this light, does it make any sense at all to say that there's a continuity of MCE activism across domains/problem areas?

I think it makes sense for, say, covid-era vaccine administrators to think of themselves as carrying on the legacy of the groups who put smallpox in the ground, but it may not make the same sense for longtermists to think of themselves as carrying on the legacy of slavery abolition just because both families of projects in some sense look like MCE. 

Related: does classifying abolitionism as an MCE project downplay the agency of the slaves and overemphasize the actions of non-enslaved altruists/activists?

In other words, contemporary MCE/liberalism may actually be agents fighting for patients, whereas prior MCE/liberalism was agents who happen to have political recognition fighting with agents who happen to lack recognition. Does this distinction hold water with respect to your research? 

Comment by quinn on quinn's Shortform · 2022-06-20T23:44:39.229Z · EA · GW

cool projects for evaluators

Find a nobel prizewinner and come up with a more accurate distribution of shapley points. 

The Norman Borlaug biography (the one by Leon Hesser) really drove home for me that, in this case, there was a whole squad behind the nobel prize, but only one guy got it. Tons of people moved through the rockefeller foundation and institutions in mexico to lay the groundwork for the green revolution; Borlaug was the real deal, but history should also appreciate his colleagues.

It'd be awesome if evaluators could study high impact projects and come up with shapley point allocations. It'd really outperform the simple prizes approach. 
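For concreteness, here's a minimal sketch of a shapley point allocation over a toy version of the green revolution story. The coalition values are numbers I made up, not historical estimates: B = Borlaug, R = the rockefeller foundation, M = the mexican institutions.

```python
from itertools import permutations
from math import factorial

# Made-up "impact" of each coalition; the grand coalition is worth far more
# than the sum of solo efforts, which is what makes attribution interesting.
v = {
    frozenset(): 0,
    frozenset("B"): 10, frozenset("R"): 2, frozenset("M"): 3,
    frozenset("BR"): 40, frozenset("BM"): 35, frozenset("RM"): 8,
    frozenset("BRM"): 100,
}

def shapley(players, v):
    # Average each player's marginal contribution over all orders in which
    # the coalition could have been assembled.
    points = {p: 0.0 for p in players}
    for order in permutations(players):
        built = frozenset()
        for p in order:
            points[p] += v[built | {p}] - v[built]
            built = built | {p}
    return {p: pts / factorial(len(players)) for p, pts in points.items()}

print(shapley("BRM", v))  # Borlaug's share is largest, but his colleagues get real credit
```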

Comment by quinn on "Long-Termism" vs. "Existential Risk" · 2022-06-20T12:16:08.308Z · EA · GW

In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.

Eliezer's underrated fun theory sequence tackles this.

Comment by quinn on Announcing a contest: EA Criticism and Red Teaming · 2022-06-20T03:01:03.784Z · EA · GW

One issue is that networked and connected people may have greater access to pre-publication criticism in the form of google doc comments, and getting google doc comments seems like a fairly robust strategy for improving the quality of an essay. If the best essays are simply the ones awarded, then we may ossify some dynamics around being networked and well connected, or fail to recognize people from outside our ingroup.

Comment by quinn on Critiques of EA that I want to read · 2022-06-20T02:31:02.776Z · EA · GW

Strong upvote. I'm a former leftist and I've got a soft spot for a few unique ideas in their memeplex. I read our leftist critics whenever I can because I want them to hit the quality bar I believe the ideas deserve, but they never do.

If anyone reading this knows leftist critics who you think have hit a reasonable quality bar, or if you want to coauthor a piece for the contest where we roleplay as leftists, DM me on the forum or otherwise hit me up.

Comment by quinn on Critiques of EA that I want to read · 2022-06-20T02:24:16.599Z · EA · GW

Yeah I'd be figuring out homotopy type theory and figuring out personal curiosities like pre-agriculture life or life in early cities, maybe also writing games. That's probably 15% of my list of things I'd do if it wasn't for all those pesky suffering lives or that annoying crap about the end of the world. 

Comment by quinn on Seven ways to become unstoppably agentic · 2022-06-19T17:07:20.807Z · EA · GW

Gorgeous post! Up next, I'd like to see a followup from an older person's perspective about preservation of agentiness, because I have a number of friends who started out very agenty then kinda tapered off by their mid 20s. 

Comment by quinn on quinn's Shortform · 2022-06-17T21:37:24.082Z · EA · GW

In negative longtermism, we sometimes invoke this concept of existential security (which I'll abbreviate to xsec): the idea that at some point the future is freed from xrisk, or that we have in some sense abolished the risk of extinction.

One premise for the current post is that, in a veil-of-ignorance sense, affluent and smart humans alive in the 21st century have duties/responsibilities/obligations (unless they're simply not altruistic at all) derived from Most Important Century arguments.

I think it's tempting to say that the duty -- the ask -- is to obtain existential security. But I think this is wildly too hard, and I'd like to propose a kind of different framing.

Xsec is a delusion

I don't think this goal is remotely obtainable. Rather, I think the law of mad science implies that either we'll obtain a commensurate rate of increase in vigilance or we'll die. "Security" implies that we (i.e. our descendants) can relax at some point (as the minimum IQ it takes to kill everyone drops further and further). I think this is delusional, and Bostrom says as much in the Vulnerable World Hypothesis (VWH).

I think the idea that we'd obtain xsec is unnecessarily utopian, and very misleading. 

Instead of xsec summed over the whole future, zero in on subsequent 1-3 generations, and pour your trust into induction

Obtaining xsec seems like something you don't just do for your grandkids, or for the 22nd century, but for all the centuries in the future. 

I think this is too tall an order. Instead of trying something that's too hard and that we're sure to fail at, we should initialize a class or order of protectors who zero in on getting their first 1-3 successor generations to make it.

In math/computing, we reason about infinite structures (like the whole numbers) by asking what we know about "the base case" (i.e., zero) and by asking what we know about constructions assuming we already know stuff about the ingredients to those constructors (i.e., we would like what we know about n to be transformed into knowledge about n+1). This is the way I'm thinking about how we can sort of obtain xsec, just not all at once. There are no actions we can take to directly obtain xsec for the 25th century, but if every generation 1. protects their own kids, grandkids, and great-grandkids, and 2. trains and incubates a protector order from among the peers of their kids, grandkids, and great-grandkids, then overall the 25th century is existentially secure.
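In the notation I'd use (mine, nothing standard): let S(n) say that generation n both survives and trains a protector order among its successors. The usual induction rule then delivers security for every generation, without any single generation acting on arbitrary centuries directly:

```latex
\[
  \big(\, S(0) \;\land\; \forall n\, \big( S(n) \rightarrow S(n+1) \big) \,\big)
  \;\Longrightarrow\; \forall n\, S(n)
\]
```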

Yes, the realities of value drift make it really hard to simply trust induction to work. But I think it's a much better bet than searching for actions you can take to directly impact arbitrary centuries. 

I think when scifis like dune or foundation reasoned about this, there was a sort of intergenerational lock-in -- people are born into this order, they have destinies and fates and so on -- whereas I think in real life people can opt in and opt out of it. (But I think the 0 IQ approach to this is to just have kids of your own and indoctrinate them, which may or may not even work.)

But overall, I think the argument that accumulating cultural wisdom among cosmopolitans, altruists, whomever is the best lever we have right now is very reasonable (especially if you take seriously the idea that we're in the alchemy era of longtermism). 

Comment by quinn on Penn EA Residency Takeaways · 2022-06-10T23:41:23.407Z · EA · GW

I see two open discussions here. 

One is brain drain, as you mentioned. I wrote a little about this in the "keeping one's eye on the ball" section of this comment. I think we should be reasoning about the bay as "look at all these magical things happening there" while constantly panicking about the opportunity cost of every day we're not creating more bays in other places.

Another has to do with 

one of us should have (a) tried to have weekly calls with the remaining Penn organizers, and (b) connected them with GCP.

And why Quinn M wasn't tapped, or myself. Is there a view formed that people in their later 20s out working aren't a super good fit for university work? Is there a view formed that top universities are culturally particular, and that people who weren't at top universities would screw it up? Things like this seem plausible to me, but I'm shooting in the dark. 

But moreover, I'm really glad to read your comment about weighing CB against anything else CBers could be doing. (I have concerns about the movement doing so much advocacy that we build out the wrong skillsets, and so on). 

Comment by quinn on How to kick-start 10 university groups in 6 months · 2022-06-10T16:30:42.722Z · EA · GW

Could you elaborate a bit on what happened to the group after that residency?

I'm not the guy to write a postmortem, as I wasn't involved, but the sense I get from talking to grad students is that 1. omicron suspended weekly dinners, 2. organizers got busy with other projects. This is not the canonical postmortem or explanation, but I believe is factually accurate and may provide more than zero information. 

Comment by quinn on Penn EA Residency Takeaways · 2022-06-10T01:32:00.352Z · EA · GW

So, I wasn't really a part of this, but based on the karma this post got, I think people might be interested in an (unofficial) update from a random philly guy.

I'm not aware of any evidence of Penn EA existing in 2022. My understanding is the weekly dinners got kneecapped by omicron, organizers got busy with other projects, and it never bounced back. I've put together a few events using campus facilities in the meantime, but haven't done a lot to interface with the student body.

Fall 2021 may be some sort of success story, but to what end? With what metrics do we evaluate university groups with respect to time?

I think institutions take a lot of grinding, persistence, and boring tasks that last way longer than any of the lessons they teach you. Intergenerational continuity is hard, what with people graduating and leaving all the time.

Storytime: when I got to the community college of philadelphia, the tutoring department didn't have anyone for courses harder than calc 1. After me and my peers climbed through the sequence without tutors, we said "that was hard; let's make it easier on the next cohort" and all joined the tutoring department. Then we scouted for and trained future tutors to replace ourselves, and moved on. (Note: it's a more complicated story, with covid kneecapping the continuity we worked to build, so some of us came back during covid, but the lesson of the story remains.)

This sort of "leadership must make themselves scarce; always be exiting" ethos is emphasized in some lefty activism I've done and literature I've read, but this comment is the first I'm aware of it being uttered on the EA forum. I don't think I need to argue against the virtues of centralization to make a case for distributing responsibility through the group such that when an organizer feels called to another project everything doesn't just stop in its tracks, but some intuition pumps like "eggs in one basket", "single point of failure", and "chokepoint" are important to keep on a post-it note somewhere in your mind, in case you'd like to act on the criticisms they imply.

Random spitballing: Isn't greek life very old? Did anyone study frats, sororities, or whatever culty stuff is going on with the exclusive clubs at harvard (as portrayed in the movie the social network)? They seem like they've built stable institutions and nailed intergenerational continuity!

Moving forward, I'm proposing that everyone involved in university groups make bets about the engagement they'll get 2, 5, and 10 semesters from now. (and remember, market manipulation and insider trading are virtuous, in this case!)

One more thing, above I wrote

I think institutions take a lot of grinding, persistence, boring tasks that last way longer than any of the lessons they teach you

and I think not only does it bear repeating, but it's important to highlight questions like "how many EAs can we afford to assign to CB logistics, asking them to build those skills, when we need people building different skills if we want to save the world?", "is it in an EA's interest to build these skills w/r/t opportunity cost?", etc.

Comment by quinn on How to kick-start 10 university groups in 6 months · 2022-06-10T00:48:59.530Z · EA · GW

So: groups need consistency over time to be successes, and I'm wary of these posts about airdropping people to kickstart things and then declaring victory. My wariness is probably because it appeared to work in my city for one semester, only for everything to be gone the subsequent semester.

I think declarations of victory are plausibly good on a cultural level, in spite of the fact that many people have to continue fighting onward, so I don't want to be a party pooper. But I think we should at least have social rewards for, or better yet just have norms that expect, people saying "this project had core goals xyz, which we're p% sure will have abc impact per year over the next 5 years; we also associate it with externalities or secondary goals pqr, which we're also excited about". In the case of this post, I would have liked to see predictions for 2024 engagement at each of the groups that got kickstarted (or let others register predictions, then insider trade and take their money).

Comment by quinn on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-09T01:54:37.875Z · EA · GW

It's plausible that this post's comments are not the optimal place for some of this. One may argue that each heading should be its own comment, but I'm slightly uncertain what the mods prefer.

Skilling up

“to come to the bay over the summer” and “learn stuff and skill up”

I don't think the meatspace value prop you outlined constitutes skilling up. That needs more justification. 

Aligning some social rewards with one's desire to believe true things or to help fix broken stuff seems plausibly critical for some people to keep their eye on the ball, or even to just take themselves seriously. Skilling up is not really related to this, except in the sense that you have to get whatever emotional structure that powers your motivation behind the grind. 

But I've been basically worried that the emphasis on giving students quickly legible levers early in their exposure to the movement is sort of looting scholarship from the effective altruism of the future. The way this post evoked "skilling up" to talk about what it was interested in talking about, which I see as boiling down to networking, really triggered me.

I'd like to register a vote for an extremely boring notion of skilling up, which is: yes, textbooks. Yes, exercises. No, nodding along to a talk 3-10x more advanced than you, following the high level but not able to reproduce it at a low level. No, figuring out how to pattern match enough to flatter senior researchers at parties. Yes (often, or usually), starting hard projects quickly, before an outside view would think you've done enough homework to be able to do them well -- but the emphasis is on hard. Gaining friends and status may even be distracting!

Obviously there's a debate to be had about a scholarship/entrepreneurship tradeoff -- my sense is that people think we used to swing too far to the scholarship side, and now there will probably be an overcorrection. (One should also register that few people think getting kickass at stuff looks like locking oneself in a monastery with textbooks, but a longer treatment of the nuances there is out of scope for this comment.)

But no, respectfully, I'm sorry but skilling up was not described in the post. Could you elaborate? 

I think in star wars 8 there was this "sacred texts" situation: you may recall that after listening to luke rant about the importance of the sacred texts, yoda challenged whether he had even read them. Luke says "well...", implying that he had flipped through them but wasn't intimately familiar with what they were saying. I'm personally bottlenecked in my current goals by not having knowledge that is sitting in books and papers I haven't read yet! Which says nothing of doing avant garde or state of the art things. I think this post risks entering a kind of genre of message for young EAs, which is "you get kickass at stuff by exposing yourself to people who are kicking ass", and I think that's auxiliary -- yes, you need to construct the emotional substructure that keeps your eye on the ball (which socialization helps with), and yes, you need advice from your elders from time to time -- but no, you get kickass at stuff by grinding.

Landscape knowledge

The bay is not necessary for landscape knowledge. 

I could’ve easily ended up digging into, say, MIRI’s research, only to realise very late that I actually think their approach is hopeless.

I (after giving alignment an 8 month college try) managed to form a map of, and opinions about, different threatmodels and research agendas online. I was at EA Hotel, but there was only one other alignment person there and he was doing his own thing, not really interfacing with the funded cells of the movement. The discord/slack clusters are really good! AI Safety Camp is great! One widely cited, high status alignment researcher responded to a cold email with a calendly link. The internet remains an awesome place.

Clout and reputation

I'm not gonna say clout and reputation are amoral distractions -- I think they're tools, instruments that help everyone do compression and filtering, oneself included. I roughly model grantmakers as smart enough to know that clout is at best a proxy for plausible future impact, so I'm not gonna come here and say status games are disaligning the movement from its goals.

But jan_kulveit is 1000% right and it warrants repeating: networking and kicking ass are different things. What do I think goodharting on clout looks like at a low level? It looks like the social rewards for simply agreeing with the ideology leaving people satisfied, so they end up not doing hard projects.

Keeping one's eye on the ball

I conjecture that some people 

  1. Need consistent social rewards for trying to believe true things or fix broken stuff in order to keep trying to do those things
  2. See superlinearly increasing gains in the extra bandwidth of meatspace, or find a ton of value in the information that medium-bandwidth spaces like discord delete. 

I think the bay can be counterfactually very good by increasing the impact of these people! 

But I want the Minneapolis EA chapter to be powerful enough to support people who fail the university entrance exam. I don't want to leave billion dollar bills (or a billion human lives) on the sidewalk because someone wasn't legible or well connected at the right time. Keeping all our eggs in one basket seems bad in a myriad of ways.

People who can keep their eye on the ball, and grow as asskickers, without the bay should probably resist the bay, so that we build multipolar power. One of the arguments for this that I have not advanced here has to do with correlated error: the idea that if we all lived together we might make homogenous mistakes. But perhaps that's another post.

Networks and caution

We should be cautious about allowing a set of standards to emerge where being good at parties (in the right city) correlates with generating opportunities. 

Comment by quinn on quinn's Shortform · 2022-06-08T21:05:45.350Z · EA · GW

Another CCing of something I said on discord to shortform

If I was in comms at Big EA, I think I'd just say "EAs are people who like to multiply stuff" and call it a day

I think the principle that is both 1. as small as possible and 2. is shared as widely between EAs as possible is just "multiplication is morally and epistemically sound". 

It just seems to me like the most upstream thing. 

That's the post. 

Comment by quinn on quinn's Shortform · 2022-06-08T21:00:25.433Z · EA · GW

Thanks to the discord squad (EA Corner) who helped with this. 

Casual, not-resolvable-by-bet prediction: 

Basically, EA is going to splinter into "trying to preserve permanent counterculture" and "institutionalizing".

I wrote yesterday about "the borg property", that we shift like the sands in response to arguments and evidence, which amounts to assimilating critics into our throngs.

As a premise, there exists a basic march of subcultures from counterculture to institution: abolitionists went from wildly unpopular to champions of commonsense morality over the course of some hundreds of years; I think feminism is reasonably institutionalized now but had countercultural roots, let's say 150 years. Drugs from weed to hallucinogens have counterculture roots, and are still a little counterculture, but may not always be. BLM has gotten way more popular over the last 10 years.

But the borg property seems to imply that we'll not ossify (into, begin metaphor torturing sequence: rocks) enough to follow that march, not entirely. Rocks turn into sand via erosion; we should expect bottlenecks to reverse erosion (sand turning into rocks), i.e. the constant shifting of the dunes with the wind.

Consequentialist cosmopolitans, rats, people who like to multiply stuff, and whomever else may have to rebrand if institutionalized EA got too hegemonic, and I've heard a claim that this is already happening in the "rats who aren't EAs" scene in the bay: there are ambitious rats who think the ivy league & congress strategy is a huge turn-off.

Of interest is the idea that we may live in a world where "serious careerists who agree with leadership about PR are the only people allowed in the moskovitz, tuna, sbf ecosystems", perhaps this is a cue from the koch or thiel ecosystems (perhaps not: I don't really know how they operate). Now the core branding of EA may align itself with that careerism ecosystem, or it may align itself with higher variance stuff. I'm uncertain what will happen, I only expect splintering not any proposition about who lands where. 

Expected and obligatory citation.

Ok, maybe a little resolvable by bet

A manifold market could look like "will there exist charities founded and/or staffed by people who were high-engagement EAs for a number of years before starting these projects, but are not endorsed by EA's billionaires". This may capture part of it. 

Comment by quinn on quinn's Shortform · 2022-06-08T02:41:24.226Z · EA · GW

As far as I know, this is the original attribution. 

Comment by quinn on quinn's Shortform · 2022-06-08T02:22:58.328Z · EA · GW

open problems in the law of mad science

The law of mad science (LOMS) states that the minimum IQ needed to destroy the world drops by k points every t years.

My sense from talking to my friend in biorisk and honing my views of algorithms and the GPU market is that it is wise to heed this worldview. It's sort of like the vulnerable world hypothesis (Bostrom 2017), but a bit stronger. VWH just asks "what if nukes but cost a dollar and fit in your pocket?", whereas LOMS goes all the way to "the price and size of nukes is in fact dropping".

I also think that the LOMS is vague and imprecise. 
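One hedged way to make it precise enough to interrogate (the notation is mine, nothing canonical): treat the law as a linear decay in the minimum world-ending IQ, with step size k and dropping time t measured from some reference year y_0:

```latex
\[
  \mathrm{IQ}_{\min}(y) \;=\; \mathrm{IQ}_{\min}(y_0) \;-\; k \cdot \frac{y - y_0}{t}
\]
```

The open problems below are then questions about whether k and t are constants at all, and if not, what they're functions of.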

I'm basically confused about a few obvious considerations that arise when you begin to take the LOMS seriously.

  1. Are k (step size) and t (dropping time) fixed from empiricism to extinction? That they are is about as plausible as P = NP; obviously Alhazen (or an xrisk community contemporaneous with Alhazen) didn't have to deal with the same step size and dropping time as Shannon (or an xrisk community contemporaneous with Shannon), but it needs to be argued.
  2. With or without a proof of 1's falseness, what are step size and dropping time a function of? What are changes in step size and dropping time a function of? 
  3. Assuming my intuition that the answer to 2 is mostly economic growth, what is a moral way to reason about the tradeoffs between lifting people out of poverty and making the LOMS worse? Does the LOMS invite the xrisk community to join the degrowth movement? 
  4. Is the LOMS sensitive to population size, or relative consumption of different proportions of the population? 
  5. For fun, can you write a coherent scifi about a civilization that abolished the LOMS somehow? (This seems to be what Ord's gesture at "existential security" entails.) How about merely reversing its direction, or mere mitigation?
  6. My first guess was that empiricism is the minimal civilizational capability that a planet-lifeform pair has to acquire before the LOMS kicks in. Is this true? Does it, in fact, kick in earlier or later? Is a statement of the form "the region between an industrial revolution and an information or atomic age is the pareto frontier of the prosperity/security tradeoff" on the table in any way?

While I'm not 100% sure there will be actionable insights downstream of these open problems, it's plausibly worth researching. 

Comment by quinn on The Strange Shortage of Moral Optimizers · 2022-06-07T16:49:27.400Z · EA · GW

(So long as this alternative movement has good epistemics and doesn’t seem likely to be positively counterproductive and bad for the world, that is!)

I think this parenthetical misses what is actually hard about forming solidarity out of good intentions, which is that disagreements may run so deep that cooperation feels mutually negative-sum, like the alt team necessarily has to lose in order for us to win. I'm not saying it's definitely like this, but it's kind of worstcase/securitarian thinking to prepare your model for that kind of scenario.

I have some anecdotes. 

  • A guy at my last job bounced off EA quickly because he didn't like a conversation he had with one of us about mental health. He felt like mental health was obviously the number one cause area, and thought the fact that for us it's only vaguely in the top 10-15, if that, was a signal that we were totally borked. I was gravely disappointed that he didn't reason more like "the reason they're not serious about mental health is that they haven't met me yet, I better post my arguments on the forum" or "wow, someone should really do something about that, it might as well be me" and found an org. I encouraged him to do both of these things, but that wasn't his mindset at all. I think this is what missed opportunities for alt-EA look like: people have their pet criticisms but fail to take themselves seriously.
  • I was talking with one of my oldest friends, not an EA whatsoever at this point (she eventually grokked the idea that 1 in 900 mosquito nets saves a life and signed up for the newsletter, still far from card-carrying, but this was prior to any of that anyway), about the popularity of climate change. It seems like few beliefs are more conventional right now than "climate change really bad", and I asked her why, anecdotally, every single person who's told me they don't want to have kids because of climate change (not because of the broader GCR conversation, but strictly because of climate change) was failing to do energy science or related engineering -- heck, I'd even settle for policy theories of change or serious activism. She said, and this is a point for intellectual diversity, because I don't think I would've encountered this if I only talked to EAs, "no, that's a militaristic 'draft' mindset. If everyone has to fight, then what is there left to fight for?", and broadly defended peoples' entitlement to believe there to be problems that they're not personally fixing. This, plausibly, explains a cluster of the memespace around what we interpret as missed opportunities to start alt-EA movements! Is the mentality of observing broken stuff and deciding to fix it unusually soldiery? Can we slip some cash to a viral marketing expert to instill that mentality in people, without associating it with EA? Is this plausibly an actual crux separating the alt-EAs we'd like to see from actually-existing critics?

One more comment:

Others may be broadly enthusiastic about the idea of Effective Altruism, but have some concerns about the current state of the movement as it actually stands. From here one might offer friendly/internal critiques of EA: “Here’s how you might do better by your own lights!” And my sense is that good-faith critiques of this sort tend to get a very positive reception on the EA forum.

I think EA has "a borg property": the Borg being the entity/civilization from Star Trek that could assimilate anything, and which expresses a fear of homogeny that some critics have called an affectation of the western side of the cold war. I think EA is nimble, a minimal set of premises that admits lots of different stuff and adapts, and I think it is genuine in its enjoyment of criticism. But this means that it literally eats everyone above a certain quality bar (which is good). There's an old saying, "who exactly is a rationalist? Simply someone who disagrees with Eliezer Yudkowsky", which I think sums up a lot about our culture.

The difficult thing about separating a critic (someone who helps you find a path through action space that deletes their complaint) from a complainer (someone who's the opposite of that) is that, while you have to protect your attention from complainers to a nontrivial degree, you may accidentally block a high-quality adversary, because what seems like a complaint may actually be a criticism that's just really, really hard to address, and you don't know the difference. Trashing your progress and going back to the drawing board is painful, and we should expect cognitive biases to make it feel even more unpleasant, or to tip the scale against doing it. "So you're saying I have to throw out bourgeois economics and arm the malaria patients so they can fight imperialism?" may appear to you like a hostile interaction while also being the critic's earnest attempt to help you be more morally correct with respect to their empirical beliefs.

We have, as a tradition, heuristics for honing our sense of whose epistemics we trust, whose beliefs are most true, and so on, but they're not infallible. This only gets worse when you remember that if you're serious about intellectual diversity, you have to actually tolerate very different norms. We can't stay in our comfort zone, discourse-norms-wise, even if we think our norms of discourse are superior, if we're serious about actual intellectual diversity. 

TL;DR: a tepid defense of admitting more things that seem like complaints into the Overton window of proper criticisms.

Comment by quinn on Projects for EA historians · 2022-06-07T15:45:59.118Z · EA · GW

Analyze community/movement/capacity-building in the Industrial Workers of the World in the early 20th century the way we analyze contemporary EA community/movement/capacity-building. 

Actionable insights I would expect: my understanding is that the IWW gets a great deal of Shapley points for the 8-hour workday. Setting aside any implicit claim that the 8-hour workday increased wellbeing (which would be another project for an evaluator/historian), we ought to marvel at this accomplishment, because it persisted in spite of the first Red Scare. I expect the calculus formed by their goals, accomplishments, and decline (along with other major actors in the story) holds lessons about how to make your goals persist even if your movement gets hunted down in the end. The movement/community/capacity-building connection is just that they sent unionists across the country to infiltrate workplaces, bringing information and resources to fellow workers (for us: students, universities, and altruists), because doing so was critical to their theory of change. The major difference is, by my understanding, that they used songs to recruit and educate, whereas our treasured solstice songs are more internal. 

Comment by quinn on Projects for EA historians · 2022-06-07T15:27:30.629Z · EA · GW

Who has agency in hingey moments? (see also my prior EAF questions here and here). 

Actionable insights I would expect: having agency over hinges is a grave privilege, a very sobering responsibility. How seriously should we take egalitarian arguments that abdicating this responsibility is actually morally superior to doing our best job with it? Are there heuristics that prior wielders of this privilege considered that would be good or bad for us to consider? 

Comment by quinn on Projects for EA historians · 2022-06-07T15:18:19.212Z · EA · GW

In terms of beating Jim Crow in the US and getting the black vote, can we assign Shapley points to armed and unarmed factions of the movement? My rough sense is that historians are ideologically or aesthetically divided here; people on both sides ("the armed wing gets more Shapley points" vs. "the unarmed wing gets more Shapley points") seem to have an axe to grind. I'm especially interested in how activists reasoned about coalitioning with people they disagreed with on this key point.

Actionable insights I would expect: if we hone our reasoning about social change and diversity of tactics, we can make much more intentional decisions about what ought to be in the community overton window and who to coalition with. 

More broadly, I think the black civil rights movement is great to study because they had goals that were more unpopular than EA goals, so they probably learned stuff that we could use if we ever became way more unpopular than we are now. 
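
As an aside, here's a minimal sketch of what assigning Shapley points would actually involve for two factions. The coalition values below are entirely made-up hypotheticals for illustration; the historian's hard part is estimating those values, not the averaging.

```python
from itertools import permutations

def shapley(players, value):
    """Average each player's marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical coalition values: each faction alone achieves
# partial success; together they succeed fully.
v = {frozenset(): 0.0,
     frozenset({"armed"}): 0.2,
     frozenset({"unarmed"}): 0.5,
     frozenset({"armed", "unarmed"}): 1.0}.get
print(shapley(["armed", "unarmed"], v))
# -> {'armed': 0.35, 'unarmed': 0.65}
```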

Comment by quinn on Projects for EA historians · 2022-06-07T14:52:31.905Z · EA · GW

Combine leftist accounts of the history of capitalism and labor-focused views with Muehlhauser's posts about the industrial revolution.

Actionable insights I would expect: if we better understand how enormous Schelling points that are difficult to opt out of (like capitalism) operate in hingey environments, we might have a fighting chance of calculating counterfactually useful/informative interventions the next time GWP 10x's quickly. 

Comment by quinn on Unflattering reasons why I'm attracted to EA · 2022-06-06T21:35:47.720Z · EA · GW

Thanks for posting. I endorse a subset of these; another subset is quite alien to me. 

I want to zero in on 

I feel guilty about my privilege in the world and I can use EA as a tool to relieve my guilt (and maintain my privilege)

Because I find it odd that you conflated relieving guilt and maintaining privilege into a single point. The idea that installing oneself as an altruist in a cruel system (economic, ecological, or otherwise) is a hedge against losing relative status or power within that system is a claim that needs to be justified.

As an example, surely many of us will have at least glanced at leftist comments to the effect that donating to AMF is a convenient smokescreen, keeping us blissfully ignorant of postcolonial mechanisms which are the true root cause of disvalue for the people AMF is (ostensibly) helping, and that if we were real altruists we would be anti-imperialism activists. These comments, at whatever level of quality we find them, often point at this very claim. 

Those of us who have taken substantial paycuts for (ostensibly) altruistic purposes may simply be trading cash for intra-community status. This observation can justify arguments that we're not genuine altruists (whatever that is), but it does not on its own point to a bid at maintaining privilege. 

Obviously Joe Ineffective Philanthropy Schmoe, who donates to the opera for tax breaks and PR, can be accused of using the polite fiction of philanthropy to shore up his privilege. If Joe is laundering money for the paperclip mafia by starting an alignment foundation (via some inscrutable mechanism), the accusation only gets stronger. 

But such a line of attack seems orthogonal to actually existing effective altruism. 

Moreover, I may be right about the orthogonality but wrong about the emotional substructure. The emotional substructure may not make 100% sense; it may be a voice that assimilates guilt about privilege into some monologue about how you're falling short of Franciscan altruism, or of some self-sacrifice-emphasizing notion of altruism. This, however, is I think a mistake, because having an emotional substructure of guilt may not relate at all to the merits of Franciscan altruism, or to the mechanisms by which philanthropy fails to think systemically, and so on. 

My two cents: guilt is a reasonable mechanism for drawing one's attention to the stakes and opportunities of one's privilege, but it is not "emotionally competitive" with responsibility. You, a member of the species that beat smallpox, are plausibly alive at a hinge of history. Who knows what levers are lying around under your nose. You, in a veil-of-ignorance sense, would prefer people of your privilege to at least try. There's a line in an old Jewish book about not being free to abandon it, nor obligated to complete it (where "it" is presumably the brokenness of the world, etc.), which is emotionally very effective for me.

Guilt seems to want to emphasize my feelings about the unjust (from a cosmopolitan point of view) situation we find ourselves in: my subjective state, my inner monologue. It seems indifferent to arguments that making myself suffer as much as the people I want to help may not actually help those people as much as possible. In other words, it is negative. Responsibility is positive: it asks "what actions can you take?" That is at least a reasonable place to start. 

Comment by quinn on quinn's Shortform · 2022-05-25T02:10:57.622Z · EA · GW

Stem cell slowdown and AI timelines

My knowledge of Christians and stem cell research in the US is very limited, but my understanding is that they accomplished real slowdown. 

Has anyone looked to that movement for lessons about AI? 

Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach? 

Comment by quinn on Valuing research works by eliciting comparisons from EA researchers · 2022-05-13T14:26:39.859Z · EA · GW

One particularly worrying difference in opinions is the difference in the range of values. Moorhouse’s range is 5.1 orders of magnitude, whereas Leech’s is 12.6 (the participants’ average is 7.6).

What about taking exp(normalize(log(x))), for some normalization function that behaves roughly like vector normalization? 
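
A minimal sketch of one way to cash that out, assuming base-10 logs (since the quote is about orders of magnitude) and a mean-preserving rescale of each participant's log-range to a common target. The function name, the 7.6 target, and the sample values are my own hypotheticals:

```python
import numpy as np

def normalize_log_range(values, target_oom=7.6):
    """Rescale valuations so their spread in log10 space matches a
    common target number of orders of magnitude, preserving the mean."""
    logs = np.log10(np.asarray(values, dtype=float))
    centered = logs - logs.mean()
    rescaled = centered * (target_oom / (logs.max() - logs.min()))
    return 10 ** (rescaled + logs.mean())

# hypothetical valuations spanning ~12.6 orders of magnitude
wide = [1e-3, 5.0, 2e4, 4e9]
print(normalize_log_range(wide))  # now spans 7.6 orders of magnitude
```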

Comment by quinn on The Vultures Are Circling · 2022-04-09T15:49:32.271Z · EA · GW

“You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!”

I can speak of one EA institution, which I will not name, that suffers from this. Math and cognitive science majors can get a little too far in EA circles just by mumbling something about AI safety, without delivering any actual interfacing with the literature or the community. 

So, thanks for posting. 

Comment by quinn on Sex work as part of mental health and wellbeing services · 2022-03-18T14:25:11.643Z · EA · GW

I am commenting to create public knowledge, in a form stronger than a mere upvote, that I think this post is on the right track, and that wellbeing increases from directly tackling loneliness, lack of affection, lack of validation, etc. ought to be a serious cause candidate. 

Comment by quinn on quinn's Shortform · 2022-03-18T13:29:13.960Z · EA · GW

idea: taboo "community building", say "capacity building" instead. 

https://en.wikipedia.org/wiki/Capacity_building 

Comment by quinn on Mediocre AI safety as existential risk · 2022-03-17T19:22:46.248Z · EA · GW

"At least existential"

Comment by quinn on EAGxBoston: Updates and Info from the Organizing Team · 2022-03-13T11:47:21.012Z · EA · GW

How do I get into the Groups slack?