Comments
Agreed. Worth looking into alternatives that would be good for everyone.
Low-hanging fruit sounds like a good alternative :p now I’m picturing the minks chilling under a bunch of trees, picking and eating ‘til they get their fill. Sadly, that’s very far from the situation…
Is the hypothesis that the minks are more likely to have caught it from encounters with wild birds, than the raw poultry that you mention they were fed on?
Perhaps a ban on feeding them poultry would be tractable/helpful if not?
This should be possible by adding noindex meta tags. That would indicate to search engines that a page shouldn’t appear in their results. They don’t have to honor that, but the major ones do, which is probably all we’d care about. I’m not sure how quickly stuff that is already in their index would be removed but there might be a way to manually trigger that.
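In case a concrete illustration helps, the directive itself is just a single tag in each page's <head>. A minimal sketch in TypeScript, purely illustrative (in practice the tag would be rendered server-side in the forum's templates rather than injected from client code):

```typescript
// Minimal, illustrative sketch: add a noindex directive to the current page.
// Compliant crawlers (including the major search engines) will then drop the
// page from their results, though they aren't strictly required to.
const meta = document.createElement("meta");
meta.name = "robots";
meta.content = "noindex";
document.head.appendChild(meta);
```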
I like the idea, but it probably wouldn’t/shouldn’t change how much one should self-censor based on the possibility of things being quoted out of context by journalists. Any journalist worth their salt would have no trouble coming here to use the forum search, or creating an account.
Awesome! Let's keep in touch and when you guys are up and running we can provide you a proper welcome :)
Thank you for the feedback! I agree that it's not the best one I've reposted. I haven't had much time for digging in the LW archives lately though and I came across it, and it actually helped me make some concrete improvements to productivity, so I thought it could possibly help others. I am realizing now that I may be more excited about the productivity-hack genre than most, so I will keep that in mind moving forward.
If you haven't checked out the ~30 earlier reposts, you can find them by clicking on the tag. I would be surprised if you didn't find that stuff higher quality as they are mostly older and higher karma. Feel free to tag your own reposts as well. I think it would be great to have a collection of stuff that doesn't just reflect my tastes/interests, and I am not sure how frequently I will be able to keep posting at this point.
Fixed thanks!
I have been using Focusmate* for a while now, and this post helped me realize that one of my biggest failure modes with it was not having sessions set up to begin each day. I would use it a lot for a while, then get out of the habit; my productivity would gradually start to suffer, and it would take me some time to notice and get back into it.
Now I book my first session of each day up to two weeks out. It's flexible in that I can always cancel if something comes up (though I usually won't), and it's elastic in that even if I bail on it for the better part of a day, my first session for the next day is already set up by default. Usually that's enough to get me on track and keep me there.
*A service that matches you with video co-working partners for accountability. More about it and EA group details here.
Otherwise, they will be effectively alone in the middle of nowhere, totally dependent on the internet to exchange and verify ideas with other EA-minded people (and all the risks entailed by filtering most of your human connection through the internet).
The first part of this sentence seems fine to me, and living in the country can be isolating and is not for everyone, but just because there aren't other EAs around, doesn't mean you have to get all your human connection through the internet. Having interests and relationships outside of EA/AI Safety circles is probably beneficial for mental health.
FWIW I live in Vermont, on the border of NH and it’s about 2 hours to Boston. Not sure where this group house is, but Burlington is 90 minutes from here on the opposite side of the state, so 3.5 hours from Boston - not that it couldn’t take longer if you aren’t near a highway.
I know of 2 or 3 (not sure if one of them is still there) people in Burlington working on AI safety and there are ~8 of us Vermont EAs that have been getting together sporadically for the last year or so. Would love to expand that group if anyone wants to get in touch. Don’t be a stranger :)
Thank you!
Is it not possible that, if it became public knowledge that grants were insured against clawback, lawyers would try harder to get them? If the money is already spent and it’s a bunch of broke individuals, it may not be worth the expense of trying to claw it back. I guess that would just be something Bill would have to account for.
I agree with most of this - clusters probably not very accurate, divisive religious terminology, him identifying with one of the camps while describing them.
Can you elaborate a bit more on why you think binary labels are harmful for further progress? Would you say they always are? How much of your objection here is these particular labels and how Scott defines them, and how much of it is that you don't think the shape can be usefully divided into two clusters?
I find that, on topics that I understand well, I often object intuitively to labels on the grounds that they aren't very accurate, or don't describe enough nuance, but for topics I am not expert on, I sometimes find it useful to be able to gesture at the general shape of things.
I guess I'm still interested in possible paths to understanding AI risk that don't require accepting some of the "weirder" arguments up front, but might eventually get you there.
Agreed. I find Kevin to be an excellent communicator on the subject. There are a few other posts on the forum with podcasts and videos featuring him, easily found by searching his name, for those who are interested in further content.
I noticed when I was setting up the recurring donations, that it is preselected to donate $15 to Every.org. If you don't want to do that, you have to slide the slider to the left, down to $0.
Also, if you don't want to continue with the recurring donations after the match, be sure to set a reminder to disable it in December.
@WilliamKiely, you might want to consider adding either of these points to the original post.
I agree, but I think it has to be a consideration when trying to market something widely these days. That said, my general impression is that it's less of an issue with food than in other areas.
I am rooting for this so hard (for selfish as much as altruistic reasons). I passed it on to the only person I know who funds alt-protein stuff, though I imagine he would have seen it anyway. I am not sure what else I can do to help. If you ever need small-scale help with a website (i.e. "we need to update this info quickly" or "add this small feature," but probably not "we need to design a whole website"), DM me and I will make it happen.
I do also really like the idea that this is going for more highbrow, rather than fast food, which is so crowded with alt-protein options these days (don't need another realistic burger or chicken nuggets, thanks).
And offsets too FWIW. Something about avoiding doing something bad makes me feel like a good person, in a way that doing something bad and then making up for it by doing something good just doesn't.
I'm not sure if the two events are just too far apart in time, or if my EA/rational side just kicks in and I can't feel good about donating to offset a particular thing instead of to the most effective thing. Or maybe I just can't emotionally get over my sense of "can't undo the bad thing".
The second paragraph really hits the nail on the head for how I feel, something I had never been able to put into words, regarding both eating fewer animal products and recycling.
I noticed a typo in the transcript that is pretty confusing. Probably important to fix, since this article is used in several AI alignment curricula.
"power is useful for loss of objectives"
should be
"power is useful for lots of objectives"
Great to see someone else doing one of these :)
Thank you for the correction!
(I don’t think this is done already but downvote or comment if so)
Some optional/additional way to weight karma as a percentage of total users (active users? readers?), so that sorting all posts by karma doesn’t just put newer posts at the top, with older popular posts buried way down among newer, less-popular ones.
RSS feeds for tags (I'd be surprised if anyone else wants this, but maybe?)
Credit this post for bringing the LW post to my attention.
To answer my own question, in case someone ends up here in the future, wondering the same thing, there are some options to do this.
I dug a bit deeper and tried a few out. Updating the original post with details.
This is super cool, thanks! If I'm not mistaken, I don't see anything about excluding tags in there. That would probably be somewhat too lengthy for the query string anyway.
I realized my "posts below X karma" idea wasn't particularly coherent actually, because every post starts with low karma, so, depending on how often my reader checks (and cache length) it would potentially just show all posts.
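In case it's useful to anyone, here's roughly what filtering on the reader's side could look like. Just a sketch using the rss-parser npm package; the excluded tag slug is a placeholder, and I haven't checked whether the forum's feed actually exposes tags as RSS categories:

```typescript
// Hypothetical sketch: fetch an RSS feed and drop any items tagged with
// something in EXCLUDED_TAGS. Assumes tags appear as <category> entries.
import Parser from "rss-parser";

const EXCLUDED_TAGS = ["community"]; // placeholder tag slug
const parser = new Parser();

async function filteredItems(feedUrl: string) {
  const feed = await parser.parseURL(feedUrl);
  return (feed.items ?? []).filter(
    (item) => !(item.categories ?? []).some((tag) => EXCLUDED_TAGS.includes(tag))
  );
}
```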
LOL. I wonder how much of this is the red-teaming contest. While I see the value in it, the forum will be a lot more readable once that and the cause exploration contest are over.
Sorry for the delay. Yes this seems like the crux.
It would be very surprising if there weren’t others who are in a similar boat, except being somewhat more averse to longtermism and somewhat less appreciative of the rest of the EA, the balance swings the other way and they avoid the movement altogether.
As you pointed out, there's not much evidence either way. Your intuitions tell you that there must be a lot of these people, but mine say the opposite. If someone likes the Givewell recommendations, for example, but is averse to longtermism and less appreciative of the other aspects of EA, I don't see why they wouldn't just use Givewell for their charity recommendations and ignore the rest, rather than avoiding the movement altogether. If these people are indeed "less appreciative of the rest of EA", they don't seem likely to contribute much to a hypothetical EA sans longtermism either.
Further, it seems to me that renaming/dividing up the community is a huge endeavor, with lots of costs. Not the kind of thing one should undertake without pretty good evidence that it is going to be worth it.
One last point, for those of us who have bought in to the longtermist/x-risk stuff, there is the added benefit that many people who come to EA for effective giving, etc. (including many of the movement's founders) eventually do come around on those ideas. If you aren't convinced, you probably see that as somewhere on the scale of negative to neutral.
All that said, I don't see why your chapter at Microsoft has to have Effective Altruism in the name. It could just as easily be called Effective Giving if that's what you'd like it to focus on. It could emphasize that many of the arguments/evidence for it come from EA, but EA is something broader.
There’s not a way to filter stuff out of the RSS feed, is there? Doesn’t seem like it, but maybe I missed something.
I guess we can swap anecdotes. I came to EA for the Givewell top charities, a bit after that Vox article was written. It took me several years to come around on the longtermism/x-risk stuff, but I never felt duped or bait-and-switched. Cause neutrality is a super important part of EA to me and I think that naturally leads to exploring the weirder/more unconventional ideas.
Using terms like "dupe" and "bait and switch" also implies that something has been taken away, which is clearly not the case. There is a lot of longtermist/x-risk content these days, but there is still plenty going on with donations and global poverty. More money than ever is being moved to Givewell top charities (don't have the time to look it up, but I would be surprised if the same wasn't also true of EA animal welfare) and (from memory) the last EA survey showed a majority of EAs consider global health and wellbeing their top cause area.
I hadn't heard the "rounding error" comment before (and don't agree with it), but before I read the article, I was expecting that the author would have made that claim, and was a bit surprised he was just reporting having heard it from "multiple attendees" at EAG - no more context than that. The article gets more mileage out of that anonymous quote than really seems warranted - the whole thing left me with a bit of a clickbait-y/icky feeling. FWIW, the author also now says about it, "I was wrong, and I was wrong for a silly reason..."
In any case, I am glad your partner is happy with their charity contributions. If that's what they get out of EA, I wouldn't at all consider that being filtered out. Their donations are doing a lot of good! I think many come to EA and stop with that, and that's fine. Some, like me, may eventually come around on ideas they didn't initially find convincing. To me that seems like exactly how it should work.
That didn’t come off as clearly as I had hoped. What I meant was that maybe leading with x-risk will resonate with some people and longtermism with others. It seems worth having separate groups that focus on each, to appeal to both types of people.
This sounds like a great idea. Maybe the answer to the pitching Longtermism or pitching x-risk question is both?
But I sometimes have a fear in the back of my mind that some of the attendees who are intrigued by these ideas are later going to look up effective altruism, get the impression that the movement’s focus is just about existential risks these days, and feel duped. Since EA pitches don’t usually start with longtermist ideas, it can feel like a bait and switch.
Do you have any evidence that this is happening? Feeling duped just seems like a bit of a stretch here. Animal welfare and global health still make up a large part of the effectivealtruism.org home page. Givewell still ranks their top charities, and their funds raised more than doubled from 2020 to 2021.
My impression is that the movement does a pretty good job explaining that caring about the future doesn’t mean ignoring the present.
Assuming someone did feel duped, what is the range of possible outcomes?
- Perhaps they would get over it and read more about longtermisim and the feeling would subside.
- Perhaps they wouldn't and they'd just stick with whatever effective cause they would have otherwise donated to.
- Perhaps some combination of the two over time (this was pretty much my trajectory).
- Perhaps they'd think "These people are crazy, I'm gonna keep giving to PlayPumps"
Kidding aside, that last possibility seems the least likely to me, and anyone in that bucket seems like a pretty bad candidate for EA in general.
William_MacAskill's comment on the Scott Alexander post explains his rationale for leading with longtermism over x-risk. Pasting it below so people don't have to click.
It's a tough question, but leading with x-risk seems like it could turn off a lot of people who have (mostly rightly, up until recently) gotten into the habit of ignoring doomsayers. Longtermism comes at it from a more positive angle, which seems more inspiring to me (the x-risk framing feels more like it promotes acting out of fear). Longtermism is also just more interesting to me as an idea.
Is existential risk a more compelling intro meme than longtermism?
My main take is: What meme is good for which people is highly dependent on the person and the context (e.g., the best framing to use in a back-and-forth conversation may be different from one in a viral tweet). This favours diversity; having a toolkit of memes that we can use depending on what’s best in context.
I think it’s very hard to reason about which memes to promote, and easy to get it wrong from the armchair, for a bunch of reasons:
- It’s inherently unpredictable which memes do well.
- It’s incredibly context-dependent. To figure this out, the main thing is just about gathering lots of (qualitative and quantitative) data from the demographic you’re interacting with. The memes that resonate most with Ezra Klein podcast listeners are very different from those that resonate most with Tyler Cowen podcast listeners, even though their listeners are very similar people compared to the wider world. And even with respect to one idea, subtly different framings can have radically different audience reactions. (cf. “We care about future generations” vs “We care about the unborn.”)
- People vary a lot. Even within very similar demographics, some people can love one message while other people hate it.
- “Curse of knowledge” - when you’re really deep down the rabbit hole in a set of ideas, it’s really hard to imagine what it’s like being first exposed to those ideas.
Then, at least when we’re comparing (weak) longtermism with existential risk, it’s not obvious which resonates better in general. (If anything, it seems to me that (weak) longtermism does better.) A few reasons:
First, message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public, and AI risk doesn’t do great. The three best-performing messages they tested were:
- “The current pandemic has shown that unforeseen events can have a devastating effect. It is imperative that we prepare both for pandemics and other risks which could threaten humanity's long-term future.”
- “In any year, the risk from any given threat might be small - but the odds that your children or grandchildren will face one of them is uncomfortably high.”
- “It is important to ensure a good future not only for our children's children, but also the children of their children.”
So people actually quite like messages that are about unspecified, and not necessarily high-probability, threats to the (albeit nearer-term) future.
As terms to describe risk, “global catastrophic risk” and “long-term risk” did the best, coming out a fair amount better than “existential risk”.
They didn’t test a message about AI risk specifically. The related thing was how much the government should prepare for different risks (pandemics, nuclear, etc), and AI came out worst among about 10 (but I don’t think that tells us very much).
Second, most media reception of WWOTF has been pretty positive so far. This is based mainly on early reviews (esp trade reviews), podcast and journalistic interviews, and the recent profiles (although the New Yorker profile was mixed). Though there definitely has been some pushback (especially on Twitter), I think it’s overall been dwarfed by positive articles. And the pushback I have gotten is on the Elon endorsement, association between EA and billionaires, and on standard objections to utilitarianism — less so to the idea of longtermism itself.
Third, anecdotally at least, a lot of people just hate the idea of AI risk (cf Twitter), thinking of it as a tech bro issue, or doomsday cultism. This has been coming up in the twitter response to WWOTF, too, even though existential risk from AI takeover is only a small part of the book. And this is important, because I’d think that the median view among people working on x-risk (including me) is that the large majority of the risk comes from AI rather than bio or other sources. So “holy shit, x-risk” is mainly, “holy shit, AI risk”.
Do neartermists and longtermists agree on what’s best to do?
Here I want to say: maybe. (I personally don’t think so, but YMMV.) But even if you do believe that, I think that’s a very fragile state of affairs, which could easily change as more money and attention flows into x-risk work, or if our evidence changes, and I don’t want to place a lot of weight on it. (I do strongly believe that global catastrophic risk is enormously important even in the near term, and a sane world would be doing far, far better on it, even if everyone only cared about the next 20 years.)
More generally, I get nervous about any plan that isn’t about promoting what we fundamentally believe or care about (or a weaker version of what we fundamentally believe or care about, which is “on track” to the things we do fundamentally believe or care about).
What I mean by “promoting what we fundamentally believe or care about”:
- Promoting goals rather than means. This means that (i) if the environment changes (e.g. some new transformative tech comes along, or the political environment changes dramatically, like war breaks out) or (ii) if our knowledge changes (e.g. about the time until transformative AIs, or about what actions to take), then we’ll take different means to pursue our goals. I think this is particularly important for something like AI, but also true more generally.
- Promoting the ideas that you believe most robustly - i.e. that you think you are least likely to change in the coming 10 years. Ideally these things aren’t highly conjunctive or relying on speculative premises. This makes it less likely that you will realise that you’ve been wasting your time or done active harm by promoting wrong ideas in ten years’ time. (Of course, this will vary from person to person. I think that (weak) longtermism is really robustly true and neglected, and I feel bullish about promoting it. For others, the thing that might feel really robustly true is “TAI is a BFD and we’re not thinking about it enough” - I suspect that many people feel they more robustly believe this than longtermism.)
Examples of people promoting means rather than goals, and this going wrong:
- “Eat less meat because it’s good for your health” -> people (potentially) eat less beef and more chicken.
- “Stop nuclear power” (in the 70s) -> environmentalists hate nuclear power, even though it’s one of the best bits of clean tech we have.
Examples of how this could go wrong by promoting “holy shit x-risk”:
- We miss out on non-x-risk ways of promoting a good long-run future:
- E.g. the risk that we solve the alignment problem but AI is used to lock in highly suboptimal values. (Personally, I think a large % of future expected value is lost in this way.)
- We highlight the importance of AI to people who are not longtermist. They realise how transformatively good it could be for them and for the present generation (a digital immortality of bliss!) if AI is aligned, and they think the risk of misalignment is small compared to the benefits. They become AI-accelerationists (a common view among Silicon Valley types).
- AI progress slows considerably in the next 10 years, and actually near-term x-risk doesn’t seem so high. Rather than doing whatever the next-best longtermist thing is, the people who came in via “holy shit x-risk” just do whatever instead, and the people who promoted the “holy shit x-risk” meme get a bad reputation.
So, overall my take is:
- “Existential risk” and “longtermism” are both important ideas that deserve greater recognition in the world.
- My inclination is to prefer promoting “longtermism” because that’s closer to what I fundamentally believe (in the sense I explain above), and it’s nonobvious to me which plays better PR-wise, and it’s probably highly context-dependent.
- Let’s try promoting them both, and see how they each catch on.
Ah thanks. I should remember to check for that.
Good things to consider for sure. Everything here seems to imply that figuring out how many hours you can be productive in a day is very important, so you can work towards being at your desk for that amount of time.
1a seems true, but with essential stuff like eating, showering, etc., it seems unlikely that I, personally, would ever face a true deficit of this kind of time.
1b Again, without some truly absurd level of reduction in chores (live-in housekeeper or something), basic cleaning up after myself and taking care of myself seems to offer plenty of this recharge time for me.
Concerning Uber vs. public transportation, the most value I think I’ve gotten out of choosing the former is usually when it will allow me to get more sleep and be more productive the following day. Otherwise, I find there are often many ways to be productive on public transportation.
Some examples of resources that have inspired people to get involved in effective (but don’t necessarily represent its current form) include:
From the "What resources have inspired people to get involved with effective altruism in the past?" FAQ, I think the above is missing the word "altruism." It seems like it should be "...get involved in effective altruism (but don’t..."
I think this is put very eloquently in the "What is the definition of effective altruism?" FAQ below "Effective altruism, defined in this way, doesn’t say anything about how much someone should give. What matters is that they use the time and money they want to give as effectively as possible."
You can apply effective altruism no matter how much you want to focus on doing good – what matters is that, no matter how much you want to give, your efforts are driven by the four values above, and you try to make your efforts as effective as possible.
Typically, this involves trying to identify big and neglected global problems, the most effective solutions to those problems, and ways you can contribute to those solutions, using whatever you’re willing to give. By doing this, it’s possible for anyone to do far more to help others.
The use of the word "give" in these two paragraphs makes me worry people will interpret it as exclusively giving money. In the first paragraph, you've also gotten a little far down the page from the four values by this point. Perhaps this could be simplified to "...no matter how much you contribute, you try to make your efforts as effective as possible."
And in the second paragraph,
"...using whatever resources (time, money, etc.) you are willing to give"
This doesn't seem like it should be a bullet point (maybe just a sentence that follows) since it is not a way people apply the ideas in their lives.
Donating to carefully chosen charities, such as by using research from GiveWell or Giving What We Can.
Would "using research from GiveWell or taking the Giving What We Can pledge." make more sense? Does GWWC do their own charity research?
Choosing careers that help tackle pressing problems, such as by using advice from 80,000 Hours, or by finding ways to use their existing skills to contribute to these problems.
This sentence doesn't quite make sense to me. When I get to "such as", I am expecting an example of a career to follow. Maybe "...such as those recommended by 80,000 Hours..."
"or by finding ways to use their existing skills..." doesn't seem to quite work either. Why isn't someone who chooses a career based on 80k advice also using their existing skills?
Lastly, I think you mean " to contribute to solutions to these problems", obviously not contribute to the problems themselves :)
The scientific method is based on simple ideas – e.g. that you should test your beliefs – but it leads to a radically different picture of the world (e.g. quantum mechanics).
It's not really clear what "different" refers to here. Different than what?
an entirely vegan burger that tastes like meat, and is now sold in Burger King.
I love that Impossible Burgers exist and that they reduce meat consumption. I even think they taste fine, but they do not taste much like meat to me. I am sure they do to some people, but I would say this is not a fact and should not be stated like it's a fact. It might seem like a small point, but when I imagine being introduced to new ideas by an article like this, small details that seem wrong to me can really reduce how credible I find the rest of it. I think something as simple as "approaches the taste and texture of meat" would resolve my issue with it.
founded The Center for Human-Compatible AI at UC Berkeley. This research institute aims to develop a new paradigm of “human-compatible” AI development.
Repeating "human-compatible" feels a bit weird/redundant here.
Unable to govern beings with capabilities far greater than our own...
Referring to an AI system as a "being" might be a bit alienating or confusing to people coming to EA for the first time, and the background discussion/explanation seems a bit out of scope for an intro article.
...nothing to rule out a disease that’s more infectious than the omicron variant, but that’s as deadly as smallpox.
Minor point, but using the omicron variant as an example might seem dated once we get to the next variant or two in 6 months or a year. Perhaps measles would be a better choice?
Effective altruism was formalized by scholars at Oxford University...
My understanding is that EA's origins are a bit broader than just Oxford, and this sort of gives the impression that they aren't. It also might be off-putting to some, depending on their views of academia (though others might be impressed by it). The word "formalized" gives the impression that things are a bit more set in stone than they are, which feels a bit contradictory to the truthseeking/updating-beliefs stuff.