Thanks, I found this moving. I associate stories like yours with the urge to become stronger/smarter/richer so I can make the suffering stop. Oh, and it reminds me of the story in Harry Potter and the Methods of Rationality where Harry almost decides to sacrifice his life to destroy Azkaban.
Thanks for the update, this sounds pretty reasonable to me. I can well imagine that this will even increase your legibility, as written materials are easier to skim for the information one is looking for.
That makes a lot of sense to me, especially the points about how little time this might take and that there is no conflict with preferring the discussion to be public. Thanks!
I might be a little less worried about the time delay of the response. I'd be surprised if fewer than, say, 80% of the people who find this very concerning end up also reading the response from ACE. I'd be more worried if this were a case where most people form a quick negative association and don't follow up later when it all turns out to be more or less benign.
Because I'm worried that this post could hurt my future ability to get a job in EAA, I'm choosing to remain anonymous.
I personally would also find it emotionally draining to criticize possible employers and would understand if one decides against contacting them privately. Not saying this happened here, but it's another seemingly valid reason I'd want to keep in mind.
Your question reads a bit like you disapprove of the author posting it without doing this. I agree that people criticizing an org should strongly consider contacting the org before their public criticism. But I think there are reasons to not contact an org before, besides urgency, e.g. lacking time, or predicting that private communication will not be productive enough to spend the little time we have at our disposal. So I currently think we should approve if people bring up the energy to voice honest concerns even if they don’t completely follow the ideal playbook. What do you, or others think?
Good point. I suppose I could end up being more optimistic because
some politicians might think supporting it will, all in all, still make it more likely for them to win office
they might not believe that too many people would take part in this, so they could win relatively cheap virtue points
they might just be convinced that this is a great idea and are open to testing it out with voters
no idea if true, but I imagine many politicians also don’t have too close relationships with a significant proportion of their (seasonal?) campaign staff and have enough slack cutting other things if necessary? Or to rely more on volunteers?
Probably it would help if you could find ways for the politicians to reap as much positive public recognition from this as possible, e.g. trying to place things like "Voters of both Richard Roe and Jane Doe donated $30,000 as part of the One America Charity Campaign" in the local news. Maybe also by letting them recommend a charity they'd like to be associated with.
Another thought: I guess you might face less opposition in areas where campaigning is less professionalized and less connected to the respective party's campaign apparatus, which I guess will not like this idea (assuming it exists).
Thanks, I enjoyed reading this! I read The Selfish Gene some years ago and your post made me realize that my mind hasn't yet settled on how to think about all this.
One thought that came up was that we might want to distinguish between evolutionary processes and genes? This is related to the saying "Don't hate the player, hate the game", except that the players/the genes are not even real agents with intentions, as you argued. And furthermore we maybe shouldn't even lay blame on evolution, as it's just a non-agentic dynamic that probably sprang to life randomly at some point.
Thanks, yes, that seems much more relevant. The cases in that paper feel slightly different in that I expect AI and ML to currently be much more "open" fields where I expect orders of magnitude more paths of ideas that can lead towards transformative AI than
paths of ideas leading to higher transistor counts on a CPU (hmm, because it's a relatively narrow technology confronting physical extremes?)
paths of ideas leading to higher crop yields (because evolution already invested a lot of work in optimizing energy conversion?)
paths of ideas leading to decreased mortality of specific diseases (because this is about interventions in extremely complex biochemical pathways that are still not well understood?)
Maybe I could empirically ground my impression of "openness" by looking at the breadth of cited papers at top ML conferences, indicating how highly branched the paths of ideas currently are compared to other fields? And maybe I could look at the diversity of PIs/institutions of the papers that report new state-of-the-art results in prominent benchmarks, which indicates how easy it is to come into the field and have very good new ideas?
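One toy way to make that second metric concrete (entirely my own sketch, not something from the report; the lab names and data are made up for illustration): given a list of the institutions behind new state-of-the-art results, compute the Shannon entropy of the institution distribution. Higher entropy means the results are spread across more groups, suggesting a more "open" field.

```python
import math
from collections import Counter

def diversity_entropy(institutions):
    """Shannon entropy (in bits) of the institution distribution:
    higher means SOTA results are spread across more groups."""
    counts = Counter(institutions)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical lists of institutions behind new SOTA results in two fields:
concentrated = ["LabA"] * 8 + ["LabB"] * 2          # two labs dominate
spread_out = ["LabA", "LabB", "LabC", "LabD", "LabE"] * 2  # five labs, equal share

print(diversity_entropy(concentrated))  # ≈ 0.72 bits
print(diversity_entropy(spread_out))    # ≈ 2.32 bits (log2 of 5)
```

One could run this over, say, each year's benchmark leaders to see whether entry into the field is getting easier or harder over time.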
Nice, I think developing a deeper understanding here seems pretty useful, especially as I don't think the EA community can just copy the best hiring practices of existing institutions, due to a lack of shared goals (e.g. most big tech firms) or suboptimal hiring practices (e.g. non-profits & most? places in academia).
I'm really interested in the relation between the increasing number of AI researchers and the associated rate of new ideas in AI. I'm not really sure how to think about this yet and would be interested in your (or anybody's) thoughts. Some initial thoughts:
If the distribution of rates of ideas over all people that could do AI research is really heavy-tailed, and the people with the highest rates of ideas would've worked on AI even before the funding started to increase, maybe one would expect less of an increase in the rate of ideas (ignoring that more funding will make those researchers also more productive).
my vague intuition here is that the distribution is not extremely heavy-tailed (e.g. the top 1% researchers with the most ideas contribute maybe 10% of all ideas?) and that more funding will capture many AI researchers that will end up landing in the top 10% quantile (e.g. every doubling of AI researchers will replace 2% of the top 10%?)
I'm not sure which distribution in your report, if any, relates to the distribution of rates of ideas over all people who can do AI research. Number of papers written over the whole career might fit best, right? (see table extracted from your report)
[Table from the report: "Share of the total held by the top ..." for papers written by scientist (whole career), Sinatra et al. 2016]
Looking for co-founders for a corporate campaigning org:
Assuming an average person can can a can of leftover food within a minute, if every company allowed each employee to can excess canteen food for only 15 minutes after lunch over a 30-year career, each person could easily can 80,000Cans within their lifetime.
My reservation is around the idea of keeping my identity small, as Jonas suggested in his post. I feel 5/5 as a member of the EA community; I'm just worried that prominently giving myself tags like "I am an Effective Altruist", "Feminist", "German", "Man", "Vegan", etc. comes with baggage that will constrain my thinking and behavior without many benefits, compared to saying "I am part of the EA community, I come from Germany, my diet is vegan, I care about XYZ".
I added the reading lists of 18 people to the database, some of whom joined in the last week, some of whom for some reason didn't yield reading lists in the first runthrough. I think this didn't change much, except that one of those 18 people already read The Scout Mindset, and now there's another Eragon book in the Lowest Rated list...
I also uploaded the code and csv file if anybody else wants to play around with it: https://github.com/MaxRae/EAGoodreads
Glad y'all found this interesting! :)
ETA: If you want to look at the first version for some reason, it's archived here.
Thanks for writing this, I think this topic is worthy of more discussion.
Of course, this does not consider important tradeoffs, such as the potential for alienating other audiences. This will therefore be most useful to people whose primary audience is progressives.
I wonder how much we should even recommend leaning into the progressive/social justice framing when the audience primarily comes from this ideological bent.
I often find talk about privilege unproductive and used in a hostile/shaming way, and I feel mixed about suggesting that this is part of the motivation of EA (which I prefer to see as something like "we share the desire to help others and improve the world as much as possible") and about bringing more people with that mindset into EA
people who are not of the social justice bent might be especially worth attracting in situations where progressives are the main audience, in order to gain intellectual diversity
If I'd read this testimonial on the local EA website, there'd be a solid chance I'd have been significantly less interested, because it doesn't connect to my altruistic motivations and (in my head) strongly signals a political ideology.
For me, taking the Giving What We Can pledge was an expression of my commitment to using my class privilege to contribute to a movement towards a more equitable world for current and future generations.
I think some points you mention, like highlighting more strongly that aid recipients' feedback is taken into account, don't risk turning off non-social-justice people while still connecting to their motivations and worries, so maybe I'd like to see more of that kind of framing.
While browsing types of uncertainties, I stumbled upon the idea of state space uncertainty and conscious unawareness, which sounds similar to your explanation of cluelessness and which might be another helpful angle for people with a more Bayesian perspective.
There are, in the real world, unforeseen contingencies: eventualities that even the educated decision maker will fail to foresee. For instance, the recent tsunami and subsequent nuclear meltdown in Japan are events that most agents would have omitted from their decision models. If a decision maker is aware of the possibility that they may not be aware of all relevant contingencies—a state that Walker and Dietz (2011) call ‘conscious unawareness’ —then they face state space uncertainty.
I think most Utilitarians typically don't care about the extinction of some species per se, but more about something like how it affects the total amount of good and bad experiences. From that perspective, some billion years of continued animal life on Earth is probably far less exciting given that one species, humanity, is probably headed to become or give rise to a spacefaring civilization with the potential to vastly exceed any Utopian imaginations. Additionally, given that animals in nature probably live far less enjoyable lives than most people imagine, I personally don't feel good about the idea of dragging out the current state of nature for longer than necessary.
Cool angle and thought experiment, it makes this all a bit more concrete. My timelines for transformative AI were already close to what you're handwaving at, but I'm really happy to see more thought in this direction and hope that this inspires more people to take soon-ish transformative AI even more seriously.
From my inside view it feels pretty unsettling to think about the changes I expect to happen in the coming decade or two. I wonder if you think the EA community is too slow to update their strategies here. It feels like what is coming is easily among the most difficult things humanity ever has to get right and we could be doing much more if we all took current TAI forecasts more into account.
Thanks! I relate to the last paragraph, it felt motivating and important to think about a positive vision that we’re working for. I think your essay is a really nice introduction for that.
As a side note, I really like Scott Alexander's Archipelago of Civilized Communities, a Utopian-ish vision where communities can freely form on their own islands but people are always free to leave. It's probably only one level above the "my house is made of banana pizza" brain-expansion level of monkey Utopias, but it could be a great first step!
Really enjoyed reading your report, I feel very motivated to organize such a tournament at our next local group unconference in Germany and think that your report and resources will make it more likely that I’ll go through with it. What the heck, let’s say 60% that I’ll organize it in the next unconference, conditional on me participating and nobody else bringing it up before me!
The EA forum is Serious Business!! Yeah, your thinking here seems pretty reasonable, I also can relate to the felt asymmetry between positive and negative karma. I think I previously noticed current karma points somehow feeding into my upvote decisions and it kinda felt like I don’t approve of it because I thought the ideal would be an independent vote of usefulness or something like that. But I also think that this is not a big factor and it doesn’t have a large impact here.
On a meta note, I wonder if it's a bad idea to think in terms of "How much total karma should this comment have?", instead of treating it like a vote where each person only reacts according to how valuable they personally found the comment. With the former approach, other people might be inclined to use their strong up- or downvotes to counteract this strategy, because they think the vote should represent what "the people" individually think rather than what a single high-karma user thinks the correct number of points should be.
Another thought that came to mind:
As the canonical "echo chamber" reading list of EA books currently seems to consist of maybe on the order of 50 books, I might be less worried about this, because 50 pop-sci books are not that many. This should especially hold for people who read a lot, who will relatively quickly have to explore outside the canon. E.g. this seems to be true for Michael already, and after roughly 6 years in EA I have also covered a considerable fraction of the canon and read a bunch outside of it. This is also my impression from following roughly twenty EAs on Goodreads. And for people who don't read so much, it could be fine to just read what the busy readers recommend?
The worry about EAs reading too much of the same ideas is a good point. I wonder if there are strategies that could help us as a community to explore more literature. For example somebody could scrape the reading lists from members of the EA goodreads group and create an exploration reading list with the books that many people have on their reading list but haven't actually read. Or maybe a reading list with non-fiction books that are suspiciously lacking from EA reading lists.
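A minimal sketch of that "many have it on their list, few have read it" idea (the data format and all book titles here are made up for illustration; a real version would pull shelves from Goodreads exports, like the csv file linked above):

```python
from collections import Counter

def exploration_candidates(shelves, min_want=2):
    """Given each member's 'read' and 'to-read' shelves, return books
    that several members want to read but nobody has actually read,
    sorted by how widely wanted they are."""
    want = Counter()
    read = Counter()
    for member in shelves:
        for book in member.get("to-read", []):
            want[book] += 1
        for book in member.get("read", []):
            read[book] += 1
    candidates = [b for b, n in want.items() if n >= min_want and read[b] == 0]
    return sorted(candidates, key=lambda b: -want[b])

# Toy example with three members' shelves:
shelves = [
    {"read": ["The Scout Mindset"],
     "to-read": ["Seeing Like a State", "The Dictator's Handbook"]},
    {"read": ["Superintelligence"],
     "to-read": ["Seeing Like a State"]},
    {"read": ["Seeing Like a State"],
     "to-read": ["The Dictator's Handbook"]},
]
print(exploration_candidates(shelves))  # → ["The Dictator's Handbook"]
```

"Seeing Like a State" is excluded here because one member already read it; the idea is that the surviving candidates are exactly the books the community keeps meaning to get to but hasn't.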
I think I was among the first three votes and upvoted, so there seems to have been one big downvote, or maybe a bug, because when I upvoted it didn't have negative karma and now again it also doesn't (with 4 votes).
Great idea. What did you think about the idea to somehow streamline a process to share that Google Doc with others who might have something to say? A process that might require relatively little effort would be asking people in those forms "Would you be interested in receiving career plans from other people that are looking for feedback?". That might make it relatively effortless for people from a particular field, e.g. Cognitive Science in my case, to be matched to other people who might have valuable feedback.
It might be a bit effortful to match people, though I suppose you have information about their general field, and that might already suffice? Or you might worry that people will receive unhelpful feedback and that this might reflect badly on you? Though I suppose you could emphasize that the people with whom you'd share the Google Doc are not vetted at all and are only fellow 80,000 Hours fans who clicked on "I'd be down to look over other people's career plans".
Interesting thought, it seems plausible to me that something like that could in principle become a problem. Some more thoughts that come up:
it seems like rather low-hanging fruit to first connect with as many people as possible who share your goals
shouldn't we be able to tell if there are specific groups of people whose perspective might be lacking in EA? I feel like I've seen this discussed before regarding conservatives and people from specific countries like China.
you seem to be thinking mostly about certain groups of professionals - I suppose this should be relatively easy to spot, and I also wonder if someone knows of plausible examples of professions whose thinking about the world might be lacking in EA
I'm maybe also less worried because EAs generally seem pretty open-minded and willing to explore unrelated communities and are intrigued by people with different opinions
I could also imagine that many EAs in the past put in an effort to reach out to other groups and were generally disappointed, because the combination of epistemic care and deliberative altruistic ambition seems really rare, and there are many more ways for a community to fool itself if it is not populated by scientifically literate people
I can second feeling pretty heavy-hearted after my rejection, and I really like the idea of vetting a crowd of volunteers. A similar idea would be to offer rejected people the option to share the info from their form, plus maybe their most important questions, with people who have agreed to maybe take a look, e.g. via the EA Hub, where you could also filter for relevant background. Or alternatively in a private group like "AI Safety Career Discussion". I'm one of the shy people who would probably never do something like that themselves, but if it were an "official" thing recommended by 80,000 Hours, it would feel somehow much less scary.
I like it too, great idea, and the execution also looks pretty solid! I searched "Germany" and got two forecasts related to Germany in spots #1 and #2, and then almost 20 seemingly unrelated forecasts, all from Hypermind, which might suggest something fishy going on.
Cool, thanks for shining a light on this! My local chapter is currently also trying to get started with some career support, and we also struggled with figuring out how best to do this. I think we can all relate to not feeling knowledgeable or competent enough to give significant career advice ourselves. Instead, we will now host a career co-working session every six weeks. The idea is to give people space and an occasion to work on whatever they think is most useful, with people around who can help out with ideas and feedback.
And yes, thanks, the point about thinking with trendlines in mind is really good.
Maybe those two developments could be relevant:
a bigger number of recent ML/CogSci/computational neuroscience graduates who academically grew up in times of noticeable AI progress and with much more widespread aspirations to build AGI than the previous generation
related to my question about non-academic open-source projects: If there is a certain level of computation necessary to solve interesting general reasoning gridworld problems with new algorithms, then we might unlock a lot of work in the coming years
Thanks! :) I find Grace's paper a little unsatisfying. From the outside, fields like SAT, factoring, scheduling and linear optimization seem only weakly analogous to the field of developing general thinking capabilities. It seems to me that the former is about hundreds of researchers going very deep into very specific problems and optimizing a ton to produce slightly more elegant and optimal solutions, whereas the latter is more about smart and creative "pioneers" having new insights into how to frame the problem correctly and finding relatively simple new architectures that make a lot of progress.
What would be more informative for me?
by the above logic, maybe I would focus more on the progress of younger fields within computer science
also maybe there is a way to measure how "random" practitioners perceive the field to be - maybe just asking them how surprised they are by recent breakthroughs is a solid measure of how many other potential breakthroughs are still out there
also I'd be interested in solidifying my very rough impression that breakthroughs like transformers or GANs are relatively simple algorithms in comparison with breakthroughs in other areas of computer science
evolution's algorithmic progress would maybe also be informative to me, i.e. how much trial and error was roughly invested to make specific jumps
e.g. I'm reading Pearl's The Book of Why, and he makes a tentative claim that counterfactual reasoning appeared at some point, and the first sign of it we can find is the lion-man from roughly 40,000 years ago
though of course evolution did not aim at general intelligence, e.g. saying "evolution took hundreds of millions of years to develop an AGI" in this context seems disanalogous
how big a fraction of human cognition do we actually need for TAI? E.g. we might save about an order of magnitude by ditching vision and focusing on language?
Could you expand on what you mean by "Maybe we could have „Pledge“ badges"? E.g., where are you envisioning those badges being displayed?
I thought about people's forum accounts. There are also the EA Hub accounts, but I basically never open mine; not sure about others. I'd probably do it similarly to Wikipedia (e.g. here), with a small icon for the pledge that shows "GivingWhatWeCan member since April 2nd, 2020" on hover. I hadn't thought about other uses, e.g. being helpful for a person deciding on a donation. I like that idea! One worry that comes up is that it could get a bit cluttered. Also, something in me feels a bit awkward about proudly displaying something, like I could become the target of the bullies of my high school for seeming "too cool". The GWWC pledge is already so socially accepted as something cool that I don't feel this in that case.
One idea that just came to me is making it easier to reap status benefits from the GWWC giving pledge. E.g. I feel kind of proud seeing my name on this huge numbered list and being among the first ten thousand people to sign. Relatedly, subreddits and Wikipedia projects seem to actively use badges of honor to acknowledge things like being a donor, having helped with some task, etc. Maybe we could have "Pledge" badges.
Another idea: getting access to people one holds in high regard could also be something to think about. One could promote speakers coming to local groups, or generally promote networking within the community more.
Another thought that came up: not being chosen for 80,000 Hours' career coaching felt like a symptom of my relatively low value to the community (not saying there was room for improvement in how they communicated that; it was years ago). I imagine it feels similar for some others. Maybe having motivated volunteers take up the rejected applicants would be a cheap way to signal "there are people in the community who value you being here and trying to work out an EA career path"?
I think [the risk of letting single AI systems control essential products like the internet or electrical grids] is a fairly predictable problem that normal mechanisms will handle, though, especially given widespread mistrust of AI, and skepticism about its robustness.
I was wondering if this neglects the risk of some agents unilaterally using AI systems to control those services; e.g. we might worry about narrow AI finding ways to manipulate stock markets, which (speaking as someone with zero knowledge) naively doesn't seem easily fixed with existing mechanisms. E.g. the flash crash of 2010 seems like evidence of this fragility:
New regulations put in place following the 2010 flash crash proved to be inadequate to protect investors in the August 24, 2015, flash crash — "when the price of many ETFs appeared to come unhinged from their underlying value" — and ETFs were subsequently put under greater scrutiny by regulators and investors.