I'm interested in this question. I've made the decision to quit most social media, and Instagram isn't very popular in my friend group anyway. But I live in Silicon Valley, and I know people's habits change fast. I really buy the thesis that there's a large demographic of people who'd be interested in EA but would be much more likely to hear about it if they could discover / remain engaged with it on Instagram.
Still, I'm not sure what it would look like. In my mind, EA messages do poorly when they're forced to fit into images. The arguments for EA don't need to be wrapped in dense philosophy papers, but given the difficulty of conveying the ideas with fidelity, I don't have high confidence that an Instagram page would do a good job of it.
Edit: Wait, I think I got the tone wrong here. I do want to state some level of skepticism coming in, but I'm genuinely really interested in responses to my skepticism and in ways of running a good EA-aligned Instagram account.
Hrm, yes. When I mark something as read it does not appear to affect the current view of the page. When I refresh the page the post now appears as read / removes itself from my recommendations. I can reproduce this on LessWrong and have let them know.
I want to write a post saying why Aaron and I* think the Forum is valuable, which technical features currently enable it to produce that value, and what other features I’m planning on building to achieve that value. However, I've wanted to write that post for a long time and the muse of public transparency and openness (you remember that one, right?) hasn't visited.
Here's a more mundane but still informative post, about how we relate to the codebase we forked off of. I promise the space metaphor is necessary. I don't know whether to apologize for it or hype it.
You can think of the LessWrong codebase as a planet-sized spaceship. They're traveling through the galaxy of forum-space, and we're a smaller spacecraft following along. We spend some energy following them, but benefit from their gravitational pull.
(The real-world correlate of their gravity pulling us along is that they make features which we benefit from.)
We have less developer-power than they do (1 dev vs 2.5-3.5, depending on how you count.) So they can move faster than we can, and generally go in directions we want to go. We can go further away from the LW planet-ship (by writing our own features), but this causes their gravitational pull to be weaker and we have to spend more fuel to keep up with them (more time adapting their changes for our codebase).
I view the best strategy as making features that LW also wants (moving both ships in directions I want), and then, when necessary, making changes that only I want.
One implication of this is that feature requests are more likely to be implemented, and implemented quickly, if they are compelling to both the EA Forum and LessWrong. These features keep the spaceships close together, helping them burn less fuel in the process.**
Good catch. One facet of the LessWrong feature is that as users pass a certain karma threshold they gain privileges. I believe that high-karma users such as Eliezer (on LW) or Peter Hurford (here) can moderate their own posts, even on the frontpage, while very low karma users may not be able to moderate even posts that remain on their personal blog. Given the EA Forum's differences in how we treat the personal blog / frontpage distinction, we may want to diverge from LW's feature set here.

I haven't touched it since I was initially setting up the Forum, and I'm not sure how I left it, or whether all of the features are there. Certainly we'd want to write up a user's guide for the feature.

I appreciate the comment. When we were setting up the Forum this wasn't a top priority, but very plausibly the landscape has changed. Without making a public commitment (😛), I wouldn't be surprised if that fix got prioritized – it does seem useful for encouraging people to post.
For the first he got this notable comment from OpenPhil's Lewis Bollard. An honorable mention goes to this post, which I also remembered, for doing good epistemic work fact-checking a commonly cited comparison.
What should my prior be about the likelihood of being at the hinge of history? I feel really interested in this question, but haven't even fully read the comments on the subject. TODO.
How much evidence do I have for the Yudkowsky-Bostrom framework? I'd like to get better at comparing the strength of an argument to the power of a study.
Suppose I think that this argument holds. Then it seems like I can make claims about AI occurring because I've thought about the prior that I have a lot of influence. I keep going back and forth about whether this is a valid move. I think it just is, but I assign some credence that I'd reject it if I thought more about it.
What should my estimate of the likelihood that we're at the HoH be if I'm 90% confident in the arguments presented in the post?
I believe #3 not showing up is due to that line having non-bold text on it (the footnote). This is kind of awkward, unexpected behavior – sorry about that. But I'm not sure what I'd rather the behavior be. The simple rule of "lines with only bold text are counted as h4, otherwise the line is treated as a paragraph" probably leads to less surprise than some attempt at a threshold.
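To illustrate, that simple rule could be sketched roughly like this (a simplified sketch of the idea, not the actual parsing logic in the codebase):

```python
import re

def treat_as_heading(line: str) -> bool:
    """Return True if a line consists solely of bold text,
    and so would be rendered as an h4 under the simple rule."""
    stripped = line.strip()
    # A line is "all bold" if the whole thing sits inside one
    # pair of ** markers; any text outside them disqualifies it.
    match = re.fullmatch(r"\*\*(.+?)\*\*", stripped)
    return match is not None and "**" not in match.group(1)

# "**Point three**"            -> counted as a heading
# "**Point three** (footnote)" -> treated as a plain paragraph
```

So a trailing non-bold fragment like a footnote marker is enough to demote the whole line back to paragraph status, which matches the surprise described above.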
Agree that there's a different incentive for cooperative writing than for clickbait-y news in particular. And I agree with your recommendations. That said, I think many community writers may undervalue making their content more goddamn readable. Scott Alexander is verbose and often spends paragraphs getting to the start of his point, but I end up with a better understanding of what he's saying by virtue of staying fully interested.
All in all though, I'd recommend people try to write like Paul Graham more than either Scott Alexander or an internal memo. He is in general more concise than Scott and more interesting than a memo.
Alright, the title sounds super conspiratorial, but I hope the content is just boring. Epistemic status: speculating, somewhat confident in the dynamic existing.
Climate science as published by the IPCC tends to
1) Be pretty rigorous
2) Not spend much effort on the tail risks
I have a model that they do this because of their incentives for what they're trying to accomplish.
They're in a politicized field, where the methodology is combed over and mistakes are harshly criticized. Also, they want to show enough damage from climate change to make it clear that it's a good idea to institute policies reducing greenhouse gas emissions.
Thus they only need to show some significant damage, not a global catastrophic one. And they want to maintain as much rigor as possible to prevent the discovery of mistakes, and it's easier to be rigorous about things that are likely than about tail risks.
Yet I think longtermist EAs should be more interested in the tail risks. If I'm right, then the questions we're most interested in are underrepresented in the literature.
Glad to see this writeup! I really like that you compare yourself directly to your estimate of your counterfactual work. And it comes up positive! Great work. Especially given that I think entrepreneurship is really hard.
Some comments after half-skimming half-reading, sorry if I'm asking dumb questions:
1. You're basically using a net promoter question at one point, but it seems like most experts on the subject would say that counting 7+ as a success is way too easy: Wikipedia says that 7-8 is considered "passive". Typically a score gets calculated from the responses, and I'd be interested to see what you got here.
2. Can you report the increase in hours in effect size as well as absolute hours?
3. I would say it's worth noting what the clients who didn't complete 4 weeks thought.
4. Maybe consider writing up some of your best advice? I've heard (but cannot recall the source) that for-profit consulting firms will post their best advice because it acts as a beacon, drawing in those who found it useful. And it seems extra pro-social in an EA context.
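For reference on point 1, the standard Net Promoter Score calculation subtracts the detractor share from the promoter share. A minimal sketch, assuming the usual 0-10 scale (the function name is mine):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 ratings: % promoters (9-10)
    minus % detractors (0-6). Responses of 7-8 are 'passive'
    and only dilute the score."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# net_promoter_score([10, 9, 8, 7, 3]) -> 20.0
# net_promoter_score([7, 8])           -> 0.0
```

Note that a survey where everyone answers 7 scores a 0 under this formula, which is why treating 7+ as success overstates things.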
LessWrong has a "Moderators may promote [to the frontpage]" checkbox (defaults to checked), which allows you to keep your post on your personal blog. We removed it because it was confusing and we have a different view of personal blogs than LessWrong does. I could imagine trying to make personal blogs more of a thing and reenabling that checkbox (with a better name).
I somewhat agree. When I say "I'm worried about", I don't mean "I'm confident but using softening language" – I'm actually pretty uncertain. The meta point is that I'm worried about it and predict it would be hard to reverse.
On the object level, I'm less worried about AI safety and animal welfare so much as on the boundaries of related cause areas. For example:
1) Hardening currently fuzzy boundaries between different specialties of long-termism
2) Reducing the flow of context from object level work into the meta-EA space
3) Specialty knowledge sharing between cause areas, like outreach knowledge between farm animal welfare and global poverty
These seem like problems that one could at least largely address, but (back to the meta point) I'd expect doing so well would require at least a month's worth of work.
EA Forum dev here. At a high level, I think of the EA Forum* as being a group blog where anyone can post. I would group two themes of the problems as
1) “relevant content discovery is hard” and
2) “posting can be intimidating”.
My model is that you think 3) “discussion happens on facebook” is downstream of 1 and particularly 2.
I have progress to report on 1, so I can start with that. Finding good, old content is hard on any Forum, and I’m pretty happy with the way the Posts by Timeframe feature (previously mentioned) worked out. You can see it live on the LessWrong version of the page; we’ll be deploying it soon. Now you can scroll back through the weeks / months / years and find popular posts that you missed. I’m also most of the way through adapting the sidebar (seen on the left side of LW here) to make it easier to discover that the All Posts page exists.
It’s possible that sub-forums is the best way to solve this. However, I’m worried about a couple of things: a) that sub-forums would cause the specializations to ossify and remove valuable cross-pollination of ideas, and more mundanely, b) that I really want to be careful with the hard-to-reverse and disruptive change of moving to a sub-forum format. Both point in favor of tagging, which I’m somewhat more optimistic about.
Problem 2 I’m more uncertain about. I want people to have a lower bar for asking and answering questions. The new shortform feature from LessWrong will also help somewhat. But making the Forum easier to post to without lowering the quality level is a hard problem. See above for why I’m less optimistic about too hasty a switch to sub-forums to solve this.
Thanks for being an attentive observer and providing feedback.
* This also applies to LessWrong, whose codebase we forked.
My guess is that Microsoft is going to get a lot of that $Bn back in Azure purchases. And they get some currently murky benefits from first access to tech. I doubt they're trying to make that much in the way of financial returns from the stock.
OpenAI is trying to give the appearance that the nonprofit board is still very much the important one and that they're still very focused on AGI + safety.
Thanks for the post, I really like the attempt to use survey data to ensure that the definition reflects the views of the leaders and members of the EA community.
I agree that the maximizing nature of effective altruism is an important part of its public value. In my mind, EA has made most of its strides because it wasn't satisfied with merely providing a non-zero amount of help to people. Although we often use examples like PlayPumps that were probably net negative, the founders of GiveWell would have had a much easier time if they were just trying to find net-positive charities.
However, I'm not sure that maximizing is as clearly uncontroversial as you believe. I would guess that if surveys asked about it, leaders would be fairly united behind it, but it would get something in the range of 50-75% support from the community at large.
I can do an informal poll of this group and report back.
I'd also be interested in a discussion of the limits of maximizing. For example, if an EA is already working on something in the 80th percentile of effectiveness, do they find it compelling to switch to something in the 90th percentile?
LessWrong* used to have a single default image for everything and I thought it was annoying because it was so generic and was glad when they removed it. I'm not familiar with best practices on social links, it's possible there's standard advice that disagrees with me.
*Whose codebase we forked, and with whom we thus share similar dynamics.
Currently a work-in-progress feature that is admin-only. (And it has been in that state for a while, unfortunately.) I've reverted this post. Do you know what sequence of events caused it to get garbled?
Lead developer for the Forum here. I'm trying to ramp up the amount of feedback I get and this thread was very useful.
Addressing one large theme - sorting and filtering:
I was excited to see people wanting more sorting options because I'm working on something related right now. My goal is to make it easier to get e.g. the top posts in the past month. It's still very much in early development and I want to make it something that works for LessWrong as well, so no promises that it'll ship the way I'm currently envisioning it.
In order to get that, though, you have to find the All Posts page. LessWrong has a sidebar on the left that's auto-expanded and has a link to the All Posts page. I'm interested in what the EA version of the LessWrong sidebar should be, and this conversation has updated me towards building that sooner.
Follow-up thought 1: This model implies that frugality is budget-dependent. A trader at a hedge fund is much less constrained by $500 weekend plans. In fact, thinking about this model might make the trader seem less frugal, as she wantonly cancels expensive trips. I'm tempted to say this means I should be paid more (Hi, boss! 😛) but I actually think it's income-neutral, and mostly about my budgeting.
[Disagreeing with my boss on the internet, but after chatting over lunch]
Inflexible life outside of work seems to be the problem. There are monetary and non-monetary examples of ways to become inflexible:
1. I spent $500 on my weekend plans. I can only do that infrequently, so I really don't want to miss it.
2. I'm leading a group on a road trip this weekend. If I bail they'll be disappointed.
This echoes Gordon elsewhere in the comments, but I claim that non-frugality can be quite slack-constraining. This post has updated me towards keeping more slack in my budget. I'd like to not spend a significant portion of my spending money on any single adventure.
Give out student loans at ~market rates to EAs, with work that is high-impact but lower-than-counterfactual-salary counting as credit towards the loans.
Many talented EAs spend time in college or graduate school TA-ing to save money, or let financial constraints influence their trajectory. It seems risky to just throw money at people, given the long payoff horizon and the risk that they'll just turn around and use that PhD to do unaligned work. This project would require an initial source of funds (perhaps even a traditional bank loan), and a funder willing to commit to pay off the loans once they were satisfied with the impact of the debtor's post-graduation work.
I believe this idea has been kicking around in my head after a conversation with Amy Labenz.
Great question Aaron! The LessWrong team wrote a good description of it that I’m going to steal:
Ask a Question. In your user menu (in the upper right corner of the screen) there is now an option to ask a question, which will create a post with the question flag. For now, it'll appear normally in lists of recent posts (including the home page, daily, and your personal profile).
Answer a Question. This is similar to a comment, but the formatting is different to highlight that this is meant to have a different feel than commenting. Answers should aspire to resolve a question as accurately and thoroughly as possible, such that if you just read the question-followed-by-single-answer you'd have a pretty complete understanding of the issue.
Comment on an Answer. By default, only the top 3 comments will be displayed, but if you want to dig into the discussion of a given answer you can expand them.
Comment on a Question. You can comment on an overall question, without answering. This is for if you're still trying to understand the question, or you think it's making a conceptual mistake, or you just have some thoughts that don't neatly fall into the "answer" format.
Specialized Table of Contents. Questions with at least one answer automatically have a Table of Contents, even if there are no headings, to help users orient on a fairly complicated page.
It took me a minute to figure this one out. It looks to me like the editor switches to the "Plain Markdown Editor" on mobile, even if you don't ordinarily have it enabled. Here's how to make links in markdown:
[optional link text like "click here"](https://forum.effectivealtruism.org/fakelink).