Posts

What are the top priorities in a slow-takeoff, multipolar world? 2021-08-25T08:47:47.250Z
[PR FAQ] Adding profile pictures to the Forum 2021-08-09T10:11:09.341Z
What is life like at the median global income? 2021-06-30T14:09:12.286Z
Forum update: New features (June 2021) 2021-06-17T05:01:31.723Z
AMA: JP Addison and Sam Deere on software engineering at CEA 2021-03-12T22:54:13.481Z
Forum update: New features (December 2020) 2020-12-04T06:45:30.607Z
The two-minute EA Forum feedback survey 2020-09-18T11:27:08.241Z
Forum update: New features (August 2020) 2020-08-28T16:26:34.914Z
EA Forum update: New editor! (And more) 2020-07-31T11:06:40.587Z
The one-minute EA Forum feedback survey 2020-07-30T09:37:20.448Z
EA Forum feature suggestion thread 2020-06-16T16:58:58.569Z
EA Forum Downtime Monday April 13th 9:30pm PDT 2020-04-13T16:16:49.467Z
How we think about the Forum 2019-10-15T16:24:04.447Z
Who runs the Forum? 2019-10-14T15:35:21.353Z
JP's Shortform 2019-08-13T01:40:24.406Z
Donor Lottery 2018 is live 2018-12-06T00:29:37.812Z

Comments

Comment by JP Addison (jpaddison) on Announcing my retirement · 2021-11-25T14:22:15.640Z · EA · GW

I hope others will join me in saying: thank you for your years serving as the friendly voice of the Forum, and best of luck at Open Philanthropy!

Comment by JP Addison (jpaddison) on Opportunity Costs of Technical Talent: Intuition and (Simple) Implications · 2021-11-25T10:02:19.738Z · EA · GW

I just want to say I love this metaphor and have already referenced it twice in conversation.

Comment by JP Addison (jpaddison) on Where should I donate? · 2021-11-23T14:16:21.067Z · EA · GW

I donate to, and generally advise other small donors to donate to, a donor lottery, for roughly the reasons outlined here.

Comment by JP Addison (jpaddison) on FTX EA Fellowships · 2021-11-10T11:53:01.215Z · EA · GW

It's a more America-friendly time zone though.

Comment by JP Addison (jpaddison) on What are your favourite ways to buy time? · 2021-11-04T10:18:19.421Z · EA · GW

Have you hired a digital assistant? Several of my coworkers have, though I think reviews are mixed.

Comment by JP Addison (jpaddison) on What are your favourite ways to buy time? · 2021-11-04T10:17:17.899Z · EA · GW

Use Flightfox to buy flights. Opt for a human to book your flights and trust them to make decisions about money.

Comment by JP Addison (jpaddison) on Annual donation rituals? · 2021-10-31T11:40:03.032Z · EA · GW

Hot take: This is one of the largest benefits of the Giving Tuesday shenanigans.

Comment by JP Addison (jpaddison) on FTX EA Fellowships · 2021-10-25T12:20:05.463Z · EA · GW

I think the “Already working on EA jobs / projects that can be done from the Bahamas” option is the answer here. On my reading, this isn’t trying to fully fund someone’s work, but rather to incentivize someone to do the work from the Bahamas. If you were self-funding a project from savings, this doesn’t suddenly provide you with a full salary, but it still probably looks very good, as it could potentially eliminate your cash burn.

Comment by JP Addison (jpaddison) on Forum Update: New Features (October 2021) · 2021-10-22T14:58:39.258Z · EA · GW

I made some updates that should address a lot of this. Let me know what you think!

Comment by JP Addison (jpaddison) on Truthful AI · 2021-10-21T12:30:28.348Z · EA · GW

I'm pretty excited about this. It seems to be an approach that my gut actually believes could help with AI-powered propaganda, as written out here.

Comment by JP Addison (jpaddison) on Forum Update: New Features (October 2021) · 2021-10-19T15:03:39.744Z · EA · GW
  1. Yep.
  2. Oh, interesting. I think this is a bug related to me viewing the data as an admin. Thanks for the catch.
  3. 👀, still interested in others' views.
  4. Yeah, you can think of what we're measuring as "bounce rate". I was thinking of giving it a relatively "uninterpreted" treatment (i.e., leaving the data raw rather than calculating bounce rate), but I think more interpretation combined with tooltips seems better (see the sketch below).

4.5. Re "average time", this turned out to be harder than I expected, so I decided to wait to see if anyone asked for it, but now I have my excuse to spend time figuring it out, mwahaha.
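For concreteness, here's a minimal sketch of the kind of "interpreted" treatment I mean, computing a bounce rate from raw view events. The event shape and the threshold are hypothetical, purely for illustration:

```typescript
// Hypothetical sketch: derive a bounce rate from raw view events.
// The ViewEvent shape and the 10-second threshold are assumptions for
// illustration, not the Forum's actual analytics schema.
interface ViewEvent {
  postId: string;
  secondsOnPage: number;
}

function bounceRate(events: ViewEvent[], thresholdSeconds = 10): number {
  if (events.length === 0) return 0;
  // A "bounce" is a view shorter than the threshold.
  const bounces = events.filter((e) => e.secondsOnPage < thresholdSeconds).length;
  return bounces / events.length;
}
```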

Comment by JP Addison (jpaddison) on Forum Update: New Features (October 2021) · 2021-10-19T11:11:39.282Z · EA · GW

Thanks for your feedback, this is super valuable!

Re 1 & 2, we should definitely add a note about how far back the data goes (it does go all the way back to March 2020). Unfortunately, for the data I felt was most valuable to plot (views by unique devices), we suffered from a data collection issue in the first half of 2021. Fortunately we do have a note that appears on posts older than June 2021; unfortunately, it apparently wasn't noticeable enough.

Re 3, I had not thought of a dashboard like that, but I like the idea a lot, thanks for making it. (I'd be curious if other authors reading this also like it, let us know!)

Comment by JP Addison (jpaddison) on Help me understand this expected value calculation · 2021-10-14T11:54:52.384Z · EA · GW

A minor factor of ten billion 😉

Comment by jpaddison on [deleted post] 2021-09-24T16:52:11.391Z

It launched in early August 2021 (Shlegeris 2021).

I think that was referring to the research project, not the org itself.

Comment by JP Addison (jpaddison) on UK's new 10-year "National AI Strategy," released today · 2021-09-24T13:20:29.939Z · EA · GW

I admittedly used the HTML version.

Comment by JP Addison (jpaddison) on GiveWell Donation Matching · 2021-09-23T12:07:05.972Z · EA · GW

(I’d love it if you crossposted that post, but commenting here until then.) I think there’s another category before 9, which is “Donate to a charity not commonly supported by EAs, such as the World Wildlife Fund or Habitat for Humanity.” So this allows for Giving Tuesday to count as counterfactual. I would hope GiveWell’s was of this type (though I sympathize with Luke’s points).

Then we have another question, which is: who are these people who are ~indifferent between any EA charity? They’re probably not the first-time donors GiveWell is targeting.

Comment by JP Addison (jpaddison) on UK's new 10-year "National AI Strategy," released today · 2021-09-23T11:49:52.880Z · EA · GW

[Meta commentary] Damn, they have options to view in HTML, PDF, and mobile-optimized PDF. Holy crap. Why is the UK government so good at technology?

Comment by JP Addison (jpaddison) on JP's Shortform · 2021-09-06T20:07:31.081Z · EA · GW

Maybe you’re suspicious of this claim, but I think if you convinced me that JP working more hours was good on the margin, I could do some things to make it happen. Like have one Saturday a month be a workday, say. That wouldn’t involve doing broadly useful life-improvements.

On “fresh perspective”, I’m not actually that confident in the claim and don’t really want to defend it. I agree I usually take a while after a long vacation to get context back, which especially matters in programming. But I think (?) some of my best product ideas come after being away for a while.

Also you could imagine that the real benefit of being away for a while is not that you’re not thinking about work, but rather that you might’ve met different people and had different experiences, which might give you a different perspective. 

Comment by JP Addison (jpaddison) on JP's Shortform · 2021-09-06T06:43:21.016Z · EA · GW

This is a good response.

Comment by JP Addison (jpaddison) on JP's Shortform · 2021-09-05T10:40:45.690Z · EA · GW

A few notes on organizational culture — My feeling is that some organizations should work really hard and have an all-consuming, startup-y culture. Other organizations should try a more relaxed approach, where high-quality work is definitely valued, but the workspace is more like Google’s, and more tolerant of 35-hour weeks. That doesn’t mean these other organizations aren’t going to have people working hard, just that the atmosphere doesn’t demand it the way the startup-y org would. The culture of these organizations can be gentler, and they can be places where people can show off hobbies they’d be embarrassed about in other organizations.

These organizations (call them Type B) can attract and retain staff who for whatever reason would be worse fits at the startup-y orgs. Perhaps they’re the primary caregiver to their child or have physical or mental health issues. I know many incredibly talented people like that and I’m glad there are some organizations for them.

Comment by JP Addison (jpaddison) on JP's Shortform · 2021-09-05T10:40:17.487Z · EA · GW

How hard should one work?

Some thoughts on optimal allocation for people who are selfless but nevertheless human.

Baseline: 40 hours a week.

Tiny brain: Work more, get more done.

Normal brain: Working more doesn’t really make you more productive, focus on working less to avoid burnout.

Bigger brain: Burnout’s not really caused by overwork; furthermore, when you work more you spend more time thinking about your work, crowding out other distractions that take away your limited attention.

Galaxy brain: Most EA work is creative work that benefits from:

  • Real obsession, which means you can’t force yourself to do it.
  • Fresh perspective, which can turn thinking about something all the time into a liability.
  • Excellent prioritization and execution on the most important parts. If you try to do either of those while tired, you can really fuck it up and lose most of the value.

Here are some other considerations that I think are important:

  • If you work hard you contribute to a culture of working hard, which could be helpful for attracting the most impactful people, who, in my experience, are more likely than average to be hardworking.
  • Many people will have individual-specific reasons not to work hard. Some people have mental health issues that empirically seem to get worse if they work too hard, or they would get migraines or similar. Others will just find that they know themselves well enough to know when they should call it quits, whether for reasons captured elsewhere in this doc or not. This makes me usually very reluctant to call someone else out for not working hard enough.

A word on selflessness — I’m analyzing this from the perspective of someone trying to be purely selfless. I think it’s a useful frame. But I also think most people should decide how much to work from the perspective of the actual goals they have. Fleshing that out is a whole nother, much more complicated blog post.

Finally, I want to say that although this post makes it seem like I’m coming down on the side of working less hard, I do overall think the question is complicated, and I definitely don’t know what the right answer is. This is mostly me writing in response to my own thinking, and to a conversation I recently had with a friend. My feeling from reading the discussions the Forum’s had about this is that the conversation rarely gets past the normal brain take, plausibly because it seems like a bad look to argue the case for working harder. If I were writing to try to shift the state of public discussion, I would probably argue the bigger brain take more. But this is shortform, so it’s written for me.

Comment by JP Addison (jpaddison) on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-29T07:56:22.707Z · EA · GW

The Andrew Critch interview is so far exactly what I’m looking for.

Comment by JP Addison (jpaddison) on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-29T07:46:46.329Z · EA · GW

This all seems reasonable.

Comment by JP Addison (jpaddison) on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-27T14:18:32.690Z · EA · GW

I was assuming that designing safe AI systems is more expensive than otherwise; suppose it's 10% more expensive. In a world with only a few top AI labs that are not yet ruthlessly optimized, they could probably be persuaded to sacrifice that 10%. But convincing a trillion-dollar company to sacrifice 10% of its budget requires a whole lot of public pressure. The bosses of those companies didn't get there without being very protective of 10% of their budgets.

You could challenge that though. You could say that alignment was instrumentally useful for creating market value. I'm not sure what my position is on that actually.

Comment by JP Addison (jpaddison) on What are the top priorities in a slow-takeoff, multipolar world? · 2021-08-27T14:08:59.170Z · EA · GW

Thanks for your answer. (Just to check, I think you are a different Steve Byrnes than the one I met at Stanford EA in 2016 or so?)

What I do want to emphasize is that I don't doubt that technical AI safety work is one of the top priorities. It does seem like, within technical AI safety research, the best work shifts away from Agent Foundations-type work and towards neural-net-specific work. It also seems like the technical problem gets easier in expectation if you have more than one shot. By contrast, I claim, many of the Moloch-style problems get harder.

Comment by JP Addison (jpaddison) on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-27T09:34:40.511Z · EA · GW

I feel like your qualifying statement is only true of the last one?

Comment by JP Addison (jpaddison) on Buck's Shortform · 2021-08-26T21:30:57.421Z · EA · GW

I like this chain of reasoning. I’m trying to think of concrete examples, and it seems a bit hard to come up with clear ones, but I think this might just be a function of the bespoke-ness.

Comment by JP Addison (jpaddison) on Examples of Successful Selective Disclosure in the Life Sciences · 2021-08-21T18:54:27.145Z · EA · GW

First off I want to say thanks for your Forum contributions, Tessa. I'm consistently upvoting your comments, and appreciate the Wiki contributions as well.

I'm pretty confident that information hazards are, or plausibly will be, an important concern, but in these and other cases I tend to be at least strongly tempted by openness, which does seem to make it harder to advocate for responsible disclosure: "You should strongly consider selectively disclosing dangerous information; it's just that all of these contentious examples, I think, should be open."

Comment by JP Addison (jpaddison) on [PR FAQ] Adding profile pictures to the Forum · 2021-08-10T13:06:37.577Z · EA · GW

I'm guessing you haven't seen, so let me show off the new signup flow!

Comment by JP Addison (jpaddison) on [PR FAQ] Adding profile pictures to the Forum · 2021-08-10T12:42:39.987Z · EA · GW

Hi Larks, thanks for taking the time to comment. I think your continuum comment is a good contribution to the considerations. I’m going to run with that metaphor, and talk about where I think we should fall. I take this seriously and want to get this right.

I’ve drawn three possible lines for what utility the Forum will get from its position on the continuum. Maybe it’s not actually useful; maybe I just like drawing things. I guess my main point is that we don’t have to figure out the entire space, just the local one.

Anyway, the story for the (locally) impersonal position is that adding profile images causes people to pay less attention to the object-level content, and more attention to the person writing. Given that epistemics are one of the top priorities of the Forum, and of EA community building writ large, this would be quite bad; a substantial sacrifice in our group epistemics would overwhelm nearly all other considerations. The crux for me is how large that effect would be.

The story for the social position is that the Forum needs to be an attractive place to comment in order to be used. The Forum is growing now, but many people new to the community don’t use it. Many people experienced in EA read and occasionally comment, but the share of the most promising young or new members of the community who participate on the Forum is not what I’d like. When I talk to people about it, they often say that it feels intimidating / unfriendly / cold. Having a broader reach, and broader participation, will increase the Forum’s impact. A crux related to this story is how large this effect is. Maybe the people talking to me wouldn’t actually join the Forum anyway; it’s fairly long-form discussion, and that’s not for everyone.

A digression into my model of the Forum’s impact: In How we think about the Forum (by now 2 years old and not entirely up-to-date), I wrote down the following methods of impact from the Forum:

  • Sharing of existing ideas
  • Development and refinement of new ideas
  • Talent discovery
  • Public accountability
  • Spreading of norms
  • Encouraging coordination

To my mind, the path to impact that most favors the impersonal position is the second, “Development and refinement of new ideas”. Even small hits to epistemics are incredibly costly when the whole thing you’re trying to do is figure out what’s true. However, the Forum does not only try to figure out what’s true. To my mind, the biggest effect on the community's epistemics (and on the community as a whole) comes from spreading our norms to newcomers. This is less valuable (or even negatively valuable) if our epistemics get worse, but it is also impossible if newcomers don’t read the Forum in the first place. All of the items on that list depend on more people reading and writing on the Forum.

Overall I’m not sure what my position is right now. I’ll need to think and discuss it some more. It’s plausible that there are other important features with less sign uncertainty that become more important in my mind, but I don’t want to shy away from potentially high-impact features either. I’d be appreciative of other people’s impressions of how large the effects of my cruxes are.

Some considerations that don’t fall neatly into the continuum analysis:

  • We can probably mitigate the amount that epistemics are hurt by having a “profile images off in megathreads” policy, so that when people start digging into things in posts like the Hinge of History post, or into emotionally charged topics, the epistemics can stay relatively unaffected. (This probably doesn’t matter until we put profile images on comments.)
  • Newbies to the community see a lot of names. Images are generally more recognizable, which can help newbies get a sense of which authors they like. This helps them get drawn into reading more, and into understanding the dynamics of the various positions held by various authors, in the same way that you or I can because we recognize their names. (“That’s Buck arguing for position X, which makes sense because I’ve already read him supporting similar position Y.”) This is, I believe, related to your “statistical discrimination” comment.

Comment by JP Addison (jpaddison) on What 2026 looks like (Daniel's median future) · 2021-08-09T08:35:43.211Z · EA · GW

I think this is great. I especially like the discussion of propaganda, which feels like an important model.

Comment by JP Addison (jpaddison) on EA Forum feature suggestion thread · 2021-08-06T15:41:15.937Z · EA · GW

Seems right. I doubt it was deliberate.

Comment by JP Addison (jpaddison) on EA Forum feature suggestion thread · 2021-08-06T13:44:05.113Z · EA · GW

What happens if you log in in an incognito window? Do you have any of these settings set?

Comment by JP Addison (jpaddison) on EA Forum feature suggestion thread · 2021-08-06T10:35:24.806Z · EA · GW

I can't reproduce this. Can you tell me what browser you were using, what settings you have for the allPosts page, and whether you can still see the issue?

Comment by JP Addison (jpaddison) on JP's Shortform · 2021-08-05T07:08:23.742Z · EA · GW

This is over.

Comment by JP Addison (jpaddison) on JP's Shortform · 2021-08-04T10:54:40.565Z · EA · GW

Temporary site update: I've taken down the allPosts page. It appears we have a bot hitting the page, and it's causing the site to be slow. While I investigate, I've simply taken the page down. My apologies for the inconvenience.

Comment by JP Addison (jpaddison) on EA Forum feature suggestion thread · 2021-08-01T08:53:28.957Z · EA · GW

That’s a bug, thanks for reporting.

Comment by JP Addison (jpaddison) on Database dumps of the EA Forum · 2021-07-29T10:19:24.290Z · EA · GW

You don't need to use the allPosts page to get a list of all the posts. You can just ask the GraphQL API for the ids of all of them.
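For illustration, here's a minimal sketch of what that might look like. The endpoint URL is the Forum's real GraphQL endpoint, but the exact shape of the posts query reflects my reading of the ForumMagnum schema, so treat the field names as assumptions:

```typescript
// Hypothetical sketch: ask the Forum's GraphQL API for post ids directly,
// instead of scraping the allPosts page. The query shape is a best guess
// at the ForumMagnum schema; verify against the live GraphQL playground.
const ENDPOINT = "https://forum.effectivealtruism.org/graphql";

async function fetchPostIds(limit = 100): Promise<string[]> {
  const query = `
    { posts(input: { terms: { view: "new", limit: ${limit} } }) { results { _id } } }
  `;
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const json = await res.json();
  return json.data.posts.results.map((p: { _id: string }) => p._id);
}

fetchPostIds().then((ids) => console.log(`Fetched ${ids.length} post ids`));
```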

Comment by JP Addison (jpaddison) on Narration: We are in triage every second of every day · 2021-07-24T09:49:56.845Z · EA · GW

You picked a good one here.

Comment by JP Addison (jpaddison) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-07-18T07:44:54.301Z · EA · GW

How many chicken years are affected per dollar spent on broiler and cage-free campaigns?

I estimate how many chickens will be affected by corporate cage-free and broiler welfare commitments won by all charities, in all countries, during all the years between 2005 and the end of 2018. According to my estimate, for every dollar spent, 9 to 120 years of chicken life will be affected.

My impression is that cage-free campaigns have been very successful and there's much less low-hanging fruit left, such that I don't think it's reasonable to extrapolate those results to an ongoing basis.

Comment by JP Addison (jpaddison) on You Should Write a Forum Bio · 2021-07-06T18:37:47.541Z · EA · GW

This is now a thing.

Comment by JP Addison (jpaddison) on You are allowed to edit Wikipedia · 2021-07-06T12:15:45.518Z · EA · GW

I turned this into a non-question post for you. (Aaron didn't know I could do that, because it's not a normal admin option.)

Comment by JP Addison (jpaddison) on What is life like at the median global income? · 2021-06-30T14:39:14.173Z · EA · GW

Thanks! That's very much the sort of thing that's helpful.

Comment by JP Addison (jpaddison) on EA needs consultancies · 2021-06-30T14:31:46.829Z · EA · GW

Those are some pretty compelling numbers, but I'd be a lot more optimistic if they were engaged enough to show up in the comments here. (Maybe — I could imagine they're engaged with EA ideas in other ways, but now we're into territory where I'd feel like I'd need to do more vetting.)

Comment by JP Addison (jpaddison) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-06-30T08:39:16.075Z · EA · GW

Thanks Pablo and Joseph!

If you're a person who wants to learn this material, but doesn't have an Anki habit, I'd recommend taking this as an opportunity to try things, and give it a go. Turn remembering things into a deliberate choice.

You can get started here.

Comment by JP Addison (jpaddison) on On the limits of idealized values · 2021-06-27T07:51:44.721Z · EA · GW

This was really good.

Comment by JP Addison (jpaddison) on What are some key numbers that (almost) every EA should know? · 2021-06-18T13:05:06.842Z · EA · GW

I will absolutely study that deck.

Comment by JP Addison (jpaddison) on EA Forum feature suggestion thread · 2021-06-14T16:50:13.376Z · EA · GW

Sounds legit.

Comment by JP Addison (jpaddison) on Which non-EA-funded organisations did well on Covid? · 2021-06-12T08:31:04.361Z · EA · GW

And VaccinateCA was very impressive.

Comment by JP Addison (jpaddison) on Humanities Research Ideas for Longtermists · 2021-06-10T12:32:45.140Z · EA · GW

Any mistakes are the fault of Linch Zhang

:D Good line. I hope you snuck this in and Linch didn’t notice.