Posts

LessWrong is now a book, available for pre-order! 2020-12-04T20:42:58.102Z
DontDoxScottAlexander.com - A Petition 2020-06-25T23:29:46.491Z

Comments

Comment by Ben Pace on Draft report on existential risk from power-seeking AI · 2021-05-08T21:22:12.418Z · EA · GW

Fixed, tah.

Comment by Ben Pace on Draft report on existential risk from power-seeking AI · 2021-05-08T06:33:43.259Z · EA · GW

Thanks for the thoughtful reply.

I do think I was overestimating how robustly you're treating your numbers and premises; it seems like you're holding them all much more lightly than I'd been envisioning.

FWIW I am more interested in engaging with some of what you wrote in your other comment than engaging on the specific probability you assign, for some of the reasons I wrote about here.

I think I have more I could say on the methodology, but alas, I'm pretty blocked up with other work atm. It'd be neat to spend more time reading the report and leave more comments here sometime.

Comment by Ben Pace on Draft report on existential risk from power-seeking AI · 2021-05-08T06:15:40.835Z · EA · GW

Great answer, thanks.

Comment by Ben Pace on Draft report on existential risk from power-seeking AI · 2021-05-02T04:10:43.148Z · EA · GW

I tried to look for writing like this. I think that people do multiple hypothesis testing, like Harry in chapter 86 of HPMOR. There Harry is trying to weigh some different hypotheses against each other to explain his observations. There isn't really a single train of conditional steps that constitutes the whole hypothesis.

My shoulder-Scott-Alexander is telling me (somewhat similar to my shoulder-Richard-Feynman) that there are a lot of ways to trick myself with numbers, and that I should only do very simple things with them. I looked through some of his posts just now (1, 2, 3, 4, 5).

Here's an example of a conclusion / belief from Scott's post Teachers: Much More Than You Wanted to Know:

In summary: teacher quality probably explains 10% of the variation in same-year test scores. A +1 SD better teacher might cause a +0.1 SD year-on-year improvement in test scores. This decays quickly with time and probably disappears entirely after four or five years, though there may also be small lingering effects. It’s hard to rule out the possibility that other factors, like endogenous sorting of students, or students’ genetic potential, contributes to this as an artifact, and most people agree that these sorts of scores combine some signal with a lot of noise. For some reason, even though teachers’ effects on test scores decay very quickly, studies have shown that they have significant impact on earnings as much as 20 or 25 years later, so much so that kindergarten teacher quality can predict thousands of dollars of difference in adult income. This seemingly unbelievable finding has been replicated in quasi-experiments and even in real experiments and is difficult to banish. Since it does not happen through standardized test scores, the most likely explanation is that it involves non-cognitive factors like behavior. I really don’t know whether to believe this and right now I say 50-50 odds that this is a real effect or not – mostly based on low priors rather than on any weakness of the studies themselves. I don’t understand this field very well and place low confidence in anything I have to say about it.

I don't know any post where Scott says "there's a particular 6-step argument, and I assign 6 different probabilities to each step, and I trust that the resulting number is basically right". His conclusions read more like one key number with some uncertainty, which never came from a single complex model, but from aggregating loads of little studies and pieces of evidence into a judgment.

I don't think I can think of a post like this by Scott or Robin or Eliezer or Nick or anyone. But I would be interested in an example that is like this (from other fields or wherever), or that feels similar.

Comment by Ben Pace on Draft report on existential risk from power-seeking AI · 2021-05-02T03:37:27.270Z · EA · GW

One thing that I think would really help me read this document would be (from Joe) a sense of "here's the parts where my mind changed the most in the course of this investigation".

Something like (note that this is totally made up): "there's a particular exploration of alignment where I had conceptualized it as kinda being about making the AI think right, but now I conceptualize it as being about not thinking wrong, which I explore in section a.b.c".

Also maybe something like a sense of which of the premises Joe changed his mind on the most – where the probabilities shifted a lot.

Comment by Ben Pace on Draft report on existential risk from power-seeking AI · 2021-05-02T03:25:38.692Z · EA · GW

I think I share Robby's sense that the methodology seems like it will obscure truth.

That said, I have neither your (Joe's) extensive philosophical background, nor have I spent substantial time on a report like this as you have, and I am interested in evidence to the contrary.

To me, it seems like you've tried to lay out an argument in a series of 6 steps, each of which you think very accurately carves the key parts of reality that are relevant, and you've pondered each step for quite a while.

When I ask myself whether I've seen something like this produce great insight, it's hard to say. It's not something I've done much myself explicitly. However, I can think of a nearby example where I think this has produced great insight, which is Nick Bostrom's work. I think (?) Nick spends a lot of his time considering a simple, single key argument, looking at it from lots of perspectives, scrutinizing wording, asking what people from different scientific fields would think of it, poking and prodding and rotating and just exploring it. Through that work, I think he's been able to find considerations that were very surprising and invalidated the arguments, and to propose very different arguments instead.

When I think of examples here, I'm imagining that this sort of intellectual work produced the initial arguments about astronomical waste, and arguments since then about unilateralism and the vulnerable world hypothesis. Oh, and also the simulation argument (which became a tripartite structure).

I think of Bostrom as trying to consider a single worldview, and find out whether it's a consistent object. One feeling I have about turning it into a multi-step probabilistic argument is that it does the opposite: it does not try to examine one worldview to find falsehoods, but instead integrates over all the parts of the worldview that Bostrom would scrutinize, making a single clump out of parts of many different worldviews. I think Bostrom may have literally never published a six-step argument of the form that you have, where it was meant to hold anything of weight in the paper or book, and he has also never done so while assigning each step a probability.

To be clear, probabilistic discussions are great. Talking about precisely how strong a piece of evidence is (is it 2:1, 10:1, 100:1?) helps a lot in noticing which hypotheses to even pay attention to. The suspicion I have is that they are fairly different from the kind of cognition Bostrom does when doing the sort of philosophical argumentation that produces simple arguments of world-shattering importance. I suspect you've set yourself a harder task than Bostrom ever has (a 6-step argument), and that you think you've made it easier by making it only probabilistic instead of deductive, whereas in fact this removes most of the tools that Bostrom was able to use to ensure he didn't take missteps.

But I am pretty interested if there are examples of great work using your methodology that you were inspired by when writing this up, or great works with nearby methodologies that feel similar to you. I'd be excited to read/discuss some.

Comment by Ben Pace on [deleted post] 2021-04-23T21:08:37.085Z

Pretty sure I picked those. I don't know that the first two categories are as great a split as I once did. I was broadly trying to describe the difference between the sorts of basic theory work done by people like Alex Flint and Scott Garrabrant, and the sorts of 'just solve the problem' ideas from people like Paul Christiano and Alex Turner and Stuart Russell. But it's not super clean; they dip into each other all the time, e.g. Inner Alignment is a concept used throughout by people like Paul Christiano and Eliezer Yudkowsky in all sorts of research.

I worked with the belief that a very simple taxonomy, even if wrong, is far better than no taxonomy, so I still feel good about it. But I am interested in an alternative.

Comment by Ben Pace on Concerns with ACE's Recent Behavior · 2021-04-18T20:32:56.658Z · EA · GW

Universal statements like this strike me as almost always wrong.

I appreciate the self-consistency of this sentence :)

Comment by Ben Pace on What does failure look like? · 2021-04-11T18:12:36.419Z · EA · GW

What Failure Looks Like, hit LW post.

(Note: does not at all answer your question.)

Comment by Ben Pace on EA Debate Championship & Lecture Series · 2021-04-10T19:17:55.945Z · EA · GW

I was surprised; this video was much less goodharted than I expected (after having been primed with the super-fast-talking example). I was expecting more insane things.

Though overall it was at the level of much of the broad public debate/discourse I’ve seen. I watched the first three speakers and didn’t learn anything. In good debates I’ve seen, I’ve felt that I learned something from the debaters about their fields and their unique worldviews; these felt like two opposing sides in a broader political debate with kind of no grounding in reality. They were optimized for short-scale (e.g. <30 seconds) applause lights for the audience; when challenged they’d make it a fight, saying things like “Don’t even try to win that example”; and their examples seemed false yet were rewarded (primarily attributing China’s rise out of poverty in the last 50 years to ‘redistribution’ and getting applause for it, which, correct me if I’m wrong, is not at all the primary reason; they had massive growth in industry, in part by copying a lot of the West). I wouldn’t expect to learn anything; it just seemed like nobody understood economics, and they were indexed to what was within 0-1 inferential steps of what the audience as a whole understood. I guess that was the worst part: how can you discuss interesting ideas if they have to be obvious to an audience that big and generic within 10-20 seconds?

Comment by Ben Pace on EA Debate Championship & Lecture Series · 2021-04-10T06:04:55.961Z · EA · GW

I just want to say I, Ben Pace, feel attacked every time someone criticizes “BP” in this comment thread.

Comment by Ben Pace on Announcing "Naming What We Can"! · 2021-04-01T20:10:46.014Z · EA · GW

I'm open to a legal arrangement of shared nationalities, bank accounts, and professional roles.

Comment by Ben Pace on Some quick notes on "effective altruism" · 2021-03-27T02:11:21.621Z · EA · GW

“Hello, I’m an Effective Altruist.”

“Hello, I’m a world-unfucker.”

Honestly, I think the second one might be more action-oriented. And less likely to attract status-seekers. Alright, I’m convinced, let’s do it :)

Comment by Ben Pace on Some quick notes on "effective altruism" · 2021-03-27T01:28:30.510Z · EA · GW

I kinda think that "I'm an EA/he's an EA/etc" is mega-cringey (a bad combo of arrogant + opaque acronym + tribal)

It sounds like you think it’s bad that people have identified their lives with trying to help people as much as they can? Like, people like Julia Wise and Toby Ord shouldn’t have made it part of their life identity to do the most good they can do. They shouldn’t have said “I’m that sort of person” but they should have said “This is one of my interests”.

Comment by Ben Pace on Some quick notes on "effective altruism" · 2021-03-27T01:24:40.713Z · EA · GW

I do not know. Let me try generating names for a minute. Sorry. These will be bad.

“Marginal World Improvers”

”Civilizational Engineers”

”Black Swan Farmers”

“Ethical Optimizers”

”Heavy-Tail People”

Okay I will stop now.

Comment by Ben Pace on Proposed Longtermist Flag · 2021-03-27T01:05:36.747Z · EA · GW

Appreciate you drawing this, I like the idea.

Comment by Ben Pace on Some quick notes on "effective altruism" · 2021-03-27T00:05:25.693Z · EA · GW

we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people

I don't expect a brand change to "Global Priorities" to bring in more action-oriented people. I expect fewer people would donate money themselves; for instance, they would see it as cute but obviously not having any "global" impact, and therefore below them.

(I think it was my inner Quirrell / inner cynic that wrote some of this comment, but I stand by it as honestly describing a real effect that I anticipate.)

Comment by Ben Pace on Some quick notes on "effective altruism" · 2021-03-27T00:01:22.582Z · EA · GW

The Defense Professor’s fingers idly spun the button, turning it over and over. “Then again, only a very few folk ever do anything interesting with their lives. What does it matter to you if they are mostly witches or mostly wizards, so long as you are not among them? And I suspect you will not be among them, Miss Davis; for although you are ambitious, you have no ambition.”

“That’s not true!” said Tracey indignantly. “And what’s it mean?”

Professor Quirrell straightened from where he had been leaning against the wall. “You were Sorted into Slytherin, Miss Davis, and I expect that you will grasp at any opportunity for advancement which falls into your hands. But there is no great ambition that you are driven to accomplish, and you will not make your opportunities. At best you will grasp your way upward into Minister of Magic, or some other high position of unimportance, never breaking the bounds of your existence.”

—HPMOR, Chapter 70, Self-Actualization (part 5)

Added: The following is DEFINITELY NOT a strong argument, but just kind of an associative point. I think that Voldemort (both the real one from JK Rowling and also the one in HPMOR) would be much more likely to decide that he and his Death Eaters should have “Global Priorities” meetings than “Effective Altruist” meetings. (“We’re too focused on taking over the British Ministry for Magic; we need to also focus on our Global Priorities.”) In that way I think the former phrase has a more general connotation of “taking power and changing the world” in a way the latter does not.

Comment by Ben Pace on Some quick notes on "effective altruism" · 2021-03-26T23:58:09.276Z · EA · GW

I was just reflecting on the term 'global priorities'. I think to me it sounds like it's asking "what should the world do", in contrast to "what should I do". The former is far mode, the latter is near. I think that staying in near mode while thinking about improving the world is pretty tough. I think when people fail, they end up making recommendations that could only work in principle if everyone coordinated at the same time, and as a result they shape their speech to focus on signaling to achieve these ends, and often walk off a cliff of abstraction. I think when people stay in near mode, they focus on opportunities that do not require coordination, but opportunities they can personally achieve. I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community. Though I do not mean to argue it should be held as a sacred value.

For example, I think the question "what should the global priority be on helping developing countries" is naturally answered by talking broadly about the West helping Africa build a thriving economy, about political revolution to remove corruption in governments, and about what sorts of multi-billion-dollar efforts could take place, like what the Gates Foundation should do. This is a valuable conversation that has been going on for decades/centuries.

I think the question "what can I personally do to help people in Africa" is more naturally answered by providing cost-effectiveness estimates for marginal thousands of dollars to charities like AMF. This is a valuable conversation that I think has has orders of magnitude less effort put into it outside the EA community. It's a standard idea in economics that you can reliably get incredibly high returns on small marginal investments, and I think it is these kind of investments that the EA community has been much more successful at finding, and has managed to exploit to great effect.

"global priorities (GP)"  community is... more appropriate  than "effective altruism (EA)" community... More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action

Anyway, I was surprised to read you say that, in direct contrast to what I was thinking, and I think how I have often thought of Effective Altruism.

Comment by Ben Pace on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T18:28:59.771Z · EA · GW

Why are your comments hidden on the EA Forum?

Added: It seems the author moved the relevant post back into their drafts.

Comment by Ben Pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T05:52:14.760Z · EA · GW

No it's not! Avoiding the action because you know you'll be threatened until you change course is the same as submitting to the threat.

Comment by Ben Pace on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T03:42:39.364Z · EA · GW

:)

Comment by Ben Pace on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T00:46:09.696Z · EA · GW

By the way Ian, I've not followed these posts in great detail and I mostly think getting involved in partisan politics in most straightforward ways seems like a bad idea, but I've really appreciated the level of effort you've put in and are clearly willing to put in to have an actual conversation about this (in comments here, with Wei Dai, with others). It's made me feel more at home in the Forum. Thank you for that.

Comment by Ben Pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T23:02:38.428Z · EA · GW

Naturally, you have to understand, Rohin, that in all of the situations where you tell me what the threat is, I'm very motivated to do it anyway? It's an emotion of stubbornness and anger, and when I flesh it out in game-theoretic terms it's a strong signal of how much I'm willing to not submit to threats in general.

Returning to the emotional side, I want to say something like "f*ck you for threatening to kill people, I will never give you control over me and my community, and we will find you and we will make sure it was not worth it for you, at the cost of our own resources".

Comment by Ben Pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T22:55:46.993Z · EA · GW

It's a good question. I've thought about this a bit in the past.

One surprising rule is that overall I think people with a criminal record should still be welcome to contribute in many ways. If you're in prison, I think you should generally be allowed to e.g. submit papers to physics journals; you shouldn't be precluded from contributing to humanity and science. Similarly, I think giving remote talks and publishing on the EA Forum should not be totally shut off (though likely hampered in some ways) for people who have behaved badly and broken laws. (Obviously different rules apply for hiring them and inviting them to in-person events, where you need to look at the kind of criminal behavior and see if it's relevant.)

I feel fairly differently about people who have done damage in, and to members of, the EA community. Someone like Gleb Tsipursky hasn't even broken any laws and should still be kicked out and not welcomed back for something like 10 years, and even then he probably won't have changed enough (most people don't).

In general EA is outcome-oriented; it's not a hobby community; there's sh*t that needs to be done because civilization is inadequate and literally everything is still at stake at this point in history. We want the best contributions, and we care about that to the exclusion of people being fun or something. You hire the best person for the job.

There's some tension there, and I think overall I am personally willing to put a lot of resources into my outcome-oriented communities to make sure that people who contribute to the mission are given the spaces and help they need to positively contribute.

I can't think of a good example that isn't either about a literal person or too abstract... like, suppose Einstein has terrible allergies to most foods and just can't be in the same space as them. Can we have him at EAG? How much work am I willing to put in for him to have a good EAG? Do I have to figure out a way to feed everyone a very exclusive yet wholesome diet so that he can join? Perhaps.

Similarly, if I'm running a physics conference and Einstein is in prison for murder, will I have him in? Again, I'm pretty open to video calls, I'm pretty willing to put in the time to make sure everyone knows what sort of risk he poses, and to make sure he isn't allowed to end up in a vulnerable situation with someone, because it's worth it for our mission to have him contribute.

You get the picture. Y'know, tradeoffs, where you actually value something and are willing to put in extraordinary effort to make it work.

Comment by Ben Pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T22:38:30.847Z · EA · GW

Thx for the long writeup. FWIW I will share some of my own impressions.

Robin's one of the most generative and influential thinkers I know. He has consistently produced fascinating ideas and contributed to a lot of the core debates in EA, like giving now vs later, AI takeoff, prediction markets, the great filter, and so on. His comments regarding common discussion of inequality are all of a kind with the whole of his 'Elephant in the Brain' work, noticing weird potential hypocrisies in others. I don't know how to easily summarize the level of his intellectual impact on the world, so I'll stop here.

It seems like there have been a couple of (2-4) news articles taking potshots at Hanson for his word choices, off the back of an angry mob, and this is just going to be a fairly standard worry for even mildly interesting or popular figures, given that the mob is going after people daily on Twitter. (As the OP says, not everyone, but anyone.)

It seems to me understandable if some new group like EA Munich (this was one of their first events?) feels out of their depth when trying to deal with the present-day information and social media ecosystem, and that's why they messed up. But overall this level of lack of backbone mustn't be the norm, else the majority of interesting thinkers will not be interested in interacting with EA. I am less interested in contributing to and collaborating with others in the EA community as a result of this. I mean, there are lots of things I don't like that are just small quibbles, which is your price for joining, but this kind of thing strikes at the basic core of what I think is necessary for EA to help guide civilization in a positive direction, as opposed to being some small cosmetic issue or personal discomfort.

Also, it seems to me like it would be a good idea for the folks at EA Munich to re-invite Robin to give the same talk, as a sign of goodwill. (I don't expect they will and am not making a request, I'm saying what it seems like to me.)

Comment by Ben Pace on Hiring engineers and researchers to help align GPT-3 · 2020-10-07T18:31:58.348Z · EA · GW

Yeah. Well, not that they cannot be posted, but that they will not be frontpaged by the mods, and instead kept in the personal blog / community section, which has less visibility.

Added: As it currently says on the About page:

Community posts

Posts that focus on the EA community itself are given a "community" tag. By default, these posts will be hidden from the list of posts on the Forum's front page. You can change how these posts are displayed by using...

Comment by Ben Pace on Open Communication in the Days of Malicious Online Actors · 2020-10-07T03:19:32.671Z · EA · GW

Thanks, I found this post to be quite clear and a helpful addition to the conversation.

Comment by Ben Pace on If you like a post, tell the author! · 2020-10-06T18:27:17.075Z · EA · GW

(I like this post.)

Comment by Ben Pace on Sign up for the Forum's email digest · 2020-10-05T16:26:31.193Z · EA · GW

You can subscribe with RSS using the "Subscribe (RSS)" button at the bottom of the left menu on the frontpage.

Comment by Ben Pace on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T19:25:34.269Z · EA · GW

(Yes, I'm pretty sure this is the standard way to use those terms.)

Comment by Ben Pace on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T01:56:13.005Z · EA · GW

I find Big 5 correlates very interesting, so thanks for doing this! The graphs make it very easy to see the differences.

Comment by Ben Pace on Suggestion that Zvi be awarded a prize for his COVID series · 2020-09-24T21:24:23.620Z · EA · GW

For those who don't know Zvi's series, it has come out weekly and included case numbers, graphs, and analysis of the news that week. Here are a few:

Plus some general analysis, like Seemingly Popular Covid-19 Model is Obvious Nonsense, and Covid-19: My Current Model, which was a major factor in me choosing to stop cleaning all my packages and groceries and to stop putting takeout food in the oven for 15 minutes, as well as feeling safe about being outdoors.

His 9/10 update on Vitamin D also caused me to make sure my family started taking Vitamin D, which is important because one of them has contracted the virus.

Comment by Ben Pace on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-19T23:39:17.861Z · EA · GW

Do you mean CS or ML? Because (I believe) ML is an especially new and 'flat' field where it doesn't take as long to get to the cutting edge, so it probably isn't representative.

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-19T01:31:04.218Z · EA · GW

Yeah, I agree about how much variance in productivity is available; your numbers seem more reasonable. I'd actually edited it by the time you wrote your comment.

Also agree last year was probably unusually slow all round. I expect the comparison is still comparing like-with-like.

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-19T00:16:58.586Z · EA · GW

I read the top comment again after reading this comment by you, and I think I understand the original intent better now. I was mostly confused on initial reading, and while I thought SLG's comment was otherwise good and I had a high prior on the intent being very cooperative, I couldn't figure out what the first line meant other than "I expect I'm the underdog here". I now read it as saying "I really don't want to cause conflict needlessly, but I do care about discussing this topic," which seems pretty positive to me. I am pretty pro SLG writing more comments like this in future when it seems to them like an important mistake is likely being made :)

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-19T00:05:12.181Z · EA · GW

By the way, I also was surprised by Rob only making 4 videos in the last year. But I actually now think Rob is producing a fairly standard number of high-quality videos annually.

The first reason is that (as Jonas points out upthread) he also did three for Computerphile, which brings his total to 7.

The second reason is that I looked into a bunch of top YouTube individual explainers, and I found that they produce a similar number of highly-produced videos annually. Here are a few:

  • 3Blue1Brown has 10 highly produced videos in the last year (1, 2, 3, 4, 5, 6, 7, 8, 9, 10). He has other videos, which include a video of Grant taking a walk, a short footnote video to one of the main ones, 10 lockdown livestream videos, and a video turning someone's covid blogpost into a video. For highly produced videos, he's averaging just under 1/month.
  • CGP Grey has 10 highly produced videos in the last year (1, 2, 3, 4, 5, 6, 7, 8, 9, 10). He has other videos, which include a video of CGP Grey taking a walk, a few videos of him exploring a thing like a spreadsheet or an old building, and one or two commentaries on other videos of his.
  • Vi Hart at her peak made 19 videos in one year (her first year, 9 years ago), all of which I think were of a similar quality level to each other.
  • Veritasium has 14 highly produced videos in the last year, plus one short video of the creator monologuing after their visit to NASA.

CGP Grey, 3Blue1Brown, and Veritasium I believe are working on their videos full time, so I think around 10 main videos plus assorted extra pieces is within the standard range for highly successful explainers on YouTube. I think this suggests Rob could potentially make more videos to fill out the space between the main videos on his channel, like Q&A livestreams and other small curiosities that he notices, and could plausibly be more productive per year in terms of making a couple more of the main, highly-produced videos.

But I know he does a fair bit of other work outside of his main channel, and he is also in some respects doing a harder task than some of the above: explaining ideas from a new research field, one with a lot of ethical concerns around the work, not just issues of how to explain things well, which I expect increases the amount of work that goes into the videos.

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T20:46:41.840Z · EA · GW

:)  Appreciated the conversation! It also gave me an opportunity to clarify my own thoughts about success on YouTube and related things.

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T18:02:29.182Z · EA · GW

Thx!

Following up, and sorry for continuing to critique after you already politely made an edit, but doesn't that change your opinion of the object level thing, which is indeed the phenomenon Scott's talking about? It's great to send signals of cooperativeness and genuineness, and I appreciate So-Low Growth's effort to do so, but adding in talk of how the concern is controversial is the standard example of opening a bravery debate.

The application of Scott's post here would be to separate clarification of intent and bravery talk – in this situation, separating "I don't intend any personal attack on this individual" from "My position is unpopular". Again, the intention is not in question, it's the topic, and that's the phenomenon Scott's discussing in his post.

Comment by Ben Pace on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-18T17:44:21.650Z · EA · GW

I'd heard that the particular journal had quite a high quality bar. Do you have a sense of whether that's true or how hard it is to get into that journal? I guess we could just check the number of PhD students who get published in an edition of the journal to check the comparison.

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:59:20.301Z · EA · GW

I think one of the things Rob has that is very hard to replace is his audience. Overall I continue to be shocked by the level of engagement Rob Miles' YouTube videos get. Averaging over 100k views per video! I mostly disbelieve that it would be plausible to hire someone who can (a) understand technical AI alignment well, and (b) reliably create YouTube videos that get over 100k views, for less than something like an order of magnitude higher cost.

I am mostly confused about how Rob gets 100k+ views on each video. My mainline hypothesis is that Rob has successfully built his own audience through his years of making videos, including on places like Computerphile, and that this audience has followed him to his own channel.

Building an audience like this takes many years and often does not pay off. Once you have a massive audience that cares about the kind of content you produce, this is very quickly not replaceable, and I expect that finding someone other than Rob to do this would either take that person 3-10 years to build an audience of this size, or require paying a successful YouTube content creator to change the videos that they are making substantially, in a way that risks losing their audience, and thus require a lot of money to cover the risk (I'm imagining $300k–$1mil per year for the first few years).

Another person to think of here is Tim Urban, who writes Wait But Why. That blog has I think produced zero major writeups in the last year, but he has a massive audience who knows him and is very excited to read his content in detail, which is valuable and not easily replaceable. If it were possible to pay Tim Urban to write a piece on a technical topic of your choice, this would be exceedingly widely-read in detail, and would be worth a lot of money even if he didn't publish anything for a whole year.

Comment by Ben Pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:43:04.465Z · EA · GW

I want to add that Scott isn't describing a disingenuous argumentative tactic; he's saying that the topic causes dialogue to get derailed very quickly. Analogous to the rule that bringing in a comparison to Nazis always derails internet discussion, making claims about whether the position one is advocating is the underdog or the mainstream also derails internet discussion.

Comment by Ben Pace on Sign up for the Forum's email digest · 2020-09-17T18:47:15.807Z · EA · GW

Huh, that's a nice idea. And of course a straightforward "filter for posts I've read".

Comment by Ben Pace on Some thoughts on EA outreach to high schoolers · 2020-09-13T23:23:02.043Z · EA · GW

FWIW I found and read the sequences when I was about 14, and went to a CFAR workshop before uni. I think if these things had happened later they'd have been less impactful for me in a number of ways.

Comment by Ben Pace on Asking for advice · 2020-09-10T19:17:22.360Z · EA · GW

You're welcome :) 

I don't want to claim it happens regularly, but it happens enough that it's become salient to me that I may spend all this time planning for and around the meeting and then have it be wasted effort, such that there's some consistent irritation cost to me interacting with calendlys.

But now that I've put in to words some of my concerns, I think I'll generally like interacting with calendly more now, as I'll notice when I'm feeling this particular worry and more pro-actively deal with it. As I said, I think it's a great tool and I'm glad it exists.

Comment by Ben Pace on Asking for advice · 2020-09-09T19:31:06.062Z · EA · GW

My feelings are both that it's a great app and yet sometimes I'm irritated when the other person sends me theirs.

If I introspect on the times when I feel the irritation, I notice I feel like they are shirking some work. Previously we were working together to have a meeting, but now I'm doing the work to have a meeting with the other person, where it's my job and not theirs to make it happen.

I think I expect some of the following asymmetries in responsibility to happen with a much higher frequency than with old-fashioned coordination:

  • I will book a time, then in a few days they will tell me actually the time doesn't work for them and I should pick again (this is a world where I had made plans around the meeting time and they hadn't)
  • I will book a time, and just before the meeting they will email to say they hadn't realised when I'd booked it and actually they can't make it and need to reschedule, and they will feel this is calendly's fault far more than theirs
  • I will book a time, and they won't show up or will show up late and feel that they don't hold much responsibility for this, thinking of it as a 'technical failure' on the part of calendly.

All of these are quite irritating and feel like I'm the one holding my schedule open for them, right up until it turns out they can't make it.

I think I might be happier if there were an explicit and expected part of the process where the other person confirms they are aware of the meeting and will show up, either by emailing to say "I'll see you at <time>!" or by clicking "going" on the calendar invitation so that I get a notification saying "They confirmed", and only then is it 'officially happening'.

Having written this out, I may start pinging people for confirmation after filling out their calendlys...

Comment by Ben Pace on Does Economic History Point Toward a Singularity? · 2020-09-07T19:05:03.684Z · EA · GW

This is one of my favorite comments on the Forum. Thanks for the thorough response.

Comment by Ben Pace on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T18:12:10.218Z · EA · GW

It sends public signals that you'll submit to blackmail and that you think people shouldn't affiliate with the speaker. The former has strong negative effects on others in EA because they'll face increased blackmail threats, and the latter has negative effects on the speaker and their reputation, which in turn makes it less likely that interesting speakers will want to speak with EA, because they expect EA will submit to blackmail about them if any online mob decides to put its crosshairs on that speaker today.

Comment by Ben Pace on How are the EA Funds default allocations chosen? · 2020-08-12T17:21:26.402Z · EA · GW

Interesting. Thank you very much.

Comment by Ben Pace on How are the EA Funds default allocations chosen? · 2020-08-11T17:13:49.961Z · EA · GW

This seems to be a coincidence. Less than 10% of total donation volume is given according to the default allocation.

I roll to disbelieve? Why do you think this? Like, even if there’s slight variation I expect it’s massively anchored on the default allocation.