Posts

Did Will MacAskill give this probability estimate? 2022-09-28T07:08:14.156Z
Let's advertise infrastructure projects 2022-09-23T14:01:11.448Z
EA Gather Town: Game night 2022-08-15T17:18:53.943Z
Announcing the EA Gather Town Event Hall as a global hub for online events 2022-08-13T09:01:51.775Z
EAGT Unconference #1 2022-07-26T12:34:53.570Z
EAGT: Social Schelling Time 2022-06-30T14:47:19.249Z
Should large EA nonprofits consider splitting? 2022-06-06T20:20:00.333Z
Can you embed a timestamped YouTube video in a forum post? 2022-06-06T18:42:04.861Z
Revisiting the karma system 2022-05-29T14:19:18.581Z
EAGT: Social Schelling Times (recurring) 2022-05-13T12:04:08.592Z
EAGT update: bespoke rooms for remote orgs/local groups on the EA Gather.Town 2022-05-05T12:39:30.062Z
A tale of 2.5 orthogonality theses 2022-05-01T13:53:17.850Z
EA coworking/lounge space on gather.town 2022-04-26T10:57:26.621Z
Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go? 2021-12-01T15:07:28.054Z
[Creative Writing Contest][Referral][Mostly Fiction] - The Parable of the Heartstone 2021-09-17T13:46:57.347Z
Part 2: The advantages of agencies 2021-07-25T13:04:34.488Z
Part 4: Intra-organizational and non-tech agencies 2021-07-25T13:03:50.149Z
Part 3: Comparing agency organisational models 2021-07-25T13:03:28.698Z
Part 1: EA tech work is inefficiently allocated & bad for technical career capital 2021-07-25T13:03:07.733Z
Is it possible to change user name? 2020-06-26T11:09:59.780Z
I find this forum increasingly difficult to navigate 2019-07-05T10:27:32.975Z
The almighty Hive will 2018-01-28T17:59:07.040Z
Against neglectedness 2017-11-01T23:09:04.526Z

Comments

Comment by Arepo on Let's advertise infrastructure projects · 2022-09-29T10:38:05.299Z · EA · GW

Looks as though they'd charge consultancy fees, though?

Comment by Arepo on Let's advertise infrastructure projects · 2022-09-29T10:37:03.473Z · EA · GW

Thanks! I don't have time to check all the links atm. Do you know whether any/all of them offer free or strongly discounted services?

Comment by Arepo on Did Will MacAskill give this probability estimate? · 2022-09-28T12:05:22.763Z · EA · GW

I must have ctrl-Fed right past it :\ Thanks!

Comment by Arepo on Let's advertise infrastructure projects · 2022-09-27T07:18:43.386Z · EA · GW

I've added TfG. Lynette Bye doesn't look as though she meets the 'free or heavily discounted' requirement.

Comment by Arepo on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T19:39:32.766Z · EA · GW

If we were going to go down the multi-forum path, a less icky option would be to have both forums be open, but one explicitly for more in-depth, more advanced, more aggregative or more whatever posts, and moderation that moved ones that didn't meet the bar back to the more beginner-friendly forum.

Or, keeping the forum as it currently is, we could just add tags that capture whatever it is the OP is trying to capture - 'beginner-friendly', 'not beginner-friendly' or whatever.

If that's not enough, I'd imagine there's some middle ground UX that we could implement.

Comment by Arepo on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T15:30:15.963Z · EA · GW

12 sounds right. None of the other mechanisms obviously suggest that you'd expect the absolute number of high quality posts to decline (or even not to grow). I would echo that it's not clear the average quality is diminishing either, but the forum filtering UI might not be keeping up with the level of input.

Comment by Arepo on Let's advertise infrastructure projects · 2022-09-24T12:50:18.888Z · EA · GW

How could I forget? O_O Added to the OP!

Comment by Arepo on Let's advertise infrastructure projects · 2022-09-24T12:24:38.049Z · EA · GW

Amazing, thanks Kat!

Comment by Arepo on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T20:30:30.938Z · EA · GW

"EAG exists to make the world a better place, rather than serve the EA community or make EAs happy."

I'm wary of this claim. Obviously in some top level sense it's true, but it seems reminiscent of the paradox of hedonism, in that I can easily believe that if you consciously optimise events for abstract good-maximisation, you end up maximising good less than if you optimise them for the health of a community of do-gooders.

(I'm not saying this is a case for or against admitting the OP - it's just my reaction to your reaction)

Comment by Arepo on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T20:22:00.992Z · EA · GW

I'm aware of the form, and trying to think honestly about why I haven't used it/don't feel very motivated to. I think there are a few reasons:

  1. Simple akrasia. There's quite a long list of stuff I could say, some quite subjective, some quite dated, some quite personal and therefore awkward to raise, since it feels uncomfortable criticising individuals. The logistics of figuring out which things are worth mentioning and which aren't are quite a headache.
  2. Direct self-interest. In practice the EA world is small enough that many things I could say couldn't be submitted anonymously without key details removed. While I do believe that CEA are generally interested in feedback, it's difficult to believe - even with the best will in the world - that criticising individuals in particularly strong terms, while they're still at the org, wouldn't lower my expectation of good future interactions with them.
  3. Indirect self-interest/social interest. I like everyone I've interacted with from CEA. Some of them I'd consider friends. I don't want to sour any of those relationships.
  4. Fellow-interest. Some of the issues I could identify relate to group interactions, some of which don't actually involve me, but that I'm reasonably confident haven't been submitted, presumably for similar reasons. I'm especially keen not to accidentally put anyone else in the firing line.
  5. In general I think it's much more effective to discuss issues publicly than anonymously (as this post does) - but that magnifies all the above concerns.
  6. Lack of confidence that submitting feedback will lead to positive change. I could get over some of the above concerns if I were confident that submitting critical feedback would do some real good, but it's hard to have that confidence - both because CEA employees are human, and therefore have status quo bias/a general instinct to rationalise bad actions, and because, as I mentioned, some of the issues are subjective or dated, and therefore might turn out not to be relevant any more, not to be reasonable on my end, or not to be resolvable for some other reason.

I realise this isn't helpful on an object level, but perhaps it's useful meta-feedback. The last point gives me an idea: large EA orgs could seek out feedback actively, by eg posting discussion threads on their best guess about 'things people in the community might feel bad about re us' with minimal commentary, at least in the OPs, and seeing if anyone takes the bait. Many of the above concerns would disappear, or at least ease, if it felt like I was just agreeing with a statement rather than submitting multiple whinges.

(ETA: I didn't give you the agreement downvote, fwiw)

Comment by Arepo on Let's advertise infrastructure projects · 2022-09-23T20:16:50.310Z · EA · GW

That's a cool idea, though I have the feeling that for most projects such as these there'd be relatively little to talk about beyond letting people know the product or service exists. How would you imagine structuring a longer conversation?

Comment by Arepo on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T15:29:05.803Z · EA · GW

Please do, I'd be interested to hear your take :)

Comment by Arepo on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T15:21:05.566Z · EA · GW

Strong agree with all of this. 'Gaming the system' feels like weaksauce - it's not like there's an algorithm evaluators have to agree to in advance, so if CEA feel someone's responded to the letter but not the spirit of their feedback, they can just reject them and say so in the rejection.

Comment by Arepo on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T15:13:13.679Z · EA · GW

"nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to."

I have received EA funding in multiple capacities, and feel quite constrained in my ability to criticise CEA publicly.

Comment by Arepo on Leaning into EA Disillusionment · 2022-09-16T19:04:00.846Z · EA · GW

Some other thoughts about possible factors:

  • Having a small number of people and organisations set the tone for the movement seems in tension with it being 'a movement'. While movements might rally around popular figures, eg MLK, this feels like a different phenomenon, where the independent funding many orgs receive makes the prominence of those people less organic
  • Lack of transparency in some of the major orgs
  • No 'competition' among orgs means if you have a bad experience with one of them, it feels like there's nothing you can do about it, esp since they're so intertwined - which exacerbates the first two concerns
  • Willingness to dismiss large amounts of work by very intelligent people justified by a sense that they're not asking the right questions
  • Strong emphasis on attracting young people to the movement means that as you get older, you tend to feel less kinship with it

Other than the third, which I think is a real problem, one could argue all these are necessary - but I still find myself emotionally frustrated by them to varying degrees. And I imagine if I am, others are too.

Comment by Arepo on Leaning into EA Disillusionment · 2022-09-16T18:26:41.632Z · EA · GW

I would guess both that disillusioned people have low value critiques on average, and that there are enough of them that if we could create an efficient filtering process, there would be gold in there.

Though another part of the problem is that the most valuable people are generally the busiest, and so when they decide they've had enough they just leave and don't put a lot of effort into giving feedback.

Comment by Arepo on Climate Change & Longtermism: new book-length report · 2022-09-16T10:32:39.473Z · EA · GW

That assumes a relatively happy path. If there's some other major one-off catastrophe (eg a major pandemic), the long-term effects of climate change will end up being far harder to deal with.

Comment by Arepo on Modelling Great Power conflict as an existential risk factor · 2022-09-14T08:28:43.717Z · EA · GW

I'm just reading the relevant section in Will's book, and noticed the footnote 'there is some evidence suggesting that future power transitions may pose a lower risk of war, not an elevated one, and some researchers believe that it is equality of capabilities, not the transition process that leads to equality, that raises the risk of war.'

If this is true, and if we believe China is overtaking the US, this implies that accelerating the transition eg by encouraging policies specifically to boost China's growth rate would reduce the risk of conflict (since the period of equality would be shorter).

Comment by Arepo on Modelling the odds of recovery from civilizational collapse · 2022-09-12T10:25:44.863Z · EA · GW

I hadn't seen this post before, but this is basically a summary of the project I've just started doing with an LTFF grant.

Comment by Arepo on Venn diagrams of existential, global, and suffering catastrophes · 2022-09-12T09:07:16.208Z · EA · GW

I'm honestly wondering if we should deliberately reject all the existing terminology and try to start again, since a) as you say, many organisations use these terms inconsistently with each other, and b) the terms aren't etymologically intuitive. That is, 'existential catastrophes' needn't either threaten existence or seem catastrophic, and 'global' catastrophes needn't affect the whole globe, or only the one globe.

Comment by Arepo on Venn diagrams of existential, global, and suffering catastrophes · 2022-09-12T08:45:08.693Z · EA · GW

Also it would be useful to have a term that covered the union of any two of the three circles, esp 'global catastrophe' + 'existential catastrophe', but you might need multiple terms to account for the ambiguity/uncertainty.

Comment by Arepo on Existential risk pessimism and the time of perils · 2022-09-09T12:24:42.506Z · EA · GW

I had another thought on why you might be underrating space settlement. The threats of an engineered pandemic, nuclear war etc constitute a certain type of constantish risk per year per colony. So we can agree that colonies reduce risk from nuclear war, and disagree for now on biorisk.

AIs seem like a separate class of one-off risk. At some point, we'll create (or become) a superhuman AI, and at that point it will either wipe us out or not. If it does, then I agree that even multiple colonies in the solar system, and perhaps even around other stars, wouldn't afford much protection - though they might afford some. But if not, it becomes much harder to envisage another AI coming along and doing what the first one didn't, since now we'd presumably have an intelligence that can match it (and had a head start).

On this view, AI has its own built-in time-of-perilsness, and if, in the scenario where it doesn't wipe us all out, it also doesn't permanently fix all our problems, then space colonisation reduces the risk from the remaining threats by a much larger proportion.

Comment by Arepo on Venn diagrams of existential, global, and suffering catastrophes · 2022-09-08T16:53:01.253Z · EA · GW

One further ambiguity that IMO would be worth resolving if you ever come to edit this is between 'unrecoverable collapse' and 'collapse that in practice we don't recover from'. The former sounds much more specific (eg a Mad Maxy scenario where we render so much surface area permanently uninhabitable by humans that we'd never again be able to develop a global economy) and therefore much lower probability.

Comment by Arepo on Open EA Global · 2022-09-01T16:08:25.477Z · EA · GW

I disagreed at the start of this post on the grounds that I strongly prefer smaller events, but updated towards agreeing fairly strongly, subject to the logistical issues Scott mentions at the start.

  1. Saulius' point that we could try it once seems excellent
  2. I really dislike the desire to police EA culture that people are expressing elsewhere in the comments. Hardcore vegans didn't stop being hardcore vegans when vegan circles started admitting reducetarians et al, and I don't see any reason to think hardcore EAs would disappear or have trouble meeting each other just because less hardcore EAs started showing up.
  3. If they did have trouble meeting each other, you could define one or more submovements that people could opt into, possibly with requirements for entry to their congresses.
  4. The event's culture is still going to be heavily dominated by the talks, marketing, and prearranged norms.
  5. You could raise the price to a profit-making point, funding the smaller, weirder conferences, while still offering free/subsidised access to promising people who couldn't afford it.

Comment by Arepo on Host your EA Fellowship in the EA Gathertown! · 2022-08-23T08:24:54.275Z · EA · GW

I'm on a pretty average-for-Europe connection, for whatever that's worth, and I haven't noticed much lag. It seems to use a few more system resources than Meet (not sure about Zoom), which would make sense given that it's doing quite a bit more. But if you're worried, I would say just give it a go and see how well it works.

To answer the second question, there are some welcoming hours which you can see by subscribing to the calendar, but it's completely fine to just show up and join a coworking desk. There's usually someone around who'll reach out to offer you an intro tour, and if not, there's a norms document next to where you first spawn in the map that lays out the practices we've found most helpful.

Comment by Arepo on $1,000 Squiggle Experimentation Challenge · 2022-08-16T10:22:03.867Z · EA · GW

I'm unsure from this post what the prize criteria are. Like, 'the judges will be assessing based on which entries are the most <what?>'

Comment by Arepo on Announcing Squiggle: Early Access · 2022-08-16T10:19:01.322Z · EA · GW

This looks great! I'm just starting a project that I'll definitely try it out for.

When you say it doesn't have 'robust stability and accuracy', can you be more specific? How likely is it to return bad values? And how fast is it progressing in this regard?

Comment by Arepo on Existential risk pessimism and the time of perils · 2022-08-15T08:28:26.858Z · EA · GW

Hey :)

Re 1, we needn't be talking about planets. In principle, any decently sized rocky body in the asteroid belt can be a colony, or you could just build O'Neill cylinders. They might not be long-term sustainable without importing resources, but doing so wouldn't be a problem in a lot of catastrophic scenarios, eg where some major shock destroyed civilisation on the planets and left behind most of the minerals. In this scenario 'self-sustainability' is more of a sliding scale than a binary property, and having more sustainable-ish colonies seems like it would still dramatically increase resilience.

At some point you'll still hit a physical limit of matter in the system, so such a growth rate wouldn't last that long, but for this discussion it wouldn't need to. Even just having colonies on the rocky planets and major moons would push the probability of any event that didn't intentionally target all outposts getting them all much closer to zero. At a 2^n growth rate (which actually seems very conservative to me in the absence of major catastrophes - Earth alone seems like it could hit that growth rate for a few centuries) I feel like you'd have reduced the risk of non-targeted catastrophes to effectively zero by the time you had maybe 10 colonies?
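
(A toy calculation to illustrate that intuition - my own numbers, and assuming each colony is destroyed independently with probability q, an assumption that correlated or targeted threats would of course violate:)

def p_all_colonies_lost(q: float, n_colonies: int) -> float:
    # Chance a single non-targeted catastrophe destroys every colony, if each
    # colony is lost independently with probability q (a big simplification).
    return q ** n_colonies

for n in (1, 3, 5, 10):
    print(f"{n:2d} colonies: {p_all_colonies_lost(0.9, n):.4f}")
# Even at q = 0.9 per colony, ten colonies leave only a ~35% chance of losing
# them all; at q = 0.5 it drops below 0.1%.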

Re 2, I think we're disagreeing where you say we're agreeing :P - I think the EA movement probably overestimates the probability of 'recovery' from global catastrophe, esp where 'recovery' really means 'get all the way to the glorious Virgo supercluster future'. If I'm right, then such catastrophes are effectively existential risks with a 0.1 multiplier, or whatever you think the probability of non-recovery is.

In scenarios such as the sleeper virus, it seems like more colonies would still provide resilience. Presumably if it's possible to create such a virus it's possible to detect and neutralise it before its activation, and the probability of doing so is some function of time - which more colonies would give you more of, if you couldn't activate it til it had infected everyone. I feel like this principle would generalise to almost any technological threat that was in principle reversible.

Comment by Arepo on Existential risk pessimism and the time of perils · 2022-08-13T13:02:08.914Z · EA · GW

I think this was a fantastic post - interesting and somehow, among all that maths, quite fun to read :P I'm about to start working on a related research project, so if you'd be willing to talk through this in person I'd love to arrange something. I'll PM you my contact details.

A couple of stylistic suggestions:

Change the title! My immediate response was 'oh god, another doom and gloom post', and one friend already mentioned to me he was going to put off reading it 'until he was in too good a mood'. I think that's doing it an injustice.

Label the tables' axes explicitly - I found them hard to parse, even having felt like I grasped the surrounding discussion.

Substantively, my main disagreement is on the space side. Firstly, I agree with weeatquince that in 20 centuries we could get way beyond one offworld colony. Elon Musk's goal, last I heard, was to get a self-sustaining Mars settlement by the 2050s, and if we get that far I'd expect the 'forcing function' he/Zubrin describe to incentivise faster colonisation of other habitats. Even assuming Musk is underestimating the challenge by a factor of 2-3, such that it takes a century, then if there aren't any hard limits like insuperable health problems associated with microgravity, a colonisation rate of 1 new self-sustaining colony per colony per century - ie 2^n total colonies, where n is the number of centuries from now - seems quite plausible to me! At least up to the point where we've started using up all the rocky bodies in the solar system (but we'll hopefully be sending colonisation missions to other star systems by then - which would necessarily be capable of self-sufficiency even if our whole system is gone by the time they get there).

Secondly, I think even short term it's a better defence than you give it credit for. Engineered pandemics would have a much tougher time both reaching Mars without advance warning from everyone dying on Earth, and then spreading around a world which is presumably going to be heavily airlocked and generally have much more active monitoring and control of its environment. Obviously it's hard to say much about 'other/unforeseen anthropogenic risks', but we should presumably discount some of their risk for similar reasons. More importantly IMO, the 'green' probability estimates you cite are for those things directly wiping out humanity, not the risk of them causing some major catastrophe, the latter of which I would guess is in total much higher than the risk of catastrophe from the red box. And IMO the EA movement tends to overestimate the probability that we can fully rebuild (and then reach the stars) from civilisational collapse. If you put that probability at 90-99%, then such catastrophes are essentially a 0.01-0.1x multiplier on the loss of value from extinction - so they could still end up being a major x-risk factor if the triggering events are sufficiently likely (this is the focus of the project I mentioned).

Comment by Arepo on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-08T08:28:43.303Z · EA · GW

I can't follow what you're saying in the 'AGI will be aligned by default' section. I think you're saying in that scenario it will be so good that you should disregard everything else and try and make it happen ASAP? If so, that treats all other x-risk and trajectory change scenarios as having probability indistinguishable from 0, which can't be right. There's always going to be one you think has distinctly higher probability than the others, and (as a longtermist) you should work on that.

Comment by Arepo on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-08T08:21:41.689Z · EA · GW

Luisa's post addresses our chance of getting killed 'within decades' of a civilisational collapse, but that's not the same as the chance that collapse prevents us ever becoming a happy intergalactic civilisation, which is the end state we're seeking. If you think the probability that we'd still eventually get there, given a global collapse, is 90%, then the effective x-risk of that collapse is 0.1 * <its probability of happening>. One order of magnitude doesn't seem like that big a deal here, given all the other uncertainties around our future.

Comment by Arepo on Announcing: EA Engineers · 2022-07-07T18:04:41.414Z · EA · GW

I love this! EA needs more engineers of all varieties, IMO. I don't think you say explicitly what your funding model will be - would you aim to be a nonprofit or a fee-paid consultancy, or something else?

Comment by Arepo on Impact markets may incentivize predictably net-negative projects · 2022-06-29T08:20:26.696Z · EA · GW

Why? The less scrupulous one finds Anthropic in their reasoning, the less weight a claim that Wuhan virologists are 'not much less scrupulous' carries.

Comment by Arepo on Impact markets may incentivize predictably net-negative projects · 2022-06-23T10:19:07.171Z · EA · GW

Strong disagree. A bioweapons lab working in secret on gain-of-function research for a somewhat belligerent despotic government, which denies everything after an accidental release, is nowhere near any model I have of 'scrupulous altruism'.

Ironically, the person I mentioned in my previous comment is one of the main players at Anthropic, so your second paragraph doesn't give me much comfort.

Comment by Arepo on Impact markets may incentivize predictably net-negative projects · 2022-06-23T08:31:23.469Z · EA · GW

I'm talking about the unilateralist's curse with respect to actions intended to be altruistic, not the uncontroversial claim that people sometimes do bad things. I find it hard to believe that any version of the lab leak theory involved all the main actors scrupulously doing what they thought was best for the world.

"I think we should be careful with arguments that such and such existential risk factor is entirely hypothetical."

I think we should be careful with arguments that existential risk discussions require lower epistemic standards. That could backfire in all sorts of ways, and leads to claims like one I heard recently from a prominent player, when I asked for evidence for a claim about artificial intelligence prioritisation: that it was 'too important to lose to measurability bias'.

Comment by Arepo on Impact markets may incentivize predictably net-negative projects · 2022-06-22T17:42:35.894Z · EA · GW

Is there any real-world evidence of the unilateralist's curse being realised? My sense historically is that this sort of reasoning to date has been almost entirely hypothetical, and has done a lot to stifle innovation and exploration in the EA space.

Comment by Arepo on Does the Forum Prize lead people to write more posts? · 2022-06-08T11:10:12.172Z · EA · GW

Another vote against this being a wise metric, here. Anecdotally, while writing my last post when (I thought) the prize was still running, I felt both a) incentivised to ensure the quality was as high as I could make it and b) less likely to actually post as a consequence (writing higher quality takes longer).

And that matches what I'd like to see on the forum - better signal to noise ratio, which can be achieved both by increasing the average quality of posts and by decreasing the number of marginal posts.

Comment by Arepo on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T10:32:15.517Z · EA · GW

Unsurprisingly I disagree with many of the estimates, but I very much like this approach. For any analysis of any action, one can divide the premises arbitrarily many times. You stop when you're comfortable that the granularity of the priors you're forming is high enough to outweigh the opportunity cost of further research, which is how any of us can literally take any action.

In the case of 'cluelessness', it honestly seems better framed as 'laziness' to me. There's no principled reason why we can't throw a bunch of resources at refining and parameterising cost-effectiveness analyses like these, but GiveWell afaict don't do it because they like to deal in relatively granular priors, and longtermist organisations don't do it because, post-'Beware Surprising and Suspicious Convergences', no-one takes seriously the idea that global poverty research could be a good use of longtermist resources. I think that's a shame, both because it doesn't seem either surprising or suspicious to me that high-granularity interventions could be more effective long-term than low-granularity ones (eg 'more AI safety research') - IMO the planning fallacy gets much worse over longer periods - and because this...

"Plausibly what we really need is more emphasis on geopolitical stability, well-being enhancing values, and resilient, well-being enhancing governance institutions. If that were the case, I’d expect the case for altruistically donating bednets to help the less well-off is fairly straightforward."

... seems to me like it should be a much larger part of the conversation. The only case I've seen for disregarding it amounts to hard cluelessness - we 'know' extinction reduces value by a vast amount (assuming we think the future is +EV) - whereas trajectory change is difficult to map out. But as above, that seems like lazy reasoning that we could radically improve if we put some resources into it.

Comment by Arepo on Should large EA nonprofits consider splitting? · 2022-06-07T10:47:36.816Z · EA · GW

"I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is."

This doesn't sound like most people's view on democracy to me. Normally it's more like 'we have to relinquish control over our lives to someone, so it gives slightly better incentives if we have a fractional say in who that someone is'.

I'm reminded of Scott Siskind on prediction markets - while there might be some grantmakers who I happen to trust, EA prioritisation is exceptionally hard, and I think 'have the community have as representative a say in it as they want to have' is a far better Schelling point than 'appoint a handful of gatekeepers and encourage everyone to defer to them'.

"First of all, relevant xkcd."

This seems like a cheap shot. What's the equivalent of a systemwide security risk in this analogy? Looking at the specific CEA form example, if you fill out a feedback form at the event, do CEA currently need to share it among their forum, community health, and movement-building departments? If not, then your privacy would actually increase post-split, since the minimum number of people you could usefully consent to sharing it with would have decreased.

Also, what's the analogy where you end up with an increasing number of sandboxes? The worst case scenario in that respect seems to be 'organisations realise splitting didn't help and recombine to their original state'.

"Secondly, this may be true in some aspects but not in others, and I'd still expect overhead to increase, or some things to become much more challenging."

I agree in the sense that overhead would increase in expectation, but a) the gains might outweigh it - IMO higher-fidelity comparison is worth a lot - and b) it also seems like there's a <50% but plausible chance that movement-wide overhead would actually decrease, since you'd need shared services for helping establish small organisations. And that's before considering things like efficiency of services, which I'm confident would increase for the reasons I gave here.

Comment by Arepo on Revisiting the karma system · 2022-05-30T18:48:38.236Z · EA · GW

Fwiw I didn't downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I'm also finding it hard to parse some of what you say. 

"A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon."

This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I'll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believe that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited 10-year-old paper, which is available publicly for any EA to check.

Another odd belief, albeit one which seems more muddled than mistaken, is the role of neglectedness in 'ITN' reasoning. What we ultimately care about is the amount of good done per resource unit, ie, roughly, <importance>*<tractability>. Neglectedness is just a heuristic for estimating tractability absent more precise methods. Perhaps it's a heuristic with interesting mathematical properties, but it's not a separate factor, as it's often presented. For example, in 80k's new climate change profile, they cite 'not neglected' as one of the two main arguments against working on it. I find this quite disappointing - all it gives us is a weak a priori probabilistic inference which is totally insensitive to the type of things the money has been spent on and the scale of the problem, which tells us much less about tractability than we could learn by looking directly at the best opportunities to contribute to the field, as Founders Pledge did.
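
To put that in symbols - just restating the framing above, not 80k's official ITN definitions - neglectedness doesn't appear as a separate factor:

\[
  \underbrace{\frac{\text{good done}}{\text{unit of resources}}}_{\text{what we care about}}
  =
  \underbrace{\frac{\text{good done}}{\text{fraction of problem solved}}}_{\text{importance}}
  \times
  \underbrace{\frac{\text{fraction of problem solved}}{\text{unit of resources}}}_{\text{tractability}}
\]

On this framing, how neglected an area is matters only insofar as it shifts your estimate of the tractability term (eg via diminishing returns to additional resources).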

"Also, it seems like you are close to implicating literally any belief?"

I don't know why you conclude this. I specified 'belief shared widely among EAs and not among intelligent people in general'. That is a very small subset of beliefs, albeit a fairly large subset of EA ones. And I do think we should be very cautious about a karma system that biases towards promoting those views.

Comment by Arepo on Revisiting the karma system · 2022-05-30T18:03:13.131Z · EA · GW

For those who enjoy irony: the upvotes on this post pushed me over the threshold not only for 6-karma strong upvotes, but for my 'single' upvotes now being double-weighted.

Comment by Arepo on Revisiting the karma system · 2022-05-30T09:32:59.368Z · EA · GW

"Often authors mention the issue, but don't offer any specific instances of groupthink, or how their solution solves it, even though it seems easy to do—they wrote up a whole idea motivated by it."

You've seriously loaded the terms of engagement here. Any given belief shared widely among EAs and not among intelligent people in general is a candidate for potential groupthink, but precisely because they are shared EA beliefs, if I just listed a few of them I would expect you and most other forum users to consider them not to be groupthink - because things we believe are true don't qualify.

So can you tell me what conditions you think would be sufficient to judge something as groupthink before I try to satisfy you? 

Also do we agree that if groupthink turns out to be a phenomenon among EAs then the karma system would tend to accentuate it? Because if that's true then unless you think the probability of EA groupthink is 0, this is going to be an expected downside of the karma system - so the argument should be whether the upsides outweigh the downsides, not whether the downsides exist.

Comment by Arepo on Revisiting the karma system · 2022-05-30T08:47:06.760Z · EA · GW

As a datum I rarely look beyond the front page posts, and tbh the majority of my engagement probably comes from the EA forum digest recommendations, which I imagine are basically a curated version of the same.

Comment by Arepo on Revisiting the karma system · 2022-05-30T08:39:17.968Z · EA · GW

'Personally I'd rather want the difference to be bigger, since I find it much more informative what the best-informed users think.'

This seems very strange to me. I accept that there's some correlation between upvoted posters and epistemic rigour, but there's a huge amount of noise, both in the reasons for upvotes and in subject areas. EA includes a huge diversity of subject areas, each requiring specialist knowledge. If I want to learn improv, I don't go to a Fields Medallist or a Pulitzer-prize-winning environmental journalist, so why should the equivalent be true on here?

Comment by Arepo on Some unfun lessons I learned as a junior grantmaker · 2022-05-27T15:29:39.734Z · EA · GW

That makes sense, though I don't think it's as clear a dividing line as you make out. If you're submitting a research project, for example, you could spend a lot of time thinking about parameters vs talking about the general thing you want to research, and the former could make the project sound significantly better - but it also runs the risk that you get rejected because those aren't the parameters the grant manager is interested in.

Comment by Arepo on Some unfun lessons I learned as a junior grantmaker · 2022-05-26T08:43:03.956Z · EA · GW

'It’s rarely worth your time to give detailed feedback'

This seems at odds with the EA Funds' philosophy that you should make a quick and dirty application that should be 'the start of a conversation'.

Comment by Arepo on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-19T08:54:24.809Z · EA · GW

I think you're mixing up updates and operations. If I understand you right, you're saying each user on the forum can get promoted at most 16 times, so each strong upvote they've cast gets incremented at most 16 times.

But you have to count the operations of the algorithm that does that. My naive effort is something like this: Each time a user's rank updates (1 operation), you have to find and update all the posts and users that received their strong upvotes (~N operations where N is either their number of strong upvotes, or their number of votes depending on how the data is stored). For each of those posts' users, you now need to potentially do the same again (N^2 operations in the worst case) and so on. 

(Using the big-O approach of worst case after ignoring constants.)
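
To make the cascade concrete, here's a minimal Python sketch of the recount, with entirely made-up karma thresholds and vote weights (a toy model, not the forum's actual code):

from collections import defaultdict

# Hypothetical tiers: (minimum karma, strong-upvote weight) - invented for
# illustration, not the forum's real thresholds.
TIERS = [(0, 2), (100, 3), (1000, 6), (10000, 8)]

def weight_for(karma_total: int) -> int:
    # Strong-upvote weight implied by a user's karma under the made-up tiers.
    return max((w for threshold, w in TIERS if karma_total >= threshold), default=2)

karma = defaultdict(int)            # user -> current karma
strong_upvotes = defaultdict(list)  # voter -> list of [recipient, weight counted so far]

def recount_strong_upvotes(voter):
    # Re-apply the voter's past strong upvotes at their new weight; if that
    # pushes a recipient over a tier boundary, recurse - this is the cascade,
    # which in the worst case gives the ~N^2, N^3, ... blow-up described above.
    new_weight = weight_for(karma[voter])
    for vote in strong_upvotes[voter]:
        recipient, old_weight = vote
        delta = new_weight - old_weight
        if delta == 0:
            continue
        vote[1] = new_weight
        recipient_weight_before = weight_for(karma[recipient])
        karma[recipient] += delta
        if weight_for(karma[recipient]) != recipient_weight_before:
            recount_strong_upvotes(recipient)

# Example: Alice strong-upvoted Bob at weight 2, then gains enough karma to
# cross the (made-up) 100-karma tier, so her old vote is re-counted at weight 3.
strong_upvotes["alice"].append(["bob", 2])
karma["bob"] += 2
karma["alice"] += 150
recount_strong_upvotes("alice")   # bumps Bob's karma from 2 to 3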

The exponent probably couldn't get that high - eg maybe you could prove no cascade would cause a user to be promoted more than once in practice (eg each karma threshold is >2x the previous, so if a user was one karma short of it, and all their karma was in strong upvotes, then at most their karma could double unless someone else was somehow multiply promoted), so I was probably wrong that it's computationally intractable. I do think it could plausibly impose a substantial computational burden on a tiny operation like CEA though, so someone would need to do the calculations carefully before trying to implement it.

There's also the philosophical question of whether it's a good idea - if we think increasing karma is a proxy for revealing good judgement, then we might want to retroactively reward users for upvotes from higher ranked people. If we think it's more like a proxy for developing good judgement, then maybe the promotee's earlier upvotes shouldn't carry any increased weight, or at least not as much.

Comment by Arepo on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-18T14:37:45.857Z · EA · GW

To be clear, I'm looking at the computational costs, not algorithmic complexity which I agree isn't huge.

Where are you getting 2x from for computations? If User A has cast strong upvotes to up to N different people, each of whom has cast strong upvotes to up to N different people, and so on up to depth D, then naively a promotion for A seems to take O(N^D) operations, as opposed to O(1) for the current algorithm. (Though maybe D is a function of N?)

In practice, as Charles says, big O is probably giving a very pessimistic view here, since there's a large gap between most ranks, so maybe it's workable - though if a lot of the forum's users are new (eg if forum use grows exponentially for a while, or if users cycle over time) then you could have a large proportion of users in the first three ranks, ie relatively likely to be promoted by a given karma increase.

Comment by Arepo on EA will likely get more attention soon · 2022-05-18T14:06:08.683Z · EA · GW

I just posted a comment giving a couple of real-life anecdotes showing this effect.

Comment by Arepo on EA will likely get more attention soon · 2022-05-18T14:04:44.051Z · EA · GW

"For the last several years, most EA organizations did little or no pursuit of media coverage. CEA’s advice on talking to journalists was (and is) mostly cautionary. I think there have been good reasons for that — engaging with media is only worth doing if you’re going to do it well, and a lot of EA projects don’t have this as their top priority."

I think this policy has been noticeably harmful, tbh. If the supporters of something won't talk to the media, the net result seems to be that the media talk to that thing's detractors instead, and so you trade low-fidelity positive reporting for lower-fidelity condemnation.

Two real-life anecdotes to support this: 

  1. At the EA hotel, we turned away a journalist at the door, who'd initially contacted me sounding very positive about the idea. He wrote a piece about it anyway, and instead of interviews with the guests, concluded with a perfunctory summary of the neighbours' very lukewarm views.
  2. At a public board games event we were introducing ourselves while setting up for a 2-hour game, and described my interest in EA as a way of making conversation. The only person at the table who recognised the name turned to me and said 'oh... that's the child molestation thing, right?' It turns out everything he knew about the movement was from a note published by Kathy Forth making various unsubstantiated accusations about the EA and rationalist movements without distinguishing between them. I felt morally committed to the game at that point, so... that was an uncomfortable couple of hours.