Posts

Remuneration In Effective Altruism 2022-07-25T12:35:22.478Z
What psychological traits predict interest in effective altruism? 2022-02-25T15:38:26.226Z
Robin Hanson on the Long Reflection 2021-10-03T16:42:56.880Z
Virtues for Real-World Utilitarians 2021-08-05T22:56:53.819Z
Stefan_Schubert's Shortform 2019-10-04T18:32:56.962Z
Considering Considerateness: Why communities of do-gooders should be exceptionally considerate 2017-05-31T22:41:27.190Z
Effective altruism: an elucidation and a defence 2017-03-22T17:06:50.202Z
Hard-to-reverse decisions destroy option value 2017-03-17T17:54:34.688Z
Understanding cause-neutrality 2017-03-10T17:43:51.345Z
Should people be allowed to ear-mark their taxes to specific policy areas for a price? 2015-09-13T11:01:32.358Z
Effective Altruism’s fact-value separation as a weapon against political bias 2015-09-11T14:58:04.983Z
Political Debiasing and the Political Bias Test 2015-09-11T14:52:47.510Z
Why the triviality objection to EA is beside the point 2015-07-20T19:29:13.261Z
Opinion piece on the Swedish Network for Evidence-Based Policy 2015-06-09T14:35:32.973Z
The effectiveness-alone strategy and evidence-based policy 2015-05-07T10:52:36.891Z

Comments

Comment by Stefan_Schubert on Internationalism is a key value in EA · 2022-08-12T21:04:31.680Z · EA · GW

Cf. this post from 2014, which had a similar message.

Comment by Stefan_Schubert on Let’s not glorify people for how they look. · 2022-08-11T16:46:51.937Z · EA · GW

I disagree - I haven't seen any discussion of this, and the arguments come off as earnest and not as applause lights.

Comment by Stefan_Schubert on EA 1.0 and EA 2.0; highlighting critical changes in EA's evolution · 2022-08-10T15:32:25.925Z · EA · GW

I agree. It seems obvious that effective altruism has changed in important ways. Yes, some characterisations of this change are exaggerated, but to deny that there's been a change altogether doesn't seem right to me.

Comment by Stefan_Schubert on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-07T22:20:01.256Z · EA · GW

Yeah agreed; I got that.

Comment by Stefan_Schubert on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-07T20:40:34.263Z · EA · GW

"EA community building", "making a ton of money", or [being an] "UX designer for EA organisations like 80K" can be pursued in order to mitigate AI risk, but I wouldn't say they're intrinsically "AI-related". Instead of "AI-related career paths" I'd call them something like "career paths that can be useful for addressing AI risk".

Comment by Stefan_Schubert on Most* small probabilities aren't pascalian · 2022-08-07T20:25:16.383Z · EA · GW

I agree with this.

People generally find it difficult to judge the size of these kinds of small probabilities that lack robust epistemic support. That means they could be susceptible to conmen telling them stories of potential events which, though unlikely (according to the listener's estimate), have a substantial expected value due to huge payoffs were they to occur (akin to Pascal's mugging). It may be that people have developed a defence mechanism against this, and reject claims of large expected value involving non-robust probabilities in order to avoid extortion. I once had plans to study this psychological hypothesis empirically, but abandoned them.

Comment by Stefan_Schubert on [Linkpost] Criticism of Criticism of Criticism, ACX · 2022-08-04T22:24:42.356Z · EA · GW

Thanks for asking for his permission and posting this. I think this is an important post that should be on the forum.

Comment by Stefan_Schubert on Emphasizing emotional altruism in effective altruism · 2022-07-30T15:07:19.618Z · EA · GW

I agree that emotional approaches can have upsides. But they can also have downsides. For instance, Paul Bloom has a book-length discussion (which covers EA) on the downsides of empathy. Likewise, it's well-known that moral outrage can have severely negative consequences. I think this post would have benefitted from more discussion on the potential costs of a more emotional strategy, since it seems a bit one-sided to only discuss the potential benefits. (And the comments to this post seem pretty one-sided as well.)

Comment by Stefan_Schubert on Remuneration In Effective Altruism · 2022-07-29T20:56:43.005Z · EA · GW

Thanks, good point. I agree that this is an additional problem for that strategy. My discussion about it wasn't very systematic.

Comment by Stefan_Schubert on Interactively Visualizing X-Risk · 2022-07-29T16:53:59.671Z · EA · GW

What about something relating to hope? Say "the Tree of Hope". Combining the positive "tree" and the more negative "x-risk" may be slightly odd. But it depends on what framing you want to go for.

Comment by Stefan_Schubert on Remuneration In Effective Altruism · 2022-07-29T10:45:59.011Z · EA · GW

I was trying to understand your argument, and suggested two potential interpretations.

there's something odd about very altruistically capable people requiring very high salaries, lest they choose to go and do non-impactful jobs instead

Sound more like your classic corporate lawyer than an effective effective altruist

I don't understand where you're trying to go with these sorts of claims. I'm saying that I believe that compensation helps recruitment and similar, and therefore increases impact; and that I don't think that higher compensation harms value-alignment to the extent that's often claimed. How do the quoted claims relate to those arguments? And if you are trying to make some other argument, how does it influence impact?

I'm not sure the "fraction of monetised impact" bit of that is relevant. As someone who runs an org, I only have access to my budget, not the monetised impact - a job might have '£1m a year of impact', but that's, um, more than 4x HLI's budget. For someone with enormous resources, e.g. Open Phil, it might make more sense to think like this.

It's relevant because orgs' budgets aren't fixed. Funders should take the kind of reasoning I outline here into account when they decide how much to fund an org.

I've been very clear that I don't have non-anecdotal evidence, and called for more research in my post.

Comment by Stefan_Schubert on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-28T14:34:47.942Z · EA · GW

A third strategy is selecting for people who are (somewhat) resilient to status and monetary incentives

Thanks for raising this possibility. However, I think EA has already effectively selected for quite low sensitivity to monetary incentives at least. Also, selecting more for insensitivity to status and monetary incentives than we do now would likely make EA worse along other important dimensions (since they're likely not much correlated with such insensitivity). I also think it's relatively hard to be strongly selective with respect to status insensitivity. For these reasons, my sense is that selecting more for these kinds of incentive insensitivity isn't the right strategy.

Comment by Stefan_Schubert on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-28T14:24:46.494Z · EA · GW

I'd argue that it should happen at the level of choosing how generously to fund different types of organizations, rather than how to set individual salaries within those organizations.

I'm not sure I follow. My argument is (in part) that organisations may want to pay more for impactful roles they otherwise can't fill. Do you object to that, and if so, why?

Comment by Stefan_Schubert on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-28T10:26:33.175Z · EA · GW

I think that we as a community need to put status in better places.

My sense is that it's relatively hard (though not completely infeasible) to change the status of particular jobs by just trying to agree to value them more.

An alternative strategy is to pay more for impactful roles that you can't otherwise fill. That would directly incentivise people to take them, and probably increase their status as well.

Comment by Stefan_Schubert on Remuneration In Effective Altruism · 2022-07-28T09:32:56.251Z · EA · GW

I'm not quite sure I understand the argument. One interpretation of it is that higher salaries don't actually have positive incentive effects on recruitment, motivation, etc. It would be good to have more data on that, but my sense is that they do have an effect. With respect to this argument, one needs to consider how high salaries are as a fraction of the monetised impact of the jobs in question. If that fraction is low, as Thomas Kwa suggests, then it could be worth increasing salaries substantially even if the effect on impact (in terms of percentages) is relatively modest.
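
As a rough illustration of the fraction point, here is a minimal sketch with purely made-up numbers (they are not figures from the post or from Thomas Kwa's comment):

```python
# Hypothetical, illustrative numbers for a single role.
monetised_impact = 1_000_000  # assumed annual monetised impact of the role, in dollars
current_salary = 80_000       # assumed current annual salary
raised_salary = 120_000       # assumed raised salary (a 50% increase)

# Salary as a fraction of the role's monetised impact.
current_fraction = current_salary / monetised_impact  # 0.08
raised_fraction = raised_salary / monetised_impact    # 0.12

# The raise costs 4 percentage points of the role's monetised impact, so even a
# modest improvement in recruitment or retention could outweigh that cost.
print(f"Cost of the raise as a share of impact: {raised_fraction - current_fraction:.0%}")
```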

Another interpretation is that one needs to pay low salaries to filter for value-alignment. I discussed that argument critically in two of my posts.

Comment by Stefan_Schubert on Wanting to dye my hair a little more because Buck dyes his hair · 2022-07-22T23:08:36.430Z · EA · GW

I guess some of those things you could reward monetarily. Monetary rewards seem easier to steer than more nebulous social rewards ("let's agree to celebrate this"), even though the latter should be used as well. (Also, what's monetarily rewarded tends to rise in social esteem; particularly so if the monetary rewards are explicitly given for impact reasons.)

Comment by Stefan_Schubert on Fixing bad incentives in EA · 2022-07-22T17:34:17.348Z · EA · GW

Develop a norm against long-term EA projects and long-term employment in EA

That doesn't seem like a good norm to me. 

If the cash transfers were guaranteed for a lifetime, the motivation to make smart decisions is less

That's not analogous. Individuals and organisations aren't guaranteed continued employment/funding - it's conditional on performance. And given continued performance, I don't think there should be a term limit. I think that would threaten to destroy useful projects. Likewise, it would mean that experienced, valuable staff couldn't continue at their org.

Comment by Stefan_Schubert on The value of content density · 2022-07-21T07:52:51.446Z · EA · GW

My sense is that when you try to skim, you often miss points.

To the extent that one wants something like that, I'd rather employ features like abstracts, appendices, etc, which allow those who want to spend less time on a post to skip parts of it entirely.

Comment by Stefan_Schubert on Impact Markets: The Annoying Details · 2022-07-16T00:02:49.534Z · EA · GW

If we allowed founders to capture more of the surplus, then maybe the founder would end up with $50 billion, and the charity and its beneficiaries would only get $50 billion, which seems much worse for them. Given that our goal here is to help beneficiaries (and by implication charities, since any money charities save goes to the beneficiaries eventually), this seems pretty bad.

As far as I understand, your worry is that investors might pay so much for impact certificates that it lowers total impact (because it means less money to beneficiaries/programs). But shouldn't investors take such effects into account when they decide what to pay for an impact certificate - and when they estimate what other investors will be willing to pay for the impact certificate? As far as I can tell, this argument seems to assume that the investors would be irrational, and/or that impact certificate prices will for some other reason deviate from what's socially optimal. I guess the hope is that the socially optimal price will effectively be akin to a Schelling point, and that investors should try to guess what that price is. If so, they wouldn't overpay for impact certificates, but simply pay the socially optimal amount. But maybe there's some consideration here I'm missing - I'm not read up on these issues.

Comment by Stefan_Schubert on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-06-30T21:54:08.037Z · EA · GW

Great, thanks - I appreciate it. I'd love a systematic study akin to the one Seb Farquhar did years back.

https://forum.effectivealtruism.org/posts/Q83ayse5S8CksbT7K/changes-in-funding-in-the-ai-safety-field

Comment by Stefan_Schubert on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-06-30T21:34:48.306Z · EA · GW

Do you have a rough estimate of the current size?

Comment by Stefan_Schubert on How To Prevent EA From Ever Turning Into a Cult · 2022-06-30T16:38:27.766Z · EA · GW

Do you think it would be better to not suggest any action, or to filter these suggestions without any input from other people? 

I'm not sure what you mean. I'm saying the post should have been more carefully argued.

discouraging intimate relationships between senior and junior employees of EA orgs

I assume that by this you mean employees of the same org (I think that's the natural reading). But the post rather says:

Senior community members should avoid having intimate relationships with junior members.

That's a very different suggestion.

Comment by Stefan_Schubert on How To Prevent EA From Ever Turning Into a Cult · 2022-06-30T10:55:15.775Z · EA · GW

Could you elaborate more on (some of) these considerations and why you think the cultishness risk is being overestimated relative to them?

My intuition is that it's being generally underestimated

I didn't say the cultishness risk is generally overestimated. I said that this particular post overestimates that risk relative to other considerations, which are given little attention. I don't think it's right to suggest a long list of changes based on one consideration alone, while mostly neglecting other considerations. That is especially so since the cult notion is anyway kind of vague.

Comment by Stefan_Schubert on How To Prevent EA From Ever Turning Into a Cult · 2022-06-30T06:54:21.823Z · EA · GW

There are many considerations of relevance for these choices besides the risk of becoming or appearing like a cult. My sense is that this post may overestimate the importance of that risk relative to those other considerations.

I also think that in some cases, you could well argue that the sign is the opposite to that suggested here. E.g. frugality could rather be seen as evidence of cultishness.

Comment by Stefan_Schubert on Let's not have a separate "community building" camp · 2022-06-29T23:52:23.150Z · EA · GW

Yeah, I get that. I guess it's not exactly inconsistent with the "shot through" formulation, but it's probably a matter of taste how to frame it so that the emphasis comes out right.

Comment by Stefan_Schubert on Let's not have a separate "community building" camp · 2022-06-29T22:58:28.716Z · EA · GW

I guess you want to say that most community building needs to be comprehensively informed by knowledge of direct work, not that each person who works in (what can reasonably be called) community building needs to have that knowledge.

Maybe something like "Most community building should be shot through by direct work" - or something more distantly related to that.

Though maybe you feel that still presents direct work and community-building as more separate than ideal. I might not fully buy the one camp model.

Comment by Stefan_Schubert on Let's not have a separate "community building" camp · 2022-06-29T19:26:48.216Z · EA · GW

Yes. I see some parallels between this discussion and the discussion about the importance of researchers being teachers and vice versa in academia. I see the logic of that a bit but also think that in academia, it's often applied dogmatically and in a way that underrates the benefits of specialisation. Thus while I agree that it can be good to combine community-building and object-level work, I think that that heuristic needs to be applied with some care and on a case-by-case basis.

Comment by Stefan_Schubert on Limits to Legibility · 2022-06-29T18:46:22.580Z · EA · GW

I liked this post by Katja Grace on these themes.

Here is one way the world could be. By far the best opportunities for making the world better can be supported by philanthropic money. They last for years. They can be invested in a vast number of times. They can be justified using widely available information and widely verifiable judgments.

Here is another way the world could be. By far the best opportunities are one-off actions that must be done by small numbers of people in the right fleeting time and place. The information that would be needed to justify them is half a lifetime’s worth of observations, many of which would be impolite to publish. The judgments needed must be honed by the same.

These worlds illustrate opposite ends of a spectrum. The spectrum is something like, ‘how much doing good in the world is amenable to being a big, slow, public, official, respectable venture, versus a small, agile, private, informal, arespectable one’.

In either world you can do either. And maybe in the second world, you can’t actually get into those good spots, so the relevant intervention is something like trying to. (If the best intervention becomes something like slowly improving institutions so that better people end up in those places, then you are back in the first world). 

An interesting question is what factor of effectiveness you lose by pursuing strategies appropriate to world 1 versus those appropriate to world 2, in the real world. That is, how much better or worse is it to pursue the usual Effective Altruism strategies (GiveWell, AMF, Giving What We Can) relative to looking at the world relatively independently, trying to get into a good position, and making altruistic decisions.

I don’t have a good idea of where our world is in this spectrum. I am curious about whether people can offer evidence.

Comment by Stefan_Schubert on Leftism virtue cafe's Shortform · 2022-06-29T11:23:43.739Z · EA · GW

Thanks, I think this is an interesting take, in particular since much of the commentary is rather the opposite - that EAs should be more inclined not to try to get into an effective altruist organisation. 

I think one partial - and maybe charitable - explanation for why independent grants are so big in effective altruism is that they scale quite easily - you can just pay out more money, and don't need a managerial structure. By contrast, scaling an organisation takes time and is difficult.

I could also see room for organisational structures that are somewhere in-between full-blown organisations and full independence.

Overall I think this is a topic that merits more attention.

Comment by Stefan_Schubert on "Two-factor" voting ("two dimensional": karma, agreement) for EA forum? · 2022-06-26T12:59:24.943Z · EA · GW

Interesting point. 

I guess it could be useful to be able to see how many have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.

Comment by Stefan_Schubert on EA Forum feature suggestion thread · 2022-06-25T15:59:40.309Z · EA · GW

I would prefer a more failproof anti-spam system; e.g. preventing new accounts from writing Wiki entries, or enabling people to remove such spam. Right now there is a lot of spam on the page, which reduces readability.

Comment by Stefan_Schubert on Product Managers: the EA Forum Needs You · 2022-06-23T15:28:07.826Z · EA · GW

Thanks!

Comment by Stefan_Schubert on Product Managers: the EA Forum Needs You · 2022-06-23T10:05:07.670Z · EA · GW

Extraordinary growth. How does it look on other metrics; e.g. numbers of posts and comments? Also, can you tell us what the growth rate has been per year? It's a bit hard to eyeball the graph. Thanks.

Comment by Stefan_Schubert on Impact markets may incentivize predictably net-negative projects · 2022-06-21T20:11:34.275Z · EA · GW

This kind of thing could be made more sophisticated by making fines proportional to the harm done

I was thinking of this. Small funders could then potentially buy insurance from large funders in order to allow them to fund projects that they deem net positive even though there's a small risk of a fine that would be too costly for them.

Comment by Stefan_Schubert on Impact markets may incentivize predictably net-negative projects · 2022-06-21T15:52:01.202Z · EA · GW

They refer to Drescher's post. He writes:

But we think that is unlikely to happen by default. There is a mismatch between the probability distribution of investor profits and that of impact. Impact can go vastly negative while investor profits are capped at only losing the investment. We therefore risk that our market exacerbates negative externalities.

Standard distribution mismatch. Standard investment vehicles work the way that if you invest into a project and it fails, you lose 1 x your investment; but if you invest into a project and it’s a great success, you may make back 1,000 x your investment. So investors want to invest into many (say, 100) moonshot projects hoping that one will succeed.

When it comes to for-profits, governments are to some extent trying to limit or tax externalities, and one could also argue that if one company didn’t cause them, then another would’ve done so only briefly later. That’s cold comfort to most people, but it’s the status quo, so we would like to at least not make it worse.

Charities are more (even more) of a minefield because there is less competition, so it’s harder to argue that anything anyone does would’ve been done anyway. But at least they don’t have as much capital at their disposal. They have other motives than profit, so the externalities are not quite the same ones, but they too increase incarceration rates (Scared Straight), increase poverty (preventing contraception), reduce access to safe water (some Playpumps), maybe even exacerbate s-risks from multipolar AGI takeoffs (some AI labs), etc. These externalities will only get worse if we make them more profitable for venture capitalists to invest in.

We’re most worried about charities that have extreme upsides and extreme downsides (say, intergalactic utopia vs. suffering catastrophe). Those are the ones that will be very interesting for profit-oriented investors because of their upsides and because they don’t pay for the at least equally extreme downsides.


Comment by Stefan_Schubert on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T13:13:34.995Z · EA · GW

If anything, I think that prohibiting posts like this from being published would have a more detrimental effect on community culture.

Of course, people are welcome to criticise Ben's post - which some in fact do. That's a very different category from prohibition.

Comment by Stefan_Schubert on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T11:46:55.359Z · EA · GW

I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form. 

That seems like a considerable overstatement to me. I think it would be bad if the forum rules said an article like this couldn't be posted.

Comment by Stefan_Schubert on What is the right ratio between mentorship and direct work for senior EAs? · 2022-06-20T09:51:41.747Z · EA · GW

This question is related to the question of how much effort effective altruism as a whole should put into movement growth relative to direct work. That question has been more discussed; e.g. see the Wiki entry and posts by Peter Hurford, Ben Todd, Owen Cotton-Barratt, and Nuño Sempere/Phil Trammell.

Comment by Stefan_Schubert on RyanCarey's Shortform · 2022-06-17T10:15:15.254Z · EA · GW

Yeah, I think it would be good to introduce premisses relating to when AI and bio capabilities that could cause an x-catastrophe ("crazy AI" and "crazy bio") will be developed. To elaborate on a (protected) tweet of Daniel's:

Suppose that you have equally long timelines for crazy AI and for crazy bio, but that you are uncertain about them, and that they're uncorrelated, in your view.

Suppose also that we modify 2 into "a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both crazy AI and crazy bio, and conditional on there being no other x-catastrophe". (I think that captures the spirit of Ryan's version of 2.)

Suppose also that you think that in the world where crazy AI gets developed first, there is a 90% chance of an accidental AI x-catastrophe, and that in 50% of the worlds where there isn't an accidental x-catastrophe, there is a non-accidental AI x-catastrophe - meaning the overall risk is 95% (in line with 3). In the world where crazy bio is instead developed first, there is a 50% chance of an accidental x-catastrophe (by the modified version of 2), plus some chance of a non-accidental x-catastrophe, meaning the overall risk is a bit more than 50%.

Regarding the timelines of the technologies, one way of thinking would be to say that there is a 50/50 chance that we get AI or bio first, meaning there is a 47.5% chance of an AI x-catastrophe and a >25% chance of a bio x-catastrophe (plus additional small probabilities of the slower crazy technology killing us in the worlds where we survive the first one; but let's ignore that for now). That would mean that the ratio of AI x-risk to bio x-risk is more like 2:1. However, one might also think that there is a significant number of worlds where both technologies are developed at the same time, in the relevant sense - and your original argument could potentially be used as it is regarding those worlds. If so, that would increase the ratio between AI and bio x-risk.
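
A minimal sketch of that arithmetic, using the made-up numbers above (the small non-accidental bio component is my own added assumption, since the comment only says "some chance"):

```python
# Made-up illustrative probabilities, mirroring the comment above.
p_ai_first = 0.5   # chance crazy AI is developed before crazy bio
p_bio_first = 0.5  # chance crazy bio is developed first

# AI-first world: 90% accidental risk, plus a non-accidental catastrophe
# in 50% of the worlds without an accidental one.
p_ai_accidental = 0.9
p_ai_cat_given_ai_first = p_ai_accidental + (1 - p_ai_accidental) * 0.5  # 0.95

# Bio-first world: 50% accidental risk, plus "some" non-accidental risk,
# modelled here as an assumed small epsilon (not a number from the comment).
p_bio_accidental = 0.5
epsilon = 0.02
p_bio_cat_given_bio_first = p_bio_accidental + (1 - p_bio_accidental) * epsilon  # 0.51

p_ai_cat = p_ai_first * p_ai_cat_given_ai_first      # 0.475
p_bio_cat = p_bio_first * p_bio_cat_given_bio_first  # a bit over 0.25

print(f"AI x-catastrophe: {p_ai_cat:.1%}, bio x-catastrophe: {p_bio_cat:.1%}")
print(f"Ratio of AI to bio x-risk: roughly {p_ai_cat / p_bio_cat:.1f} : 1")
```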

In any event, this is just to spell out that the time factor is important. These numbers are made up solely for the purpose of showing that, not because I find them plausible. (Potentially my example could be better/isn't ideal.)

Comment by Stefan_Schubert on RyanCarey's Shortform · 2022-06-16T21:34:25.132Z · EA · GW

I like this approach, even though I'm unsure of what to conclude from it. In particular, I like the introduction of the accident vs non-accident distinction. It's hard to get an intuition of what the relative chances of a bio-x-catastrophe and an AI-x-catastrophe are. It's easier to have intuitions about the relative chances of:

  1. Accidental vs non-accidental bio-x-catastrophes
  2. Non-accidental AI-x-catastrophes vs non-accidental bio-x-catastrophes
  3. Accidental vs non-accidental AI-x-catastrophes

That's what you're making use of in this post. Regardless of what one thinks of the conclusion, the methodology is interesting.

Comment by Stefan_Schubert on Are too many young, highly-engaged longtermist EAs doing movement-building? · 2022-06-16T21:28:24.008Z · EA · GW

I agree that more data on this issue would be good (even though I don't share the nervousness, since my prior is more positive). There was a related discussion some years ago about "the meta-trap". (See also this post and this one.)

Comment by Stefan_Schubert on Charlotte's Shortform · 2022-06-16T17:40:29.208Z · EA · GW

Thanks - fwiw I think this merits being posted as a normal article as opposed to on the short form.

Comment by Stefan_Schubert on What is the overhead of grantmaking? · 2022-06-15T20:38:43.757Z · EA · GW

Thanks for doing this; I think this is useful. It feels vaguely akin to Marius's recent question of the optimal ratio of mentorship to direct work. More explicit estimates of these kinds of questions would be useful.

Blonergan's comment is good, though - and it shows the importance of trying to estimate the value of people's time in dollars.

Comment by Stefan_Schubert on Demandingness and Time/Money Tradeoffs are Orthogonal · 2022-06-15T11:11:12.630Z · EA · GW

I've written a blog post relating to this article, arguing that while levels of demandingness are conceptually separate from such trade-offs, what kinds of resources we most demand may empirically affect the overall level of demandingness.

Comment by Stefan_Schubert on What is the right ratio between mentorship and direct work for senior EAs? · 2022-06-15T11:02:45.710Z · EA · GW

Meta-comment - this is a great question. Probably there are many similar questions about difficult prioritisation decisions that EAs normally try to solve individually (and which many, myself included, won't be very deliberate and systematic about). More discussions and estimates about such decisions could be helpful.

Comment by Stefan_Schubert on Nick Bostrom - Sommar i P1 Radio Show · 2022-06-13T19:32:10.847Z · EA · GW

Thanks, very helpful! (For other readers; Gavin compiled all those songs on Spotify.)

Comment by Stefan_Schubert on The importance of getting digital consciousness right · 2022-06-13T13:22:33.179Z · EA · GW

But afaict you seem to say that the public needs to have the perception that there's a consensus. And I'm not sure that they would if experts only agreed on such conditionals.

Comment by Stefan_Schubert on Leftism virtue cafe's Shortform · 2022-06-13T11:19:32.289Z · EA · GW

Good post. I've especially noticed such a discrepancy when it comes to independence vs deference to the EA consensus. It seems to me that many explicitly argue that one should be independent-minded, but that deference to the EA consensus is rewarded more often than those explicit discussions about deference suggest. (However, personally I think deference to EA consensus views is in fact often warranted.) You're probably right that there is a general pattern between stated views and what is in fact rewarded across multiple issues.

Comment by Stefan_Schubert on The importance of getting digital consciousness right · 2022-06-13T11:09:18.634Z · EA · GW

  • More work needs to be done on building consensus among consciousness researchers – not in finding the one right theory (plenty of people are working on that), but identifying what the community thinks it collectively knows.

I'm a bit unsure what you mean by that. If consciousness researchers continue to disagree on fundamental issues - as you argue they will in the preceding section - then it's hard to see that there will be a consensus in the standard sense of the word.

Similarly, you write:

They need to speak from a unified and consensus-driven position.

But in the preceding section you seem to suggest that won't be possible. 

Fwiw my guess is that even in the absence of a strong expert consensus, experts will have a substantial influence over both policy and public opinion.

Comment by Stefan_Schubert on Nick Bostrom - Sommar i P1 Radio Show · 2022-06-13T10:44:30.826Z · EA · GW

Thanks a lot for providing this show with English subtitles!

Some of the songs were excluded for copyright reasons. The complete list of songs (afaik) that Bostrom played can be found here. The original version (with all the music) was ~85 minutes, I think.

Sommar i P1 is one of the most popular programs on Swedish Radio - it's been running since 1959. Max Tegmark has also had an episode.