Posts

Global catastrophic risks law approved in the United States 2023-03-07T14:28:06.278Z
Introducing the new Riesgos Catastróficos Globales team 2023-03-03T23:04:35.063Z
Epoch Impact Report 2022 2023-02-02T13:09:00.316Z
Literature review of Transformative Artificial Intelligence timelines 2023-01-27T20:36:10.378Z
A digression about area-specific work in LatAm 2022-12-17T14:59:46.710Z
Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south. 2022-12-17T14:44:01.279Z
Supporting Projects in the Spanish-Speaking Effective Altruism Community 2022-12-16T20:09:52.463Z
Join Riesgos Catastróficos Globales 2022-12-15T19:20:04.287Z
The Spanish-Speaking Effective Altruism community is awesome 2022-12-07T17:54:03.166Z
AI Forecasting Research Ideas 2022-11-17T17:37:40.600Z
Some research ideas in forecasting 2022-11-15T19:47:09.399Z
Bahamian Adventures: An Epic Tale of Entrepreneurship, AI Strategy Research and Potatoes 2022-08-09T08:37:36.728Z
Results of a Spanish-speaking essay contest about Global Catastrophic Risk 2022-07-15T16:53:43.538Z
Announcing Epoch: A research organization investigating the road to Transformative AI 2022-06-27T13:39:16.475Z
Estamos expandiendo nuestra red de expert@s en Riesgos Catastróficos Globales 2022-05-23T18:09:55.400Z
Potatoes: A Critical Review 2022-05-10T15:27:28.674Z
Concurso de ensayos sobre Riesgos Catastróficos Globales en Español 2022-03-17T16:49:54.812Z
Patricia Hall & The Warlock Curse 2022-03-06T15:29:42.985Z
Principled extremizing of aggregated forecasts 2021-12-29T18:49:04.187Z
'Tis The Season of Change 2021-12-12T14:02:39.486Z
A bottom-up approach for improving public decision making 2021-12-01T09:14:11.662Z
Improving the Public Management of Global Catastrophic Risks in Spain 2021-12-01T09:13:55.482Z
Takeaways from our interviews of Spanish Civil Protection servants 2021-11-24T09:12:52.711Z
Can we influence the values of our descendants? 2021-11-16T10:36:17.162Z
Persistence - A critical review [ABRIDGED] 2021-11-10T11:30:51.522Z
Jsevillamol's Shortform 2021-10-23T23:04:14.012Z
My current best guess on how to aggregate forecasts 2021-10-06T08:33:20.349Z
Announcing riesgoscatastroficosglobales.com 2021-09-14T15:42:55.087Z
When pooling forecasts, use the geometric mean of odds 2021-09-03T09:58:19.282Z
My first PhD year 2021-08-31T11:31:49.939Z
[Link post] Parameter counts in Machine Learning 2021-07-01T15:44:18.410Z
Everyday longtermism in practice 2021-04-06T14:42:14.117Z
Quantum computing timelines 2020-09-15T14:15:29.399Z
Assessing the impact of quantum cryptanalysis 2020-07-22T11:26:21.286Z
My experience as a CLR grantee and visiting researcher at CSER 2020-04-29T19:03:42.434Z
Modelling Vantage Points 2020-01-01T16:50:11.108Z
Quantum Computing : A preliminary research analysis report 2019-11-05T14:25:41.628Z
My experience on a summer research programme 2019-09-22T09:54:39.044Z
Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) 2019-09-05T14:56:29.449Z
A summary of Nicholas Beckstead’s writing on Bayesian Ethics 2019-09-04T09:44:24.260Z
How to generate research proposals 2019-08-01T16:38:53.790Z

Comments

Comment by Jaime Sevilla (Jsevillamol) on Nick Bostrom should step down as Director of FHI · 2023-03-04T17:51:16.071Z · EA · GW

This post argues that:

  • Bostrom's micromanagement has led to staff retention problems at FHI.
  • Under his leadership, there have been considerable tensions with Oxford University and a hiring freeze.
  • In his apology for his racist email, Bostrom failed to display tact, wisdom and awareness.
  • Furthermore, this apology has created a breach between FHI and its closest collaborators and funders.
  • Both the mismanagement of staff and the tactless apology have caused researchers to resign.

While I'd love for FHI staff to comment and add more context, all of this matches my impressions. 

Given this, I stand with the message of the post. Bostrom has been a better researcher than administrator, and it would make sense for him to focus on what he does best. I'd recommend that Bostrom and FHI consider having him step down as director.

Edit: Sean adds a valuable perspective that I highly recommend reading, highlighting Bostrom's contributions to creating a unique research environment. He suggests co-directorship as an alternative to Bostrom stepping down.

Comment by Jaime Sevilla (Jsevillamol) on There are no coherence theorems · 2023-02-21T06:14:33.986Z · EA · GW

Don't get me wrong, I just think this is an extremely uncharitable and confusing way of presenting your work.

I think it's otherwise a great collection of coherence theorems, and the discussion about completeness seems alright, though I haven't read it closely.

Comment by Jaime Sevilla (Jsevillamol) on There are no coherence theorems · 2023-02-21T05:05:24.286Z · EA · GW

My quick take after skimming: I am quite confused about this post.
Of course the VNM theorem IS a coherence theorem.
How... could it not be a coherence theorem?

It tells you that actors following four intuitive properties can be represented as utility maximisers. We can quibble about the properties, but the result sounds important for understanding agency regardless!

The same reasoning could be applied to argue that Arrow's Impossibility Theorem is Not Really About Voting. After all, we are just introducing all these assumptions about what good voting looks like!

Comment by Jaime Sevilla (Jsevillamol) on There are no coherence theorems · 2023-02-21T04:52:07.583Z · EA · GW

Not central to the argument, but I feel someone should be linking here to Garrabrant's rejection of the independence axiom, which is fairly compelling IMO.

Comment by Jaime Sevilla (Jsevillamol) on Literature review of Transformative Artificial Intelligence timelines · 2023-02-08T15:25:11.181Z · EA · GW

Thank you Lizka, this is really good feedback.

Comment by Jaime Sevilla (Jsevillamol) on Moving community discussion to a separate tab (a test we might run) · 2023-02-07T14:27:43.142Z · EA · GW

I'd personally err towards different subsections rather than different tabs, but I'm glad to see you experimenting to help EA focus on more object-level issues!

Comment by Jaime Sevilla (Jsevillamol) on Donation recommendations for xrisk + ai safety · 2023-02-07T13:25:16.430Z · EA · GW

Here is a write-up of the organisation's vision from one year ago:

https://forum.effectivealtruism.org/posts/LyseHBvjAbYxJyKWk/improving-the-public-management-of-global-catastrophic-risks

Not sure why the link above is not working for you. Here is the link again:

https://riesgoscatastroficosglobales.com/

Comment by Jaime Sevilla (Jsevillamol) on Donation recommendations for xrisk + ai safety · 2023-02-06T22:21:32.611Z · EA · GW

If you want to support work in other contexts, Riesgos Catastróficos Globales is working on improving GCR management in Spain and Latin America.

I believe this project can improve food security in nuclear winter (tropical countries are very promising as last-resort global food producers), biosecurity surveillance (the recent H5N1 episode happened in Spain, and there are some easy improvements to biosecurity in LatAm) and potentially AI policy in Spain.

Funding is very constrained: we currently have runway until May, and each $10k extends the runway by one month.

We are working on a way to receive funds with our new fiscal sponsor, though we can already facilitate a donation if you write to info@riesgoscatastroficosglobales.com.

(disclaimer: I am a co-founder of the org and acting as interim director)

Comment by Jaime Sevilla (Jsevillamol) on Have you tried to bring forecasting techniques to your company? How did it work out? · 2023-02-05T06:09:17.004Z · EA · GW

Have you read this report yet? https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/prediction-markets-in-the-corporate-setting

Comment by Jaime Sevilla (Jsevillamol) on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-05T05:26:56.230Z · EA · GW

FWIW here are a few pieces of uninformed evidence about the Atlas Fellowship. This is scattered, biased and unfair; do not take it seriously.

  1. I have a lot of faith in Jonas Vollmer as a leader of the project, and stories like Habryka's tea table make me think that he is doing a good job of overseeing the project expenses
  2. I have heard other rumours in SF about outrageous expenses like a $100k statue (this sounds ridiculous so I probably misheard?) or spending a lot of money on buying and reforming a venue
  3. I have also heard rumours about a carefree attitude towards money in general, and the staff transmitting that to the alumni
  4. I've also heard someone involved in the project complain about mismanagement and being overworked
  5. I'm surprised that the fellowships seem to be offered unconditionally - having been involved in many talent camps, I'd be surprised if it raises the application quality much, and it seems you could exercise much better discretion after the summer program. But Jonas has experience in grantmaking and finding talent, so maybe all the relevant screening happened before the project (?).

My impression of the project remains positive, and this is mostly driven by the involvement of Jonas.

On the other hand, from the description on paper, I think it's probably less cost-effective and riskier than other efforts like Carreras con Impacto or SPARC.

However, I'd be curious to hear more from Atlas alumni and staff about how they think the project went/is going.

Comment by Jaime Sevilla (Jsevillamol) on Literature review of Transformative Artificial Intelligence timelines · 2023-01-31T22:30:15.672Z · EA · GW

I don't think it's impossible - you could start from Halperin et al.'s basic setup [1] and plug in some numbers for p(doom), the long-run growth rate, etc., and get a market opinion.

I would also be interested in seeing analyses from hedge fund experts and others. In our cursory lit review we didn't come across any that was readily quantifiable (would love to learn if there is one!).

[1] https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or

Comment by Jaime Sevilla (Jsevillamol) on Literature review of Transformative Artificial Intelligence timelines · 2023-01-30T13:21:59.914Z · EA · GW

I am not sure I follow 100%: is your point that the WBE path is disjunctive from the others?

Note that many of the other models implicitly consider WBE, eg the outside-view models.

Comment by Jaime Sevilla (Jsevillamol) on Literature review of Transformative Artificial Intelligence timelines · 2023-01-30T13:11:33.331Z · EA · GW

Extracting a full probability distribution from eg real interest rates requires multiple assumptions about eg GDP growth rates after TAI, so AFAIK nobody has done that exercise.

Comment by Jaime Sevilla (Jsevillamol) on Pineapple now lists marketing, comms & fundraising talent, fiscal sponsorship recs, private database (Jan '23 Update) · 2023-01-27T18:28:32.508Z · EA · GW

$420 per placement is insanely good cost-effectiveness!
In contrast, we spend ~$8,000 per new hire at Epoch on evaluations.
If the process significantly alleviates the vetting burden on the orgs, I am pretty impressed.

Very excited to see the progress of this org!

Comment by Jaime Sevilla (Jsevillamol) on Jsevillamol's Shortform · 2023-01-19T17:50:36.730Z · EA · GW

Quick guide on the agree/disagree voting system:
 

  • When you upvote a post/comment, you are recommending that more people ought to read and engage with it.
  • When you agree-vote a post/comment, you are communicating that you endorse its conclusions/recommendations.
  • Symmetrically, if you downvote a post/comment, you are recommending against engaging with it.
  • And similarly, if you disagree-vote a post/comment, you are communicating that you don't endorse its conclusions/recommendations.

Upvotes determine the order of posts and comments and which comments are automatically hidden, so they have a measurable effect on how many people read them.

Agree votes, AFAIK, do not affect content recommendations, but they are helpful for understanding whether there is community support for a conclusion and, if so, in which direction.

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-19T02:38:37.459Z · EA · GW

Ways of engaging #4: making a database of experts in fields who are happy to review papers and reports from EAs

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-19T02:36:25.844Z · EA · GW

Ways of engaging #3: inviting experts from fields to EAG(X)s

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-19T02:35:58.955Z · EA · GW

Ways of engaging #2: proactively offering funding to experts from the respective fields to work on EA-relevant topics

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-19T02:35:23.863Z · EA · GW

Ways of engaging #1: literature reviews and introductions to each field for an EA audience.

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-19T02:34:13.849Z · EA · GW

More transparency about money flows seems important for preventing fraud, understanding centralization of funding (and so correlated risk) and allowing people to better understand the funding ecosystem!

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-18T19:08:44.195Z · EA · GW

FWIW I was delaying engaging with recent proposals for improving EA, and I really appreciate that Nathan is taking the time to facilitate that conversation.

Comment by Jaime Sevilla (Jsevillamol) on Possible changes to EA, a big upvoted list · 2023-01-18T19:05:48.786Z · EA · GW

Every EA-affiliated org should clearly state on their website the funding sources that contributed over $100k.

Comment by Jaime Sevilla (Jsevillamol) on My personal takeaways from EAGxLatAm · 2023-01-10T15:11:02.640Z · EA · GW

Hi! I recommend that you join the Spanish-speaking community's Slack through this link.

Comment by Jaime Sevilla (Jsevillamol) on EA Forum feature suggestion thread · 2022-12-26T05:42:04.016Z · EA · GW

I can confirm I have access to coauthored post analytics! Great work dev team!

Comment by Jaime Sevilla (Jsevillamol) on Link-post for Caroline Ellison’s Guilty Plea · 2022-12-22T14:58:03.147Z · EA · GW

Props to Wayne for providing regular and consistent updates to his beliefs; that's actually pretty amazing.

Comment by Jaime Sevilla (Jsevillamol) on A digression about area-specific work in LatAm · 2022-12-20T17:19:18.836Z · EA · GW

It's more like:

  1. I am talking about the LatAm community because this is the community I am familiar with.
  2. I don't have great insight into the specific grantmaker case. I suspect they are overvaluing general community-building work over cause-specific work, which I think is a reasonable thing to disagree on.
  3. While the subjects of the post have been repeatedly discouraged (by the grantmakers and others) from doing cause-specific work in LatAm, they have come to interact with and meet other individuals from the UK/US who lack expertise in the topic and who were encouraged and supported to do cause-specific work in LatAm (by different funders, I believe).

I conjecture (but do not claim) that people in the US/UK are better connected and have more opportunities for encouragement and funding compared to people in LatAm. If the people encouraging the US/UK people met these LatAm people, I think they would agree that the latter are better prepared to do this work (since they have cause-specific expertise and local knowledge).

Comment by Jaime Sevilla (Jsevillamol) on A digression about area-specific work in LatAm · 2022-12-19T17:33:35.258Z · EA · GW

Basically, yes, though:

  1. They wanted to do a mixture of "original research" and "community building specifically focused on their area of expertise"
  2. The grantmaker didn't explicitly say they were a bad fit for it, so it could be construed as inquiring about their theory of impact. A charitable interpretation is that the grantmaker put the grant on hold because they thought the would-be grantee was tackling too many tasks simultaneously, or because of external factors (e.g. FTX) that were not clearly communicated.
  3. A similar scenario has happened other times with other people. I highlighted this one because it left a written record behind, so it was easier for me to understand what happened and write about it, even though I don't think it's a good central example.

Comment by Jaime Sevilla (Jsevillamol) on A digression about area-specific work in LatAm · 2022-12-18T19:41:21.588Z · EA · GW

I meant area-specific work (as in, eg, biosecurity projects) in Latin America.

Comment by Jaime Sevilla (Jsevillamol) on A digression about area-specific work in LatAm · 2022-12-18T15:19:06.861Z · EA · GW

In this case, a mixture of developing research, getting involved in existing initiatives, and doing community building for two specific cause areas they have certified expertise in.

This is as opposed to, eg, arranging a translation of The Precipice, or evangelizing and running events about the core ideas of Effective Altruism.

For example, imagine that the people I mentioned intended to work on AI safety and biosecurity on the side while doing community building work.

Comment by Jaime Sevilla (Jsevillamol) on Creating a database for base rates · 2022-12-12T14:01:53.368Z · EA · GW

FWIW I'm one of the future users of this project and am regularly chatting with this team.

My use case is research, eg validating this approach with empirical data.

I expect this database will be useful in the future as a benchmark to test similar approaches, and the program probably justifies its (low) costs on those grounds alone.

Comment by Jaime Sevilla (Jsevillamol) on The Spanish-Speaking Effective Altruism community is awesome · 2022-12-08T21:16:54.570Z · EA · GW

Sounds great, thank you Zoe!

Comment by Jaime Sevilla (Jsevillamol) on Announcing the first issue of Asterisk · 2022-11-23T19:12:17.425Z · EA · GW

I thought that the point was to help with active reading and little more.

Comment by Jaime Sevilla (Jsevillamol) on What is the best source to explain short AI timelines to a skeptical person? · 2022-11-23T17:45:17.999Z · EA · GW

Cold Takes has a pretty good summary of the arguments for <50-year timelines.

Comment by Jaime Sevilla (Jsevillamol) on Some important questions for the EA Leadership · 2022-11-16T21:37:35.234Z · EA · GW

Separately from the FTX issue, I'd be curious to see you dissect which of Zoe's ideas you think are worth implementing, which would make things worse, and why.

 

My takes:

  • Set up whistleblower protection schemes for members of EA organisations => seems pretty good if there is a public commitment from an EA funder to something like "if you whistleblow and are fired, we'll cover your salary while you search for another job", or something like that
  • Transparent listing of funding sources on each website of each institution => seems good to keep track of who receives money from whom
  • Detailed and comprehensive conflict of interest reporting in grant giving => my sense is that this is already handled sensibly enough, though I don't have great insight into grant-giving institutions
  • Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% => this seems bad for incentives and complicated to put into action
  • Within 5 years: EA funding decisions are made collectively => seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
  • No fireside chats at EAG with leaders. Instead, panels/discussions/double-cruxing of disagreements between widely known and influential EAs and between different orgs, and more space for people who are less known => Meh, I'm indifferent, since I just don't consume that kind of content and don't know the effects it has, though I err towards it being somewhat good to give voice to others
  • Increase transparency over:
    • who gets accepted/rejected to EAG and why => seems hard to implement, though there could be some model letters or something
    • the leaders/coordination forum => I don't sense this forum is anywhere near as important as these recommendations imply
  • Set up an 'online forum of concerns' => seems somewhat bad / would lead to overly focusing on things that are not that important, though it would be good to survey people on their concerns

Comment by Jaime Sevilla (Jsevillamol) on Some research ideas in forecasting · 2022-11-16T17:49:02.939Z · EA · GW

I am so dumb - I was mistakenly using odds instead of probabilities to compute the Brier score :facepalm:

And yes, you are right, we should extremize before aggregating. Otherwise, the method is equivalent to geo mean of odds.

It's still not very good, though.
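
For concreteness, here is a minimal Python sketch of the two primitives under discussion: pooling via the geometric mean of odds, and the Brier score, which takes probabilities rather than odds (the bug above). The forecast numbers are made up, and this illustrates the primitives only, not the exact aggregation method from the post:

```python
import numpy as np

def geo_mean_of_odds(probs):
    """Pool binary forecasts via the geometric mean of odds."""
    probs = np.asarray(probs, dtype=float)
    log_odds = np.log(probs / (1 - probs))
    pooled = np.mean(log_odds)        # geometric mean, taken in log-odds space
    return 1 / (1 + np.exp(-pooled))  # back to a probability

def brier_score(prob, outcome):
    """The Brier score expects a probability in [0, 1], not odds."""
    return (prob - outcome) ** 2

forecasts = [0.6, 0.7, 0.9]  # made-up individual forecasts
pooled = geo_mean_of_odds(forecasts)
print(pooled, brier_score(pooled, outcome=1))
```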

Comment by Jaime Sevilla (Jsevillamol) on Some research ideas in forecasting · 2022-11-16T07:12:11.511Z · EA · GW

Thanks Jonas!

  1. I'd forgotten about that great article! Linked.
  2. I feel some of these would be good bachelor's/MSc theses, yeah!

Comment by Jaime Sevilla (Jsevillamol) on Under what conditions should FTX grantees voluntarily return their grants? · 2022-11-11T19:49:57.455Z · EA · GW

It would, however, send a credible signal that the EA community does not benefit from fraud, and create an incentive to 1) better scrutinize future donors and 2) not engage in fraud for the sake of the community.

Comment by Jaime Sevilla (Jsevillamol) on My current best guess on how to aggregate forecasts · 2022-11-06T16:39:11.031Z · EA · GW

Without more context, I'd say: fit a distribution to each array, then aggregate them using a weighted linear combination of the resulting CDFs, assigning each a weight proportional to your confidence in the assumptions that produced the array.
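
A minimal sketch of what I mean, assuming (for illustration only) lognormal fits, synthetic sample arrays and made-up confidence weights:

```python
import numpy as np
from scipy import stats

# Two hypothetical arrays of samples for the same quantity,
# produced under different modelling assumptions
rng = np.random.default_rng(0)
samples_a = rng.lognormal(mean=2.0, sigma=0.5, size=1000)
samples_b = rng.lognormal(mean=2.5, sigma=0.8, size=1000)

# Fit a distribution to each array (lognormal chosen for illustration)
dist_a = stats.lognorm(*stats.lognorm.fit(samples_a, floc=0))
dist_b = stats.lognorm(*stats.lognorm.fit(samples_b, floc=0))

# Weights proportional to your confidence in each set of assumptions
w_a, w_b = 0.7, 0.3

def aggregate_cdf(x):
    # Weighted linear combination (a mixture) of the fitted CDFs
    return w_a * dist_a.cdf(x) + w_b * dist_b.cdf(x)

print(aggregate_cdf(10.0))  # aggregated P(X <= 10)
```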

Comment by Jaime Sevilla (Jsevillamol) on My current best guess on how to aggregate forecasts · 2022-11-06T14:36:33.140Z · EA · GW

Depends on whether you are aggregating distributions or point estimates.

If you are aggregating distributions, I would follow the same procedure outlined in this post, and use the continuous version of the geometric mean of odds I outline in footnote 1 of this post.
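
To make the continuous version concrete, here is a rough sketch of one natural reading: apply the geometric mean of odds pointwise to the forecasters' CDF values. This is my paraphrase under that assumption, not necessarily the exact construction in the footnote:

```python
import numpy as np

def pool_cdfs_geo_odds(cdf_values, weights=None):
    """Pointwise geometric mean of odds applied to CDF values.

    cdf_values: F_1(x), ..., F_n(x) at a fixed x, each strictly in (0, 1).
    Returns the pooled CDF value at x. Doing this for every x yields a
    monotone function from 0 to 1, i.e. a valid aggregated CDF.
    """
    cdf_values = np.asarray(cdf_values, dtype=float)
    log_odds = np.log(cdf_values / (1 - cdf_values))
    pooled_log_odds = np.average(log_odds, weights=weights)
    return 1 / (1 + np.exp(-pooled_log_odds))

# Two forecasters' CDF values at the same point x (made-up numbers)
print(pool_cdfs_geo_odds([0.3, 0.5]))
```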

If you are aggregating point estimates, at this point I would use the procedure explained in this paper, which takes a sort of extremized average. I would also consider a log transform, depending on the quantity you are aggregating (though note that I have not spent as much time thinking about how to aggregate point estimates).

Comment by Jaime Sevilla (Jsevillamol) on Recommend Me EAs To Write About · 2022-10-28T20:11:47.707Z · EA · GW

Some cool people from the Spanish-Speaking community:

  • The coordinator Sandra Malagón, who in the space of one year has kickstarted an EA hub in Mexico and helped raise a community in Chile and Colombia.
  • Pablo Melchor, founder of Ayuda Efectiva, the Spanish GiveWell.
  • Melanie Basnak, senior research manager at Rethink Priorities.
  • Juan García, researcher at ALLFED, who works on food security.
  • Ángela María Aristizábal, researcher at FHI, who works on GCRs and community building.
  • Pablo Stafforini, who built the EA Forum Wiki, is involved in many cool projects and has been involved since the very beginning of EA.
  • Michelle Bruno, an early-career person who now works in community building in Mexico and on a biosecurity project.
  • Jaime Fernández, who works in community building in Colombia and is researching some philosophy topics.
  • Laura González, who co-coordinates the Spanish-speaking community and leads the Spanish translation project.

Comment by Jaime Sevilla (Jsevillamol) on Reslab Request for Information: EA hardware projects · 2022-10-26T14:41:38.154Z · EA · GW

Well, time-travel machines are a type of hardware... 👅

Comment by Jaime Sevilla (Jsevillamol) on Effective Altruism's Implicit Epistemology · 2022-10-20T12:12:12.199Z · EA · GW

Brilliant! I found this a really good introduction to some of the epistemic norms I most value in the EA community.

It's super well written too.

Comment by Jaime Sevilla (Jsevillamol) on The property rights approach to moral uncertainty · 2022-10-16T12:44:20.099Z · EA · GW

PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.

That seemed like the case to me.

I still think that this is too weak, and that theories should be allowed to entirely give up resources without trading, though this is more an intuition than a thoroughly considered point.

Comment by Jaime Sevilla (Jsevillamol) on The property rights approach to moral uncertainty · 2022-10-16T12:41:06.549Z · EA · GW

then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccesible parts of the universe/multiverse, which seems odd

I don't understand 1) why this is the case or 2) why it is undesirable.

If the rest of my community seems obsessed with, IDK, longtermism, and is overallocating resources to it, I think it's entirely reasonable for me to have my inner longtermist shut up entirely and just focus on near-term issues.

I imagine the internal dialogue here between the longtermist and the neartermist being like: "Look, I don't know why you care so much about things that are going to wash off in a decade, but clearly this is bringing you a lot of pain, so I'm just going to let you have it."

I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe

I don't understand what you mean.

it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making

Well then separability is wrong. It seems to me that it matters that no one is working on a problem you deem important, even if that does not affect the chances of you solving the problem.

other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't

I am not familiar with other approaches to moral uncertainty, so you are probably right!

(Generally, I would not take what I am saying too seriously - I find it hard to separate my intuitions about values from my intuitions about how the real world operates, and my responses are more off-the-cuff than considered.)

Comment by Jaime Sevilla (Jsevillamol) on The property rights approach to moral uncertainty · 2022-10-15T11:43:31.252Z · EA · GW

TL;DR (I haven't read the paper in full), so this might be addressed there.

FWIW my first impulse when reading the summary is that proportionality does not seem particularly desirable.

In particular:

  1. I think it's reasonable for one of the moral theories to give up part of its allotted resources if the other moral theory believes the stakes are sufficiently high. The distribution should be stakes-sensitive (though it is not clear how to make comparisons of stakes across moral theories).
  2. The answer does not seem to guide individual action very well, at least in the example. Even accepting proportionality, it seems that how I split my portfolio should be influenced by the resource allocation of the world at large.

Comment by Jaime Sevilla (Jsevillamol) on Changing Licences on the EA Forum · 2022-10-09T16:18:42.422Z · EA · GW

The Stack Overflow case [1] that Thomas linked to in another comment seems like a good place to learn from.

I think multiple-license support on a post-by-post basis is a must. Old posts must be licensed as all-rights-reserved, except for the right of publication on the Forum (which the authors can be understood to have granted de facto when they published).

New posts can be required to use a particular license or (even better) users can choose what license to use, with the default being preferably CC-BY per the discussion on other comments.

The license on all posts should ideally be updatable at will, and I would see it as positive to nudge users to update the license on old posts to CC-BY (perhaps by sending them an email, or showing a popup next time they log in, that gathers their explicit permission to do so).

[1] https://meta.stackexchange.com/questions/333089/stack-exchange-and-stack-overflow-have-moved-to-cc-by-sa-4-0

Comment by Jaime Sevilla (Jsevillamol) on Changing Licences on the EA Forum · 2022-10-09T12:19:40.006Z · EA · GW

To be clear, the thing that made me feel weird is the implication that this would be applied retroactively and without explicit consent from each user (which I assume is not what was meant, but it is how it read to me).

I'm perfectly fine with contributions going forward requiring a specific license as in arXiv (preferably requiring a minimal license that basically allows reproduction in the EA Forum and then having default options for more permissive licenses), as long as this is clearly explained (eg a disclaimer below the publish button, a pop-up, or a menu requiring you to choose a license).

I am also fine with applying this change retroactively, as long as authors give their explicit permission and have a chance beforehand to remove content they do not want to be released this way.

Comment by Jaime Sevilla (Jsevillamol) on Changing Licences on the EA Forum · 2022-10-08T13:48:38.982Z · EA · GW

Epistemic status: out of my depth

  1. The license should be opt-out (in fact I don't think you can legally force a license on the content created by authors without their explicit consent?)
  2. CC-BY would be a much better default choice. Commercial use is an important aspect of truly open source content.
  3. Even better to offer multiple license options on posts, so people can tailor it to their needs. I'm a big fan of how this is handled for example in arXiv or GitHub, with multiple options.

I notice I had a hair-raising chill when reading this part:

we are planning to make Forum content published under a Creative Commons Attribution-NonCommercial license

This made me feel as if you were implying that you own the content on the Forum, which you do not - the respective authors do.

I believe that what you were trying to convey is:

We plan to add an opt-out option for authors to release future content under a XX license

There is also the question of how to handle past content.

The simplest option would be to leave everything with its default option (which for posts without an explicit license would be all-rights-reserved under current copyright law), but add the possibility for authors to change the license manually.

A more cumbersome option, but one that might help increase the availability of content, is some sort of pop-up asking for explicit permission to change all past content of current users to CC-BY, though I imagine that could be more work to implement and is not clearly worth it.

Comment by Jaime Sevilla (Jsevillamol) on What is the most pressing feature to add to the forum? Upvote for in general, agreevote to agree with reasoning. · 2022-10-07T00:13:53.693Z · EA · GW

You can do this easily enough with external tools. I use the StayFocusd extension for Chrome for this.

Comment by Jaime Sevilla (Jsevillamol) on What is the most pressing feature to add to the forum? Upvote for in general, agreevote to agree with reasoning. · 2022-10-07T00:09:57.787Z · EA · GW

Having a TL;DR box at the beginning of posts sounds amazing.