[Link post] Parameter counts in Machine Learning 2021-07-01T15:44:18.410Z
Everyday longtermism in practice 2021-04-06T14:42:14.117Z
Quantum computing timelines 2020-09-15T14:15:29.399Z
Assessing the impact of quantum cryptanalysis 2020-07-22T11:26:21.286Z
My experience as a CLR grantee and visiting researcher at CSER 2020-04-29T19:03:42.434Z
Modelling Vantage Points 2020-01-01T16:50:11.108Z
Quantum Computing : A preliminary research analysis report 2019-11-05T14:25:41.628Z
My experience on a summer research programme 2019-09-22T09:54:39.044Z
Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) 2019-09-05T14:56:29.449Z
A summary of Nicholas Beckstead’s writing on Bayesian Ethics 2019-09-04T09:44:24.260Z
How to generate research proposals 2019-08-01T16:38:53.790Z


Comment by Jsevillamol on Forecasting Newsletter: July 2021 · 2021-08-01T20:30:00.128Z · EA · GW

In Section 4 we shift attention to the computational complexity of agreement, the subject of our deepest technical result. What we want to show is that, even if two agents are computationally bounded, after a conversation of reasonable length they can still probably approximately agree about the expectation of a [0, 1] random variable. A large part of the problem is to say what this even means. After all, if the agents both ignored their evidence and estimated (say) 1/2, then they would agree before exchanging even a single message. So agreement is only interesting if the agents have made some sort of “good-faith effort” to emulate Bayesian rationality.


TYPO: This belongs to the section on Aumann's agreement, but is listed in the problem of priors section 

Comment by Jsevillamol on What would you ask a policymaker about existential risks? · 2021-07-07T07:22:52.233Z · EA · GW

In the last few months my colleague Juan García and I have been interviewing civil servants working on risk management in Spain with a similar purpose. It has gone quite well, and we have both learnt a lot and have been tentatively invited to provide input into the capital's new risk management plan.

Some questions we have been asking (sometimes in a roundabout way, as we learned the vocabulary they are familiar with):

  • As you see it, what are the key functions of your organization?
  • What are the top risks you focus on? How did the current prioritization of risks come to be? Are there any active recurrent efforts to map and consider new risks?
  • What kind of prevention / planning efforts happen for each of the prioritized risks?
  • What tools does your organization have to anticipate emergencies?
  • How does the emergency response apparatus get activated? Who are the key decision makers involved?
  • How are new, unforeseen risks treated? As a concrete recent example, what was the role of your organization during COVID-19?
  • What do you see as the most important function of [the organization you work in]? What are some past operations in your organization you would highlight as examples of the importance of your organization?
  • What initiatives are being taken to improve the operation of your organization? What are your key bottlenecks? What do you think should be improved further?
  • How has public risk management changed in the last few years?
  • What other organizations do you often collaborate with? Can you introduce us to some people there to interview them?
  • How can people interested in improving the system get involved? Specifically, how can academics researching global risk management help you make better decisions?

Some mindset advice:

  • Be friendly, show them you are on their side by focusing on their triumphs rather than their failures. The main goal is to learn their framework, not to push a new framework onto them.
  • Learn their language and use it. If you talk about GCRs right off the bat they will be intimidated and talk in circles. Ask easy questions first, possibly things you could have learned from their websites, to warm them up and learn what concepts they use to think about such things.
  • When talking about more weird things, ask for personal opinions. Civil servants are very careful about saying that they are worried about extreme food shortages if it may reflect on their organization, but they are more willing to note personal worries.
  • Do not push people to talk about things they might not want to talk about. You are going to be keen on talking about existential risk and GCRs. They will want to talk about forest fires and floods. Focus on the commonalities of both things - how are risks in general prioritized?
  • But don't let them talk abstractly. Focus on concrete details and paraphrase.
  • While you are interviewing them to gather info, the best outcome of these conversations is not the interview itself: it's a network of professionals you can contact, and possibly getting involved in some capacity in higher-level decision making.

If you have any questions, feel free to ask, either in a comment or through a PM. Also happy to schedule a meeting if it would be useful.

Comment by Jsevillamol on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-06-30T07:23:30.443Z · EA · GW

I couldn't find an easy way
Here is a spreadsheet with the questions (though the formatting is messed up)

One can also download the Anki file and load it up in 

Comment by Jsevillamol on How do you track your donations? · 2021-06-29T12:40:06.718Z · EA · GW

There exists the option of generating a personalized invitation to join Ayuda Efectiva where you show information on your impact so far. It doesn't explicitly say the amount donated.

Comment by Jsevillamol on How do you track your donations? · 2021-06-29T12:14:44.265Z · EA · GW

I use Ayuda Efectiva to donate. 

They automatically manage your portfolio of donations to the charities they support (AMF, Malaria Consortium, SCI Foundation, Deworm the World and Helen Keller International; I think more are coming later) and keep track of the donations you have made, as well as their expected social impact.

They only operate in Spain, so if you want to get a fiscal deduction and live elsewhere you are out of luck. But I think it is a great model, and I wish it would be copied elsewhere!

Comment by Jsevillamol on What are some moral catastrophes events in history? · 2021-06-22T07:30:40.655Z · EA · GW

Wikipedia has a handy and terrifying list of genocides by death toll

Comment by Jsevillamol on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-25T18:17:04.793Z · EA · GW

One key argument made in the article is that drug use is relatively inelastic - spending more on enforcement does not seem to change the amount of drugs consumed in an area.
I found this persuasive, but I just found one piece of evidence to the contrary: Australia has had a lot of success combating heroin overdoses by enforcing drug trafficking laws 

Obviously the situation in Australia might be different than in other parts of the world, but this gives me a bit of pause. Definitely merits more analysis of the tradeoffs involved!

Comment by Jsevillamol on What are things everyone here should (maybe) read? · 2021-05-19T09:15:01.827Z · EA · GW

Which Coursera course did you take, and do you recommend it?

Comment by Jsevillamol on What are things everyone here should (maybe) read? · 2021-05-19T08:12:37.946Z · EA · GW

I would recommend everyone read the book How to Solve It, by Pólya.

It covers basic techniques for solving a problem, from "solve a simpler problem" to "decompose the problem into subproblems". Its examples are high school trigonometric exercises, but the techniques apply much more widely.

I claim that if you understand the lessons in this book (which, granted, takes a lot of practice you will need to get elsewhere) you get 60% of the benefit of having completed a math major.

Comment by Jsevillamol on What are things everyone here should (maybe) read? · 2021-05-19T08:07:31.412Z · EA · GW

Also, on the topic of probability, Jane Street's guide to probability and making markets is an express introduction and refresher to the topic (more the probability part than the making markets part, though that one is interesting too)

Comment by Jsevillamol on What are things everyone here should (maybe) read? · 2021-05-19T08:01:37.803Z · EA · GW

For econ, I have found the videos on Marginal Revolution University to be a good introduction to the basic concepts for somebody with zero background in economics (especially the course on microeconomics, and to a lesser extent the course on macroeconomics).

For stats I am still searching, but when I was preparing for an interview with DeepMind they recommended PennState's online materials for their STAT 414 and STAT 415 courses, and those are alright.

Comment by Jsevillamol on How should we run the EA Forum Prize? · 2021-05-05T09:11:25.023Z · EA · GW

So for me the prize fulfills some very important purposes. Perhaps the most important two are:

  • Curating the best content
  • Rewarding content creators for producing content


Curating the best content

I regularly use the prize posts as a "summary of the best of the month", which I greatly appreciate. It helps me focus my attention on the best articles of the month. It is also a great experience for the authors, who just publish content as usual and, without any additional overhead, sometimes get selected for the prize. This is how I wish more academic areas worked - everything published openly in a preprint archive, with journals acting as "curators" selecting the best work. I really hope something like this (a "best of the month" selection) remains in the forum.

This is also a very useful function for analyzing a posteriori the impact of the best pieces in the forum, as for example with this post.


Rewarding our content creators for producing content

I would think some people who are especially competitive are motivated by the prize to write more. But I don't know how large a share of the community is like that.

Instead, I think the most important reward is making people feel proud and recognized for their work. When somebody I knew won the forum or comment prize, they were showered with praise and felt happy and appreciated.

This goal is a bit in tension with the goal of curating the best content. People are sometimes disheartened when the prizes are consistently won by professional researchers who have the time and experience to write very good posts.

I think I would like these goals to be somehow separated - though I admit I am a bit confused about how one would go about doing that.

Comment by Jsevillamol on How likely is a nuclear exchange between the US and Russia? · 2021-05-04T19:15:03.075Z · EA · GW

Why do you choose an arithmetic mean for aggregating these estimates? 


This is a good point.

I'd add that as a general rule when aggregating binary predictions one should default to the average log odds, perhaps with an extremization factor as described in (Satopää et al., 2014).

The reasons are a) empirically, it seems to work better, b) the way Bayes' rule works strongly suggests that log odds are the natural unit of evidence, and c) apparently there are some complex theoretical reasons ("external Bayesianism") why this is better (the details go a bit over my head).
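As a concrete illustration (a minimal sketch of my own, not from the original comment), average log odds pooling with an optional extremization factor can be written as:

```python
import math

def pool_log_odds(probs, extremize=1.0):
    """Aggregate binary-event probabilities via average log odds.

    Setting extremize > 1 pushes the pooled forecast away from 0.5,
    in the spirit of Satopaa et al. (2014).
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = extremize * sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-pooled))  # map back to a probability

print(round(pool_log_odds([0.6, 0.7, 0.8]), 3))  # 0.707, vs. arithmetic mean 0.7
```

Note that averaging log odds is equivalent to taking the geometric mean of the odds, so this is the same aggregate this user recommends in a later comment.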

Comment by Jsevillamol on Concerns with ACE's Recent Behavior · 2021-04-16T12:15:14.145Z · EA · GW

What is EAA? Effective Animal Advocacy?

Comment by Jsevillamol on Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? · 2021-04-06T22:36:22.679Z · EA · GW

My (naive) understanding is that the risk of a recession today is not much lower than in 2007-08.

So the answer to whether EAs would be working on this back then rounds down to whether EAs are looking into macroeconomic risk today.

And the answer to that is mixed: there is actually a tag in the Forum for this very problem, which includes a reference to OpenPhil's program on macroeconomic policy stabilization

But there are no articles under that tag, and I haven't heard much discussion on the topic outside of OpenPhil.

Comment by Jsevillamol on How much does performance differ between people? · 2021-04-05T22:21:50.623Z · EA · GW

Thank you for writing this!

While this is not the main focus of the paper, I read your notes about heavy-tailed distributions with quite some interest.

I think that the concept of heavy-tailed distributions underpins a lot of considerations in EA, yet as you remark many people (including me) are still quite confused about how to formalize the concept effectively, and how often it applies in real life.

Glad to see more thinking going into this!

Comment by Jsevillamol on What Makes Outreach to Progressives Hard · 2021-03-15T14:11:22.095Z · EA · GW

I find your steelman convincing (would love more intersectionalists to confirm though!).

Re: downsides of intercause prioritization. Beyond making people feel bad about their work, systematic prioritization can systematically misallocate resources, while a more informal, holistic and intersectional approach is less likely to make this kind of mistake.

Arguably, while EAs are very well aware of the importance of hit-based giving, they are overly focused on a few cause areas. Meanwhile my (naive) impression is that intersectionalists are successfully tackling a much wider array of problem areas and interventions, from community help to international aid and political lobbying.

I do not think it is a stretch to think that prioritization frameworks are partly to blame for cause convergence in the EA community.

Comment by Jsevillamol on What Makes Outreach to Progressives Hard · 2021-03-14T23:40:26.387Z · EA · GW

I feel like invoking worldview diversification here is discussing things at the wrong level. 

It's like saying "oh, it's OK that you believe in intersectionality, because from a worldview diversification perspective we want to work on many causes anyway", and failing to address the fundamental disagreement: within their worldview, an intersectionalist does not find cause prioritization useful.

Like, I feel the crux of intersectionality is about different problems being interwoven in complex, hard-to-understand ways. So as OP pointed out, if you believe this you'll need to address all problems at once by radically restructuring society.

Meanwhile, the crux for worldview diversificationists is that we are not certain of our own values and how they will change, so it is better to hedge our bets by compromising between many views.

Comment by Jsevillamol on Superforecasting in a nutshell · 2021-02-26T18:53:13.834Z · EA · GW


How can I put footnotes on my posts?!?!

Comment by Jsevillamol on Big List of Cause Candidates · 2021-02-17T23:04:17.925Z · EA · GW

Why would research on 'minor' GCRs like the ones mentioned by Arepo be harder than eg AI alignment?

My impression is that there is plenty of good research on eg the effects of CO2 on health, the Flynn effect and Kessler syndrome, and I would say it's much higher quality than extant X-risk research.

Is the argument that they are less neglected?

Comment by Jsevillamol on Everyday Longtermism · 2021-02-01T21:59:58.208Z · EA · GW

Brainstorming some concrete examples of what everyday longtermism might look like:

> Alice is reviewing a CV for an applicant. The applicant does not meet the formal requirements for the job, but Alice wants to hire them anyway. Alice visualizes the hundreds of people making a similar decision to hers. She would be OK with hiring this specific applicant, because she trusts her instincts a lot. But she would not trust 100 people in a similar position to make the right choice; ignoring the recruiting guidelines might disadvantage minorities in an illegible way. She decides that the best outcome would be if everyone chose to just follow the procedure, so she chooses to forgo her intuition in favor of better decision making overall.

> Beatrice is offered a job as a Machine Learning engineer, helping the police with some automated camera monitoring. Before deciding whether to accept, she seeks out open criticism of that line of work, and tries to imagine some likely consequences of developing that kind of technology, both positive and negative. After weighing the balance she realizes that while it would most likely be positive, there is a plausible chance that it would enable really bad situations, and she rejects the job offer.

> Carol reads an interesting article. She wants to share it on social media. She could spend some effort paraphrasing the key ideas in the article, or just share the link. She has internalized that spending one minute summarizing key ideas may well save her friends a lot of time, since they can use her summary to decide whether to read the whole article. Out of habit she summarizes the article as best as she can, making it clear who she genuinely thinks would benefit from reading it.

Comment by Jsevillamol on Some thoughts on EA outreach to high schoolers · 2020-09-16T16:52:43.456Z · EA · GW

Without entering into too many sensitive details, when I have looked at the output of similar programs I have noticed that I was excited about the career path of 1 out of every 3 participants.

But a) I don't know how much of it was counterfactual, b) when I made the estimation I had an incentive to produce an optimistic answer, and c) it relies on my subjective judgement, which you may not trust.

Also worth noting that I think the raw conversion rate is not the right metric to focus on - the outliers usually account for most of the impact of these programs.

Comment by Jsevillamol on Quantum computing timelines · 2020-09-16T14:47:04.518Z · EA · GW

The citation is a link: (Grace, 2020)

Just in case:

Comment by Jsevillamol on Quantum computing timelines · 2020-09-16T11:16:32.362Z · EA · GW

It is not intended to be a calibrated estimate, though we were hoping that it could help others make calibrated estimations.

The ways that a calibrated estimate would differ include:

1. The result is a confidence interval, not a credence interval (most places in the paper where it says probability it should say confidence; I apologize for the oversight), so your choice of prior can make a big difference to the associated credence interval.

2. The model is assuming that no discontinuous progress will happen, but we do not know whether this will hold. (Grace, 2020) estimates a yearly rate of discontinuous breakthroughs on any given technology of 0.1%, so I'd naively expect a 1-(1-0.1%)^20 = 2% chance that there is such a discontinuous breakthrough for quantum computing in the next 20 years.

3. The model makes optimistic assumptions of progress - namely that a) the rate of exponential progress will hold for both the physical qubit count and the gate error rate, b) there is no correlation between the metrics in a system (which we show is probably an optimistic assumption, since it is easier to optimize only one of the metrics than both) and c) we ignore the issue of qubit connectivity due to lack of data and modelling difficulty.

If I were pressed to put a credence bound on it, I'd assign about 95% chance that EITHER the model is basically correct OR the timelines are slower than expected (most likely if the exponential trend of progress on gate error rate does not hold in the next 20 years), for an upper bound on the probability that we will have RSA-2048 quantum attacks by 2040 of <5% + 95% · 5% ≈ 10%.

In either case, I think that the model should make us puzzle over the expert timelines, and inquire whether they are taking into account extra information or being too optimistic.

EDIT: I made an arithmetic mistake, now corrected (thanks to Eric Martin for pointing it out)
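As a quick sanity check (my own addition, not part of the original comment), the two back-of-the-envelope numbers in this comment can be reproduced directly:

```python
# Chance of a discontinuous breakthrough over 20 years at a 0.1% yearly rate
p_discontinuity = 1 - (1 - 0.001) ** 20

# Upper bound: 5% chance the model is wrong (toward faster timelines),
# plus 95% x 5% from the model's own estimate if it is right
p_attack_upper = 0.05 + 0.95 * 0.05

print(f"{p_discontinuity:.1%}")  # 2.0%
print(f"{p_attack_upper:.1%}")   # 9.8%, i.e. roughly 10%
```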

Comment by Jsevillamol on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T09:34:25.263Z · EA · GW

In a context where multiple forecasts have already been made (by you or other people), use the geometric mean of the odds as a blind aggregate:

If you want to get fancy, use an extremized version of this pooling method, scaling the log odds by an extremization factor:

Satopää et al. have found that, in practice, extremizing gives the best results.
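The formulas referenced above appear to have been dropped (they were likely rendered images in the original comment); a plausible reconstruction, writing $p_i$ for each forecaster's probability and $a$ for the extremization factor ($a = 1$ recovers the plain geometric mean of odds), is:

```latex
\hat{o} = \left( \prod_{i=1}^{n} \frac{p_i}{1 - p_i} \right)^{a/n},
\qquad
\hat{p} = \frac{\hat{o}}{1 + \hat{o}}
```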

Comment by Jsevillamol on Assessing the impact of quantum cryptanalysis · 2020-07-24T10:37:23.983Z · EA · GW

I think we broadly agree.

I believe that chemistry and materials science are two applications where quantum computing might be a useful tool, since simulating very simple physical systems is something a quantum computer excels at but which is arguably significantly slower on a classical computer.

On the other hand, the people more versed in materials science and chemistry I talked to seemed to believe that (1) classical approximations will be good enough to approach problems in these areas and (2) in silico design is not a huge bottleneck anyway.

So I am open to a quantum computing revolution in chemistry and materials science, but moderately skeptical.

Summarizing my current beliefs about how important quantum computing will be for future applications:

  • Cryptanalysis => very important for solving a handful of problems relevant to modern security, with no plausible alternative
  • Chemistry and materials science => plausibly useful, not revolutionary
  • AI and optimization => unlikely to be useful, huge constraints to overcome
  • Biology and medicine => not useful, systems too complex to model

Comment by Jsevillamol on Assessing the impact of quantum cryptanalysis · 2020-07-23T11:34:37.205Z · EA · GW

Thank you so much for your kind words and juicy feedback!

Google has already deployed post-quantum schemes as a test

I did not know about this, and it actually updates me on how much overhead will be needed for post-quantum crypto (the NIST expert I interviewed gave me the impression that it was large and would essentially need specialized hardware to meet performance expectations, but this seems to speak to the contrary (?))

There may be significant economic costs due to public key schemes deployed "at rest"

To make sure I understand your point, let me try to paraphrase. You are pointing out that:

1) past communications that are recorded will be rendered insecure by quantum computing

2) there are some transition costs associated with post-quantum crypto - related, for example, to the cost of rebuilding PGP certificate networks.

If so, I agree that this is a relevant consideration but does not change the bottom line.

Thank you again for reading my paper!

Comment by Jsevillamol on Assessing the impact of quantum cryptanalysis · 2020-07-22T15:05:07.043Z · EA · GW

Note that we believe that quantum supremacy has already been achieved.

As in, Google's quantum computer Sycamore is capable of solving a (toy) problem that we currently believe to be infeasible on a classical computer.

Of course, there is a more interesting question of when we will be able to solve practical problems using quantum computing. Experts believe that the median for a practical attack on modern crypto is ~2035.

Regardless, I believe that outside (and arguably within) quantum cryptanalysis the applications will be fairly limited.

The paper in my post goes in more detail about this.

Comment by Jsevillamol on Update on civilizational collapse research · 2020-02-11T10:21:14.734Z · EA · GW

I'm currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, increase in persistent pathogens.

Can you explain the bit about long-lived radionuclides?

How would they be produced? How would they affect "technological carrying capacity"?

Comment by Jsevillamol on [WIP] Summary Review of ITN Critiques · 2019-10-09T09:46:32.431Z · EA · GW

Thank you for writing this up - always good to see criticism of key ideas.

I want to contest point 4.

The fact that we can decompose "Good done / extra person or $" into three factors that can be roughly interpreted as Scale, Tractability and Neglectedness is not a problem, but a desirable property.

Ultimately, we want to evaluate marginal cost-effectiveness, ie "Good done / extra person or $". However this is difficult, so we want to split it up into simpler terms.

The mathematical equation that decomposes the cost-effectiveness serves as a guarantee that by estimating all three factors we will not be leaving anything important behind.
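Concretely, the decomposition in question (as popularized by 80,000 Hours) chains three ratios whose intermediate terms cancel, which is exactly why nothing can be left out:

```latex
\frac{\text{good done}}{\text{extra \$}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scale}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{Neglectedness}}
```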

Comment by Jsevillamol on Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) · 2019-09-06T17:05:48.426Z · EA · GW

I do agree with your assessment, and I would be medium excited about somebody informally researching which algorithms can be quantized, to see if there is low-hanging fruit in terms of simplifying assumptions that could be made in a world where advanced AI is quantum-powered.

However my current intuition is that there is not much sense in digging into this unless we were sort of confident that 1) we will have access to QC before TAI and 2) QC will be a core component of AI.

To give a bit more context to the article, Pablo and I originally wrote it because we disagreed on whether current research in AI Alignment would still be useful if quantum computing were a core component of advanced AI systems.

Had we concluded that quantum obfuscation threatened to invalidate some assumptions made by current research, we would have been more emphatic about the necessity of having quantum computing experts working on "safeguarding our research" on AI Alignment.

Comment by Jsevillamol on Cause X Guide · 2019-09-02T10:49:26.604Z · EA · GW

I like this post a lot; it is succinct and provides a great actionable for EAs to act on.

Stylistically I would prefer if the Organizations section were broken down into a paragraph per organization, to make it easier to read.

I like that you precommitted to a transparent way of selecting the new causes you present to the readers and limited the scope to 15. I would personally have liked to see them broken up into sections depending on which method they were chosen by.

For other readers who are eager for more, here are two others that satisfy the criteria but I suppose did not make it onto the list:

Atomically Precise Manufacturing (cause area endorsed by two major organizations - OPP and Eric Drexler from FHI)

Aligning Recommender Systems (cause profile with more than 50 upvotes in the EA forum)

Comment by Jsevillamol on How to generate research proposals · 2019-08-07T18:37:50.312Z · EA · GW

As further reading I recently came across Research as a Stochastic Decision Process, which discusses another systematic approach to research.

Summary copy pasted from the article:


Many of our default intuitions about how to pursue uncertain ideas are counterproductive:

  • We often try easier tasks first, when instead we should try the most informative tasks first.
  • We often conflate a high-level approach with a low-level instantiation of the approach.
  • We are often too slow to try to disprove our own ideas.

Building frameworks that reify the research process as a concrete search problem can help unearth these incorrect intuitions and replace them with systematic reasoning.

Comment by Jsevillamol on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-03T20:31:48.730Z · EA · GW

Strong upvoting because I want to incentivize people to write and share more summaries.

Summaries are awesome and allow me to understand the high-level points of papers that I would not have read otherwise. This summary in particular is well written and well formatted.

Thanks for writing it and sharing it!

Comment by Jsevillamol on How to generate research proposals · 2019-08-03T08:41:40.781Z · EA · GW

Totally second the motion of empowering junior researchers to ask for research ideas here and elsewhere.

Also, I'd encourage you to write down your own research agenda in the form of a blogpost listing some open questions in this forum!

It will be useful for other researchers and you will get interesting feedback on your ideas :)