Posts

My experience on a summer research programme 2019-09-22T09:54:39.044Z · score: 40 (17 votes)
Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) 2019-09-05T14:56:29.449Z · score: 22 (14 votes)
A summary of Nicholas Beckstead’s writing on Bayesian Ethics 2019-09-04T09:44:24.260Z · score: 35 (15 votes)
How to generate research proposals 2019-08-01T16:38:53.790Z · score: 87 (37 votes)

Comments

Comment by jsevillamol on [WIP] Summary Review of ITN Critiques · 2019-10-09T09:46:32.431Z · score: 13 (7 votes) · EA · GW

Thank you for writing this up - always good to see criticism of key ideas.

I want to contest point 4.

The fact that we can decompose "Good done / extra person or $" into three factors that can be roughly interpreted as Scale, Tractability and Neglectedness is not a problem, but a desirable property.

Ultimately, we want to evaluate marginal cost-effectiveness, i.e. "Good done / extra person or $". However, this is difficult to estimate directly, so we want to split it into simpler factors.

The mathematical identity behind the decomposition guarantees that, by estimating all three factors, we are not leaving anything important behind.
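Concretely, one common way to write the decomposition (following 80,000 Hours' presentation; the exact wording of each factor varies between write-ups) is:

```latex
\underbrace{\frac{\text{good done}}{\text{extra person or \$}}}_{\text{cost-effectiveness}}
=
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scale}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra person or \$}}}_{\text{Neglectedness}}
```

The intermediate terms cancel, which is exactly the guarantee referred to above.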

Comment by jsevillamol on Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) · 2019-09-06T17:05:48.426Z · score: 1 (1 votes) · EA · GW

I do agree with your assessment, and I would be moderately excited about somebody informally researching which algorithms can be quantized, to see if there is low-hanging fruit in terms of simplifying assumptions that could be made in a world where advanced AI is quantum-powered.

However, my current intuition is that there is not much sense in digging into this unless we were somewhat confident that 1) we will have access to QC before TAI, and 2) QC will be a core component of AI.

To give a bit more context on the article: Pablo and I originally wrote it because we disagreed on whether current research in AI Alignment would still be useful if quantum computing were a core component of advanced AI systems.

Had we concluded that quantum obfuscation threatened to invalidate some assumptions made by current research, we would have been more emphatic about the need for quantum computing experts to work on "safeguarding our research" on AI Alignment.

Comment by jsevillamol on Cause X Guide · 2019-09-02T10:49:26.604Z · score: 12 (8 votes) · EA · GW

I like this post a lot; it is succinct and gives EAs a concrete action to take.

Stylistically, I would prefer it if the Organizations section were broken into one paragraph per organization, to make it easier to read.

I like that you precommitted to a transparent way of selecting the new causes you present to readers and limited the scope to 15. I would personally have liked to see them broken into sections according to the method by which they were chosen.

For other readers who are eager for more, here are two others that satisfy the criteria but that I suppose did not make the list:

Atomically Precise Manufacturing (a cause area endorsed by two major organizations: OPP, and FHI via Eric Drexler)

Aligning Recommender Systems (a cause profile with more than 50 upvotes on the EA Forum)

Comment by jsevillamol on How to generate research proposals · 2019-08-07T18:37:50.312Z · score: 4 (3 votes) · EA · GW

As further reading, I recently came across Research as a Stochastic Decision Process, which discusses another systematic approach to research.

Summary copy-pasted from the article:


Many of our default intuitions about how to pursue uncertain ideas are counterproductive:

  • We often try easier tasks first, when instead we should try the most informative tasks first.
  • We often conflate a high-level approach with a low-level instantiation of the approach.
  • We are often too slow to try to disprove our own ideas.

Building frameworks that reify the research process as a concrete search problem can help unearth these incorrect intuitions and replace them with systematic reasoning.
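As a toy illustration of the first point, here is my own sketch (not from the article, and with made-up tasks and numbers) of reifying task selection as a concrete scoring problem:

```python
# A toy sketch: tasks likely to fail fast are tried first,
# since they are the most informative per hour invested.

def priority(task):
    """Rough 'information per hour': chance of failure / time cost."""
    return task["p_failure"] / task["hours"]

# Hypothetical tasks with made-up numbers.
tasks = [
    {"name": "full implementation",    "p_failure": 0.5, "hours": 40},
    {"name": "back-of-envelope check", "p_failure": 0.4, "hours": 1},
    {"name": "literature search",      "p_failure": 0.3, "hours": 4},
]

# The cheap back-of-envelope check comes out on top, even though the
# full implementation is more likely to reveal a fatal flaw eventually.
for task in sorted(tasks, key=priority, reverse=True):
    print(f"{task['name']}: priority {priority(task):.2f}")
```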

Comment by jsevillamol on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-03T20:31:48.730Z · score: 4 (4 votes) · EA · GW

Strong-upvoting because I want to incentivize people to write and share more summaries.

Summaries are awesome and let me get the high-level picture of papers that I would otherwise not have read. This summary in particular is well-written and well-formatted.

Thanks for writing it and sharing it!

Comment by jsevillamol on How to generate research proposals · 2019-08-03T08:41:40.781Z · score: 1 (1 votes) · EA · GW

I totally second the motion of empowering junior researchers to ask for research ideas, here and elsewhere.

Also, I'd encourage you to write down your own research agenda as a blog post on this forum listing some open questions!

It will be useful for other researchers and you will get interesting feedback on your ideas :)