My upcoming CEEALAR stay

post by quinn · 2020-12-14T06:23:04.709Z · EA · GW · 5 comments


  weights to my time
  possible disruption to these weights
  how formal am I being with time management? 
  hours in absolute terms
  what could go horribly wrong? 
  what are my blindspots? 
  why I'm doing this

I want to publicly state my intentions going into my CEEALAR stay, to reflect on later and observe any drift that occurs for better or worse. I want to be accountable to the community, since in some sense I'm an expenditure of community resources. If you have any feedback or advice, please drop it below! 

I am admitted to CEEALAR from January to July 2021 (six months is the maximum stay for a US citizen on a tourist visa).

weights to my time

I'm forecasting the following weights assigned to my time.

  1. 60% on writing Coq until I complete either 4 or 5 volumes of Software Foundations. I completed volume 1 last year and I might skip volume 4.
  2. 15% Haskell/TypeScript projects. (If I go back to industry after this, I'd like it to be a Haskell job; moreover, I want to get better at web dev, since I dumped a lot of time into ML and can barely hold a website together.)
  3. 5% remote college stuff from back home (a physics course and tutoring for the math dept). (The course is pure credentialism, i.e. I'm not actually learning the material, just optimizing for the A, while the tutoring is to increase my mastery of calc 3 and graph theory.)
  4. 5% writing nonfiction on the following subjects:
    1. infodemics (including organizing a panel for EA Philly, then producing transcripts for the EA Forum)
    2. my thinking about freedom/autonomy and how it impacts moral patienthood and AI safety (responses to When is Unaligned AI Morally Valuable?, Fun Theory, etc.)
    3. the situation of proof assistants & formal verification in AI safety as I forecast it (tentatively called "Going Long on Formal Verification")
  5. 5% producing and hosting the Technical AI Safety Podcast.
  6. 5% commitments to projects I was doing anyway, like this weekly Twitch stream and this group, as well as attending lectures and community events.
  7. 5% applying to colleges. (I have a 3.9/4 community college GPA and I want to do an elite undergrad program when I get back to the States.)

After I complete Software Foundations, I'll do one of two things: (1) pivot to replicating ML papers (right now I'm especially interested in the paper soon to be covered in TASP episode 1, though as I cover more papers for TASP I might discover something else I'd like to replicate), or (2) roll my first proof assistant.
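For readers unfamiliar with Software Foundations, the day-to-day work is interactive theorem proving in Coq. A representative early lemma is of the kind proved in volume 1 (the proof script below is my own sketch of the standard approach):

```coq
(* Addition on Peano naturals recurses on its first argument,
   so [n + 0 = n] is not true by mere computation: it needs induction. *)
Theorem plus_n_O : forall n : nat, n = n + 0.
Proof.
  intros n. induction n as [| n' IHn'].
  - (* n = 0 *) reflexivity.
  - (* n = S n' *) simpl. rewrite <- IHn'. reflexivity.
Qed.
```

Each `-` bullet discharges one case of the induction; the later volumes scale this style of reasoning up from arithmetic to programming languages and type systems.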

After every last college deadline has passed, I'll reapportion that time to something to be determined.

possible disruption to these weights

  1. I get admitted to AI Safety Camp
  2. I get the CHAI internship
  3. I boost the podcast above 1-2 episodes per month, or I'm simply wrong about how long 1-2 episodes per month take.

I think in each of these cases I'll reduce the time I spend on Software Foundations, even if it means not getting to the end of the book before I have to go back to the States.

how formal am I being with time management? 

I haven't decided yet if I want the cognitive overhead of a timekeeping app (clocking in and out for each project, etc.), but the data could be valuable. You might ask "what's the point of declaring weights if you won't be keeping score numerically?", which is a reasonable question. Is it an awful answer to say that I'm just using my intuition to approximate the weights?

I do have formal task management in Jira: multiple kanban boards collated into a single todo list with due dates. I think this is the right step, as it has negligible cognitive overhead.

hours in absolute terms

A lot of this work is intellectually demanding. I think 40 hours/week should be fine, and I should cut myself off around 50/week to avoid burnout. I think one complete day off per week to sit around and recharge is the right move. I'm open to being wrong; I'm open to finding that 30-35 hours per week is my maximal effectiveness.

what could go horribly wrong? 

I could be drastically miscalculating my motivation, interests, and stamina. I don't know what to do to guard against this besides

  1. revising the weights only if I'm faced with drastic motivation/interests misalignment.
  2. going easy on myself in hours per week terms if I'm facing stamina droughts.
  3. remembering to exercise and take my meds.
  4. resting a lot right now, during my last month in the States before I leave, because I want to be refreshed and ready to go when I arrive in Blackpool.

what are my blindspots? 

Can anybody identify any?

why I'm doing this

my vision and ambitions: 


Comments sorted by top scores.

comment by Tetraspace Grouping · 2020-12-14T23:55:45.989Z

I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes but other things keep coming up. I'm working full-time so it might be beyond my time/energy constraints to keep a reasonable pace, but would you be interested in any kind of accountability buddy / sharing notes / etc. kind of thing?

comment by quinn · 2020-12-16T20:52:56.373Z

Maybe! I'm only going for a steady stream of 2-3 chapters per week. Be in touch if you're interested: I'm re-reading the first quarter of PLF, since they published a new version in the time since I knocked out that part of it.

comment by ShayBenMoshe (shaybenmoshe) · 2020-12-14T22:08:11.667Z

Thanks for sharing; that sounds like an interesting plan.

A while ago I tried to think about potential ways to have a large impact via formal verification (after reading this post). I didn't give it much attention, but it looks like others and I don't see a case for this career path being highly impactful, though I'd love to be proven wrong. I would appreciate it if you could elaborate on your perspective on this. I should probably mention that I couldn't find a reference to formal verification in agent foundations (but I didn't really read it), and Vanessa seemed to reference it as a tangential point, but I might be wrong about both.

I'm interested in formal verification from a purely mathematical point of view. That is, I think it's important for math (but I don't think that formalizing [mainstream] math is likely to be very impactful outside of math). Additionally, I am interested in ideas developed in homotopy type theory, because of their connections to homotopy theory, rather than because I think it is impactful.

comment by quinn · 2020-12-16T20:51:24.753Z

Thanks for the comment. I wasn't aware of your and Rohin's discussion on Arden's post. Did you flesh out the inductive alignment idea on LW or the Alignment Forum? It seems really promising to me.

Today I want to jot down some notes more substantive than "wait until I post 'Going Long on FV' in a few months".

FV in AI Safety in particular

As Rohin's comment suggests, both aiming proofs about properties of models toward today's type theories and aiming tomorrow's type theories toward ML face two classes of obstacles: 1. is it possible? 2. can it be made competitive?
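To make "proofs about properties of models" concrete at the smallest possible scale, here's a toy Coq sketch (my own illustration, not drawn from any of the linked discussions): even for a single activation function, we can state and machine-check simple behavioral guarantees.

```coq
Require Import ZArith.
Open Scope Z_scope.

(* A toy "model component": ReLU over the integers. *)
Definition relu (x : Z) : Z := Z.max 0 x.

(* Machine-checked guarantee 1: the output is always nonnegative. *)
Lemma relu_nonneg : forall x, 0 <= relu x.
Proof. intro x. unfold relu. apply Z.le_max_l. Qed.

(* Machine-checked guarantee 2: relu is monotone. *)
Lemma relu_monotone : forall x y, x <= y -> relu x <= relu y.
Proof. intros x y H. unfold relu. apply Z.max_le_compat_l. exact H. Qed.
```

The open question behind obstacle 2 is whether statements in this spirit can be scaled to real models and checked at acceptable cost.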

I've gathered that there's a lot of pessimism about 1, in spite of MIRI's investment in type theory and in spite of the word "provably" in CHAI's charter. My personal expected path to impact as it concerns 1 is "wait until theorists smarter than me figure it out"; I want to position myself to worry about 2.

I think there's a distinction between theories and products, and I think programmers need to be prepared to commercialize results. There's a fundamental question: should we expect that a theory's competitiveness can be improved by one or more orders of magnitude through engineering effort, or will engineering effort only provide improvements of less than an order of magnitude? A lot depends on how you answer this.

  While I agree that proof assistants right now are much slower than doing math proofs yourself, verification is a pretty immature field. I can imagine them becoming a lot better, such that they do actually become better to use than doing math proofs yourself, and I don't think this would be the worst thing to invest in.

Asya may not have been speaking about AI safety here, but my basic thinking is that if less primitive proof assistants end up drastically more competitive, and at the same time there are opportunities to convert results in verified ML into tooling, then expertise in this area could gain a lot of leverage.

FV in other paths to impact

  it is plausibly still worthwhile becoming an expert on formal verification because of the potential applications to cybersecurity. (Though it seems like in that case you should just become an expert on cybersecurity.)

It's not clear to me that grinding FV directly is as wise as, say, getting CompTIA certifications. From the expectation that FV pays dividends in advanced cybersecurity, we cannot conclude that FV is relevant to the early stages of a cybersecurity path.

Related: Information security careers for GCR reduction. I think the software safety standards in a wide variety of fields have a lot of leverage over outcomes.

comment by abergal · 2020-12-18T05:08:13.553Z

I was speaking about AI safety! To clarify, I meant that investments in formal verification work could in part be used to develop those less primitive proof assistants.