Comment by milan_griffes on What type of Master's is best for AI policy work? · 2019-02-23T00:27:10.665Z · score: 2 (1 votes) · EA · GW

Oh interesting. Reflecting back, do you feel like you should've just gone into the Civil Service directly, or are you happy with the route you chose?

Comment by milan_griffes on What's the best Security Studies Master's program? · 2019-02-22T23:55:11.004Z · score: 3 (2 votes) · EA · GW

Update: since posting, I've heard that Georgetown's program is the best (a little cheaper than equally prestigious programs, more schedule flexibility).

Comment by milan_griffes on How Can Each Cause Area in EA Become Well-Represented? · 2019-02-22T23:05:44.002Z · score: 2 (1 votes) · EA · GW

Thanks for taking the time & care to write this up.

A lot of this info isn't made publicly or easily accessible in events, materials, and resources provided by the EA community. This isn't because of negligence.

Could you expand a little more on what considerations you think are driving this?

Comment by milan_griffes on What type of Master's is best for AI policy work? · 2019-02-22T22:47:58.002Z · score: 2 (1 votes) · EA · GW

Awesome!

My first question: did you end up doing something after the program that you wouldn't have been able to do w/o having done the program first?

What type of Master's is best for AI policy work?

2019-02-22T20:04:47.502Z · score: 9 (4 votes)

What's the best Security Studies Master's program?

2019-02-22T20:01:37.670Z · score: 7 (2 votes)
Comment by milan_griffes on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-21T22:33:26.108Z · score: 2 (1 votes) · EA · GW

Got it, thanks!

Comment by milan_griffes on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-21T19:06:23.610Z · score: 2 (1 votes) · EA · GW

How is karma allocated for co-authored posts?

Comment by milan_griffes on Confused about AI research as a means of addressing AI risk · 2019-02-21T16:04:02.897Z · score: 8 (2 votes) · EA · GW
I feel like that second part of the plan should be fronted a bit more!

Would probably incur a lot of bad PR.

Comment by milan_griffes on Confused about AI research as a means of addressing AI risk · 2019-02-21T15:55:56.509Z · score: 8 (2 votes) · EA · GW
... why you would drop everything and race to be the first to build an aligned AGI if you're Eliezer. But if you're Paul, I'm not sure why you would do this, since you think it will only give you a modest advantage.

Good point. Maybe another thing here is that under Paul's view, working on AGI / AI alignment now increases the probability that the whole AI development ecosystem heads in a good direction. (Prestigious + safe AI work increases the incentives for others to do safe AI work, so that they appear responsible.)

Speculative: perhaps the motivation for a lot of OpenAI's AI development work is to increase its clout in the field, so that other research groups take the AI alignment stuff seriously. It may also be absorbing talented researchers, increasing the overall proportion of AI researchers working in a group that takes safety seriously.

Comment by milan_griffes on Confused about AI research as a means of addressing AI risk · 2019-02-21T01:07:26.369Z · score: 4 (8 votes) · EA · GW

It's in their charter:

Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Comment by milan_griffes on Confused about AI research as a means of addressing AI risk · 2019-02-21T00:50:34.503Z · score: 9 (3 votes) · EA · GW
I'd like to understand in more detail how this analogy breaks down.

I think the important disanalogy is that once you've created a safe AGI of sufficient power, you win. (Because it's an AGI, so it can go around doing powerful AGI stuff – other projects could be controlled or purchased, etc.)

It's not certain that the first project past the post will be the eventual winner, but being first is probably a big advantage. Bostrom has some discussion of this in the multipolar / singleton section of Superintelligence, if I recall correctly.

Drexler's Comprehensive AI Services is an alternative framing for what we mean by AGI. Probably relevant here, though I haven't engaged closely with it yet.

Comment by milan_griffes on Confused about AI research as a means of addressing AI risk · 2019-02-21T00:44:55.407Z · score: 8 (2 votes) · EA · GW
From how Paul Christiano frames it, it seems like it's "create AGI, and make sure it's aligned."

I think that's basically right. I believe something like that was Eliezer's plan too, way back in the day, but then he updated to believing that we don't have the basic ethical, decision-theoretic, and philosophical stuff figured out that's prerequisite to actually making a safe AGI. More on that in his Rocket Alignment Dialogue.

Comment by milan_griffes on Confused about AI research as a means of addressing AI risk · 2019-02-21T00:42:06.435Z · score: 4 (2 votes) · EA · GW

+1 to Paul's 80,000 Hours interview being awesome.

Comment by milan_griffes on Time-series data for income & happiness? · 2019-02-20T18:05:42.364Z · score: 2 (1 votes) · EA · GW

That definitely matches my intuition too.

Comment by milan_griffes on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T17:42:14.311Z · score: 3 (2 votes) · EA · GW

Is there a postmortem somewhere on Certificates of Impact and the challenges faced in implementing them?

Comment by milan_griffes on Open Thread: What’s The Second-Best Cause? · 2019-02-20T17:14:49.504Z · score: 5 (4 votes) · EA · GW

I think causes that are more robust to cluelessness should be higher priority than causes that are less so.

I feel pretty uncertain about which cause in the "robust-to-cluelessness" class should be second priority.

If I had to give an ordered list, I'd say:

1. AI alignment work

2. Work to increase the number of people that are both well-intentioned & highly capable

3. ...

Comment by milan_griffes on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T15:21:49.627Z · score: 3 (2 votes) · EA · GW

Got it. So this would go something like:

  • There's a prize!
  • I'm going to do X, which I think will win the prize!
  • Do you want to buy my rights to the prize, once I win it after doing X ?

Seems like this will select for sales & persuasion ability (which could be an important quality for successfully executing projects).

Comment by milan_griffes on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T05:43:31.524Z · score: 2 (1 votes) · EA · GW

So the prize money gets paid out in 2022, in the tl;dr example? (I'm a little unclear about that from my quick read.)

This means that the Impact Prize wouldn't help teams fund their work during the 2019-22 period. Am I understanding that correctly?

Time-series data for income & happiness?

2019-02-20T05:38:23.800Z · score: 7 (2 votes)
Comment by milan_griffes on You have more than one goal, and that's fine · 2019-02-20T05:18:33.570Z · score: 5 (4 votes) · EA · GW

Could you say a little more about how you decide what size each pot of money should be?

Comment by milan_griffes on Major Donation: Long Term Future Fund Application Extended 1 Week · 2019-02-17T15:31:35.038Z · score: 5 (4 votes) · EA · GW

If someone's already applied to the Fund for this round, do they need to take any further action? (in light of the new donation & deadline extension)

Comment by milan_griffes on The Need for and Viability of an Effective Altruism Academy · 2019-02-16T00:42:24.561Z · score: 4 (3 votes) · EA · GW

The whole thread around the comment you linked to seems relevant to this.

Comment by milan_griffes on The Need for and Viability of an Effective Altruism Academy · 2019-02-16T00:39:07.802Z · score: 3 (2 votes) · EA · GW

Oh yeah, good call. Forgot about the Pareto Fellowship.

Comment by milan_griffes on The Need for and Viability of an Effective Altruism Academy · 2019-02-15T23:10:01.746Z · score: 3 (5 votes) · EA · GW

Paradigm Academy comes to mind. Curious about how you see your proposal as being different from that.

Comment by milan_griffes on EA Community Building Grants Update · 2019-02-15T22:57:10.607Z · score: 2 (1 votes) · EA · GW

Thanks for all that you're doing to make REACH happen!

Comment by milan_griffes on Introducing GPI's new research agenda · 2019-02-15T22:55:34.555Z · score: 5 (3 votes) · EA · GW

Very nice.

Is there a quick way to use the agenda to see GPI's research prioritization? (e.g. perhaps the table of contents is ordered from high-to-low priority?)

Comment by milan_griffes on Three Biases That Made Me Believe in AI Risk · 2019-02-14T16:28:10.073Z · score: 1 (1 votes) · EA · GW

Me too!

Comment by milan_griffes on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-13T17:45:09.289Z · score: 9 (8 votes) · EA · GW
Comments on any issue are generally welcome but naturally you should try to focus on major issues rather than minor ones. If you post a long line of arguments about education policy for instance, I might not get around to reading and fact-checking the whole thing, because the model only gives a very small weight to education policy right now (0.01) so it won't make a big difference either way. But if you say something about immigration, no matter how nuanced, I will pay close attention because it has a very high weight right now (2).

I think this begs the question.

If modeler attention is distributed proportional to the model's current weighting (such that discussion of high-weighted issues receive more attention than discussion of low-weighted issues), it'll be hard to identify mistakes in the current weighting.
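
To make the concern concrete, here's a minimal sketch of the kind of weighted scoring the quoted passage describes. Only the weights (2 for immigration, 0.01 for education) come from the quote; the scoring function and per-issue scores are hypothetical.

```python
# Minimal sketch of a weighted candidate-scoring model.
# The weights come from the quoted passage; the scoring function and
# per-issue scores are hypothetical illustrations.

issue_weights = {"immigration": 2.0, "education": 0.01}

def overall_score(issue_scores):
    """Overall candidate score = sum over issues of (issue weight * per-issue score)."""
    return sum(issue_weights[issue] * score for issue, score in issue_scores.items())

# A maximal swing on education (+1 to -1) moves the total by only 0.02, while the
# same swing on immigration moves it by 4 -- so feedback effort that tracks the
# current weights will rarely surface an error in a low-weight issue.
print(overall_score({"immigration": 1.0, "education": 1.0}))   # 2.01
print(overall_score({"immigration": 1.0, "education": -1.0}))  # 1.99
```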

Comment by milan_griffes on EA grants available to individuals (crosspost from LessWrong) · 2019-02-13T06:28:33.914Z · score: 1 (1 votes) · EA · GW

YC 120 isn't quite a funding source, but getting in would connect you with a bunch of possible funders. Applications close on Feb 18th.

Comment by milan_griffes on EA grants available to individuals (crosspost from LessWrong) · 2019-02-13T06:26:53.215Z · score: 2 (2 votes) · EA · GW

For sure. Also check with Tyler before applying because there's some stuff he definitely won't fund (and he replies to his email).

Comment by milan_griffes on The Narrowing Circle (Gwern) · 2019-02-13T06:21:04.000Z · score: 4 (3 votes) · EA · GW

Eh, but nowadays we're "responsible" in a way that carries dark undertones.

Many US elderly aren't embedded in multigenerational communities, but instead warehoused in nursing homes (where they aren't in regular contact with their families & don't have a clear role to play in society).

Hard to say whether this is an improvement over how things were 100 years ago. I do know that I'm personally afraid of ending up in a nursing home & plan to make arrangements to reduce the probability of that happening.

Comment by milan_griffes on The Narrowing Circle (Gwern) · 2019-02-12T15:29:11.766Z · score: 5 (3 votes) · EA · GW

Seems like a real shift. (Perhaps driven by the creation of a nursing home industry?)

Comment by milan_griffes on What we talk about when we talk about life satisfaction · 2019-02-11T19:00:10.650Z · score: 1 (1 votes) · EA · GW

Thanks! This is from the Oxford Handbook of Happiness?

Comment by milan_griffes on Arguments for moral indefinability · 2019-02-11T18:54:55.100Z · score: 2 (2 votes) · EA · GW

This is great – thank you for taking the time to write it up with such care.

I see overlap with consequentialist cluelessness (perhaps unsurprising as that's been a hobbyhorse of mine lately).

Comment by milan_griffes on My positive experience taking the antidepressant Wellbutrin / Bupropion, & why maybe you should try it too · 2019-02-06T18:37:17.660Z · score: 10 (6 votes) · EA · GW

Was chatting with Gwern about this. An excerpt of their thoughts (published with permission):

Wellbutrin/Bupropion is a weird one. Comes up often on SSC and elsewhere as surprisingly effective and side-effect free, but also with a very wide variance and messy mechanism (even for antidepressants), so anecdotes differ dramatically.
With antidepressants, you're usually talking multi-week onset and washout periods, so blinded self-experiments would take something like half a year for decent sample sizes. It's not that easy to get if you aren't willing to go through a doctor (i.e. no easy ordering online from a DNM or clearnet site, like modafinil)...
Finally, as far as I can tell, my personal problems have more to do with anxiety than depression, and anti-anxiety is not what bupropion is generally described as best for, so my own benefit is probably less than usual. I thought about it a little but decided it was too weird and hard to get, and self-experiments would take too long.
Comment by milan_griffes on My positive experience taking the antidepressant Wellbutrin / Bupropion, & why maybe you should try it too · 2019-02-06T00:29:04.235Z · score: 2 (2 votes) · EA · GW

Nice. I love ideas with the shape of "Consider trying this thing because the costs are low, even if you're not sure if it will help or pretty sure it won't."

The main confounder I worry about is that I changed what I spent most of my time doing at work around that time, and I think that also improved my life.

Given confounders like this, it'd be great to see someone run a Gwern-style controlled trial on their Wellbutrin use. The value of information would probably be quite high.

It'd be sorta tricky to do given the on- and off-ramping effects of the drug, so it should perhaps only be undertaken by someone with sufficient slack to accommodate it.

Comment by milan_griffes on EA Boston 2018 Year in Review · 2019-02-06T00:20:55.974Z · score: 2 (2 votes) · EA · GW

Co-authorship!

To mods: how is karma distributed for co-authored posts?

Comment by milan_griffes on What we talk about when we talk about life satisfaction · 2019-02-06T00:18:09.271Z · score: 1 (1 votes) · EA · GW

Got it. I'm somewhat more bearish than you re: academic philosophers sharing my goals here. (Though some definitely do! Generalizations are hard.)

Comment by milan_griffes on What we talk about when we talk about life satisfaction · 2019-02-05T20:29:56.631Z · score: 1 (1 votes) · EA · GW

Huh, I feel like the same issue would arise for (e.g.) eudaimonia, if we tried to spec out what it is we mean exactly by "eudaimonia."

(My model here is that the psychological constructs are an attempt at specifying + making quantifiable concepts that philosophy had identified but left vague.)

Comment by milan_griffes on Near-term focus, robustness, and flow-through effects · 2019-02-05T15:12:42.959Z · score: 5 (5 votes) · EA · GW

Most of my impulse towards short-termism arises from concerns about cluelessness, which I wrote about here.

Holding a person-affecting ethic is another reason to prioritize the short-term; Michael Plant argues for the person-affecting view here.

What we talk about when we talk about life satisfaction

2019-02-04T23:51:06.245Z · score: 18 (7 votes)
Comment by milan_griffes on EA Hotel Fundraiser 2: current guests and their projects · 2019-02-04T22:35:38.247Z · score: 9 (7 votes) · EA · GW

Great to see so many folks working on cool stuff at the EA Hotel!

Thank you for taking the time to write this up, and for everything else you've done to make this happen.

Comment by milan_griffes on If slow-takeoff AGI is somewhat likely, don't give now · 2019-02-04T18:26:30.344Z · score: 1 (1 votes) · EA · GW
The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal.

Makes sense.

I realize I was writing from the perspective of a small-scale donor (whose donations trade off meaningfully against their saving & consumption goals).

From the perspective of a fully altruistic donor (who's not thinking about such trade-offs), doing current AI philanthropy seems really good (if the donor thinks current opportunities are sensible bets).

Comment by Milan_Griffes on [deleted post] 2019-02-04T17:34:22.533Z

SNP = "single-nucleotide polymorphism"?

Perhaps add a definition inline at first use; I had to google for that & don't know enough genetics to be confident that it's right.

Comment by milan_griffes on Latest Research and Updates for January 2019 · 2019-01-31T17:18:33.637Z · score: 1 (1 votes) · EA · GW

+1

Does anyone know how the BCC vertical came about?

Does it have any direct connection with EA?

Comment by milan_griffes on Latest Research and Updates for January 2019 · 2019-01-31T17:17:26.510Z · score: 8 (6 votes) · EA · GW
Emergent Ventures is looking to fund projects on "advancing humane solutions to those facing adversity – based on tolerance, universality, and cooperative processes"

I'd recommend shooting Tyler Cowen an email to check your idea's chances before submitting an application. He's pretty responsive to email & there are areas Emergent Ventures almost certainly won't fund, so checking first can save a bunch of time.

Comment by milan_griffes on Cost-Effectiveness of Aging Research · 2019-01-31T17:13:15.087Z · score: 5 (4 votes) · EA · GW
GiveWell estimates a cost of $1965 for a gain of ~8 DALY-equivalents, or $437.50 per DALY, from giving malaria-preventing mosquito nets to children in developing countries.

Just flagging that GiveWell's view about mosquito net DALYs has changed a lot:

  • In 2015, I believe they were modeling each life saved by mosquito nets as being equivalent to 36.53 DALYs, following Lopez et al. 2006, Table 5.1 p. 402
  • In 2016, they modeled each under-age-5 life saved by mosquito nets as being equivalent to 7 DALYs (presumably following an intuition that young infants don't yet have a fully formed personhood & thus have less moral patienthood than people above the age of 5)
  • In 2017, they stopped using DALYs altogether, noting "We felt that using the DALY framework in an unconventional way could lead to confusion, while strictly adhering to the conventional version of the framework would prevent some individuals from adequately accounting for their views within our CEA."
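
For reference, here's the cost-per-DALY arithmetic these assumptions feed into. The cost-per-life figure below is a placeholder for illustration, not GiveWell's actual estimate; only the DALYs-per-life values (36.53 and 7) come from the bullets above.

```python
# Sketch of how the DALYs-per-life assumption drives the cost-per-DALY figure.
# cost_per_life is a hypothetical placeholder, not GiveWell's actual estimate.

def cost_per_daly(cost_per_life, dalys_per_life):
    """Cost per DALY averted = cost per life saved / DALYs attributed to that life."""
    return cost_per_life / dalys_per_life

cost_per_life = 3500.0  # placeholder USD figure

for label, dalys in [("2015 assumption (Lopez et al.)", 36.53),
                     ("2016 assumption (under-5 life)", 7.0)]:
    print(f"{label}: ${cost_per_daly(cost_per_life, dalys):,.2f} per DALY")
```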
Comment by milan_griffes on What Are Effective Alternatives to Party Politics for Effective Public Policy Advocacy? · 2019-01-30T20:00:04.002Z · score: 5 (4 votes) · EA · GW

Ballot initiatives, at least in the US.

Comment by milan_griffes on Is intellectual work better construed as exploration or performance? · 2019-01-29T18:13:22.085Z · score: 1 (1 votes) · EA · GW

Yeah, I've realized I'm most interested in the question of which metaphor is better to be holding while doing intellectual work.

See this comment.

Comment by milan_griffes on Is intellectual work better construed as exploration or performance? · 2019-01-28T18:08:11.487Z · score: 1 (1 votes) · EA · GW

Thanks, I found this helpful. TED talks are a great example of intellectual performance without a negative connotation.

I've realized I'm most interested in the question of which metaphor to be holding while doing intellectual work.

On that, I think it makes sense to be (almost) exclusively using the "exploration" metaphor when doing intellectual work.

Then, it seems good to switch to the "performance" metaphor when it's time to propagate ideas (or hand off to a partner specialized in intellectual performance).

Open question for me: Is it costly to grow skillful in intellectual performance? Does it make one's intellectual work worse / less truth-seeking? (My intuition is "yes, it's costly" but seems plausible that the performance skill could be safely compartmentalized.)

Comment by milan_griffes on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-27T18:04:07.785Z · score: 1 (1 votes) · EA · GW
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes.

Agree this is important. As I've thought about it some more, it appears quite complicated. It also seems important to have a view based on more than rough intuition, since it bears on the donation behavior of many EAs.

I'd probably benefit from having a formal model here, so I might make one.
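
In the meantime, here's a toy version of the comparison. All parameters (returns, horizon, relative leverage of post-takeoff giving) are hypothetical placeholders, not estimates.

```python
# Toy model: donate $1 now vs. invest it through a slow takeoff and donate later.
# All parameter values below are hypothetical placeholders, not estimates.

def impact_give_now(amount, leverage_now):
    """Impact of donating today, via present-day opportunities to avert bad outcomes."""
    return amount * leverage_now

def impact_give_later(amount, annual_return, years, leverage_later):
    """Impact of investing through the takeoff period, then donating the proceeds."""
    return amount * (1 + annual_return) ** years * leverage_later

# Example: 20% annual returns over 15 years grow the pot ~15x, so giving later wins
# only if post-takeoff dollars are at least ~1/15 as leveraged as today's dollars.
print(impact_give_now(1.0, leverage_now=1.0))                                     # 1.0
print(impact_give_later(1.0, annual_return=0.20, years=15, leverage_later=0.10))  # ~1.54
```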

Comment by milan_griffes on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-27T18:01:52.199Z · score: 1 (1 votes) · EA · GW

Thanks for tying this to mission hedging – definitely seems related.

Comment by milan_griffes on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-27T18:00:44.982Z · score: 1 (1 votes) · EA · GW
Milan is asserting a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed these capabilities.

Perhaps that, but even if they don't, the returns from a market-tracking index fund could be very high in the case of transformative AI.

I'm imagining two scenarios:

1. AI research progresses & AI companies start to have higher-than-average returns

2. AI research progresses & the returns from this trickle through the whole market (but AI companies don't have higher-than-average returns)

A version of the argument applies to either scenario.

Is intellectual work better construed as exploration or performance?

2019-01-25T22:00:52.792Z · score: 11 (4 votes)

If slow-takeoff AGI is somewhat likely, don't give now

2019-01-23T20:54:58.944Z · score: 21 (14 votes)

Giving more won't make you happier

2018-12-10T18:15:16.663Z · score: 41 (29 votes)

Open Thread #42

2018-10-17T20:10:00.472Z · score: 3 (3 votes)

Doing good while clueless

2018-02-15T05:04:25.291Z · score: 19 (15 votes)

How tractable is cluelessness?

2017-12-29T18:52:56.369Z · score: 10 (5 votes)

“Just take the expected value” – a possible reply to concerns about cluelessness

2017-12-21T19:37:07.709Z · score: 12 (7 votes)

What consequences?

2017-11-23T18:27:21.894Z · score: 21 (20 votes)

Reading recommendations for the problem of consequentialist scope?

2017-08-02T02:07:46.769Z · score: 6 (6 votes)

Should Good Ventures focus on current giving opportunities, or save for future giving opportunities?

2016-11-07T16:10:29.709Z · score: 4 (6 votes)