Systematically under explored project areas? 2015-09-28T20:16:19.754Z · score: 11 (11 votes)
Criminal Justice Reform: DEA enforcement incentives? 2015-03-25T02:20:15.761Z · score: 1 (1 votes)
At the intersection of Global Health and Global Risks: Bill Gates talks about epidemic preparation [LINK] 2015-03-18T20:07:52.496Z · score: 4 (4 votes)
How Much Can We Generalize from Impact Evaluations? (link) 2014-10-30T08:09:17.489Z · score: 9 (9 votes)


Comment by romeostevens on What are the best arguments that AGI is on the horizon? · 2020-02-17T02:45:11.951Z · score: 4 (3 votes) · EA · GW

Note also that your question has a selection filter: you'd also want to figure out where the best arguments for longer timelines live. In an ideal world these two sets of things would live in the same place; in our world this isn't always the case.

Comment by romeostevens on My personal cruxes for working on AI safety · 2020-02-15T20:05:47.031Z · score: 1 (1 votes) · EA · GW

You don't, but that's a different proposition with a different set of cruxes since it is based on ex post rather than ex ante.

Comment by romeostevens on My personal cruxes for working on AI safety · 2020-02-14T19:51:29.123Z · score: 3 (2 votes) · EA · GW

The chance that the full stack of individual propositions evaluates as true in the relevant direction (work on AI vs work on something else).

Comment by romeostevens on My personal cruxes for working on AI safety · 2020-02-14T19:50:07.595Z · score: 10 (4 votes) · EA · GW

First, doing philosophy publicly is hard and therefore rare. It cuts against Ra-shaped incentives. Much appreciation to the efforts that went into this.

>he thinks the world is metaphorically more made of liquids than solids.

Damn, the convo ended just as it was getting to the good part. I really like this sentence and suspect that thinking like this remains a big untapped source of generating sharper cruxes between researchers. Most of our reasoning is secretly analogical, with deductive and inductive reasoning back-filled to fit what our parallel processing already thinks is the correct shape for an answer to take. If we go back to the idea of security mindset, the representation that one tends to use will be made up of components, and your type system for uncertainty will be the uncertainty of those components varying. So which sorts of things your representation uses as building blocks will determine the kinds of uncertainty you have an easier time thinking about and managing. Going upstream in this way should resolve a bunch of downstream tangles, since the generators for the shape/direction/magnitude (this is an example of such a choice that might impact how I think about the problem) of the updates will be clearer.

This gets at a way of thinking about metaphilosophy. We can ask what more general class of problems AI safety is an instance of, and maybe recover some features of the space. I like the capability amplification frame because it's useful as a toy problem to think about random subsets of human capabilities getting amplified, to think about the non-random ways capabilities have been amplified in the past, and what sorts of incentive gradients might be present for capability amplification besides just the AI research landscape one.

Comment by romeostevens on Prioritizing among the Sustainable Development Goals · 2020-02-07T19:05:03.663Z · score: 4 (3 votes) · EA · GW

EA is well positioned for moonshot funding (though to date has mostly attracted risk averse donors AFAICT). It seems like an interesting generator to ask what moonshots look like for these categories.

Comment by romeostevens on The Intellectual and Moral Decline in Academic Research · 2020-02-07T18:54:07.711Z · score: 10 (7 votes) · EA · GW

> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.

We can't know whether this is a good or bad number without context.

Comment by romeostevens on 80,000 Hours: Ways to be successful that people don't talk about enough · 2020-01-31T16:45:56.489Z · score: 3 (2 votes) · EA · GW

The number of people working on things outside the overton window is sharply limited by being able and willing to risk being unsuccessful.

Comment by romeostevens on Doing good is as good as it ever was · 2020-01-29T14:36:33.075Z · score: 4 (3 votes) · EA · GW

Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta EA and other movement concerns.

Comment by romeostevens on Doing good is as good as it ever was · 2020-01-26T13:48:32.186Z · score: 16 (7 votes) · EA · GW

I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivation effects discussed here.

Comment by romeostevens on Love seems like a high priority · 2020-01-20T04:15:06.479Z · score: 7 (5 votes) · EA · GW

Dating apps have misaligned incentives. A dating app run as a nonprofit could plausibly outcompete on the metric of successful couple formation.

Comment by romeostevens on How Much Leverage Should Altruists Use? · 2020-01-07T04:40:07.101Z · score: 1 (1 votes) · EA · GW

IIRC Interactive Brokers isn't going to let you lever up more than about 2:1, though if you have 'separate' personal and altruistic accounts you can potentially lever your altruistic side higher. e.g. if you have 50k in personal accounts and 50k in altruistic accounts, you can get 100k in margin, allowing you to lever up the altruistic side 3:1.
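The margin arithmetic above can be sketched as a quick check. This is a hypothetical helper under the simplifying assumptions that the broker caps total leverage at 2:1 on combined equity, the personal side is left unlevered, and all borrowing capacity is allocated to the altruistic side (it ignores maintenance margin and broker-specific rules):

```python
def altruistic_leverage(personal_equity, altruistic_equity, max_total_leverage=2.0):
    """Effective leverage on the altruistic side when all margin is allocated to it."""
    total_equity = personal_equity + altruistic_equity
    max_position = total_equity * max_total_leverage   # e.g. 200k at 2:1 on 100k
    borrowed = max_position - total_equity             # e.g. 100k of margin
    altruistic_position = altruistic_equity + borrowed # e.g. 50k + 100k = 150k
    return altruistic_position / altruistic_equity     # e.g. 150k / 50k = 3:1

print(altruistic_leverage(50_000, 50_000))  # 3.0
```

With 50k in each account, 2:1 overall leverage supports a 200k total position, of which 100k is borrowed; putting all of that on the 50k altruistic side yields the 3:1 figure in the comment.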

Lazy people can access mild leverage (1.5:1) through NTSX for low fees. Many brokerages don't grant access to the more extreme 3:1 ETFs.

Comment by romeostevens on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-11-06T02:27:32.422Z · score: 5 (3 votes) · EA · GW

Thanks for fleshing this out.

Comment by romeostevens on Why and how to start a for-profit company serving emerging markets · 2019-11-06T02:25:24.519Z · score: 9 (8 votes) · EA · GW

Fantastic! I like everything about this post, except its length. I wish it were longer as I think there is a ton to learn from your experience.

Comment by romeostevens on The Future of Earning to Give · 2019-10-18T00:55:01.917Z · score: 4 (3 votes) · EA · GW

Ra could be seen partially as everyone's credibility heuristics being way too highly correlated. People seem happy with exploration and diversity on the object level, but much less comfortable with exploration and diversity on the heuristic/methods level, due to the lack of clear signals on how to evaluate it.

I think the history of how much trouble MAPS had is instructive.

Comment by romeostevens on The evolutionary argument against cognitive enhancement research is weak · 2019-10-16T23:02:33.354Z · score: 2 (2 votes) · EA · GW

Downside maladaptation to an unusual environment (modernity) seems common.

Comment by romeostevens on Will MacAskill on his ‘Eat That Elephant’ routine, learning from successful people, and the diminishing marginal returns of time spent working [blog cross-post] · 2019-10-12T01:43:44.425Z · score: 9 (5 votes) · EA · GW

My current best guess is that doing lots of work is at least weakly negatively correlated with doing important work.

Comment by romeostevens on Why do social movements fail: Two concrete examples. · 2019-10-10T16:41:34.063Z · score: 6 (5 votes) · EA · GW

I was thinking about:

- EA goodharting on analysis theater
- Marxism becoming a counter-culture thing
- Globalisation and China
- The military-industrial complex

(Note: I will not get into discussion of these due to politics. I think the lens is the interesting thing and would discuss more neutral examples. I was just answering the question honestly.)

Comment by romeostevens on What actions would obviously decrease x-risk? · 2019-10-07T18:04:51.202Z · score: 1 (4 votes) · EA · GW

I'd suggest keeping brainstorming and debates about obviousness thresholds separate as the latter discourages people from ideating.

Comment by romeostevens on What actions would obviously decrease x-risk? · 2019-10-06T23:59:48.192Z · score: 6 (4 votes) · EA · GW

Increasing the ease/decreasing the formality of world leaders talking to each other as per the Red Phone. World leaders mostly getting educated at the same institutions helps enormously with communication as well, though it does increase other marginal risks due to correlated blind spots.

Biorisk mitigation becoming a much higher-status field and thus attracting more top talent.

Pakistan not having nukes.

Comment by romeostevens on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-06T23:54:59.372Z · score: 3 (3 votes) · EA · GW

>in-depth marginal cost-effectiveness analysis.

I'd recommend finding an easy-to-remember name for the proposal.

Marginal Efficiency Gain Analysis? (MEGA)

Something metaphorical? (What well-known existing thing does something similar?)

Also some worked examples will both help cement the idea and show possible areas of improvement.

Comment by RomeoStevens on [deleted post] 2019-10-06T23:40:36.124Z

I think maintaining a lot of optionality winds up turning into risk aversion in practice.

Comment by romeostevens on Why do social movements fail: Two concrete examples. · 2019-10-06T23:36:35.960Z · score: 3 (3 votes) · EA · GW

Glad to see an analysis of GS!

I'll add that many movements fail by succeeding too well at something that was only incidental to the original aims of the movement.

Comment by romeostevens on Analgesics for farm animals · 2019-10-04T16:21:30.321Z · score: 6 (4 votes) · EA · GW

I'm excited about this area and only wish I had the funds to make grants in it. I think finding people to consult with who have been through the process of FDA change before would be especially helpful.

>I'm sure that there are people already working on this.

IIRC there are a couple advocacy groups but they seemed a bit orphaned due to pushback within the normal animal welfare memeplex (abolitionists vs reductionists, plays out in climate change too on the nuclear and geoengineering fronts). I think this is neglected and there's an opportunity for a motivated group to move the needle substantially.

Comment by romeostevens on The Germy Paradox: An Introduction · 2019-10-04T16:12:54.502Z · score: 7 (4 votes) · EA · GW

Thanks for the substantive engagement even though I was pretty terse on justification. I'm less concerned when I see engagement with differential infohazard analysis (i.e. some parts of this might have problems and some might not). I still feel a sense of caution about EA getting involved in this area, given its poor track record of taking into account existing best practices/Chesterton fences.

+1 for comparing it to existing works in the area to help reason about this.

Comment by romeostevens on Altruism Coach · 2019-10-02T21:53:16.589Z · score: 8 (5 votes) · EA · GW

Thank you very much for offering your services. It's awesome to have a concrete go-to referral rather than some ambiguous sense that such services are reachable.

Comment by romeostevens on Why is the amount of child porn growing? · 2019-10-02T21:47:19.298Z · score: 8 (3 votes) · EA · GW

>Communication technology didn't change much in that timeframe

I find it plausible that the de facto availability of secure communication channels lowered the technical bar enough that thresholds were passed in that time frame.

Comment by romeostevens on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-28T16:21:01.435Z · score: 3 (2 votes) · EA · GW

Some examples:

- thinking in terms of generational shifts
- thinking in terms of a given outcome things are evolving towards
- thinking in terms of specific scenarios
- thinking in terms of cycles

Any of these can be thought about on multiple temporal horizons, and people will give different answers depending on their mental habits.

Comment by romeostevens on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-28T03:46:26.804Z · score: 2 (2 votes) · EA · GW

Inferential distances and thus discount rates are vastly different depending on which metaphors you use to think about the future.

Comment by romeostevens on The Germy Paradox: An Introduction · 2019-09-26T20:11:01.901Z · score: 1 (4 votes) · EA · GW

I don't think public discourse around this is a good idea. It's the same as reports on nuclear weapons trying to demonstrate that a nuclear exchange 'wouldn't be that bad', or publicly wondering in detailed ways about why copycat attacks of certain kinds aren't more common.

Comment by romeostevens on Model-free and model-based cognition in deontological and consequentialist reasoning · 2019-09-24T22:23:41.806Z · score: 1 (1 votes) · EA · GW

+1 and to generalize I think a bunch of philosophical debates are basically reifications of different sorts of ways different pattern matching cognitive systems operate. We let the urge to compress for efficiency reasons get a bit out of hand and try to build perverse monisms out of everything.

Comment by romeostevens on [Link] Research as a Stochastic Decision Process · 2019-09-12T22:09:57.354Z · score: 4 (3 votes) · EA · GW

Even smart people will often intuitively (that is, without realizing it, or only dimly realizing it) shy away from the part of the project that would provide information telling them they're doing the wrong thing. This is part of the value of things like Gantt charts and other project maps: even though the plans they are typically used to generate fail when colliding with reality, they can alert you to ways you are fooling yourself about the most uncertain parts of a project.

Comment by romeostevens on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-10T20:30:43.493Z · score: 4 (3 votes) · EA · GW

Although I lean in the direction that Hillary would have been a lower war risk than Trump, the fact that it's at all uncertain is depressing.

Comment by romeostevens on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-09-10T20:23:29.267Z · score: 1 (1 votes) · EA · GW

This is really interesting. I'm curious about crowding out and marginal dollar effects, i.e. the smart money spends all its resources on this, allowing the dumb money to free ride and keep on with the status quo (or even get worse with fewer perceived consequences). Meanwhile, there are now far fewer smart dollars available to fund weird moonshots that only the smart money can think about.

One solution: more funding for geoengineering moonshots (and please, with fewer assumptions that geoengineering automatically means that safety and reversibility aren't major design criteria).

Comment by romeostevens on My recommendations for RSI treatment · 2019-09-10T15:00:42.670Z · score: 3 (2 votes) · EA · GW

This is a big part of the reason why a split keyboard can be so helpful since it really makes maintaining better posture much more comfortable and intuitive.

I also recommend a Roost laptop stand to get the monitor up to eye level.

Comment by romeostevens on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-06T17:38:33.015Z · score: 3 (3 votes) · EA · GW

Most utilitarian gotchas are either circular or talking about leaky abstractions. 'Assume higher utility from taking option X, but OH NO, you forgot about consideration Y! Science has gone too far!'

See also aether variables.

Comment by romeostevens on Cause X Guide · 2019-09-05T12:09:37.140Z · score: 1 (1 votes) · EA · GW

I think there are two claims. I stand by both, but think arguing them simultaneously causes things like a motte-and-bailey problem to rear their heads.

Comment by romeostevens on Cause X Guide · 2019-09-04T14:57:29.721Z · score: 7 (5 votes) · EA · GW

We seem to be having different conversations. I think you're looking for strong evidence of stronger, more universal claims than I am making. I'm trying to say that this hypothesis (for some children) should be within the window of possibility and worthy of more investigation. There's a potential motte-and-bailey problem with that, and the claims about evidence for benefit from schooling broadly should probably be separated from evidence for harms of schooling in specific cases.

>Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. -Meditations on Moloch

Imagine that an altruistic community in such a world is very open-minded and willing to consider not shocking yourself all the time, but wants to see lots of evidence for it produced by the taser manufacturers, since after all they know the most about tasers and whether they are harmful...

If you give children the option of being tased or going to school, some of them are going to pick the taser.

Comment by romeostevens on Cause X Guide · 2019-09-04T01:38:00.704Z · score: 3 (4 votes) · EA · GW

It seems like you're arguing from common sense?

Comment by romeostevens on Cause X Guide · 2019-09-03T16:04:30.144Z · score: 5 (3 votes) · EA · GW

>There is strong evidence that the majority of children will never learn to read unless they are taught.

This is a different claim. I don't know of strong evidence that children will fail to learn to read if not sent to school.

Comment by romeostevens on Cause X Guide · 2019-09-02T01:22:53.813Z · score: 6 (8 votes) · EA · GW

Although it seems to be fine for the majority, school drives some children to suicide. Given that there is little evidence of benefit from schooling, advocating for letting those most affected have alternative options could be high impact.

Comment by romeostevens on Cause X Guide · 2019-09-02T01:13:56.681Z · score: 8 (4 votes) · EA · GW

Easing euthanasia legal and logistical obstacles for those with painful terminal illness.

Comment by romeostevens on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-17T19:39:52.874Z · score: 4 (3 votes) · EA · GW

The raising money for famous scientists part seems at odds with some of the optimism in the early sections. Any further comment on this?

Comment by romeostevens on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-05T03:31:49.665Z · score: 7 (5 votes) · EA · GW

Still seems worth it; FB might just eventually ban it. (I sort of doubt anything would happen if you link to an informational infographic.)

Comment by romeostevens on [Link] Thiel on GCRs · 2019-07-24T03:46:15.899Z · score: 14 (6 votes) · EA · GW

I think how the 'middle class' (a relative measure) of the USA is doing is fairly uninteresting overall. I think most meaningful progress at the grand scale (decades to centuries) is how fast the bottom is getting pulled up and how high the very top end (bleeding-edge researchers) can go. Shuffling in the middle results in much wailing and gnashing of teeth but doesn't move the needle much. Its main impact is just voting for dumb stuff that harms the top and bottom.

Comment by romeostevens on [Link] Thiel on GCRs · 2019-07-23T16:54:34.293Z · score: 10 (3 votes) · EA · GW

Economic growth likely isn't stagnating, it just looks that way due to some catch up growth effects:

Comment by romeostevens on Defining Effective Altruism · 2019-07-21T06:21:29.203Z · score: 3 (8 votes) · EA · GW

Maximizing is usually a bad idea.

Comment by romeostevens on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-18T07:17:38.199Z · score: 5 (4 votes) · EA · GW

Reminds me of how revolutionaries think they're really sticking it to the elites when they protest against free markets. But elites hate free markets; they try to insulate themselves from them as much as possible. Why would you want competition from up-and-coming new elites? That's why elites fund the useful idiots who think they are revolutionaries.

There's also a weird thing where newly minted elites don't think of themselves as elites and so don't engage with the possibility of moving major equilibria even though they are potentially large enough to do so, and doing so can be much more powerful than tuning efficiencies in existing equilibria. Probably fears related to consequentialist cluelessness as well.

Comment by romeostevens on [Link] "Why Responsible AI Development Needs Cooperation on Safety" (OpenAI) · 2019-07-12T22:25:07.152Z · score: 5 (3 votes) · EA · GW

First thought is to wonder why prizes aren't more common, e.g. awards for fostering cross-organizational coordination, either on the object level (direct cross-org efforts that result in research) or the meta level (platforms, conferences, etc.). One guess is that prize grantors don't gain enough from granting them. Grantors might also have a systematic aversion to paying for things that have already happened without much guarantee that doing so will incentivize further desired behavior.

Comment by romeostevens on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-12T20:54:24.983Z · score: 12 (4 votes) · EA · GW

Funding more parallel work. If something is worth doing once, and is cheap, it is very likely worth doing 2-3 times and then having the teams crux on conclusions, data, and methodology.

Comment by romeostevens on Rationality, EA and being a movement · 2019-07-12T17:48:11.069Z · score: 12 (4 votes) · EA · GW

Yes, that's the concern. Asking me what projects I consider status quo is the exact same move as before. Being status quo is low status, so the conversation seems unlikely to evolve in a fruitful direction if we take that tack. I think institutions tend to slide towards attractors where the surrounding discourse norms are 'reasonable and defensible' from within a certain frame while undermining criticisms of the frame in ways that make people who point it out seem like they are being unreasonable. This is how larger, older foundations calcify and stop getting things done, as the natural tendency of an org is to insulate itself from the sharp changes that being in close feedback with the world necessitates.