Systematically under explored project areas? 2015-09-28T20:16:19.754Z · score: 11 (11 votes)
Criminal Justice Reform: DEA enforcement incentives? 2015-03-25T02:20:15.761Z · score: 1 (1 votes)
At the intersection of Global Health and Global Risks: Bill Gates talks about epidemic preparation [LINK] 2015-03-18T20:07:52.496Z · score: 4 (4 votes)
How Much Can We Generalize from Impact Evaluations? (link) 2014-10-30T08:09:17.489Z · score: 9 (9 votes)


Comment by romeostevens on New Top EA Causes for 2020? · 2020-04-02T05:14:51.147Z · score: 1 (1 votes) · EA · GW

Training children to believe that it's a good idea to keep psychopaths as pets, as long as they are cute, probably results in them voting actors into positions of authority later in life.

Comment by romeostevens on New Top EA Causes for 2020? · 2020-04-01T18:03:00.383Z · score: 10 (7 votes) · EA · GW

Exploit selection effects on prediction records to influence policy.

During a crisis, people tend to implement the preferred policies of whoever seems to be accurately predicting each phase of the problem. When a crisis looms on the horizon, EAs coordinate to all make different predictions thus maximizing the chance that one of them will appear prescient and thus obtain outsize influence.

Comment by romeostevens on What posts do you want someone to write? · 2020-03-28T02:26:39.194Z · score: 8 (2 votes) · EA · GW

"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"

"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non-humans)

"Matching problems: a literature review"

"Entropy for intentional content: a formal model" (AI related)

"Graph traversal using negative and positive information, proof of divergent outcomes" (neuroscience relevant potentially)

"One weird trick that made my note taking 10x more useful"

Comment by romeostevens on Coronavirus Research Ideas for EAs · 2020-03-27T21:57:53.406Z · score: 9 (6 votes) · EA · GW

A lot of people are willing to try new things right now. Rapid prototyping of online EA meetups could lead to better ability to do remote collaboration permanently. This helps cut against a key constraint in matching problems, co-location.

Comment by romeostevens on What are the key ongoing debates in EA? · 2020-03-12T06:47:32.245Z · score: 7 (3 votes) · EA · GW

Ah, key = popular, I guess I can simplify my vocabulary. I'm being somewhat snarky here, but AFAICT it satisfies the criterion that significant effort has gone into debating it.

Comment by romeostevens on What are the key ongoing debates in EA? · 2020-03-09T00:57:26.632Z · score: 12 (13 votes) · EA · GW

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

Comment by romeostevens on COVID-19 brief for friends and family · 2020-03-06T20:05:18.979Z · score: 1 (1 votes) · EA · GW

Touchscreen styluses for all those public touchscreens.

Comment by romeostevens on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-28T02:47:25.694Z · score: 1 (1 votes) · EA · GW

At a sequestration cost of $50 per ton, the average American would need to generate $1,000 per year of positive impact to offset their CO2 use. The idea that these numbers are even close to comparable means priors are way, way off. The signaling commons have been polluted on this front by people impact-LARPing their short showers, lack of water at restaurants, and other absurdities.
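A back-of-envelope check of this arithmetic. The per-capita emissions figure below is my assumption (implied by $1,000 / $50 per ton), not a number from the comment:

```python
# Rough annual offset cost per person, assuming ~20 metric tons of CO2e
# per American per year (an assumed figure, implied by $1,000 / $50/ton).
COST_PER_TON = 50      # USD per ton sequestered, as cited above
TONS_PER_PERSON = 20   # assumed annual US per-capita CO2e emissions

annual_offset_cost = COST_PER_TON * TONS_PER_PERSON
print(annual_offset_cost)  # 1000
```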

Comment by romeostevens on Why SENS makes sense · 2020-02-23T20:55:29.820Z · score: 3 (3 votes) · EA · GW

I think that much of the disconnect comes down to focusing on goals over methods. It is better to think of goals as orienting us in the problem-space, while most of the benefits accrue along the way. By the time you make it a substantial fraction of the way to a goal, you'll likely be in a much better position to realize the original goal was slightly off and adjust course. So 'eliminating all infectious disease' can easily be criticized as unrealistic for endless reasons, yet it is very useful for orienting us to be scope sensitive, think in terms of hits-based reasoning, and so on. Similarly, we even have an 'N problems of aging' list to argue about only because someone did the work of figuring out, at a multi-year research level, what it would take. If we want to talk about neglected areas of funding, a great place to start is the neglect of funding for promising methods, or for directions that might plausibly generate new methods, with less focus on what the particular outcomes might be. Or, to roughly paraphrase Hanson and Bostrom: new considerations generally trump fine-tuning of existing considerations.

What could we measure that would make seemingly intractable problems trivial? Can we take moonshots at those? And I'm not talking about actually funding the moonshot once the opportunity has been identified. I'm talking about the seed research to identify plausibility, funding small numbers of people at the 1 year level to do deep dives in much weirder areas than in house researchers have been doing.

Comment by romeostevens on Harsanyi's simple “proof” of utilitarianism · 2020-02-23T17:58:21.768Z · score: 3 (3 votes) · EA · GW

> there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.

yup, thanks. Also across time as well as across agents at a particular moment.
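To make the range-vs-variance normalization point in the quote concrete, here is a minimal sketch (the utility numbers are hypothetical, not from the post) showing that the two schemes weight the same agents differently, so the aggregate depends on which convention you pick:

```python
import statistics

def range_normalize(u):
    # Rescale utilities to [0, 1] using the agent's worst/best options.
    lo, hi = min(u), max(u)
    return [(x - lo) / (hi - lo) for x in u]

def variance_normalize(u):
    # Center at the mean and rescale to unit (population) variance.
    mu = statistics.fmean(u)
    sd = statistics.pstdev(u)
    return [(x - mu) / sd for x in u]

alice = [0, 1, 10]  # one strongly preferred outlier option
bob = [0, 5, 10]    # evenly spread preferences

# Under range normalization both agents' top options count equally (1.0);
# under variance normalization Alice's outlier counts ~1.41 while Bob's
# top option counts ~1.22.
print(range_normalize(alice)[2], range_normalize(bob)[2])
print(round(variance_normalize(alice)[2], 2), round(variance_normalize(bob)[2], 2))
```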

Comment by romeostevens on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T21:55:15.118Z · score: 7 (5 votes) · EA · GW

Like other links between VNM and utilitarianism, this seems to sweep intersubjective utility comparison under the rug. The agents are likely using very different methods to convert their preferences into the given numbers, rendering the aggregate non-rigorous and subject to instability in iterated games.

Comment by romeostevens on What are the best arguments that AGI is on the horizon? · 2020-02-17T02:45:11.951Z · score: 4 (3 votes) · EA · GW

Note also that your question has a selection filter: you'd also want to figure out where the best arguments for longer timelines are. In an ideal world these two sets of things tend to live in the same place; in our world this isn't always the case.

Comment by romeostevens on My personal cruxes for working on AI safety · 2020-02-15T20:05:47.031Z · score: 2 (2 votes) · EA · GW

You don't, but that's a different proposition with a different set of cruxes since it is based on ex post rather than ex ante.

Comment by romeostevens on My personal cruxes for working on AI safety · 2020-02-14T19:51:29.123Z · score: 4 (3 votes) · EA · GW

The chance that the full stack of individual propositions evaluates as true in the relevant direction (work on AI vs work on something else).

Comment by romeostevens on My personal cruxes for working on AI safety · 2020-02-14T19:50:07.595Z · score: 10 (4 votes) · EA · GW

First, doing philosophy publicly is hard and therefore rare. It cuts against Ra-shaped incentives. Much appreciation to the efforts that went into this.

>he thinks the world is metaphorically more made of liquids than solids.

Damn, the convo ended just as it was getting to the good part. I really like this sentence and suspect that thinking like this remains a big untapped source of sharper cruxes between researchers. Most of our reasoning is secretly analogical, with deductive and inductive reasoning back-filled to fit what our parallel processing already thinks is the correct shape for an answer to take. If we go back to the idea of security mindset, the representation one tends to use will be made up of components, and your type system for uncertainty will be uncertainty over those components varying. So the sorts of things your representation uses as building blocks will determine the kinds of uncertainty you have an easier time thinking about and managing. Going upstream in this way should resolve a bunch of downstream tangles, since the generators for the shape/direction/magnitude of the updates (itself an example of a choice that might impact how I think about the problem) will be clearer.

This gets at a way of thinking about metaphilosophy. We can ask what more general class of problems AI safety is an instance of, and maybe recover some features of the space. I like the capability amplification frame because it's useful as a toy problem to think about random subsets of human capabilities getting amplified, to think about the non-random ways capabilities have been amplified in the past, and what sorts of incentive gradients might be present for capability amplification besides just the AI research landscape one.

Comment by romeostevens on Prioritizing among the Sustainable Development Goals · 2020-02-07T19:05:03.663Z · score: 4 (3 votes) · EA · GW

EA is well positioned for moonshot funding (though to date has mostly attracted risk averse donors AFAICT). It seems like an interesting generator to ask what moonshots look like for these categories.

Comment by romeostevens on The Intellectual and Moral Decline in Academic Research · 2020-02-07T18:54:07.711Z · score: 10 (7 votes) · EA · GW

> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.

We can't know whether this is a good or bad number without context.

Comment by romeostevens on 80,000 Hours: Ways to be successful that people don't talk about enough · 2020-01-31T16:45:56.489Z · score: 3 (2 votes) · EA · GW

The number of people working on things outside the overton window is sharply limited by being able and willing to risk being unsuccessful.

Comment by romeostevens on Doing good is as good as it ever was · 2020-01-29T14:36:33.075Z · score: 4 (3 votes) · EA · GW

Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta ea and other movement concerns.

Comment by romeostevens on Doing good is as good as it ever was · 2020-01-26T13:48:32.186Z · score: 16 (7 votes) · EA · GW

I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivation effects discussed here.

Comment by romeostevens on Love seems like a high priority · 2020-01-20T04:15:06.479Z · score: 7 (5 votes) · EA · GW

Dating apps have misaligned incentives. A dating app run as a nonprofit could plausibly outcompete on the metric of successful couple formation.

Comment by romeostevens on How Much Leverage Should Altruists Use? · 2020-01-07T04:40:07.101Z · score: 1 (1 votes) · EA · GW

IIRC Interactive Brokers won't let you lever up more than about 2:1, though if you have 'separate' personal and altruistic accounts you can potentially lever the altruistic side higher. E.g., if you have $50k in personal accounts and $50k in altruistic accounts, you can get $100k in margin, allowing you to lever the altruistic side 3:1.

Lazy people can access mild leverage (1.5:1) through NTSX for low fees. Many brokerages don't grant access to the more extreme 3:1 ETFs.
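The account-splitting arithmetic in the first paragraph can be sketched as follows, using the comment's hypothetical $50k/$50k split:

```python
# The broker's ~2:1 limit applies to total equity, so total borrowing can
# equal total equity; directing all borrowed funds to the altruistic side
# levers that side 3:1.
personal_equity = 50_000
altruistic_equity = 50_000
total_equity = personal_equity + altruistic_equity

max_borrow = total_equity  # 2:1 overall => borrow up to 1x total equity

altruistic_exposure = altruistic_equity + max_borrow
leverage = altruistic_exposure / altruistic_equity
print(altruistic_exposure, leverage)  # 150000 3.0
```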

Comment by romeostevens on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-11-06T02:27:32.422Z · score: 5 (3 votes) · EA · GW

Thanks for fleshing this out.

Comment by romeostevens on Why and how to start a for-profit company serving emerging markets · 2019-11-06T02:25:24.519Z · score: 9 (8 votes) · EA · GW

Fantastic! I like everything about this post, except its length. I wish it were longer as I think there is a ton to learn from your experience.

Comment by romeostevens on The Future of Earning to Give · 2019-10-18T00:55:01.917Z · score: 4 (3 votes) · EA · GW

Ra could be seen partially as everyone's credibility heuristics being way too highly correlated. People seem happy with exploration and diversity on the object level, but much less comfortable with exploration and diversity on the heuristic/methods level, due to a lack of clear signals on how to evaluate them.

I think the history of how much trouble MAPS had is instructive.

Comment by romeostevens on The evolutionary argument against cognitive enhancement research is weak · 2019-10-16T23:02:33.354Z · score: 2 (2 votes) · EA · GW

Downside maladaptation to an unusual environment (modernity) seems common.

Comment by romeostevens on Will MacAskill on his ‘Eat That Elephant’ routine, learning from successful people, and the diminishing marginal returns of time spent working [blog cross-post] · 2019-10-12T01:43:44.425Z · score: 9 (5 votes) · EA · GW

My current best guess is that doing lots of work is at least weakly negatively correlated with doing important work.

Comment by romeostevens on Why do social movements fail: Two concrete examples. · 2019-10-10T16:41:34.063Z · score: 6 (5 votes) · EA · GW

I was thinking about:

EA goodharting on analysis theater

Marxism becoming a counter-culture thing

Globalisation and China

Military industrial complex

(note I will not get into discussion of these due to politics. I think the lens is the interesting thing and would discuss more neutral examples. I was just answering the question honestly.)

Comment by romeostevens on What actions would obviously decrease x-risk? · 2019-10-07T18:04:51.202Z · score: 2 (5 votes) · EA · GW

I'd suggest keeping brainstorming and debates about obviousness thresholds separate as the latter discourages people from ideating.

Comment by romeostevens on What actions would obviously decrease x-risk? · 2019-10-06T23:59:48.192Z · score: 6 (4 votes) · EA · GW

Increasing the ease/decreasing the formality of world leaders talking to each other as per the Red Phone. World leaders mostly getting educated at the same institutions helps enormously with communication as well, though it does increase other marginal risks due to correlated blind spots.

Biorisk mitigation becoming a much higher-status field and thus attracting more top talent.

Pakistan not having nukes.

Comment by romeostevens on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-06T23:54:59.372Z · score: 3 (3 votes) · EA · GW

>in-depth marginal cost-effectiveness analysis.

I'd recommend finding an easy to remember name for the proposal.

Marginal Efficiency Gain Analysis? (MEGA)

something metaphorical? (what well known existing thing does something similar?)

Also some worked examples will both help cement the idea and show possible areas of improvement.

Comment by RomeoStevens on [deleted post] 2019-10-06T23:40:36.124Z

I think maintaining a lot of optionality winds up turning into risk aversion in practice.

Comment by romeostevens on Why do social movements fail: Two concrete examples. · 2019-10-06T23:36:35.960Z · score: 3 (3 votes) · EA · GW

Glad to see an analysis of GS!

I'll add that many movements fail by succeeding too well at something that was only incidental to the original aims of the movement.

Comment by romeostevens on Analgesics for farm animals · 2019-10-04T16:21:30.321Z · score: 6 (4 votes) · EA · GW

I'm excited about this area and only wish I had the funds to make grants in it. I think finding people to consult with who have been through the process of FDA change before would be especially helpful.

>I'm sure that there are people already working on this.

IIRC there are a couple of advocacy groups, but they seemed a bit orphaned due to pushback within the normal animal welfare memeplex (abolitionists vs. reductionists; this plays out in climate change too, on the nuclear and geoengineering fronts). I think this is neglected and there's an opportunity for a motivated group to move the needle substantially.

Comment by romeostevens on The Germy Paradox: An Introduction · 2019-10-04T16:12:54.502Z · score: 7 (4 votes) · EA · GW

Thanks for the substantive engagement even though I was pretty terse on justification. I'm less concerned when I see engagement with differential infohazard analysis (i.e. some parts of this might have problems and some might not). I still feel a sense of caution about EA getting involved in this area, given its poor track record of taking into account existing best practices/Chesterton fences.

+1 for comparing it to existing works in the area to help reason about this.

Comment by romeostevens on Altruism Coach · 2019-10-02T21:53:16.589Z · score: 8 (5 votes) · EA · GW

Thank you very much for offering your services. It's awesome to have a concrete go-to referral rather than some ambiguous sense that such services are reachable.

Comment by romeostevens on Why is the amount of child porn growing? · 2019-10-02T21:47:19.298Z · score: 8 (3 votes) · EA · GW

>Communication technology didn't change much in that timeframe

I find it plausible that the de facto availability of secure communication channels lowered the technical bar enough that thresholds were passed in that time frame.

Comment by romeostevens on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-28T16:21:01.435Z · score: 3 (2 votes) · EA · GW

Some examples:

thinking in terms of generational shifts

thinking in terms of a given outcome things are evolving towards

thinking in terms of specific scenarios

thinking in terms of cycles

Any of these can be thought about on multiple temporal horizons, and people will give different answers depending on their mental habits.

Comment by romeostevens on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-28T03:46:26.804Z · score: 2 (2 votes) · EA · GW

Inferential distances and thus discount rates are vastly different depending on which metaphors you use to think about the future.

Comment by romeostevens on The Germy Paradox: An Introduction · 2019-09-26T20:11:01.901Z · score: 1 (4 votes) · EA · GW

I don't think public discourse around this is a good idea. Same as the reports on nuclear weapons trying to demonstrate that a nuclear exchange 'wouldn't be that bad' or publicly wondering in detailed ways about why copycat attacks of certain kinds aren't more common.

Comment by romeostevens on Model-free and model-based cognition in deontological and consequentialist reasoning · 2019-09-24T22:23:41.806Z · score: 1 (1 votes) · EA · GW

+1, and to generalize: I think a bunch of philosophical debates are basically reifications of the different ways that different pattern-matching cognitive systems operate. We let the urge to compress for efficiency get a bit out of hand and try to build perverse monisms out of everything.

Comment by romeostevens on [Link] Research as a Stochastic Decision Process · 2019-09-12T22:09:57.354Z · score: 4 (3 votes) · EA · GW

Even smart people will often intuitively (that is to say, without realizing it, or only dimly realizing it) shy away from the part of the project that would provide information telling them they're doing the wrong thing. This is part of the value of things like Gantt charts and other project maps: even though the plans they are typically used to generate fail when colliding with reality, they can alert you to ways you are fooling yourself about the most uncertain parts of a project.

Comment by romeostevens on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-10T20:30:43.493Z · score: 4 (3 votes) · EA · GW

Although I lean in the direction that Hillary would have been a lower war risk than Trump, the fact that it's at all uncertain is depressing.

Comment by romeostevens on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-09-10T20:23:29.267Z · score: 1 (1 votes) · EA · GW

This is really interesting. I'm curious about crowding out and marginal dollar effects, i.e., the smart money spends all its resources on this, allowing the dumb money to free ride and keep on with the status quo (or even get worse, with fewer perceived consequences). Meanwhile, there are now far fewer smart dollars available to fund weird moonshots that only the smart money can think about.

One solution: more funding for geoengineering moonshots (and please, with fewer assumptions that geoengineering automatically means that safety and reversibility aren't major design criteria).

Comment by romeostevens on My recommendations for RSI treatment · 2019-09-10T15:00:42.670Z · score: 3 (2 votes) · EA · GW

This is a big part of the reason why a split keyboard can be so helpful since it really makes maintaining better posture much more comfortable and intuitive.

I also recommend a roost laptop stand to get the monitor up to eye level.

Comment by romeostevens on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-06T17:38:33.015Z · score: 3 (3 votes) · EA · GW

Most utilitarian gotchas are either circular or talking about leaky abstractions. 'Assume higher utility from taking option X, but OH NO, you forgot about consideration Y! Science has gone too far!'

See also aether variables.

Comment by romeostevens on Cause X Guide · 2019-09-05T12:09:37.140Z · score: 1 (1 votes) · EA · GW

I think there are two claims. I stand by both, but arguing them simultaneously causes a motte-and-bailey problem to rear its head.

Comment by romeostevens on Cause X Guide · 2019-09-04T14:57:29.721Z · score: 7 (5 votes) · EA · GW

We seem to be having different conversations. I think you're looking for strong evidence of stronger, more universal claims than I am making. I'm trying to say that this hypothesis (for some children) should be within the window of possibility and worthy of more investigation. There's a potential motte and bailey problem with that, and the claims about evidence for benefit from schooling broadly should probably be separated from evidence for harms of schooling in specific cases.

>Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. -Meditations on Moloch

Imagine that an altruistic community in such a world is very open-minded and willing to consider not shocking yourself all the time, but wants to see lots of evidence for it produced by the taser manufacturers, since after all they know the most about tasers and whether they are harmful...

If you give children the option of being tased or going to school, some of them are going to pick the taser.

Comment by romeostevens on Cause X Guide · 2019-09-04T01:38:00.704Z · score: 3 (4 votes) · EA · GW

It seems like you're arguing from common sense?

Comment by romeostevens on Cause X Guide · 2019-09-03T16:04:30.144Z · score: 5 (3 votes) · EA · GW

>There is strong evidence that the majority of children will never learn to read unless they are taught.

This is a different claim. I don't know of strong evidence that children will fail to learn to read if not sent to school.