Want to alleviate developing world poverty? Alleviate price risk. (2018) 2021-03-23T00:16:17.604Z
Suggestion that Zvi be awarded a prize for his COVID series 2020-09-24T19:16:52.487Z
Systematically under explored project areas? 2015-09-28T20:16:19.754Z
Criminal Justice Reform: DEA enforcement incentives? 2015-03-25T02:20:15.761Z
At the intersection of Global Health and Global Risks: Bill Gates talks about epidemic preparation [LINK] 2015-03-18T20:07:52.496Z
How Much Can We Generalize from Impact Evaluations? (link) 2014-10-30T08:09:17.489Z


Comment by RomeoStevens on We all teach: here's how to do it better · 2022-10-01T05:38:20.521Z · EA · GW

You may be interested in this convo I had about research on pedagogical models. The tl;dw, if you just want the interventions that have replicated with large effect sizes:

  1. Deliberate practice
  2. Lots of low-stakes quizzing
  3. Elaboration of context (deliberately structuring things to give students the chance to connect knowledge areas themselves)
  4. Teaching the material to others (forcing organization of the material in a way helpful to the one doing the teaching, and helping them identify holes in their own understanding)
Comment by RomeoStevens on Effective altruism in the garden of ends · 2022-09-01T23:03:47.975Z · EA · GW

Root out maximizers within yourself. Even 'doing the most good.' Maximizer processes are cancer, trying to convert the universe into copies of themselves. But this destroys anything that the maximizing was for.

Comment by RomeoStevens on Apply for Red Team Challenge [May 7 - June 4] · 2022-04-02T16:39:57.382Z · EA · GW

Potentially of use in running a short workshop: the evidence on the effectiveness of pedagogical techniques. From engaging with the literature, the highest-quality systematic review I could find pointed to four techniques as showing robust effect sizes across many contexts and instantiations. They are

  1. Deliberate practice
  2. Cuing elaboration of context
  3. Regular low-stakes quizzing
  4. Teaching the material to others
Comment by RomeoStevens on Want to alleviate developing world poverty? Alleviate price risk. (2018) · 2021-03-25T18:11:50.134Z · EA · GW

Lots of markets fail to clear for a long time until coordination problems are solved.

Comment by RomeoStevens on Don't Be Bycatch · 2021-03-23T00:29:44.572Z · EA · GW

I propose that March 26th (six months from Petrov Day) be Converse Petrov Day.

Comment by RomeoStevens on [deleted post] 2021-03-16T02:34:32.929Z

In the long run, yes. But that's overly simplistic when considering humans, because of all the things we might do to either memetically or technologically undermine evolutionary equilibria.

Comment by RomeoStevens on [deleted post] 2021-03-15T00:16:42.045Z

r/K selection.

Comment by RomeoStevens on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-12-15T18:50:27.591Z · EA · GW

And later, IIRC: "maybe not needing to hear their screams is what being the Comet King means."

Comment by RomeoStevens on Thoughts on whether we're living at the most influential time in history · 2020-11-15T02:32:27.912Z · EA · GW

In order for hingeyness to stay uniform, robustness to x-risk would need to scale uniformly with the power needed to cause x-risk.

Comment by RomeoStevens on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-15T02:25:40.510Z · EA · GW

In the same way that an organism tries to extend the envelope of its homeostasis, an organization has a tendency to isolate itself from falsifiability in its core justifying claims. Beware those whose response to failure is to scale up.

Comment by RomeoStevens on Life Satisfaction and its Discontents · 2020-09-25T20:59:42.408Z · EA · GW

The matrix of the Neef model is pretty cool.

Comment by RomeoStevens on Open and Welcome Thread: August 2020 · 2020-08-24T19:05:27.810Z · EA · GW

Towards measuring the poverty costs of COVID from economic disruption:

Comment by RomeoStevens on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T00:40:29.813Z · EA · GW

Thank you for the work put into this.

I can imagine a world in which the idea of a peace summit that doesn't involve leaders taking MDMA together is seen as an 'are you even trying' type thing.

Comment by RomeoStevens on Geographic diversity in EA · 2020-06-26T23:44:00.273Z · EA · GW

Great points. I feel like there's a rule of thumb somewhere in here like 'marginal dollars tend to be low information dollars' that feels helpful.

Comment by RomeoStevens on EA considerations regarding increasing political polarization · 2020-06-23T10:05:29.466Z · EA · GW

This portion of the PBS documentary A Century of Revolution covers the Cultural Revolution (around the 1-hour mark):

Recommended. One interesting bit for me: I think foreign dictators often appear clownish because the translations don't capture what they were speaking to, either literally (whether they were a good speechwriter) or contextually (unfamiliarity with the cultural context that animates a particular popular political reaction). I think this applies even if you nominally speak the same language as the dictator but don't share their culture.

Comment by RomeoStevens on How to Measure Capacity for Welfare and Moral Status · 2020-06-02T09:29:41.758Z · EA · GW

Appreciate the care taken, especially in the atomistic section. One thing: it seems to assume that the best we can do with such a research agenda is analyze correlates, when what we really want is a causal model.

Comment by RomeoStevens on Some thoughts on deference and inside-view models · 2020-06-02T08:04:56.764Z · EA · GW

I really enjoyed this. A related thought is a possible reason why more debate doesn't happen. When rationalist-style thinkers debate, especially in public, it feels a bit high stakes. There is pressure to demonstrate good epistemic standards, even though no one can define a good basis set for that. This goes doubly for anyone who feels like they have a respectable position or are well regarded: there is a lot of downside risk to them engaging in debate and little upside. I think the thing that breaks this is actually pretty simple and is helped by the 'sorry' command concept. Two moves need to be socially free. First, choosing whether or not to debate at all, which avoids the situation where a person mostly wants to debate only when they're in the mood and about the thing they're interested in, but doesn't want to defend a position against arbitrary objections they may have answered lots of times before. Second, saying something like: 'Actually, some of my beliefs in this area are cached sorries, so I reserve the right to not have perfect epistemics here already. And even if we refute specific parts of the argument, we might disagree on whether it's a smoking gun, so I can go away and think about it and don't have to publicly update on the spot.' Together these derisk engaging in a friendly, yet still adversarial, form of debate.

If we believe that people doing a lot of this play fighting will on average increase the volume and quality of EA output both through direct discovery of more bugs in arguments and in providing more training opportunity, then maybe it should be a named thing like Crocker's rules? Like people can say 'I'm open to debating X, but I declare Kid Gloves' or something. (What might be a good name for this?)

Comment by RomeoStevens on How should longtermists think about eating meat? · 2020-05-21T01:31:18.302Z · EA · GW

This is a great research question IMO.

Comment by RomeoStevens on How should longtermists think about eating meat? · 2020-05-19T02:31:23.823Z · EA · GW

> Costs of being vegan are in fact trivial, despite all the complaining that meat-eaters do about it. For almost everyone there is a net health benefit and the food is probably more enjoyable than the amount of enjoyment one would have derived from sticking with one's non-vegan diet, or at the very least certainly not less so. No expenditure of will-power is required once one is accustomed to the new diet. It is simply a matter of changing one's mind-set.

Appreciate some of the points, but this part seems totally disconnected from what people report along several dimensions.

Comment by RomeoStevens on What's the big deal about hypersonic missiles? · 2020-05-19T02:27:27.380Z · EA · GW

Potential EA career: go into defense R&D specifically for 'stabilizing' weapons tech, i.e. doing research on things that would favor defense over offense. In 3D space, this is very hard.

Comment by RomeoStevens on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-19T02:23:05.961Z · EA · GW

This is only half-formed, but I want to say something about a slightly different frame for evaluation, what might be termed 'reward architecture calibration.' While a mapping from this frame to various preference and utility formulations is possible, I like it more than those frames because it suggests concrete areas to start looking. The basic idea is that, in principle, it seems likely we could draw a clear distinction between reward architectures that are well suited to the actual sensory input they receive and reward architectures that aren't (by dint of being in an artificial environment). In a predictive-coding sense, a reward architecture that is sending constant error signals that an organism can do nothing about is poorly calibrated, since it is directing the organism's attention to the wrong things. Similarly, there may be other markers that could be spotted in how a nervous system is sending signals, e.g. lots of error collisions vs. few, in the sense of two competing error signals pulling behavior in different directions. I'd be excited about a medium-depth dive into the existing literature on distress in rats, and what sorts of experiments we'd ideally want done to resolve confusions.

Comment by RomeoStevens on Modelers and Indexers · 2020-05-13T07:59:58.225Z · EA · GW

Literally today I was idly speculating that it would be nice to see more things that were reminiscent of the longer letters academics in a particular field would write to each other in the days of such. More willingness to explore at length. Lo and behold this very post appears. Thanks!

WRT content, you mention it in passing, but yeah this seems related to tendency towards optimization of causal reality (inductive) or social reality (anti-inductive).

Comment by RomeoStevens on Physical theories of consciousness reduce to panpsychism · 2020-05-07T08:39:18.519Z · EA · GW

Panpsychism still seems like a flavor of eliminativism to me. What do we gain by saying an electron is conscious too? Novel predictions?

Comment by RomeoStevens on The Alienation Objection to Consequentialism · 2020-05-05T08:01:28.344Z · EA · GW

Seems like you're trying to get at what I've seen referred to as 'multifinal means.' The keyword might help find related work.

This is sort of tangential, but related to the idea of distinguishing between inputs and outputs in running certain decision processes. I now view both consequentialist and deontological theories as examples of what I've been calling perverse monisms. A perverse monism is when there is a strong desire to collapse all the complexity in a domain into a single term. This is usually achieved via aether variables: we rearrange the model until the complexity (or uncertainty) has been shoved into a corner, either implicitly or explicitly, which makes the rest of the model look very tidy indeed.

With consequentialism we say that one should allow the inputs to vary freely while holding the outputs fixed (our idea of what the outcome should be, or heuristics that evaluate outcomes etc.). We backprop the appropriate inputs from the outputs. Deontology says we can't control outputs, but we can control inputs, so we should allow outputs to vary freely while holding the inputs to some fixed ideal.

Both of these are a hope that one can avoid the nebulosity of having a full-blown confusion matrix over inputs and outputs, one that changes from problem to problem. That is to say, I have some control over which outputs to optimize for, some control over inputs, and false positives and false negatives in my beliefs about both. Actual problem solving of any complexity both forward chains from known info about inputs and backchains from previous data about outputs, then tries to find places where the two branching chains meet. In the process of investigating this, beliefs about the inputs or outputs may also update.

More generally, I've been getting a lot of mileage out of thinking of 'philosophical positions' as different sorts of error checks that we use on decision processes.

It's also fun to think about this in terms of the heuristic that How to Measure Anything recommends:

  1. Define parameters explicitly (what outputs do we think we care about, what inputs do we think we control)
  2. Establish the value of information (how much will it cost to test various assumptions)
  3. Uncertainty analysis (narrowing confidence bounds)
  4. Sensitivity analysis (how much does the final proxy vary as a function of changes in inputs)

It's a nonlinear heuristic: the info gathered at any one step can send you back to adjust one of the others, which involves the same sort of bouncing between forward chaining and backchaining.

Comment by RomeoStevens on If it's true that Coronavirus is "close to pathologically misaligned with some of our information distribution and decisionmaking rituals", then what things would help the response? · 2020-04-26T03:32:53.518Z · EA · GW

So a conceptual slice might be that not only do generals fight the last war, but the ontology of your institutions reflects the necessities of the last war.

Comment by RomeoStevens on Measuring happiness increases happiness · 2020-04-16T03:44:57.286Z · EA · GW

It has been noted that when status hierarchies diversify, creating more niches, people are happier than when status hierarchies collapse to a single or small number of very legible dimensions. This suggests that it would be possible to increase net happiness by studying the conditions under which these situations arise and tilting the playing field. E.g. are social media sites having a negative impact on mental health only because they compress the metrics by which success is measured?

Comment by RomeoStevens on New Top EA Cause: Politics · 2020-04-06T06:44:28.438Z · EA · GW

Related: surely someone somewhere is doing critical path analysis of vaccine development. It certainly wouldn't be the case that in the middle of a crisis people just keep on doing what they've always done. Even if it isn't anyone's job to figure out what the actual non-parallelizable causal steps are in producing a tested vaccine and trimming the fat, someone would still take it on, right?


Comment by RomeoStevens on New Top EA Causes for 2020? · 2020-04-02T05:14:51.147Z · EA · GW

Training children that it is a good idea to keep psychopaths as pets as long as they are cute probably results in them voting actors into positions of authority later in life.

Comment by RomeoStevens on New Top EA Causes for 2020? · 2020-04-01T18:03:00.383Z · EA · GW

Exploit selection effects on prediction records to influence policy.

During a crisis, people tend to implement the preferred policies of whoever seems to be accurately predicting each phase of the problem. When a crisis looms on the horizon, EAs coordinate to all make different predictions thus maximizing the chance that one of them will appear prescient and thus obtain outsize influence.
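The (tongue-in-cheek) scheme above can be sketched with a toy model. This is a hypothetical illustration, not anything from the post: assume there are k equally likely ways the crisis could unfold, and each forecaster publicly stakes out a different one.

```python
# Toy model of the coordinated-divergent-predictions scheme: with k equally
# likely outcomes, n forecasters spreading out over distinct outcomes cover
# min(n, k) of them, so the chance that *someone* looks prescient is
# min(n, k) / k. If everyone herds on one consensus guess, it's just 1/k.
def prob_someone_prescient(n_forecasters: int, k_outcomes: int) -> float:
    return min(n_forecasters, k_outcomes) / k_outcomes

print(prob_someone_prescient(5, 10))  # 0.5 when five EAs spread out
print(prob_someone_prescient(1, 10))  # 0.1 when everyone makes the same call
```

The uniform-outcome assumption is doing all the work here; in practice forecasters would presumably spread over the most probable outcomes first.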

Comment by RomeoStevens on What posts do you want someone to write? · 2020-03-28T02:26:39.194Z · EA · GW

"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"

"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non humans)

"Matching problems: a literature review"

"Entropy for intentional content: a formal model" (AI related)

"Graph traversal using negative and positive information, proof of divergent outcomes" (neuroscience relevant potentially)

"One weird trick that made my note taking 10x more useful"

Comment by RomeoStevens on Coronavirus Research Ideas for EAs · 2020-03-27T21:57:53.406Z · EA · GW

A lot of people are willing to try new things right now. Rapid prototyping of online EA meetups could lead to better ability to do remote collaboration permanently. This helps cut against a key constraint in matching problems, co-location.

Comment by RomeoStevens on What are the key ongoing debates in EA? · 2020-03-12T06:47:32.245Z · EA · GW

Ah, key = popular; I guess I can simplify my vocabulary. I'm being somewhat snarky here, but AFAICT it satisfies the criterion that significant effort has gone into debating it.

Comment by RomeoStevens on What are the key ongoing debates in EA? · 2020-03-09T00:57:26.632Z · EA · GW

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

Comment by RomeoStevens on COVID-19 brief for friends and family · 2020-03-06T20:05:18.979Z · EA · GW

Touchscreen styluses for all those public touchscreens.

Comment by RomeoStevens on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-28T02:47:25.694Z · EA · GW

At a $50-per-ton cost to sequester carbon, the average American would need to generate about $1,000 per year of positive impact to offset their CO2 emissions (roughly 20 tons per year). The idea that the numbers are even close to comparable means priors are way, way off. The signaling commons have been polluted on this front by people impact-LARPing their short showers, lack of water at restaurants, and other absurdities.
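The back-of-envelope arithmetic above can be checked directly. Both figures are the assumed round numbers from the comment ($50/ton sequestration cost, ~20 tons CO2 per average American per year), not precise estimates:

```python
# Offset-cost arithmetic: assumed sequestration cost times assumed
# per-capita emissions gives the annual dollar figure to fully offset.
SEQUESTRATION_COST_PER_TON = 50  # USD/ton, assumed
TONS_CO2_PER_PERSON_YEAR = 20    # rough US per-capita figure, assumed

annual_offset_cost = SEQUESTRATION_COST_PER_TON * TONS_CO2_PER_PERSON_YEAR
print(annual_offset_cost)  # 1000
```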

Comment by RomeoStevens on Why SENS makes sense · 2020-02-23T20:55:29.820Z · EA · GW

I think that much of the disconnect comes down to focusing on goals over methods. I think it is better to think of goals as orienting us in the problem-space, while most of the benefits accrue along the way. By the time you make it a substantial fraction of the way to a goal, you'll likely be in a much better position to realize the original goal was slightly off and adjust course. So 'eliminating all infectious disease' could easily be criticized as unrealistic for endless reasons, yet it is very useful for orienting us to be scope-sensitive, think in terms of hits-based reasoning, and so on. Similarly, we even have an 'N problems of aging' list to argue about only because someone did the work of trying to figure out what it would take at a multi-year research level. If we want to talk about neglected areas of funding, I think a great place to start is the neglect of funding promising methods, or directions that might plausibly generate new methods, with less focus on what the particular outcomes might be. Or, to sort of paraphrase Hanson and Bostrom a bit: new considerations generally trump fine-tuning of existing considerations.

What could we measure that would make seemingly intractable problems trivial? Can we take moonshots at those? And I'm not talking about actually funding the moonshot once the opportunity has been identified. I'm talking about the seed research to identify plausibility, funding small numbers of people at the 1 year level to do deep dives in much weirder areas than in house researchers have been doing.

Comment by RomeoStevens on Harsanyi's simple “proof” of utilitarianism · 2020-02-23T17:58:21.768Z · EA · GW

> there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.

Yup, thanks. Also across time, as well as across agents at a particular moment.

Comment by RomeoStevens on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T21:55:15.118Z · EA · GW

Like other links between VNM and utilitarianism, this seems to sweep intersubjective utility comparison under the rug. The agents are likely using very different methods to convert their preferences to the given numbers, rendering the aggregate of them non-rigorous and subject to instability in iterated games.

Comment by RomeoStevens on What are the best arguments that AGI is on the horizon? · 2020-02-17T02:45:11.951Z · EA · GW

Note also that your question has a selection filter: you'd also want to figure out where the best arguments for longer timelines are. In an ideal world these two sets of things live in the same place; in our world this isn't always the case.

Comment by RomeoStevens on My personal cruxes for working on AI safety · 2020-02-15T20:05:47.031Z · EA · GW

You don't, but that's a different proposition with a different set of cruxes since it is based on ex post rather than ex ante.

Comment by RomeoStevens on My personal cruxes for working on AI safety · 2020-02-14T19:51:29.123Z · EA · GW

The chance that the full stack of individual propositions evaluates as true in the relevant direction (work on AI vs work on something else).

Comment by RomeoStevens on My personal cruxes for working on AI safety · 2020-02-14T19:50:07.595Z · EA · GW

First, doing philosophy publicly is hard and therefore rare. It cuts against Ra-shaped incentives. Much appreciation for the efforts that went into this.

>he thinks the world is metaphorically more made of liquids than solids.

Damn, the convo ended just as it was getting to the good part. I really like this sentence and suspect that thinking like this remains a big untapped source of generating sharper cruxes between researchers. Most of our reasoning is secretly analogical, with deductive and inductive reasoning back-filled to try to fit it to what our parallel processing already thinks is the correct shape that an answer is supposed to take. If we go back to the idea of security mindset, the representation one tends to use will be made up of components; your type system for uncertainty will be uncertainty over those components varying. So the sorts of things your representation uses as building blocks will be the kinds of uncertainty you have an easier time thinking about and managing. Going upstream in this way should resolve a bunch of downstream tangles, since the generators for the shape/direction/magnitude (this is an example of such a choice that might impact how I think about the problem) of the updates will be clearer.

This gets at a way of thinking about metaphilosophy. We can ask what more general class of problems AI safety is an instance of, and maybe recover some features of the space. I like the capability amplification frame because it's useful as a toy problem to think about random subsets of human capabilities getting amplified, to think about the non-random ways capabilities have been amplified in the past, and what sorts of incentive gradients might be present for capability amplification besides just the AI research landscape one.

Comment by RomeoStevens on Prioritizing among the Sustainable Development Goals · 2020-02-07T19:05:03.663Z · EA · GW

EA is well positioned for moonshot funding (though to date it has mostly attracted risk-averse donors, AFAICT). It seems like an interesting generator to ask what moonshots look like for these categories.

Comment by RomeoStevens on The Intellectual and Moral Decline in Academic Research · 2020-02-07T18:54:07.711Z · EA · GW

> stated that from 2000 to 2010, nearly 80,000 patients were involved in clinical trials based on research that was later retracted.

We can't know whether this is a good or bad number without context.

Comment by RomeoStevens on 80,000 Hours: Ways to be successful that people don't talk about enough · 2020-01-31T16:45:56.489Z · EA · GW

The number of people working on things outside the Overton window is sharply limited by who is able and willing to risk being unsuccessful.

Comment by RomeoStevens on Doing good is as good as it ever was · 2020-01-29T14:36:33.075Z · EA · GW

Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta ea and other movement concerns.

Comment by RomeoStevens on Doing good is as good as it ever was · 2020-01-26T13:48:32.186Z · EA · GW

I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those existing efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivational effects discussed here.

Comment by RomeoStevens on Love seems like a high priority · 2020-01-20T04:15:06.479Z · EA · GW

Dating apps have misaligned incentives. A dating app run as a nonprofit could plausibly outcompete on the metric of successful couple formation.

Comment by RomeoStevens on How Much Leverage Should Altruists Use? · 2020-01-07T04:40:07.101Z · EA · GW

IIRC Interactive Brokers isn't going to let you lever up more than about 2:1, though if you have 'separate' personal and altruistic accounts you can potentially lever your altruistic side higher. E.g., if you have 50k in personal accounts and 50k in altruistic accounts, you can get 100k in margin, allowing you to lever up the altruistic side 3:1.

Lazy people can access mild leverage (1.5:1) through NTSX for low fees. Many brokerages don't grant access to the more extreme 3:1 ETFs.
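The account-splitting arithmetic above can be sketched as follows. The figures are the hypothetical ones from the comment (50k personal, 50k altruistic, an assumed ~2:1 overall broker cap), not actual margin rules:

```python
# Account-splitting sketch: total equity across both accounts sets the
# margin available; keeping the personal side unlevered and pointing all
# borrowing at the altruistic side raises its effective leverage.
personal_equity = 50_000    # USD, hypothetical
altruistic_equity = 50_000  # USD, hypothetical
total_equity = personal_equity + altruistic_equity

MAX_OVERALL_LEVERAGE = 2.0  # assumed broker cap on the combined accounts
total_buying_power = total_equity * MAX_OVERALL_LEVERAGE  # 200k exposure
borrowed = total_buying_power - total_equity              # 100k margin loan

# Direct all borrowing to the altruistic side:
altruistic_exposure = altruistic_equity + borrowed
altruistic_leverage = altruistic_exposure / altruistic_equity
print(altruistic_leverage)  # 3.0
```

Of course, a broker computes margin per account, so whether this works in practice depends on how the accounts are actually structured.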

Comment by RomeoStevens on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-11-06T02:27:32.422Z · EA · GW

Thanks for fleshing this out.