Do sequence obfuscation technologies present a bio-threat? 2021-09-03T09:33:33.533Z


Comment by BenStewart on The most important century and the representativeness of EA · 2021-09-30T06:17:36.501Z · EA · GW

No worries, and although I'm a little unsure if it is against forum rules or whatever, this might be helpful: 

Comment by BenStewart on Summary of history (empowerment and well-being lens) · 2021-09-28T23:15:23.120Z · EA · GW

On the 'History is a story' point, Jo Guldi and David Armitage's 'History Manifesto' might be relevant (available open-access here: , summarised here: ). 

It's a book lamenting the tendency for history to focus on smaller and smaller topics and lengths of time, rather than attempting grander understanding of trends and forces. It also talks about the important role of history in informing policy, and even in understanding possible futures. The summary discusses some significant pushback, but it's interesting nonetheless.

Comment by BenStewart on The most important century and the representativeness of EA · 2021-09-28T06:23:49.896Z · EA · GW

Thanks for the good post. I'm reminded of a paper by the philosopher Elizabeth Anderson that you might find interesting. It's about how epistemic injustice (harm or unfairness done to a person in their capacity as a source of knowledge) is not just a transactional phenomenon between individuals, but is instantiated in social structures too. And responses to these injustices may need to be structural.

The particular relevance to EAs might be that while each member may be epistemically virtuous (e.g. not allowing ethnic or racial biases to affect their judgments of others' claims), particular structures might still be objectionable. There's another paper (that I thought was by Anderson, but can't find!) that talks about the epistemic benefits of getting diverse input.

[Edit: Also, just because I'm currently reading it and it is somewhat relevant, it's worth noting Hans Rosling's broad summary of changes in the global population distribution: currently there are 1 billion people in Europe, 1 billion in the Americas, 1 billion in Africa, and 4 billion in Asia. By 2100, it is predicted that there will still be 1 billion in Europe and 1 billion in the Americas, but 4 billion in Africa and 5 billion in Asia.]

Comment by BenStewart on Working in the U.S. helped us get more money to international recipients · 2021-09-16T04:19:30.127Z · EA · GW

Thanks for this post! When I learned last year that GiveDirectly was including U.S. recipients I admit my naive reaction was disappointment. It's great to hear it has worked out so well, and also a useful example where I failed to appreciate the complexity of a situation.

Comment by BenStewart on Open and Welcome Thread: November 2020 · 2020-11-10T10:37:41.275Z · EA · GW

Hey there! Nice to meet you. Send me a message if you want to chat more.

Comment by BenStewart on Open and Welcome Thread: November 2020 · 2020-11-09T11:33:37.773Z · EA · GW

Hi, I'm Ben, currently a medical student at the University of Sydney. I did a double undergraduate degree in philosophy, international relations, and neuroscience.

I've spent my MD doing bits and bobs in global health research and advocacy, and spent some time as an intern on the Bio team at the Future of Humanity Institute, Oxford. Next year I'll be spending some time at Vaxxas, a vaccine nanopatch company, finishing my MD, and figuring out my next steps.

Comment by BenStewart on Have you ever used a Fermi calculation to make a personal career decision? · 2020-11-09T11:24:50.043Z · EA · GW

Thanks for asking this! I'm looking forward to reading discussion of it. I feel similar to you, I think. I'm trying to decide between career options in global health, health security, and global catastrophic biological risk (GCBR) reduction. There are a lot of different inputs, both personal and external, but one aspect I've struggled with is the tension between being mostly convinced of the arguments for GCBR work (and trusting the many smart people convinced by them) and feeling that the probabilities of me making a difference on low-probability/high-consequence events are too small when multiplied together.

Regardless of whether the math 'works out' in a Fermi calculation of a GCBR career (whether by me or others), it still feels sort of 'thin' to base a major career change on.

Here's an example to try to capture the feeling of thinness (or fragility): it seems plausible that a few papers (or blog posts) could come along that are devastatingly clever, featuring arguments or evidence I hadn't thought of, and that show with high confidence that the risk of synthetic pandemics is extremely low (this specific example might not be plausible, but it captures my psychology at least). If that happened after I'd spent 20-ish years narrowly focused on synthetic pandemic risk, without major transferable career capital, I'd feel I'd made a mistake (if not ex ante, at least ex post).

The reduction in importance wouldn't necessarily have to be dramatic to be consequential for individual career choices. If, given my previous experience, it's a close race between my options in terms of projected ethical impact, even a minor reduction could reveal that my eventual choice was actually a distant second. Nor would the reduction have to deflate the entire problem: arguments might reveal that people similar to me actually have vanishingly little (or negative) impact on the issue.

Traditional career choices seem to be based more on personal preference, social connection, tradition, or chance. There's plenty wrong with those approaches. One positive, however, is that those ways of making choices are more resistant to this kind of intellectual deflation by others.

Of course, there's also plenty of regret and uncertainty in traditional careers and career choices. It's not clear to me whether this feeling of 'thinness' is just a bias I need to work through, or is actually tracking something important. And, like you say, it's not clear what we should do otherwise.