Posts

How to dissolve moral cluelessness about donating mosquito nets 2022-06-08T07:12:27.247Z
Reflections on Star Trek Strange New Worlds S1 Episode 1 2022-06-02T04:22:02.309Z
Is it possible to submit to the 80,000 Hours Jobs Board? 2021-03-16T22:24:03.560Z
Solving the moral cluelessness problem with Bayesian joint probability distributions 2021-02-28T09:17:58.459Z
Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" 2021-02-25T20:06:02.045Z
Have you ever used a Fermi calculation to make a personal career decision? 2020-11-09T09:34:53.092Z
Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children 2020-02-26T10:04:05.623Z

Comments

Comment by ben.smith on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-14T20:34:51.366Z · EA · GW

Good post!

I think in an ideal world, you could confidently say that "there is value in trying to interact with AI researchers outside of the AI Alignment bubble", not only so you can figure out cruxes and better convince them, but because you might learn they are right and we are wrong. I don't know whether you believe that, but it seems not only true but also to follow very strongly from our movement's epistemic ideals about being open-minded and following evidence and reason where they lead.

If you felt that you would get pushback for suggesting there's an outside view on which AGI alignment sceptics might be right, I hope you are wrong; but if many other people feel that way, it indicates some kind of epistemic problem in our movement.

Any time someone feels there's something critical they can't say, even when they are speaking in good faith and trying to use evidence and reason to do the most good, that's a potential epistemic failure mode we need to guard against.

Comment by ben.smith on A hypothesis for why some people mistake EA for a cult · 2022-06-09T18:36:00.885Z · EA · GW

A key characteristic of a cult is a single leader who accrues a large amount of trust and is held by themselves and others to be singularly insightful. The LW space gets like that sometimes, less so EA, but they are adjacent communities. 

Recently, Eliezer wrote

The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so.  Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try.  I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a "positive model violation" (miracle).  The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.  I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this.  That's not what surviving worlds look like.

I don't necessarily disagree with this analysis; in fact, I have made similar observations myself. But the social dynamic of it all pattern-matches to cult-like behaviour, and I think that's a warning sign we should be wary of as we move forward. In fact, I think we should probably have an ongoing community health initiative targeted specifically at monitoring for signs of group-think and other forms of epistemic failure in the movement.

Comment by ben.smith on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T16:55:12.227Z · EA · GW

Ord (2020) listed climate change as an x-risk, though on reflection he may have given 1/1000 as an absolute upper bound and thought the actual risk was lower than that.

I have a hard time imagining stories about how population growth in Africa could lead to higher existential risk that aren't mediated through climate change or resource shortage (which seems closely linked to climate change, in that many resource limits boil down to carbon emissions)--particularly in a context where global population seems likely to peak and then decline sometime in the second half of the 21st century. Most of the pathways I can imagine would point to lower existential risk. If the starting point is that bednet distribution leads to lower existential risk, there isn't really a dilemma, and so that case seemed less interesting to analyse. So that's probably one reason I saw more value in starting my analysis with the climate change angle.

However, there are probably causal possibilities I've missed. I'd be interested to hear what you think they might be. I do think someone should try to examine those more closely in order to try and put reasonable probabilistic bounds around them.

I certainly don't think the analysis above is complete. As I said in the post, the intent was to demonstrate how we could "dissolve" or reduce some moral cluelessness to ordinary probabilistic uncertainty by using careful reason and evidence to evaluate possible causal pathways. I think the analysis above is a start and a demonstration that we can reduce uncertainty through reasoned analysis of evidence, but we'd definitely need a more extended analysis before acting on it. Then we can take an expected-value approach to work out the likely benefit of our actions.

Comment by ben.smith on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T14:25:45.313Z · EA · GW

You're right that I didn't discuss it much. Perhaps I should have.

My working model is that world per capita net GHG emissions will begin to decline at some point before 2050 and reach net zero some time between 2050 and 2100. The main relevance of population here was that a higher population would increase emissions. But once the world reaches net zero per capita emissions, additional people might not produce more emissions.

I think it's quite plausible that population decline due to economic growth induced in 2022 won't show up for a couple of generations--potentially after we reach net zero. So I didn't include it in the model. If I had done so, we'd get a result more in favour of donating bednets.

Comment by ben.smith on New cause area: Violence against women and girls · 2022-06-06T16:25:44.524Z · EA · GW

Good post, well done! The cost-effectiveness analysis was done well. There are a couple of things I'd like to see added to it that could further strengthen the argument. First, I don't have a good handle on the cost per DALY of well-known GiveWell interventions like AMF, or the equivalent for direct giving, and it would be good to see that comparison (comparison with other health outcomes might also be helpful).

Second, if the sources are strictly measuring medical and health outcomes of reduced violence, the true magnitude of the benefit could actually be quite a bit more, because plausibly there are additional well-being benefits not captured by a pure medical analysis.

You have mentioned economic benefits in other parts of the report, so I suppose it would be helpful to capture those in the analysis of specific cause areas too.

That said, a cost per DALY of $52-78 sounds reasonably good, at least?

Comment by ben.smith on Who wants to be hired? (May-September 2022) · 2022-06-02T04:48:01.121Z · EA · GW
Location: Eugene, OR
Remote: OK
Willing to relocate: Yes
Skills: Data science, machine learning, visualization, neuroimaging, survey design, research
Résumé/CV/LinkedIn: https://docs.google.com/document/d/1fR9cKbCY8HLFmKdxf-o5ujBmQc7Pqzhdzq2We5PyJVo/edit
Email: benjsmith@gmail.com
Notes: Interested in AGI Safety
Comment by ben.smith on Friends or relatives in Oregon? Please let us know! Updates & actions to help Carrick win · 2022-05-16T18:59:07.329Z · EA · GW

I volunteered over the weekend door-knocking for the campaign. The team are hardworking and largely made up of EAs. If anyone can help on the phone banks before the day ends, I expect it would have a substantial EV impact.

Comment by ben.smith on Announcing the Future Fund · 2022-05-03T16:31:59.083Z · EA · GW

I put in an application on 21st March, but haven't yet heard back. Are some applications still being processed, or should I assume this is either a negative response or that I must have made some mistake in submitting?

Comment by ben.smith on We're announcing a $100,000 blog prize · 2022-04-26T17:43:45.772Z · EA · GW

Perhaps an enterprising blogger could start an interview-format blog, where they interview EA authors of those "internal discussions and private documents" and ask them to elucidate their ideas in a way suitable for a general audience. I think that would make for a pretty neat and high-value blog!

Comment by ben.smith on How to become an AI safety researcher · 2022-04-12T17:39:17.630Z · EA · GW

Interesting post, Peter; I really appreciate this and got a lot of useful ideas. While trying to assign the appropriate weight to the perspectives here, it was useful for me to see where I've been consistent with these success stories and where I might have ground to make up.

I wonder if it's worth following up this very useful qualitative work with a quantitative survey?

Comment by ben.smith on "Long-Termism" vs. "Existential Risk" · 2022-04-07T05:38:44.531Z · EA · GW

Speaking about AI risk particularly, I haven't bought into the idea that there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had, many of my interlocutors haven't either. There are two key points to get across to bring the average interlocutor on the street or at a party to an Eliezer Yudkowsky level of worrying:

  • Transformative AI will likely happen within 10 years, or 30
  • There's a significant chance it will kill us all, or at least a catastrophic number of people (e.g. >100m)

It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.

If I think AI has maybe a 1% chance of being a catastrophic disaster, rather than, say, the 1/10 that Toby Ord gives it over the next 100 years or the higher risk that Yudkowsky gives it (>50%? I haven't seen him put a number to it), then I have to go through the additional step of explaining to someone why they should care about a 1% risk of something. After the pandemic, where the statistically average person has a ~1% chance of dying from covid, it has been difficult to convince something like a third of the population to give a shit about it. The problem with small numbers like 1%, or even 10%, is that a lot of people just shrug and dismiss them. Cognitively they round to zero. But the conversation "convince me 1% matters" can look a lot like just explaining longtermism to someone.

Comment by ben.smith on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2022-03-20T21:14:51.022Z · EA · GW

Some great points, and you've got me thinking again, honestly. I'll concede that if the GDP impact or human life impact were quite a bit different, and they absolutely could be, I'd be...at least thinking a lot harder about this.

I guess my central point was that you cannot argue that CC should not be a significant factor deciding on having children or not (if you care for total happiness), without arguing whether having children is something that will effectively exacerbate CC in the long run or not. And I think you were trying to do that.

That's a fair criticism. Trying to sum up, I think the point I'm trying to get across (poorly expressed in my OP, I have to say) is that

(1) one should (under a total view of happiness) include the enjoyment one's potential child will get out of life in the calculations

(2) the enjoyment one's potential child will get out of life is almost certainly still positive, and 

(3) to make a new person's existence net-negative, the marginal climate impact of an extra person would have to be large enough to outweigh the total utility of an extra person living, say, 40-80 well-being-adjusted life-years. While we can all see that the impact of climate change as a whole is large, that is the combined impact of 8 billion people; the individual impact of each marginal person is much smaller than the WALYs they experience through existing.

On my understanding of the impacts, I had thought (2) and (3) would be uncontroversial given the evidence. Thus, I mainly wanted to point out the analytical argument outlined in the previous paragraph, and that would be enough. But now that you've told me the true GDP impact could be much greater than 10%, I'm much less certain about that! I guess you are right, at least, that the debate is "messy".

Do you have any sources you can recommend that contain more reliable estimates of (a) GDP impact, (b) human life impact, or (c) long-run exacerbation where things become "overwhelmingly negative"? All of that would concern me, particularly the long-run overwhelmingly-negative scenario.

I understand this is getting into an entirely new argument I didn't make originally, so I'll understand if you don't want to stray, but at some point I think the "climate cost" of growing the population by some amount is the lesser of (a) the cost of mitigating the extra carbon footprint by other means, or (b) the actual effects of that carbon footprint. That assumes "we" (whoever the imagined "we" is) will choose the lesser-cost option, which is problematic; but on the other hand, I'm not sure how much moral responsibility you can build into the choice to have a child if a less impactful alternative to mitigation exists which society as a whole chooses not to pursue.

Comment by ben.smith on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2022-03-20T00:33:13.612Z · EA · GW

Interesting, appreciate your reply! I think you raised a couple of concerns:

  1. Bringing an additional child into the world results in them essentially taking from a limited resource (the finite share of carbon emissions that can be captured, mitigated, or tolerated), reducing the resource available for everyone else.
  2. It's plainly wrong to argue life won't be substantially worse than today

Have I understood your argument right?

I think (1) is complicated. Even if it's true that bringing an additional child into the world results in less for everyone else, the primary beneficiary isn't the parents of the child, but the child themselves (although this depends on whether you take a "total view" or a "person-affecting" view of population ethics: it's true under the total view, which is my own perspective, while on the person-affecting view you could disagree). The key point I was trying to make in my post is that the benefits accruing to that one child are greater than the total sum of harm that additional child does by existing and producing a carbon footprint. I think other commenters were right to say I haven't made a strong affirmative case, but at the least, I'd appeal to you to consider whether the calculations need to be done.

I'll attempt a brief calculation, though. I don't necessarily stand by these figures, but my point is that (from a consequentialist point-of-view) doing a calculation like this is important for understanding whether anti-natalism is a good response to climate change.

The largest impact of climate change on human beings in expectation seems to be forcing people out of their homes and communities to migrate, possibly across thousands of miles to different countries. Many will die of famine, thirst, or other acute problems, but all will have their lives uprooted. Understanding the number of people this will impact is difficult, but the best estimate I can find is roughly 200 million. If this scales linearly with the number of people in the world, roughly 8 billion now, then for every 40 new people in the world, we'll have 1 new climate refugee. Is it worth coming into the world if you have a one in forty chance of being a climate change refugee, or of causing someone else to be one? Of course no one can actively make that choice, but we can make that choice for someone "in expectation" if we're in a position to decide whether to bring them into the world. To me, a 1 in 40 chance of a bad outcome is worth a 39 out of 40 chance of a good outcome.

But even though that still seems a worthwhile gamble, in reality I think the situation is much, much less dire than that. The impact of climate change won't scale linearly, because as we get more people, we'll spend more resources on carbon capture and transitioning to a zero-emission economy. This does impose costs on people, but the sacrifice of driving a bit less, or spending a bit more money on solar panels, or other ways of getting to carbon zero, seems less of a sacrifice than not existing at all. This isn't completely obvious, because the burden falls across the whole of society, but I'll have to leave that exercise for the future.

For the second point (2): people have done their best to work out the economic impact of climate change. The best indications are in the range of 2-10% of world GDP. On average, the US and other developed economies grow about 1-2% a year, or 10-20% a decade. So the impacts of climate change, and of responding to it, will cost us about a decade of growth in living standards. But, overall, it seems like living standards will still be higher in the future than they are now, even accounting for the impact of climate change.

Comment by ben.smith on The Future Fund’s Project Ideas Competition · 2022-03-07T08:37:11.224Z · EA · GW

Building on the above idea...

Research the technology required to restart modern civilization and ensure the technology is understood and accessible in safe havens throughout the world

A project could ensure that not only the know-how but also the technology exists, dispersed in various parts of the world, to enable a restart. For instance, New Zealand is often considered a relatively safe haven, but New Zealand’s economy is highly specialized and, for many technologies, relies on importing technology rather than producing it indigenously. Kick-starting civilization from Wikipedia could prove very slow. Physical equipment and training enabling strategic technologies important for a restart could be placed in locations like New Zealand and other relatively safe social contexts. At an extreme, industries could be subsidized which localize technology required for a restart. This would not necessarily mean the most advanced technology; rather, it means technologies that have been important in developing to the point we are at now.


 

Comment by ben.smith on The Future Fund’s Project Ideas Competition · 2022-03-07T07:41:46.737Z · EA · GW

Group psychology in space

Space governance

When human colonies are established in outer space, their relationship with Earth will be very important for their well-being. Initially, they’re likely to be dependent on Earth. Like settler colonies on Earth, they may grow to desire independence over time. Drawing on history and on research on social group identities from social psychology, researchers should attempt to understand the kinds of group identities likely to arise in independent colonies. As colonies grow they’ll inevitably form independent group identities, but depending on their relationships with social groups back home, these identities could support links with Earth or create antagonistic relationships with them. Attitudes on Earth might likewise range from supportive to exclusionary or even prejudiced. Better understanding intergroup relations between Earth powers and their settler colonies off-world could help us develop equitable governance structures that promote peace and cooperation between groups.

Comment by ben.smith on The Future Fund’s Project Ideas Competition · 2022-03-07T07:40:50.731Z · EA · GW

Fund publicization of scientific datasets

Epistemic institutions
 

Scientific research has made huge strides in the last 10 years towards more openness and data sharing. But it is still common for scientists to keep some data proprietary for some length of time, particularly large datasets that cost millions of dollars to collect, such as fMRI datasets in neuroscience. More funding for open science could pay scientists when their data is actually used by third parties, further incentivizing them to make data not only accessible but usable. Open science funding could also facilitate the development of existing open science resources like osf.io and other repositories of scientific data. Alternatively, a project to systematically catalogue scientific data available online – a “library of raw scientific data” – could greatly expand access to, and use of, existing datasets.

Comment by ben.smith on Geoengineering Research · 2022-01-17T21:51:05.596Z · EA · GW

Has there been any advancement on Open Phil's thinking about Geoengineering in the 8 years since this report?

Comment by ben.smith on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2021-03-30T21:05:14.948Z · EA · GW

I was a bit worried about some possible methodological issues with the life satisfaction measures used in the GiveDirectly study. I looked into the data, and the issue doesn't completely undermine the result, but having looked closely I am now moderately less convinced that the negative spillover effect observed is a real problem.

Some measures of "life evaluation" use a technique like the Cantril Ladder; this is often used as a measure of happiness or subjective well-being (e.g., in the World Happiness Report 2021). In the words of that report, the Cantril ladder question "asks respondents to think of a ladder, with the best possible life for them being a 10, and the worst possible life being a 0. They are then asked to rate their own current lives on that 0 to 10 scale".

When measuring the "negative spillover" effects discussed in Section 6, the Cantril Ladder would be an inappropriate measure to use. That's because, when respondents think of the ladder and imagine the "best possible life" and the "worst possible life", many are likely to anchor onto exemplars that are salient or close at hand, such as the distribution of people in their local community.

Imagine two people in a community, Andrew and Bob. You give Andrew $100 and Bob $0, and then ask Bob to rate his own life on a 0 to 10 scale. Bob might think of Andrew, who just got $100 for nothing, and think of that as particularly good. His idea of how good life can get just got a little bit higher. (If you think this is silly, and that he'd instead imagine, I don't know, Elon Musk or some other fantastically privileged person, keep in mind that respondents probably have a few seconds to answer this questionnaire, and what is salient is very important. For more discussion of the importance of salience for questionnaire respondents, consult the work of psychologist Norbert Schwarz at USC.) So Bob would place himself relatively lower on the ladder and give himself a lower life evaluation score. But it isn't clear this really reflects any of Bob's subjective experience of happiness from day to day, other than at times when he's asked to rate himself on Cantril ladders. That would depend on him actually making those subjective comparisons and feeling bad about them on a regular basis.

Fortunately, the GiveDirectly study did not use a Cantril Ladder. As explained by Haushofer, Reisinger, and Shapiro (2015), they used four well-being measures:

the “happiness” and “life satisfaction” questions from the World Values Survey; the total score on the Center for Epidemiologic Studies Depression scale (CESD) (Radloff 1977); and the total score on the Perceived Stress Scale (PSS) (Cohen, Kamarck, and Mermelstein 1983).

The happiness and life satisfaction questions from the World Values Survey can be viewed here; they are:

  1. "Taking all things together, would you say you are very happy, rather happy, not very happy, or not at all happy?"
  2. "All things considered, how satisfied are you with your life as a whole these days? Using this card on which 1 means you are “completely dissatisfied” and 10 means you are “completely satisfied” where would you put your satisfaction with your life as a whole?"

Of the four well-being measures, the negative spillover result relates to the Life Satisfaction question specifically. It is possible that when answering (2) a respondent might use the same kind of social comparison processes that Bob used in the example above. But importantly, these weren't suggested to the respondent by the surveyor. If respondents did use social comparison, they did so unprompted, and it's not so hard to imagine they do that often in their lives in a way that affects their emotional state.

Nevertheless, I'd be even more convinced if an effect on the happiness question had been observed, and the fact that the study observed a negative spillover result on Life Satisfaction and not Happiness does suggest that perhaps social comparison is occurring for Life Satisfaction in a way that doesn't seem to impact on the Happiness measure. If you think that Life Satisfaction tells us something additional to the Happiness measure about basic intrinsic hedonic utils, you should probably still be concerned about the negative spillover. If you think that Life Satisfaction is only important to the extent that it affects self-reported happiness, you should be cautious about interpreting the result of the negative spillover.

What the survey didn't do, because it's very expensive and hard, and requires respondents to at least be able to text on a cellphone, is measure basic momentary positive and negative affect through a method like Ecological Momentary Assessment. I'd be interested in a future study looking at that and seeing (1) whether we observe any effect of the cash transfer intervention on momentary positive and negative affect, and (2) whether there are negative spillover effects there.

Comment by ben.smith on What are your main reservations about identifying as an effective altruist? · 2021-03-30T20:34:39.458Z · EA · GW

I'm the same. I'm a "member" and even a "community leader" in the "EA movement", and happy to identify as such. But calling yourself an "Effective Altruist" is to call yourself an "altruist", at least in the ears of someone who isn't familiar with the movement. I think it will sound morally pretentious or self-aggrandizing. Generally, the label of "altruist" should be given to an individual by others rather than claimed, if it should ever be applied to a specific individual at all, which seems a bit weird regardless of who is bestowing the label.

Comment by ben.smith on Is it possible to submit to the 80,000 Hours Jobs Board? · 2021-03-16T22:24:26.019Z · EA · GW

The job in question, for those curious:

 


PostDoc Position on The Science of Well-Being at Yale

 

Dr. Laurie Santos in the Department of Psychology at Yale University is seeking a Postdoctoral Research Associate to start by June 1, 2021. The ideal candidate will have a PhD in Psychology, Cognitive Science, Behavioral Science, or a related field; research interests in positive psychology; a strong background in statistics and data science; and experience working with adolescents and adults in school settings. This is a one-year appointment with possibility of renewal for additional years based on mutual agreement and University approval. 

 

The position is part of a broad grant-funded initiative launched in 2020 to develop and test instructional programming on the science of well-being for a number of different populations: high school students (especially those from rural and low-income schools), teachers, and parents. The position will involve developing research studies to evaluate the impact of these instructional resources, as well as the option to develop other research projects on the science of well-being more broadly. The successful candidate is expected to (1) lead the evaluation and research components of this initiative, scientifically assessing whether these resources improve participant mental health and overall flourishing; (2) consult with the course development team as an in-house subject matter expert; and (3) work closely with our partner institutions, including high schools, nonprofits, universities, and professional organizations.  

 

Desired Skills and Qualifications: 

  • PhD in Psychology, Cognitive Science, Behavioral Science, or related field 
  • Research experience and publications in positive psychology, as well as education, social psychology, behavioral change, or related domains 
  • Extensive experience in statistical and data analytic techniques 
  • Research, teaching and/or clinical experience with adolescent populations 
  • Experience collaborating with middle/high school teachers and/or administrators 
  • Experience navigating and manipulating very large datasets from diverse sources 
  • Expertise in relevant technologies (eg. Qualtrics, Excel, Google Sheets, Tableau, R, SQL, Python, Tableau) 
  • Some experience with psychology course design and development 
  • Comfort working both independently and collaboratively on a small cross-functional team of experts and non-experts 
  • Mission-driven and a team player 
  • Basic working knowledge of the US education system 

 

Applicants should send an email to Laurie Santos at laurie.santos@yale.edu with the subject line “Postgraduate Research Associate Application” and include the following items. 

  1. CV 
  2. Links to portfolio and/or relevant work samples 
  3. Statement of why you are interested in this role and why you are the right person for it (max 300 words) 

 

Yale University is an affirmative action/equal opportunity employer.  We especially encourage women, members of minority groups, persons with disabilities, and covered veterans to apply. 

Comment by ben.smith on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-14T21:09:58.843Z · EA · GW

Which distribution would you use? Why the particular weights you've chosen and not slightly different ones?

 

I think you just have to make your distribution uninformative enough that reasonable differences in the weights don't change your overall conclusion. If they do, then I would concede that, for your specific question, we really are clueless. Otherwise, you can probably find a response.

come up with a probability distribution for the fraction of heads over 1,000,000 flips.

Rather than thinking directly of an appropriate distribution for the 1,000,000 flips, I'd think of a distribution to model p (the coin's probability of coming up heads) itself. Then you can run simulations based on the distribution of p to calculate the distribution of the fraction of heads over 1,000,000 flips. Say we believe the coin is biased towards heads, so p lies somewhere between 0.5 and 1, and then we need to select a distribution for p over that range.

There is no one correct probability distribution for p because any probability is just an expression of our belief, so you may use whatever probability distribution genuinely reflects your prior belief.  A uniform distribution is a reasonable start. Perhaps you really are clueless about p, in which case, yes, there's a certain amount of subjectivity about your choice. But prior beliefs are always inherently subjective, because they simply describe your belief about the state of the world as you know it now. The fact you might have to select a distribution, or set of distributions with some weighted average, is merely an expression of your uncertainty. This in itself, I think, doesn't stop you from trying to estimate the result.

I think this expresses, in Bayesian terms, the philosophical idea that we can only make moral choices based on information available at the time; one can't be held morally responsible for mistakes made on the basis of information one didn't have.

Perhaps you disagree with me that a uniform distribution is the best choice. You reason thus: "we have some idea about the properties of coins in general. It's difficult to make a coin that is 100% biased towards heads. So that seems unlikely". So we could pick a distribution that better reflects your prior belief. A suitable choice might be a distribution truncated at 0.5 (a truncated normal, say) that places the greatest likelihood on values of p just above 0.5, with likelihood declining towards 1.0.
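
To make this concrete, here's a minimal sketch in R of the simulation described above. The number of simulated draws and the particular truncated shape are illustrative assumptions, not the "right" prior.

set.seed(1)
n_sims <- 10000
n_flips <- 1e6
# Prior 1: uniform belief that p lies anywhere between 0.5 and 1
p_uniform <- runif(n_sims, min = 0.5, max = 1)
# Prior 2: most mass just above 0.5, declining towards 1 (one possible truncated choice)
p_trunc <- 0.5 + abs(rnorm(n_sims, mean = 0, sd = 0.15))
p_trunc <- p_trunc[p_trunc <= 1]
# For each sampled p, simulate the fraction of heads over 1,000,000 flips
frac_uniform <- rbinom(length(p_uniform), n_flips, p_uniform) / n_flips
frac_trunc <- rbinom(length(p_trunc), n_flips, p_trunc) / n_flips
# Compare the implied distributions of the fraction of heads under each prior
summary(frac_uniform)
summary(frac_trunc)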

Maybe you and I just can't agree after all; maybe there is no consistent and reasonable prior choice we can both accept, and no compromise either. And let's say we both run simulations using our own priors, find entirely different results, and can't agree on any suitable weighting between them. In that case, yes, I can see you have cluelessness. But I don't think it follows that, if we went through the same process for estimating the longtermist moral worth of malaria bednet distribution, we must end up with intractable complex cluelessness about that specific problem. I can admit that perhaps, right now, in our current belief state, we are genuinely clueless, but it seems that there is some work that can be done that might eliminate the cluelessness.

Comment by ben.smith on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-14T04:28:28.736Z · EA · GW

A good point.

There are things you can do to correct for this sort of thing: for instance, go one level more meta and estimate the probability of unforeseen consequences in general, or within the class of problems that your specific problem fits into.

We couldn't have predicted the Fukushima disaster, but perhaps we can predict related things with some degree of certainty - the average cost and death toll of earthquakes worldwide, for instance. In fact, this is a fairly well explored space, since insurers have to understand the risk of earthquakes.

The ongoing pandemic is a harder example - the rarer the black swan, the more difficult it is to predict. But even then, prior to the 2020 pandemic, the WHO had estimated the amortized cost of pandemics as on the order of 1% of global GDP annually (averaged over years when there are and aren't pandemics), which seems like a reasonable approximation.

I don't know how much of a realistic solution that would be in practice.

Comment by ben.smith on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-13T01:45:32.931Z · EA · GW

Thanks! That was helpful, and my initial gut reaction is I entirely agree :-)

Have you had an opportunity to see how Hilary Greaves might react to this line of thinking? If I had to hazard a guess, I imagine she'd be fairly sympathetic to the view you expressed.

Comment by ben.smith on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-12T22:16:01.385Z · EA · GW

There is an argument from intuition by Schoenfield (2012), which carries some force, that we can't use a probability function:

(1) It is permissible to be insensitive to mild evidential sweetening.
(2) If we are insensitive to mild evidential sweetening, our attitudes cannot be represented by a probability function.
(3) It is permissible to have attitudes that are not representable by a probability function. (1, 2)

...

You are a confused detective trying to figure out whether Smith or Jones committed the crime. You have an enormous body of evidence to evaluate. Here is some of it: You know that 68 out of the 103 eyewitnesses claim that Smith did it but Jones' footprints were found at the crime scene. Smith has an alibi, and Jones doesn't. But Jones has a clear record while Smith has committed crimes in the past. The gun that killed the victim belonged to Smith. But the lie detector, which is accurate 71% of the time, suggests that Jones did it. After you have gotten all of this evidence, you have no idea who committed the crime. You are no more confident that Jones committed the crime than that Smith committed the crime, nor are you more confident that Smith committed the crime than that Jones committed the crime.

...

Now imagine that, after considering all of this evidence, you learn a new fact: it turns out that there were actually 69 eyewitnesses (rather than 68) testifying that Smith did it. Does this make it the case that you should now be more confident in S than J? That, if you had to choose right now who to send to jail, it should be Smith? I think not.

...

In our case, you are insensitive to evidential sweetening with respect to S since you are no more confident in S than ~S (i.e. J), and no more confident in ~S (i.e. J) than S. The extra eyewitness supports S more than it supports ~S, and yet despite learning about the extra eyewitness, you are no more confident in S than you are in ~S (i.e. J).

 

Intuitively, this sounds right. And if you went into this problem trying to solve the crime on intuition alone, you might really have no idea. Reading the passage, it sounds mind-boggling.

On the other hand, if you applied some reasoning and study, you might be able to come up with some probability estimates. You could estimate the conditional probability P(Smith did it | an eyewitness says Smith did it), including a probability distribution over that probability itself, if you like. You can work out how to combine evidence from multiple witnesses, i.e., P(Smith did it | eyewitness 1 says Smith did it & eyewitness 2 says Smith did it), and so on up to 68 and 69 witnesses. You can estimate the independence of the eyewitnesses, and from that work out how to properly combine their evidence.

And it might turn out that you don't update as a result of the extra eyewitness, under some circumstances. Perhaps you know the eyewitnesses aren't independent; they're all card-carrying members of the "We hate Smith" club. In that case it simply turns out that the extra eyewitness is irrelevant to the problem; it doesn't qualify as evidence, so ignoring it doesn't mean you're insensitive to "mild evidential sweetening".
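
To illustrate how independence changes the update, here's a small sketch in R. The 50:50 prior and the 60% witness reliability are invented numbers for the example, not anything from Schoenfield's case.

prior_odds <- 1                        # 50:50 prior odds that Smith (rather than Jones) did it
reliability <- 0.6                     # assumed chance an independent witness names the true culprit
lr <- reliability / (1 - reliability)  # likelihood ratio contributed by each independent witness
posterior_prob <- function(n_independent) {
  odds <- prior_odds * lr^n_independent
  odds / (1 + odds)
}
posterior_prob(68)   # 68 independent witnesses: posterior is essentially 1
posterior_prob(69)   # a 69th independent witness barely moves it
posterior_prob(1)    # perfectly correlated witnesses only carry the weight of about one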

I think a lot of the problem here is that these authors are discussing what one could do when one sits down for the first time and tries to grapple with a problem. In those cases there are so many undefined features of the problem that it really does seem impossible and you really are clueless.

But that's not the same as saying that, with sufficient time, you can't put probability distributions to everything that's relevant and try to work out the joint probability.

 

----

Schoenfield, M. Chilling out on epistemic rationality. Philos Stud 158, 197–219 (2012).

Comment by ben.smith on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-12T01:59:30.550Z · EA · GW

> Hope this helps.

It does, thanks--at least, we're clarifying where the disagreements are.

If you think that choosing a set of probability functions was arbitrary, then having a meta-probability distribution over your probability distributions seems even more arbitrary, unless I'm missing something. It doesn't seem to me like the kind of situations where going meta helps: intuitively, if someone is very unsure about what prior to use in the first place, they should also probably be unsure about coming up with a second-order probability distribution over their set of priors.

All you need to do to come up with that meta-probability distribution is to have some information about the relative plausibility of each item in your set of probability functions. If our conclusion for a particular dilemma turns on a disagreement between virtue ethics, utilitarian ethics, and deontological ethics, this is a difficult problem that people will disagree strongly on. But can you at least agree that each of these is, say, between 1% and 99% likely to be the correct moral theory? If so, you have a slightly informative prior and there is a possibility you can make progress. If we really have completely no idea, then I agree, the situation really is entirely clueless. But I think with extended consideration, many reasonable people might be able to come to an agreement.
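
As a toy illustration of that "slightly informative prior" (every number below is invented for the example): if each candidate probability function implies some expected value for the act, you can sweep the weight placed on one of them across the range you consider reasonable and check whether the overall conclusion ever flips sign.

# Invented expected values of the act under three candidate probability functions
ev_by_function <- c(10, -2, 4)
# Sweep the weight on the first function from 1% to 99%, splitting the remainder equally
w1 <- seq(0.01, 0.99, by = 0.01)
overall_ev <- sapply(w1, function(w) {
  weights <- c(w, (1 - w) / 2, (1 - w) / 2)
  sum(weights * ev_by_function)
})
range(overall_ev)    # if both extremes share a sign, the conclusion is robust to the weights
any(overall_ev < 0)  # does any weighting in the range flip the decision?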

Upon immediately encountering the above problem, my brain is like the mug: just another object that does not have an expected value for the act of giving to Malaria Consortium. Nor is there any reason to think that an expected value must “really be there”, deep down, lurking in my subconscious.

I agree with this. If the question is, "can anyone, at any moment in time, give a sensible probability distribution for any question", then I agree the answer is "no". 

But with some time, I think you can assign a sensible probability distribution to many difficult-to-estimate things that are not completely arbitrary nor completely uninformative.  So, specifically, while I can't tell you right now about the expected long-run value for giving to Malaria Consortium, I think I might be able to spend a year or so understanding the relationship between giving to Malaria Consortium and long-run aggregate sentient happiness, and that might help me to come up with a reasonable estimate of the distribution of values.

We'd still be left with a case where, very counterintuitively, the actual act of saving lives is mostly only incidental to the real value of giving to Malaria Consortium, but it seems to me we can probably find a value estimate.

About this, Greaves (2016) says,

averting child deaths has longer-run effects on population size: both because the children in question will (statistically) themselves go on to have children, and because a reduction in the child mortality rate has systematic, although difficult to estimate, effects on the near-future fertility rate. Assuming for the sake of argument that the net effect of averting child deaths is to increase population size, the arguments concerning whether this is a positive, neutral or a negative thing are complex.

And I wholeheartedly agree, but it doesn't follow, from the fact that you can't immediately form an opinion about it, that you can't, with much research, make an informed estimate that is better than an entirely indeterminate or undefined value.

EDIT: I haven't heard Greaves' most recent podcast on the topic, so I'll check that out and see if I can make any progress there.

EDIT 2: I read the transcript to the podcast that you suggested, and I don't think it really changes my confidence that estimating a Bayesian joint probability distribution could get you past cluelessness.

So you can easily imagine that getting just a little bit of extra information would massively change your credences. And there, it might be that here’s why we feel so uncomfortable with making what feels like a high-stakes decision on the basis of really non-robust credences, is because what we really want to do is some third thing that wasn’t given to us on the menu of options. We want to do more thinking or more research first, and then decide the first-order question afterwards.

Hilary Greaves: So that’s a line of thought that was investigated by Amanda Askell in a piece that she wrote on cluelessness. I think that’s a pretty plausible hypothesis too. I do feel like it doesn’t really… It’s not really going to make the problem go away because it feels like for some of the subject matters we’re talking about, even given all the evidence gathering I could do in my lifetime, it’s patently obvious that the situation is not going to be resolved.

My reaction to that (beyond that I should read Askell's piece) is that I disagree with Greaves' view that even a lifetime of research couldn't resolve the subject matter for something like giving to Malaria Consortium. I think it's quite possible one could make enough progress to arrive at an informative probability distribution. And perhaps it only says "across the probability distribution, there's a 52% likelihood that giving to x charity is good and a 48% probability that it's bad", but if the expected value is high enough, that's still a strong impetus to give to x charity.

I still reach the point where we've arrived at a framework in which our choices about short-term interventions are probably going to be dominated by their long-run effects, and that's extremely counterintuitive, but at least I have some indication.

Comment by ben.smith on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-04T20:03:53.923Z · EA · GW

Her choice to use multiple, independent probability functions itself seems arbitrary to me, although I've done more reading since posting the above and have started to understand why there is a predicament.

Instead of multiple independent probability functions, you could start with a set of probability distributions for each of the items you are uncertain about, and then calculate the joint probability distribution by combining all of those distributions. That'll give you a single probability density function on which you can base your decision.

If you start with a set of several probability functions, each representing a set of beliefs, then calculating their joint probability would require sampling randomly from each function according to some distribution specifying how likely each of the functions is. It can be done, with the proviso that you must have a probability distribution specifying the relative likelihood of each of the functions in your set.
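
A minimal sketch of that two-level sampling in R (the two candidate distributions and the 70/30 weights are placeholders): first draw which probability function to use, then draw an outcome from it; the pooled draws approximate the combined (mixture) distribution.

set.seed(2)
n <- 10000
# Placeholder candidate probability functions for some uncertain quantity
draw_fn1 <- function(k) rnorm(k, mean = 0, sd = 1)
draw_fn2 <- function(k) rnorm(k, mean = 3, sd = 2)
# Placeholder weights expressing how likely each function is to be the right one
weights <- c(0.7, 0.3)
# Level 1: sample which function applies; level 2: sample an outcome from that function
which_fn <- sample(1:2, n, replace = TRUE, prob = weights)
draws <- numeric(n)
draws[which_fn == 1] <- draw_fn1(sum(which_fn == 1))
draws[which_fn == 2] <- draw_fn2(sum(which_fn == 2))
mean(draws)   # expected value under the mixture; quantile(draws) shows the spread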

However, I do worry that the same problem arises in this approach in a different form. If you really do have no information about the probability of some event, then in Bayesian terms, your prior probability distribution is one that is completely uninformative. You might need to use an improper prior, and improper priors can be difficult to update on in some circumstances. I think these are a Bayesian, mathematical representation of what Greaves calls an "imprecise credence".

But I think the good news is that many times, your priors are not so imprecise that you can't assign some probability distribution, even if it is incredibly vague. So there may end up not being too many problems where we can't calculate expected long-term consequences for actions.

I do remain worried, with Greaves, that GiveWell's approach of assessing direct impact for each of its potential causes is woefully insufficient. Instead, we need to calculate the very long-term impact of each cause, and because of the value of the long-term future, anything that affects the probability of existential risk, even by an infinitesimal amount, will dominate the expected value of our intervention.

And I worry that this sort of approach could end up being extremely counterintuitive. It might lead us to the conclusion that promoting fertility by any means necessary is positive, or equally likely, to the conclusion that controlling and reducing fertility by any means necessary is positive. These things could lead us to want to implement extremely coercive measures, like banning abortion or mandating abortion depending on what we want the population size to be. Individual autonomy seems to fade away because it just doesn't have comparable value. Individual autonomy could only be saved if we think it would lead to a safer and more stable society in the long run, and that's extremely unclear.

And I think I reach the same conclusion that I think Greaves has reached: that one of the most valuable things you can do right now is to estimate some of the various contingencies, in order to lower the uncertainty and imprecision of the various probability estimates. That'll raise the expected value of your choice because it is much less likely to be the wrong one.

Comment by ben.smith on Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" · 2021-02-27T22:00:33.265Z · EA · GW

Thanks for your remarks. I'm looking forward to her full article being published, because I agreed that as it is, she's been pretty vague.  The full article might clear up some of the gaps here.

From what you and others have said, the most important gap seems to be "why we should not be consequentialists", which is much bigger than just EA! I think there is something compelling there; I might reconstruct her argument something like this:

  1. EAs want to do "the most good possible".
  2. Ensuring more systemic equality and justice is good.
  3. We can do things that ensure systemic equality and justice; doing this is good (this follows from 2), even if it's welfare-neutral.
  4. If you want to do "the most good" then you will need to do things that ensure systemic equality and justice, too (from 3).
  5. Therefore (from 1 and 4) it follows that EAs should care about more than just welfare.
  6. You can't quantify systemic equality and justice.
  7. Therefore (from 5 and 6) if EAs want to achieve their own goals they will need to move beyond quantifications.

Probably consequentialists will reply that (3) is wrong; actually if you improve justice and equality but this doesn't improve long-term well-being, it's not actually good. I suppose I believe that, but I'm unsure about it.

Comment by ben.smith on Reducing long-term risks from malevolent actors · 2021-01-20T20:18:10.149Z · EA · GW

I think the main solution is to develop strong and resilient institutions. Areas for improvement could be:

  • Distributing power over more individuals rather than less
  • Making office-holding unappealing for people with narcissistic or sadistic intentions or tendencies by increasing penalties for abuses of office
  • More transparency in government to make it harder to abuse the office
  • More checks and balances
  • Educating the electorate and building a healthier society so that people don’t want to elect a narcissist
Comment by ben.smith on Idea: the "woketionary" · 2020-12-11T03:30:12.035Z · EA · GW

James Lindsay has already created something like this, except he is very much "anti-woke" and his dictionary reflects his perspective. https://newdiscourses.com/translations-from-the-wokish/

Comment by ben.smith on AMA: "The Oxford Handbook of Social Movements" · 2020-11-24T21:08:05.650Z · EA · GW

Hi Michael, I was searching for demandingness discussion on the forum here and found your comment.

Are you are aware of any discussion on this before or since your comment?

One recent article is: https://faculty.wharton.upenn.edu/wp-content/uploads/2020/02/Effectiveness-and-Demandingness.pdf which claims "EAs must endorse the view that well off people have at least fairly demanding unconditional obligations" to donate money to effective charities.

It's not my prior view at all. I think the most good will be done by people partaking in activities that are not particularly demanding at all (e.g., AGI Alignment research, plant-based meat research, well-being research, etc) rather than giving a substantial portion of income or making other demanding sacrifices. In order for the EA community to incentivize or show approval of such activity, people willing to do that research should be welcomed into the EA community whether or not they take a giving pledge or partake in any other more demanding activities. 

But...those are just my private half-baked thoughts to date. I'd be interested in a conversation on this topic.

Comment by ben.smith on Introducing Probably Good: A New Career Guidance Organization · 2020-11-09T21:56:26.586Z · EA · GW

Awesome, we'd love to have you! I'll message you directly with a couple of details.

Comment by ben.smith on Introducing Probably Good: A New Career Guidance Organization · 2020-11-08T21:56:10.425Z · EA · GW

Sounds like a great attempt to fill a very salient gap! We will be discussing your project at the EA Auckland meetup tomorrow night (Tuesday 6.30pm UTC+13). Let me know if you have any interest in joining us over Zoom for a chat.

http://meetu.ps/e/JwPYk/tJw1V/d

Comment by ben.smith on Life Satisfaction and its Discontents · 2020-10-01T02:44:55.569Z · EA · GW

Right now, the field is focusing on doing its empirical work better - the "open science" movement. I think that social scientists do engage in what we call "theoretical" work, but it is generally simply theorizing about how things empirically work (e.g., if religion is unique in its ability to produce high eudaimonia for a large number of people, how can we conceptualize it as a eudaimonia-producing system? Or which systems in the brain are responsible for producing the experience of pain, and how is physical pain related to other forms of emotional pain?).

A fair number of us are probably logical positivists to a degree, in that we don't want to go near a theoretical question with no empirical implications. That is a real shame. But to me, it just seems like theoretical values questions are outside the domain of "social science" and in the domain of "humanities". And one good reason to continue specialising/compartmentalizing like that is that many social scientists are just crap at formulating a clearly-articulated logical argument (try reading the theory in a psychology paper in the latter half of the Intro, where they formulate hypotheses from their theory, and compare the level of logical rigor and clarity with that of your philosophy papers). Collaborations between philosophers and psychologists are great (have you listened to Very Bad Wizards by Tamler Sommers and David Pizarro? I only cite a podcast because, honestly, I can't think of actual research project collaborations) and should happen more, but honestly, it's just difficult for me even to conceive of a psychologist trying to answer the question "what really matters more: eudaimonia or net positive and negative affect?" because it seems to me at that point they're doing humanities, not science.

I suppose there's a whole history of that too; BF Skinner's 'behavioral turn' really focused the field on what we can measure, to the exclusion of anything that can't be measured. It took a few decades just for the field to creep into thinking about things that could in principle be measured, or only indirectly measured (the 'cognitive turn'), let alone entirely non-measurable values questions like "what ultimate moral end should we prefer?" Prior to Skinner, there were Freud and Jung and related theorists who did do theory, but I am not sure it was very good or useful theory.

To focus what I am trying to say: is there something we could gain from social scientists (particularly moral psychologists) theorising more about values that is unique or distinct from or would add to what philosophers (particularly moral philosophers) are already doing?

Comment by ben.smith on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-28T04:29:00.649Z · EA · GW

I second this question. Intuitively, your argument makes sense and you have something here.


But I would have more confidence in the conclusion if a False Discovery Rate correction was applied. This is also called a Benjamini-Hochberg procedure (https://en.wikipedia.org/wiki/False_discovery_rate#Controlling_procedures).

In R, the stats package makes it very easy to apply the false discovery rate correction to your statistics - see https://stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html. You would do something like

p.adjust(p, method = "fdr", n = length(p))

where p is a vector/list of all 55 of your uncorrected p-values from your t-tests.
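
For a self-contained illustration (the 55 p-values below are simulated stand-ins, not the post's actual results):

set.seed(3)
p <- runif(55)                        # stand-in for the 55 uncorrected p-values
p_fdr <- p.adjust(p, method = "fdr")  # Benjamini-Hochberg adjusted p-values
sum(p_fdr < 0.05)                     # how many results survive FDR correction at 5%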

Comment by ben.smith on Life Satisfaction and its Discontents · 2020-09-27T01:15:16.434Z · EA · GW

Yes you're right.

I will try a slightly different claim that links neuropsychology to moral philosophy, then. If you think maximizing well-being is the key aim of morality, and you do this with some balance of positive and negative affect, then I predict that, at least as an empirical matter, your weighting of positive versus negative affect will change your ideal number of people to populate the Earth and other environments with, under the total view.

Maybe it's too obvious: if we're totally insensitive to negative affect, then adding any number of people who experience any level of positive affect is helpful. If we're insensitive to positive affect, then the total view would lead to advocating the extinction of conscious life (would Schopenhauer almost have found himself endorsing that view, had it been put to him?). And there would be points all along the range in the middle that would lead to varying conclusions about optimal population. It might go some way to making the total view seem less counterintuitive.
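A toy sketch of that range in R (everything here is invented purely for illustration - in particular it assumes per-person positive affect declines and per-person negative affect rises as population grows):

total_value <- function(N, w_pos, w_neg) {
  pos <- 10 / (1 + 0.001 * N)  # per-person positive affect, declining with population size (assumed)
  neg <- 1 + 0.001 * N         # per-person negative affect, rising with population size (assumed)
  N * (w_pos * pos - w_neg * neg)
}

N_grid <- seq(0, 20000, by = 100)
N_grid[which.max(total_value(N_grid, w_pos = 1, w_neg = 0))]  # insensitive to negative affect: picks the largest N on the grid
N_grid[which.max(total_value(N_grid, w_pos = 0, w_neg = 1))]  # insensitive to positive affect: optimum is N = 0
N_grid[which.max(total_value(N_grid, w_pos = 1, w_neg = 1))]  # equal weights: an intermediate optimal population

Sliding the weights between those extremes traces out the intermediate optima I have in mind.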

Comment by ben.smith on Life Satisfaction and its Discontents · 2020-09-26T05:58:35.565Z · EA · GW

A few random thoughts from someone with a background in psychology research:

  • One driver of the preference for LSTs or eudaimonia frameworks for SWB is an intuition that focusing our well-being concerns solely on happiness or affect would lead us to endorse happiness wireheading as a complete and final solution, and that's intuitively wrong for most people.
  • Because psychologists are empiricists, they don't spend too much time worrying about whether affect, life satisfaction, or eudaimonia is most important in a philosophical or ethical sense. They are more concerned with how they can measure each of these factors, and how environmental (or behavioral or genetic) factors might be linked to SWB measures. To the extent there is psychological literature on the relative value of SWB measures, I think most of it is simply trying to justify that it is worth measuring and talking about eudaimonia at all, as eudaimonia is probably the least accepted of the three SWB measures.
  • Working out the relative importance of SWB measures seems to me to be solely a question of values, for moral philosophy and not psychology, so I am glad that you, as a moral philosopher, are considering the question!
  • Finally, a bit of an aside, but another area where I would like to see more moral philosophers, psychologists, and neuroscientists talking is the relative importance of positive vs. negative affect. From a neuropsychological point of view, positive and negative affect are qualitatively different. Often, for convenience, researchers might measure a net difference between them, but I think there are very good empirical reasons to consider them incommensurable. All positive affect shares certain physical neuroscientific characteristics (almost always nucleus accumbens activity, for instance), but negative affect activates different systems. If these really are incommensurable, again we need to look to moral philosophers to think about which is more important. This could matter for questions in moral philosophy (e.g., prior existence vs. total view) and in EA particularly: a strong emphasis on the moral desirability of positive affect might lead us towards a total view (because more people means more total positive affect), whereas balancing negative and positive affect could lead us towards a prior existence view (fewer people means less negative affect but also less positive affect), and a strong focus on avoiding negative affect could even lead to a preference for the extinction of sentient life.
Comment by ben.smith on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-05-10T00:37:44.243Z · EA · GW

Yes, I tend to think that any one individual's impact on the world around them probably balances out to roughly neutral.

So I don't use the argument that your own children might do a lot of good for the world and that you should therefore raise children. That seems too speculative. The better-known direct impact of having children on your own happiness and theirs outweighs the very speculative, almost entirely uninformed prior over the indirect effects having children might lead to.

Where you have a clear idea of a high-impact, direct-impact career that would be difficult to pursue were you to have children, then yes, that might win out. Again, direct impacts are important; indirect impacts, I think, are so speculative that they probably don't count for much.

As for earning to give, this is another challenge to my argument. I am sceptical that someone who really wants to have children will be happy in the long term sacrificing that for earning to give, and thus sceptical that their commitment will be sustained; so it may not be particularly impactful anyway, versus some compromise between personal desires and earning to give that is sustainable over decades.

That's pretty speculative on my part but maybe borne out by observations made by 80k on people who enter morally neutral, high impact careers just to earn to give.

Comment by ben.smith on Physical theories of consciousness reduce to panpsychism · 2020-05-10T00:13:49.823Z · EA · GW

It's worth checking out this very much ongoing Twitter thread with Lamme about related issues.

https://mobile.twitter.com/VictorLamme/status/1258855709623693325

Comment by ben.smith on Physical theories of consciousness reduce to panpsychism · 2020-05-09T07:13:12.799Z · EA · GW

I arrived here from Jay Shooster's discussion about the EA community's attitude to eating animals.

I wasn't aware of the current scientific consensus about consciousness; this article was a good primer for me on the state of the field in terms of which theories are preferred. I do like your approach, and I think it's an interesting challenge and way to approach thinking about consciousness in machines. I've typed out and deleted this reply several times, as it does make me re-evaluate what I think about panpsychism. I think your approach is useful for thinking about consciousness, at least in machines, but I'm not sure that "panpsychism" as a theory adds much.

Psychological or neurological theories of consciousness are implicitly premised on studying human or non-human animal systems. So, though they reckon with the cognitive building blocks of consciousness, there's less examination of just how reduced a system could get and still be conscious. Whether you're taking a GWT, HOT, or IIT approach, your neural system is made up of millions of neurons arranged into a number of complex components. You might still think there needs to be some level of complexity within a system for it to approach a level of valenced conscious experience anything like that with which you and I are familiar. And even if there's no arbitrary "complexity cut-off", for "processes that matter morally", do we care about elemental systems that might have, quantitatively, a tiny, tiny fraction of the conscious experience of humans and other living beings?

To be a bit more concrete about it (and I suspect you agree with me on this point): when it comes to thinking about which animals have valenced conscious experience and thus matter morally, I don't think panpsychism has much to add - do you? To the extent that GWT, HOT, or IIT ends up being confirmed through observation, we can then work out how much of each of those experiences each species of animal has, without worrying about how widely that extends to non-living matter.

And then, proceeding squarely on to the question of non-living matter: even if it's true that neurological consciousness theories reduce to panpsychism, we can still observe that most non-living systems have nothing but the most basic similarity to the sorts of systems we know for a fact are conscious. Consciousness in more complex machines might be one of the toughest ethical challenges of this century or the next, but I suspect that when we deal with it, it will be through approaches like this one, which attempt to identify the building blocks of consciousness and ask how machines could have them in some substantive way rather than in a minimal form. Again, whether or not an electron or positron "has consciousness" doesn't seem relevant to that question.

Having said that, I can see value in reducing neurological theories to their simplest building blocks as you've attempted here. That approach really might allow us to start articulating operational definitions of consciousness that we could use in studying machine consciousness.

Comment by ben.smith on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-26T19:12:23.665Z · EA · GW

Thanks. This is a challenging response to reply to. (3) risks "proving too much" but it seems like a valid argument on its face.

Comment by ben.smith on Ask Me Anything! · 2019-08-15T18:06:19.341Z · EA · GW

I've been trying to evaluate career decisions about studying psychology and neuroscience. Do you think that studying motivation from a neuroscientific perspective is an effective way to contribute to AI alignment work? And do you think that, considering the scale of mental illnesses such as anxiety and depression, work on better understanding anxiety and depression is also highly effective?