William_MacAskill · 2011-11-25T05:00:04.000Z
More space per chicken is just one of the requirements. Probably the most important requirement is to use higher-welfare breeds, which generally grow more slowly. But there are further requirements regarding lighting, enrichment, etc. You can see the full ask in the European Chicken Commitment. Asks for other regions are similar and can be seen here.

khorton on Cash prizes for the best arguments against psychedelics being an EA cause area
Semantic point: I can't see any way that psychedelics are a 'cause area'. Either they're one of many possible interventions in the cause area of mental health, or they're one of many interventions in the cause area of the long-term future, or possibly both. The psychedelics are means to an end, not an end in themselves.

evan_gaensbauer on EA Still Needs an Updated and Representative Introductory Guidebook
Thanks for the feedback. That sounds reasonable. I wrote the OP because this was a resolvable issue, disputed by lots of people across EA, that appeared to be left unfinished. There are other introductory guidebooks to effective altruism for different causes, etc., so the fact that there isn't a general guidebook right now that satisfies the different relevant parties in EA doesn't seem like a huge problem. Michael Chen pointed out multiple major problems with one article in the current EA Handbook 2.0. They're significant mistakes that need to be changed for the article to hold up. I figure for the EA Handbook to deliver its message with integrity, it has to do that for all of its articles. Since most of the articles were initially written as blog posts, I expect there are other holes in each which, with hindsight, we could point out. It's just that the articles in the EA Handbook 2.0 may not have been as professionally written as published books or scholarly articles by effective altruists, which is a quality we should ostensibly aspire to if an introductory book to EA is about EA putting its best foot forward to people new to EA.
Siebe Rozendal suggested an updated version of Doing Good Better. I initially thought this would be too much work, but it now seems like it might be less work to update DGB than to update the EA Handbook, which poses multiple difficulties. I had assumed that would require Will doing most or all of the work to update DGB himself, but The Life You Can Save (the organization) has worked with Singer to update the book of the same name. Jon Behar, who works for The Life You Can Save, explains it here. It's a new edition 8 years later, so there must have been a lot to change. So CEA could do something similar, with Will working with them to update DGB. CEA could consult with TLYCS (the organization), or work with them in some capacity, to replicate the process they've used with Singer to update TLYCS (the book).
I honestly think it might be more tractable and more effective to update DGB than the EA Handbook 2.0. If that's the case, given that DGB is written more as an intro to EA and is more popular, I imagine some EAs would be willing to donate time and/or money to see an updated version of DGB happen.
Is that something you think Will and/or CEA would consider?

larks on Jade Leung: Why Companies Should be Leading on AI Governance
First of all, thanks to whoever is posting these transcripts. I almost definitely would never have watched the video!
One is the conceptual argument that states are the only legitimate political authorities that we have in this world, so they're the only ones who should be doing this governance thing. ...
Now all of those things are true.
I think this is considerably more controversial than you assume. While it has been a few years since I studied political philosophy, my understanding is that philosophers have largely given up on the classical problem of political authority - justifying why governments have a unique right to coerce people, and why people have an obligation to obey specifically because a government said so. All the attempted justifications are ultimately rather unsatisfying. It seems much more plausible that governments are justified if/when they pass good laws that protect people's rights and improve welfare - i.e. the morality of the laws justifies the government, rather than the government justifying the morality of the laws. But this is obviously rather contingent, and doesn't suggest that states are in any way the only legitimate source of political authority.
For more discussion of this, I recommend Michael Huemer's excellent The Problem of Political Authority. There's also a Stanford Encyclopedia of Philosophy article.
The authority of a company, for example, plausibly comes from something like their market power and the influence on public opinion. And you can argue about how legitimate that authority is
Here I think you are misunderstanding the potential legitimacy of the influence of a private company. Their justification comes not from market power, but from people freely choosing to buy their products, and from the expertise they demonstrate in effectively meeting this demand. To give a mundane example, a major shipping company would have justification in providing major input into international port standardization rules by virtue of its expertise in shipping; expertise which had been implicitly endorsed by everyone who chose to hire it for shipping services.
From what I understand, effect size is one of the better ways to predict whether a study will replicate. For example, this paper found that 77% of replication effect sizes reported were within a 95% prediction interval based on the original effect size.
As a spot check, you say that brain training has massive purported effects. I looked at the research page of Lumosity, a company which sells brain training software. I expect their estimates of the effectiveness of brain training to be among the most optimistic, but their highlighted effect size is only d = 0.255.
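For intuition about how small d = 0.255 is, one standard translation (my addition, not from the original comment) is the "common-language effect size": the probability that a randomly chosen treated person outscores a randomly chosen control, computed as Φ(d/√2) under equal-variance normal assumptions. A minimal sketch:

```python
import math
from statistics import NormalDist


def common_language_effect_size(d: float) -> float:
    """Probability that a random treated individual outscores a random
    control, assuming equal-variance normal distributions: Phi(d / sqrt(2))."""
    return NormalDist().cdf(d / math.sqrt(2))


# Lumosity's highlighted d = 0.255 translates to roughly a 57% chance
# that a trained person beats an untrained one -- barely better than a
# coin flip.
print(common_language_effect_size(0.255))
```

So even the vendor's own optimistic figure corresponds to a near-coin-flip advantage, which is consistent with calling the effect small.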
A caveat is that if an effect size seems implausibly large, it might have arisen due to methodological error. (The one brain training study I found with a large effect size has been subject to methodological criticism.) Here is a blog post by Daniel Lakens where he discusses a study which found that judges hand out much harsher sentences before lunch:
If hunger had an effect on our mental resources of this magnitude, our society would fall into minor chaos every day at 11:45. Or at the very least, our society would have organized itself around this incredibly strong effect of mental depletion... we would stop teaching in the time before lunch, doctors would not schedule surgery, and driving before lunch would be illegal.
However, I think psychedelic drugs arguably do pass this test. During the 60s, before they became illegal, a lot of people were in fact talking about how society would reorganize itself around them. And forget about performing surgery or driving while you are tripping.
The way I see it, if you want to argue that an effect isn't real, there are two ways to do it. You can argue that the supposed effect arose through random chance/p-hacking/etc., or you can argue that it arose through methodological error.
This is the only comment this user has ever written, and their profile looks very spammy. I wonder if spammers have discovered that posting flamebait is a good way to get people to visit their website...

david_moss on EA Survey 2018 Series: Cause Selections
Thanks for your stimulating questions and comments Ishaan.
one might conclude that "climate change" and "global poverty" are more "mainstream" priorities, where "mainstream" is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?
This seems a pretty uncontroversial conclusion relative to many of the cause areas we asked about (e.g. AI, Cause Prioritization, Biosecurity, Meta, Other Existential Risk, Rationality, Nuclear Security and Mental Health).
Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more "mainstream"
We don’t have data on how the general population would prioritize these causes; indeed, it would be difficult to gather such data, since most non-EAs would not be familiar with what many of these categories refer to.
We can examine donation data from the general population, however. In the UK we see the following breakdown:

[chart: UK donations broken down by cause]
As you can see, the vast majority of donor money is going to causes which don’t even feature in the EA causes list.
(For example, "College Professors" might be representative of opinions that are both more mainstream and more hegemonic within a certain group)
I imagine that college professors might be quite unrepresentative and counter-mainstream in their own ways. Examining (elite?) university students or recent graduates might be interesting (though unrepresentative) as a comparison, as a group that a large number of EAs are drawn from.
Elite opinion, and what representatives from key institutions think, seems like a further interesting question, though it would likely require different research methods.
If EA engagement predicts relatively more support for AI relative to climate change and global poverty, I'm sure people have been asking as to whether EA engagement causes this, or if people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?
I think there are plausibly multiple different mechanisms operating at once, some of which may be mutually reinforcing.
Off-topic, but what do broiler chicken campaigns typically look like? I know for hens it's cage-free; is it just more space per chicken for broilers?

jpaddison on EA Meta Fund Grants - March 2019
The link to the Rethink Priorities team 404s. It should be this: https://www.rethinkpriorities.org/our-team

milan_griffes on Cash prizes for the best arguments against psychedelics being an EA cause area
Got it. (And thanks for factoring in kindness!)
However, even assuming that the unknown quantities are probably positive, this doesn't tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training.
There hasn't been very much research on psychedelics for "well" people yet, largely because under our current academic research regime, it's hard to organize academic RCTs for drug effects that don't address pathologies.
The below isn't quite apples-to-apples, but perhaps it's helpful as a jumping-off point.
CFAR's 2015 longitudinal study found:
Life satisfaction increased by d = 0.17 (t(131) = 2.08, p < .05). [effect attributed to attending a CFAR workshop]
Carhart-Harris et al. 2018, a study of psilocybin therapy for treatment-resistant depression, found:
Relative to baseline, marked reductions in depressive symptoms were observed for the first 5 weeks post-treatment (Cohen’s d = 2.2 at week 1 and 2.3 at week 5, both p < 0.001)... Results remained positive at 3 and 6 months (Cohen’s d = 1.5 and 1.4, respectively, both p < 0.001).
Not apples-to-apples, because a population of people with treatment-resistant depression is clearly different than a population of CFAR workshop participants. But both address a question something like "how happy are you with your life?"
Even if you add a steep discount to the Carhart-Harris 2018 effect, the effect size still appears to be quite large – let's assume that 90% of the treatment effect is an artifact of the study due to selection effects, small study size, and factors specific to having treatment-resistant depression.
Assuming a 90% discount, psilocybin would still have an adjusted Cohen's d = 0.14 (6 months after treatment), roughly in the ballpark of the CFAR workshop effect (d = 0.17).