I currently have a daily Focusmate session for 2 hours. I prefer Focusmate sessions longer than 1 hour, and with the same person. So if anyone is interested in having a recurring session of 1 to 8 hours, let me know.
maxcarpendale on What was the first being on Earth to experience suffering?
There is also the excellent book-length treatment of the subject, The Ancient Origins of Consciousness.
rohinmshah on Antitrust-Compliant AI Industry Self-Regulation
Planned summary for the Alignment Newsletter:
One way to reduce the risk of unsafe AI systems is to have agreements between corporations that promote risk reduction measures. However, such agreements may run afoul of antitrust laws. This paper suggests that this sort of self-regulation could be done under the “Rule of Reason”, in which a learned profession (such as “AI engineering”) may self-regulate in order to correct a market failure, as long as the effects of such a regulation promote rather than harm competition.
In the case of AI, if AI engineers self-regulate, this could be argued as correcting the information asymmetry between the AI engineers (who know about risks) and the users of the AI system (who don’t). In addition, since AI engineers arguably do not have a monetary incentive, the self-regulation need not be anticompetitive. Thus, this seems like a plausible method by which AI self-regulation could occur without running afoul of antitrust law, and so is worthy of more investigation.
ramiro on Is it possible, and if so how, to arrive at ‘strong’ EA conclusions without the use of utilitarian principles?
[epistemic status: quite uncertain, but I've been thinking about it for a while; there's probably a more persuasive argument out there]
I think you can easily extrapolate from a Kantian imperfect duty to help others to EA (but I understand people seldom have the patience to engage with this point in Kantian philosophy); also, I remember seeing a recent paper that used normative uncertainty to argue, quite successfully, that a deontological conception of moral obligation, given uncertainty, would end up in some sort of maximization. Other philosophers (Shelly Kagan, Derek Parfit) have persuasively argued that plausible versions of the most accepted moral philosophies tend to collapse into each other.
It'd be wonderful if someone could easily provide an argument reducing consequentialism, deontology, and virtue ethics to each other. People could stop arguing like "you can only accept that if you're an x-utilitarian...", and focus on how to effectively realize moral value (which is a hard enough subject).
My own personal and sketchy take here would be something like:
To consistently live with virtue in society, I must follow moral duties defined by social norms that are fair, stable and efficient – that, in some way, strive for general happiness (otherwise, society will change or collapse).
To maximize general happiness, I need to recognize that I am a limited rational agent, and devise a life plan that includes acquiring virtuous habits and cooperating with others through rules and principles that define moral obligations for reasonable individuals.
To act taking Reason in me as an end in itself and according to the moral law, I need to live in society, and recognize my own limitations and my dependence on other rational beings, thus adopting habits that prevent vice and allow me to be recognized as a virtuous cooperator. Doing this consistently, at least in scenarios of factual and normative uncertainty, implies acting in a way that can be described as optimizing, under restrictions, a cardinal social welfare function.
mati_roy on Mati_Roy's Shortform
I had a friend post on Facebook (I can't recall who it was) and a friend in person (Haydn Thomas-Rose) tell me that maybe some/most antivaxxers were actually just afraid of needles. In which case, developing alternative vaccine delivery methods, like oral vaccines, might be pretty useful.
Of course, it's probably a combination of factors, but I wonder which are the major ones.
Also, even if the hypothesis is true, I wouldn't expect people to know the source of their belief.
I wonder if we could test this hypothesis short of developing an alternative method. Maybe not. Maybe you can't just tell one person that you have an oral vaccine and have them become pro-vaccine on the spot; they might instead need broader social validation and time to change their minds.
I know I'm a bit late to this topic, but at Giving Green (www.idinsight.org/givinggreen) we are trying to answer exactly this question. We're building on excellent previous work (like that of Let's Fund and Founders Pledge) to do a comprehensive analysis of giving, investment, and volunteer options to fight climate change. The work is still very early, but there is a lot coming in the pipeline, so stay tuned. For now, we have a few recommendations in the offset market.
edoarad on A bill to massively expand NSF to tech domains. What's the relevance for x-risk?
The bill also aims at building a DARPA-like funding institution within NSF.
I'm quite excited by this. Does anyone have more information about it?
max_daniel on Collection of good 2012-2017 EA forum posts
Thanks, this is a great contribution!
(As an aside, I think it would be valuable to have a similar list highlighting the best posts from Paul Christiano's Rational Altruist blog. They are all from 2014 or older.)
briantan on Marcus Davis: Rethink Priorities — empirical research on neglected causes
We gathered this information into an interactive table.
The link to the table leads to a Wikipedia page on backpropagation. Could this be corrected? Thanks!
michaela on You have more than one goal, and that's fine
Thanks for this post. I think it provides a useful perspective, and I've sent it to a non-EA friend of mine who's interested in EA, but concerned by the way that it (or utilitarianism, really) can seem like it'd be all-consuming.
I also found this post quite reminiscent of Purchase Fuzzies and Utilons Separately [LW · GW] (which I also liked). And something that I think might be worth reading alongside this is Act utilitarianism: criterion of rightness vs. decision procedure [EA · GW].