Good critique. My main conclusion is that Redwood seems reasonable overall and not far out of line with other AI safety orgs. Benchmarked against non-AI-safety orgs, I would have my usual critique that Redwood (and other longtermist orgs) seems unreasonably expensive for reasons I don't quite understand. Does salary really make that big a difference in attracting talent? If so, what does that say about our community's values?
In any case, remember that every org has issues. Listing every issue an org has in a row can give the impression that things are worse than they really are. I would love to see a similar critique made of the organization I co-founded once we grow to a similar size. More critique is good for the community.
We should be able to write scathing criticisms without getting mad at each other. We need to be able to read criticisms and not go completely ham and want to see the org and everyone associated with it guillotined.
Hi Tom, welcome to the forum!
I read the policy paper a few weeks ago and found it super interesting. That said, I had some questions about what implementing marginal aid would look like in practice.
One would need to ensure national institutions take up the cost-effective programs that aid would now opt not to fund, no? I suspect this coordination would be pretty difficult. Just getting well-resourced Western governments and multilaterals to spend aid cost-effectively has been a continuous fight fought by organizations such as CGD; I don't see why we should expect an under-resourced government to make much better decisions.
I think an interesting follow-up study could be to interview some of the people involved in negotiations on health service funding. Why did they opt to earmark for cost-effectiveness? Had they not earmarked, what do they expect the government would have funded instead? Had they funded marginally, do they expect the government would have taken up the cost-effective program itself?
I worry that in practice it would just result in fewer cost-effective programs being funded. That said, I can see there being a place for marginal aid in some funding of government health services.
EDIT: I just looked at the PDF again and saw there was a section on implementation in practice, so apparently I didn't read all of it! Apologies if you already answered some of these questions in that section; I'll give it a read later today.
Very happy with the changes, especially with the performance improvements on mobile.
Makes sense, agree completely.
I agree with you that people should be much more willing to disagree, and we need to foster a culture that encourages this. No disagreement is a sign of insufficient debate, not of a well-mapped landscape. That said, I think EAs in general should think way less about who said what and focus much more on whether the arguments themselves hold water.
I find it striking that all the examples in the post are about some redacted entity, when all of them could just as well have been rephrased to be about object level reality itself. For example:
[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,
could, to me, be rephrased as
Why I believe <stance on topic> is incorrect.
To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org's thinking on topic sloppy>. Thinking a lot about the views of specific persons or organizations has its time and place, but the right split of thinking about reality versus social reality is probably closer to 90/10 than 10/90.
Sure. Unfortunately GPT-4 doesn't seem to save the chat histories properly, but here are the most recent three from memory (topics obfuscated):
Write out a paragraph showing how <intervention> will help <target country> <target org's priorities>.
Failure: GPT replied with bloated text that made the argument but was too weasel-worded. It would have been more work to rewrite than to just do it from scratch.
Format following into list with:
[messy content I had copied from website including the names and occupations along with other html stuff between]
Success: GPT replied with all names in the right format, easy to copy-paste into Google Sheets.
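For concreteness, here is a rough sketch of what that cleanup task looks like if done by hand instead of by GPT. The sample data and the "Name - Occupation" format are invented for illustration; the real copied content was messier:

```python
import re

# Hypothetical sample of messy text copied from a website:
# names and occupations interleaved with leftover HTML tags.
raw = """
<div class="person"><b>Jane Doe</b> - Economist</div>
<div class="person"><b>John Smith</b> - Epidemiologist</div>
"""

def to_rows(text):
    """Strip HTML tags and split each 'Name - Occupation' line into columns."""
    rows = []
    for line in text.splitlines():
        plain = re.sub(r"<[^>]+>", "", line).strip()  # drop any tags
        if " - " in plain:
            name, occupation = plain.split(" - ", 1)
            rows.append((name.strip(), occupation.strip()))
    return rows

# Tab-separated output pastes cleanly into separate Google Sheets columns.
for name, occupation in to_rows(raw):
    print(f"{name}\t{occupation}")
```

The point of handing this to GPT instead is that you skip writing (and debugging) the parsing logic entirely, which matters when the input format is inconsistent.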
What are the top ten newspapers in <target country> ranked by political influence?
Success: GPT replied with a reasonable-looking top-ten list, including a description of each paper's political orientation.
One I often find myself asking and getting great answers to is:
Write a Sheets function that <does thing I need to do>.
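A typical exchange might look something like this (the task and formula here are invented, not from one of my actual chats):

```
Prompt:  Write a Sheets function that sums column B only for rows
         where column A says "Denmark".

Reply:   =SUMIF(A:A, "Denmark", B:B)
```

Even when I could eventually dig the right function out of the documentation myself, asking is usually faster.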
I also often use GPT to get brainstorms started.
My org is trying to achieve <thing>, list ten ways we could go about this.
As a side note, I've just written a shortform about how I believe more people should be integrating new AI tools into their workflows. For people worried about giving data and money to Microsoft, I think offsetting is likely a great way to ensure you capture the benefits, which I expect to be higher than the price of the offset.
Spreadsheets are in many ways a force-multiplier of all other work that one does. For that reason I am very happy to have invested significant time into becoming good at utilizing spreadsheets in the work I do.
Over the past months, I’ve increasingly started using GPT in my workflow and am starting to see it as a tool that, similarly to spreadsheets, can make one better across a wide variety of tasks.
It wasn’t immediately useful, however! It was only with continuous practice that it started generating actual value.
It took me a while to get good at noticing when some task I was doing could be sped up by involving GPT, but especially for brainstorming or listing things, it does in seconds what would take me hours. I highly recommend investing the time it takes to get it into your workflow; it takes time to build an intuition for what it can and cannot do well.
For example, my org spent some hours creating a list of organizations that currently attempt to influence aid spending in our target country. I asked GPT which organizations we had missed, and in seconds I was able to add 15 organizations we had overlooked.
The number of tasks we can outsource to AI will only increase going forward, and I think those who invest time into getting good at the new wave of AI tools will be able to multiply their productivity significantly and will be at an advantage over those who don't.
Thank you for writing this. Even after hearing your perspective, I still can’t let go of the same feeling you initially described: that surely people wouldn’t just make up arbitrary lies to hate someone.
I wonder to what extent untruths are exacerbated by telephone games. The whole Elon Musk emerald mine nonsense, for example, seems to be uttered mostly by people who don’t know any better rather than by people intentionally trying to distort the truth.
Yes to some variant of this.
A simple directory of services and prices seems sufficient; no need for a platform that charges commission and unnecessarily complicates things. Those features are needed in non-EA work to make up for a lack of trust, but are unnecessary here.
I think there are two separate dynamics at play here:
First, I think we could do more to avoid punishing opinions perceived as wrong. An example of punishing behavior is my own comment two days ago: I made it while being too upset for my own good and lashed out at someone for making a completely reasonable point.
I don't blame the user I replied to for wanting an anonymous account when that is the response they can expect.
Secondly, I suspect that people are vastly overrating just how much anybody cares about who says what on the forum.
While I understand why someone making a direct attack on some organization or person might want to stay anonymous, most things I see posted from anonymous accounts seem to just be regular opinions phrased with more hostility than average.
It's a bit weird to me why somebody would think that a few comments on the EA Forum would do all that much to one's reputation. At EA Globals, at most, I've had a few people say "oh, you're the one who wrote that snakebite post? Cool!" and that's about it.
It all feels very paranoid to me. I'm way too busy worrying about whether I look fat in this t-shirt to notice or care that somebody wrote a comment that was too woke/anti-woke.
Maybe there's a bunch of people smarter than me who think my opinions are mid and now think less of me, but like, they were going to realize that after speaking to me for five minutes anyways.
I think you are right and I overreacted.
I don't think I suggested that? Forgive me if the original post was phrased poorly; I wrote it in some forty minutes.
My point with mentioning 10,000 lives lost was to operationalize the question of whether it becomes a serious pandemic.
Thank you, that's very reassuring to hear. It can be hard to tell whether an issue is being overlooked or not.
I think this is a great idea (it needs exceptional execution to succeed, however). If done well, it could tie into CEA's current project of creating more cause-area-specific groups.
In both fields I have worked in (European AI policy and now international aid policy), I have benefited greatly from informal groups and networks. Some of the most valuable connections I have made came almost through pure coincidence, which suggests there is room for improvement.
This is absolutely not how I'm going to go about dealing with it.
If I were on their side and somebody at any point responded to my concerns with a trivializing reminder that rape and abuse, in fact, happens in every community, I would nope out immediately.
I appreciate that this comment is trying to be helpful, but I feel a responsibility to point out that this is outright harmful advice.
EDIT: Sorry, I phrased myself with unnecessary meanness. To be clear, the reason this is, in my opinion, poor advice is not that the arguments themselves are wrong. It's that what matters in good communication is signaling an understanding of the counterpart's concerns, and even if these arguments are right, they send the wrong signal.
This was incredibly upsetting for me to read. This is the first time I've ever felt ashamed to be associated with EA. I apologize for the tone of the rest of this comment and can delete it if it is unproductive, but I feel a need to vent.
One thing I would like to understand better is to what extent this is a Bay Area issue versus EA in general. My impression is that a disproportionate fraction of abuse happens in the Bay. If this suspicion is true, I don't know how to put this politely, but I'd really appreciate it if the Bay Area could get its shit together.
In my spare time I do community building in Denmark. In April, I will be doing a workshop for the Danish academy for talented high school students. How do you imagine the academy organizers will feel seeing this in TIME magazine?
What should I tell them? "I promise this is not an issue in our local community"?
I've been extremely excited to prepare this event. I would get to teach Denmark's brightest high schoolers about hierarchies of evidence, help them conduct their own cost-effectiveness analyses, and hopefully inspire a new generation to take action to make the world a better place.
Now I have to worry about whether it would be more appropriate to send the organizers a heads up informing them about the article and give them a chance to reconsider working with us.
I frankly feel unequipped to deal with something like this.
As much as I dislike their marketing (I'm clearly not the target audience), I don't think it requires much imagination to see why Open Phil may have gone ahead with the grant.
See, for example, this event they put on:
The event was widely covered, and Obama himself tweeted about it. If they came to Open Phil with some similar idea intended to make catastrophic risks salient to a wide audience, I can see why Open Phil would seriously consider funding it.
Open Phil isn't stupid. If they're doing something seemingly stupid, they probably have information we don't.
A low-effort fix would just be to add a parenthetical after the claim specifying that the nepotism isn't between Open Phil and the grant recipient.
That was my suspicion too. Similarly to how I can pretty easily find photos and videos from the past that aren't photoshopped, I suspect it won't be all that difficult to collect text either.
To what extent could this be implemented as an addition to the internet archive?
I think there's a good chance you're the first to really look into this. If you did a well written review and evaluation of the work, I'm sure people would read it.
My uninformed prior is skeptical; the concept of a metacrisis seems pretty sus to me.
That's one heck of an endorsement!
It seems like there is a general trend for public health interventions to look insanely cost-effective with respect to DALYs per dollar. I'd be curious to see a more detailed meta-review of this type of intervention, as they are all likely to share the same pitfalls if there are any.
Government policies I think are harmful tend to have some easily measurable upside at the cost of a much larger but difficult-to-measure downside (e.g., by making it more costly for firms to fire workers, employers respond by being much more cautious and discriminatory in their hiring practices).
These public health interventions seem to follow the pattern of an easily quantifiable upside and a difficult-to-measure downside, which leads me to worry that we are getting mugged by what is measurable.
There has been a tremendous amount of discussion and conflict in the past months over the state of Effective Altruism as a movement. For good reason, too. SBF, someone I was once proud to highlight as a shining example of what EA had to bring, looks to have committed one of history's largest instances of fraud. I would be concerned if we weren't in heated debates over what lessons to take from this!
After spending a few too many hours (this week has not been my most productive) reading through much of the debate, I noticed something: I'm still proud to call myself an effective altruist and my excitement is as high as ever.
If all of EA disappeared tomorrow, I would continue on my merry way trying to make things better. I would continue spending my free time trying to build a community in Denmark of people interested in spending their time and resources to do good as effectively as possible.
What brought me to EA was an intrinsic motivation to do as much good as possible, and nothing any other effective altruist can do is going to change this motivation.
I'm happy to consider anyone who shares that objective to be a friend, even if we don't agree on the specifics of how exactly one should go about doing the most good. "Doing the most good" is a pretty nebulous concept, after all. I would find it pretty weird if we all agreed completely on what it implies.
- We're all on the same team
- We're all just human beings trying to do the best we can
- We're all acting on imperfect information
I think some of us owe FLI an apology for assuming heinous intentions where a simple (albeit dumb) mistake was made.
I can imagine this must have been a very stressful period for the entire team, and I hope we as a community become better at waiting for the entire picture instead of immediately reacting and demanding things left and right.
get absolutely dunked on, Will!
In all seriousness, thank you to the many people who contributed with posts and comments in 2022 and a thank you to the forum team for the hard work you put in. Whatever our community's flaws (and there are many!), there are few other places online with a similar breadth of topics discussed in as much depth.
Barring the meta-posts, time spent procrastinating on the EA forum in 2022 was for me time well spent.
We have not done a formal survey yet (unsure if we will, as it's very time intensive!), so I can't guarantee there wasn't a selection bias toward economists supportive of cash-benchmarking among those we were able to interview. That said, the takeaway from our conversations has been that cash-benchmarking looks promising but unproven.
There are a number of questions about cash-benchmarking that don't yet have clear answers. As the DFID¹ article states, aid is spent on a wide range of objectives, many of which are difficult to meaningfully benchmark against cash. We're currently evaluating country portfolios to get a sense of what type and percentage of projects can be meaningfully benchmarked and cross-compared.
There are also a great many details surrounding implementation, again without clear consensus. The goal of our report is not to create the most compelling case for cash-benchmarking, but to accurately summarize the research, collect the case studies, and hopefully arrive at a set of general recommendations for how to implement cash-benchmarking well.
My best guess is that most issues facing cash-benchmarking are very solvable, but if that turns out to be wrong then we'll have to find something different to advocate for.
¹ Note that DFID has since been replaced with the FCDO, so the article may no longer reflect the UK government's current position.
This looks phenomenal. I think there's a lot of lessons to learn from the do-focused skillset of strategy and management consultancies. Wishing you the best of luck!
Thank you for your thoughtful comment Stephen!
We are incubated through Charity Entrepreneurship, and have gotten our seed funding through their network.
When deciding on our pilot project, we did briefly look into the cost-effectiveness of climate mitigation projects funded by development aid and it was difficult to conclude much with high confidence. While it won't be our initial focus, we would be very excited to see increased measurement and the creation of better metrics for aid projects related to climate change and mitigation.
Traditional aid has benefited tremendously from the increased focus on measurement and evaluation in recent decades. I think it's especially important we don't forget the lessons learned as we start to carry out aid projects in new domains.
Additionally, I would add that it is not a depreciating asset and can be sold again at a later date, returning the money spent. Of course, you have to deduct the counterfactual returns of investing that money, but my intuition is generally that owning land is a fine investment if it saves you from paying rent.
Man this is one of the best posts I've ever read on the forum. Extremely educational while remaining very engaging (rare to find both). Thank you for writing this, I hope you'll do similar write-ups for other research you do!
What age range are you intending for the book to be for? I look forward to reading it with my niece when she is old enough :)
But why exactly should I help those in the community who believe that the moral thing to do when someone is on their knees is to curb stomp them while yelling “I should have been admitted to EAG 2016!”? Why should I expose myself further by doing ambitious things (No I don’t mean fraud- that’s not an ambitious thing that’s a --- criminal--- thing) when if I fail people are going to make everything worse by screaming “I told you so” to signal that they never would have been such a newb? Yeah. No. The circle I’m drawing around who is and is not in my community is getting dramatically redrawn. This is not because one person or company made a series of very bad decisions, it's because so many of your actions are those of people I will not invest in further and who I don't want anywhere near my life or life’s work.
This paragraph really resonated with me. I suspect many people whom our community would benefit greatly from having are turned off because they got the same feeling you articulated here.
I'm finding it difficult to articulate why I think this is, but let me attempt:
When I've been at my least productive, I find myself falling into a terrible zero-sum mindset of actively searching for things that are unjust or unfair. My thoughts often take the shape of something like:
Why do influential EAs only care about <thing they think is important> and not <thing I think is important>?
'If EA was less <elitist/nepotistic> and more <democratic/open/whatever>, then my pet cause would get the attention it deserves!'
On the other hand, when I'm at my most productive and fully immersed in projects that matter to me, I don't ever find myself thinking those thoughts. I'm too focused on actually getting things done and producing surplus to care about how others spend their time and resources.
In this mindset I'm incredibly optimistic, and I intuitively feel that any problem is solvable if I put my mind to it. In the former mindset, everything seems doomed to fail, and I want to sneer at anyone who thinks otherwise.
These mindsets feel very distinct, and it's very clear that the latter is highly conducive to success while the former is actively harmful. If somebody with the latter mindset gets their first impression of EA from people with the former, I don't blame them for bailing.
Thank you for writing this. It's barely been a week, take your time.
There's been a ton of posts on the forum about various failures, preventative measures, and more. As much as we all want to get to the bottom of this and ensure nothing like this ever happens again, I don't think our community benefits from hasty overcorrections. While many of the points made are undoubtedly good, I don't think it will hurt the EA community much to wait a month or two before demanding any drastic measures.
EAs should probably still be ambitious. Adopting rigorous governance and oversight mechanisms sometimes does more harm than good. Let's not throw out the baby with the bathwater.
I'm still reflecting and am far from having fully formed beliefs yet, and I am confused by just how many strong views have been expressed on the forum. Just correctly recalling my thoughts and feelings around FTX before the event is difficult. I'm noticing a lot of finger-pointing and not a lot of introspection.
I don't know about everyone else, but I'm pretty horrified at just how similar my thinking seems to have been to SBF's. If a person who seemingly agreed with me on so many moral priorities was capable of doing something so horrible, how can I be sure that I am different?
I'm going to sit with that thought for a while, and think about what type of person I want to strive to be.
Good question, I've created a manifold market for this:
I wouldn't conclude much from the future fund withholding funds for now. Even if they are likely in the clear, freezing payments until they have made absolutely sure strikes me as a very reasonable thing to do.
My only worry is that there will be more things posted in a short time than anyone will have time to read. I'm still working my way through all the cause area reports. Some system to spread the posts out to prevent fatigue might be warranted for events like these and future writing contests.
You can only spend your resources once. Unless you argue that there is a free lunch somewhere, any money and time spent by the UN inevitably has to come from somewhere else. Arguing that longtermist concerns should be prioritized necessarily requires arguing that other concerns should be de-prioritized.
If EAs or the UN argue that longtermism should be a priority, it's reasonable for the authors to question where those resources are going to come from.
For what it's worth I think it's a no-brainer that the UN should spend more energy on ensuring the future goes well, but we shouldn't pretend that it's not at the expense of those who currently exist.
In the early 2000s, when climate change started seriously getting onto the multilateral agenda, there were economists like Bjørn Lomborg arguing that we should instead spend our resources on cost-effective poverty alleviation.
He was widely criticized for this, for example by Michael Grubb, an economist and lead author for several IPCC reports, who argued:
To try and define climate policy as a trade-off against foreign aid is thus a forced choice that bears no relationship to reality. No government is proposing that the marginal costs associated with, for example, an emissions trading system, should be deducted from its foreign aid budget. This way of posing the question is both morally inappropriate and irrelevant to the determination of real climate mitigation policy.
Yet today, much (if not most) multilateral climate mitigation is funded by countries' foreign aid budgets. The authors of this article, like Lomborg almost two decades ago, are reasonable to worry that multilateral organizations adopting new priorities comes at the expense of existing ones.
I believe we should spend much more time and money ensuring the future goes well, but we shouldn't pretend that this isn't at the expense of other priorities.
To me it seems they understood longtermism just fine and simply happen to disagree with strong longtermism's conclusions. We have limited resources, and if you are a longtermist, you think some to all of those resources should be spent ensuring the far future goes well. That means not spending those resources on pressing neartermist issues.
If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending should come from. If it's from our development aid budgets, that potentially means removing funding from humanitarian projects that benefit the world's poorest.
This might be the correct call, but I think it's a reasonable thing to disagree with.
Thank you, this is an excellent post. This style of transparent writing can often come across as very 'EA' and gets made fun of for its idiosyncrasies, but I think it's a tremendous strength of our community.
I would advise you to shorten the total application to around one fourth of its current length. Focus on your strong points (running a growing business, a strong animal welfare profile) and leave out the rest. The weaker parts of your application water down the strongest ones.
Admissions are always a messy process, and good people get rejected often. A friend of mine, who I'm sure will go on to become a top-tier AI safety engineer, got rejected from EAG because there isn't a great way to convey this information through an application form. Vetting people at scale is just really difficult.
Thanks for writing this, Jonas. As someone much below the LessWrong average at math, I would be grateful for a clarification of this sentence:
Provided , and are independent when
What do these refer to here? Moreover, is it a reasonable assumption that the uncertainties of existential risks are independent? It seems to me that many uncertainties run across risk types, such as the chance of recovery after civilisational collapse.
For anyone interested in pursuing this further, Charity Entrepreneurship is looking to incubate a charity working on road traffic safety.
Their report on the topic can be found here: https://www.charityentrepreneurship.com/research
Thanks for giving everyone the opportunity to provide feedback!
I'm unsure how I feel about the section on global poverty and wellbeing. As of now, the section mostly just makes the same claim over and over that some charities are more effective than others, without much rigorous discussion around why that might be.
There's a ton of great material under the final 'differences in impact' post that I would love to see as part of the main sequence. Right now, I'm worried that people new to global health and development will leave this section feeling way overconfident about how sure we are about all of this charity stuff. If I were a person with experience working in the aid sector and decided to go through the curriculum as it is, I think I would be left thinking that EAs are way overconfident despite barely knowing a thing about global poverty.
Here is an example of a potential exercise you could include that I think might go a long way toward conveying just how difficult it is to gain certainty about this stuff:
Read and evaluate two RCTs on vaccine distribution in two southern Indian states. What might these RCTs tell us about vaccine distribution in India? Have the reader try to assess which aspects of these RCTs will generalise to the rest of India and which won't. They could, for example, make predictions (practicing another relevant EA skill!) on the results of an RCT in a northern Indian state.
You only have to do one deep dive on a topic to gain an appreciation for how little we know.
Words cannot express how much I appreciate your presence Nuno.
Sorry for being off-topic, but I just can't help myself. This comment is such a perfect example of the attitude that made me fall in love with this community.
Yeah, you're right.
That puts EA in an even better light!
"While the rest of the global health community imposes its values on how trade-offs should be made, the most prominent global health organisation in EA actually surveys and asks what the recipients prefer."
I think the meta-point might be the crux of our disagreement.
I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I'm often a bit perplexed at how quickly people jump from 'nearly everyone dies' to 'literally everyone dies'. Similarly, I'm sympathetic to the point that it's difficult to imagine particularly compelling scenarios where AI doesn't radically alter the world in some way.
But we should be immensely uncertain about the assumptions we make, and I would argue that by far the most likely first-order determinant of future value is something our inside-view models didn't predict. My issue is not with your reasoning, but with how much trust to place in our models in general. My critique is absolutely not that you shouldn't have an inside view, but that a well-developed inside view is one of many tools we use to gather evidence. Overreliance on a single type of evidence leads to worse decision making.