Searching online, I believe he gave the talk at EA Summit 2013, back when EA community-building was much more volunteer-based and didn't have much in the way of formal organization.
As for Torres, my secondhand impression was a combination of a) believing that EA types don't give social justice-style concerns enough weight compared to the overwhelming importance of the far future, and b) personally feeling upset/jilted that he was rejected from a number of EA jobs.
That feels very uncharitable.
I understand you probably have insider knowledge, but in the linked article he mentions strong disagreements with ideas like:
Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. [Consequently,] it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
This is something I can see many people having problems with.
There are plenty of people who dislike (parts of) longtermism for reasons similar to those in the article, and I don't think most of them are bitter because they were rejected from EA jobs, or are SJWs.
Edit: since this is getting a lot of downvotes, I just want to clarify that I do think that quote is a strawman of some longtermist ideas. But I also think we should be charitable about critics' motivations and at least mention the ones they would agree with.
OP asked a question about Torres specifically. I gave them my personal subjective impression of the best account I have about Torres' motivations. I'm not going to add a "and criticizing EA is often a virtuous activity and we can learn a lot from our critics and some of our critics may well be pure in heart and soul even if this particular one may not be" caveat to every one of my comments discussing specific criticisms of EA.
Phil isn't an unknown internet critic whose motivations are opaque; he is/was a well known person whose motivations and behaviour are known first-hand by many in the community. Perhaps other people have other motivations for disliking longtermism, but the question OP asked was about Phil specifically, and Linch gave the Phil specific answer.
Yeah, but who is speaking here? Beckstead? I don't know any "Beckstead"s. Phil Torres is claiming that The Longtermist Stance is "we should prioritise the lives of people in rich countries over those in poor countries", even though I've never heard EAs say that. At most Beckstead thinks so, though that's not what Beckstead said. What Beckstead said was provisional ("now seems more plausible to me") and not a call to action. Torres is trying to drag down discourse by killing nuance and saying misleading things.
Torres' article is filled with misleading statements, and I have made longer and stronger remarks about it here. (Even so I'm upvoting you, because -6 is too harsh IMO)
Yes the article is indeed full of strawmen and misleading statements. But (not knowing anything about Torres) I felt the top comment was strongly violating the principle of charity when trying to understand the author's motivations.
I think the principle of charity is very important (especially when posting on a public forum), and claiming that someone's true motivations are not the ones they state should require extraordinary proof (which maybe exists here! I don't know anything about the history of this particular case).
Extraordinary proof? This seems too high to me. You need to strike the right balance between diagnosing dishonesty when it doesn't exist and failing to diagnose it when it does. Both types of errors have serious costs. Given the relatively high prevalence of deception among humans (see e.g. this book), I would be very surprised if requiring "extraordinary proof" of dishonesty produced the best consequences on balance.
"Looking forward to having a conversation today with people about...how we can make the world a better place in the years and decades and centuries ahead. ... I thought it would be valuable where I ... go through some thoughts I have for ... why I think it's so important to be pushing the frontiers of technology in certain ways, both in a for-profit and non-profit context"
"My claim is there are only four possible charts you can come up with" for how technological development will proceed over time [I'm not sure what he means exactly; I certainly think the real-life outcome may well look more messy than the ones he presented, but at least I agree that the cyclic one is incorrect. Says he likes exponential growth.]
He suggests globalization has been overemphasized over technological development. "The question I always like to pose is, how can we go about developing the developed world."
"maybe we can no longer have globalization continue without technological progress at this point"
"[by 2030 we may be] losing a consensus even in a place like China for globalization, and we're close to a breakpoint in places like Brazil, Turkey [...] probably a very different paradigm is going to be needed in the decade ahead"
"with the Founders Fund and with some of the nonprofit things that we've done" he's been trying to reverse recent trends via increasing technological progress.
"we can think about shaping a future in which there's more technological progress"
he presents a dichotomy between "technology" meaning "going from 0 to 1", i.e. making something new, and "globalization" meaning "1 to N", i.e. spreading existing technology around the world. Says almost all nonprofits today (i.e. 2013) are focused on globalization in this sense, not technology.
"we have an educational system where we believe that all truth is collective and that the true answers are the answers that everybody knows to be true, whereas if you're going from 0 to 1, you kind of come up with a truth that nobody else knows at that point yet, and [there's] always this question about how do you explain this and [...] pull people in when you're trying to do something that's very, very new... I think there are... a number of features in our society that have made it unusually hostile to this idea of going from 0 to 1"
What should we do to go back to an optimistic, definite future? We should ask what things are valuable, that we can actually do, that others are not doing. He uses this as an interview question and finds that most people find it very hard to answer. Says nonprofits should be looking for answers to this question.
"good education...probably looks like something where everybody gets educated in a different way that's unique to them as a person"
aging/dying is a topic on which "there's more psychological denial than any other topic"; there are many bad arguments against life extension; "every myth on this planet teaches us that the meaning of life is death... [this area] strikes me as grossly underfunded"
AI "seems plausible in the next few decades" ... "probably the biggest 0 to 1 thing would be to get something like generalized artificial intelligence. It would change the world in ways that are more radical than we could imagine." [This is kind of an odd comment: he doesn't say whether this immense change would be a good thing or a dangerous thing, just that it would be radical.]
Summary: Peter Thiel spoke at the EA Summit conferences organized in both 2013 and 2014, and was the sole keynote speaker at the 2014 conference. Thiel's other affiliation with EA at the time was through Leverage Research and the Machine Intelligence Research Institute (MIRI), two EA-affiliated non-profit organizations to which Thiel had been a major donor for multiple years. Leverage Research was one of the main organizations sponsoring the 2013 and 2014 conferences. Thiel has not donated to MIRI since at least 2015, due to a difference of perspective on the likely impact of advanced AI on the long-term future. Thiel remains a major donor to Leverage Research, but that organization has not self-identified as an EA-aligned organization since at least 2018/19.
You are correct, per your other comment, that Peter Thiel was only one of multiple keynote speakers at the 2013 Effective Altruism Summit. He was the keynote speaker at the 2014 EA Summit.
The "EA Summits" were a series of EA conferences organized in 2013, 2014 and 2018. For the Summits in 2013 and 2014, multiple organizations sponsored the event, but the primary one was Leverage Research. Leverage was the only organization that organized and sponsored the EA Summit in 2018. The EA Global series was initiated as a more formal series of conferences managed by the Centre for Effective Altruism (CEA) as EA grew into a global movement in its own right.
Through 2014, Thiel's relationship to EA consisted of being a repeat, major donor to both Leverage and the Machine Intelligence Research Institute (MIRI). After 2014, any direct affiliation Thiel had with EA ceased, at his own initiative. AI alignment was the EA cause that primarily attracted Thiel. He stopped donating to MIRI after he came to disagree with its relatively pessimistic perspective on the impact advanced artificial intelligence will have on the long-term future.
Thiel has continued donating to Leverage as an organization independent of EA, after he otherwise ceased donating to any other EA-affiliated organizations. Leverage Research explicitly stopped self-identifying as an EA-aligned organization from 2018/19 onward. Geoff Anders, the founder and executive director of Leverage Research, is on record stating that Peter Thiel was still a major, repeat donor to the organization as of 2021/22.
Depends on whether you like Peter Thiel. I don't know much about him, but his support for Trump was a big turnoff for me. I'm not sure why he wrote the techno-optimistic book "Zero to One" and then decided to plonk down his cash on a pathological liar and bullshitter whose greatest concern seemed to be keeping Mexicans and Muslims out of the U.S. ... but he did get a tax cut out of it.
Anyway, Phil Torres is cheating here by saying "longtermists are directly associated with a Trump supporter in 2013!" when Trump did not run for president until 2015.
He seems to have gotten his money's worth on Trump now that Trump's endorsement of Mithril Capital principal J.D. Vance is about to propel him into the Senate. Thiel Foundation president Blake Masters might end up in the same spot in Arizona.