Posts

The next decades might be wild 2022-12-15T16:10:06.131Z
Announcing AI safety Mentors and Mentees 2022-11-23T15:21:13.423Z
Disagreement with bio anchors that lead to shorter timelines 2022-11-16T14:40:18.269Z
Some advice on independent research 2022-11-08T14:46:19.652Z
Lessons learned from talking to >100 academics about AI safety 2022-10-10T13:16:38.392Z
What success looks like 2022-06-28T14:30:37.358Z
Announcing Epoch: A research organization investigating the road to Transformative AI 2022-06-27T13:39:16.475Z
What is the right ratio between mentorship and direct work for senior EAs? 2022-06-15T10:56:25.491Z
EA needs to understand its “failures” better 2022-05-24T14:24:57.377Z
How many EAs failed in high risk, high reward projects? 2022-04-26T12:31:26.047Z
EA retreats are really easy and effective - The EA South Germany retreat 2022 2022-04-14T12:09:55.873Z
AI safety starter pack 2022-03-28T16:05:33.914Z
EA should learn from the Neoliberal movement 2022-03-22T16:14:58.097Z
Where would we set up the next EA hubs? 2022-03-16T13:37:21.242Z
There should be an AI safety project board 2022-03-14T16:08:48.523Z
I want to be replaced 2022-02-01T14:45:18.513Z
Should GMOs (e.g. golden rice) be a cause area? 2022-01-31T13:42:02.948Z
How to write better blog posts 2022-01-25T12:29:12.271Z
EA Analysis of the German Coalition Agreement 2021–2025 2022-01-24T13:25:14.388Z
AI acceleration from a safety perspective: Trade-offs and considerations 2022-01-19T09:44:41.299Z
What is the role of Bayesian ML for AI alignment/safety? 2022-01-11T08:07:15.573Z
EA megaprojects continued 2021-12-03T10:33:53.467Z
When to get off the train to crazy town? 2021-11-22T07:30:15.497Z
[Discussion] Best intuition pumps for AI safety 2021-11-06T08:11:17.165Z
Constructive Criticism of Moral Uncertainty (book) 2021-06-04T06:04:01.189Z
Should Chronic Pain be a cause area? 2021-05-18T11:31:46.655Z
Carrots, not sticks - What I learned from introducing people to EA 2021-02-12T08:17:42.284Z
Thoughts on Personal Finance for Effective Altruists 2021-01-29T12:48:34.732Z
Machine Learning and Effective Altruism 2021-01-16T08:46:44.340Z
How much (physical) suffering is there? Part II: Animals 2021-01-10T12:48:14.180Z
How much (physical) suffering is there? Part I: Humans 2021-01-10T12:47:37.302Z

Comments

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2023-01-05T15:44:09.515Z · EA · GW

Usually just asking a bunch of simple questions like "What problem is your research addressing?", "Why is this a good approach to the problem?", "Why is this problem relevant to AI safety?", "How does your approach attack the problem?", etc.

Just in a normal conversation that doesn't feel like an interrogation. 

Comment by mariushobbhahn on The next decades might be wild · 2022-12-21T15:27:52.668Z · EA · GW

Interesting perspective. Hadn't thought about it this way but seems like a plausible scenario to me. 

Comment by mariushobbhahn on The next decades might be wild · 2022-12-18T08:05:35.515Z · EA · GW

Interesting. Let's hope they are right and we are able to replace fossil fuels with renewables fast enough.

Comment by mariushobbhahn on The next decades might be wild · 2022-12-17T13:57:10.013Z · EA · GW

I don't have any strong opinions on that. There is a good chance I'm just uninformed and the IEA is right. My intuition is just that countries don't like it if their energy gets more expensive, so they'll keep digging for coal, oil or gas as long as renewables aren't cheaper.

Comment by mariushobbhahn on The next decades might be wild · 2022-12-16T12:17:30.826Z · EA · GW

No, I think there will be a phase where everyone wishes they had renewables but can't yet get them, so they still use fossil fuels. I think energy production will stay roughly constant or increase, but the way we produce it will change more slowly than we would hope.

I don't think we will have a serious decline in energy production. 

Comment by mariushobbhahn on The next decades might be wild · 2022-12-16T08:40:26.078Z · EA · GW

I think narrow AIs won't cause mass unemployment but more general AIs will. I also think that, at that point, unemployment objectively stops being a problem because AIs can do all the work, but I think it will take at least another decade for humans to accept that.

The narrative that work is good because you contribute something to society and so on is pretty deeply ingrained, so I guess lots of people won't be happy after being automated away.

Comment by mariushobbhahn on The next decades might be wild · 2022-12-16T08:37:38.354Z · EA · GW

I'm bullish on solar + storage. But I think it will take a while to adapt the grid, so it will be at least a decade before we can even think about phasing out fossil fuels.

Comment by mariushobbhahn on The next decades might be wild · 2022-12-16T08:35:52.575Z · EA · GW

I think narrow AIs won't cause massive unemployment, but the more general they get, the harder it will be to justify using humans instead of ChatGPT++.

I think education will have to change a lot because students could literally have their homework done entirely by ChatGPT and get straight A's all the time.

I guess it will be something like the calculator rule, i.e. "until grade X you're not allowed to use one, and after that you can", but for AI. So it will be normal to produce an essay in 5 seconds, similar to how a calculator lets you do in 5 seconds math that would otherwise take hours on paper.

Comment by mariushobbhahn on The next decades might be wild · 2022-12-15T21:34:55.068Z · EA · GW

Yeah. I kept the post mostly to AI but I also think that other technological breakthroughs are a possibility. Didn't want to make it even longer ;)

I think you could write more of these stories for other kinds of disruptions and I'd be interested in reading them.

Comment by mariushobbhahn on The next decades might be wild · 2022-12-15T21:32:39.830Z · EA · GW

Thanks for pointing out the mistake. I fixed the "century" occurrences. 

Comment by mariushobbhahn on The next decades might be wild · 2022-12-15T18:57:53.859Z · EA · GW

Yeah. I thought about it. I wasn't quite sure how to structure it. I guess I'm not used to writing "story-like" texts. Most of my other writing is just a glorified bullet-point list ;) 

Comment by mariushobbhahn on The next decades might be wild · 2022-12-15T18:03:52.816Z · EA · GW

Fair. I'll go over it and explain some of the technical concepts in more detail. 

Also, I expect many people who are familiar with the latest discussions in AI to have longer timelines than this, so the intended audience is not just people who are unfamiliar with the field.

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2022-10-11T08:04:14.608Z · EA · GW

I'm obviously heavily biased here because I think AI does pose a relevant risk. 

I think the arguments that people made were usually along the lines of "AI will stay controllable; it's just a tool", "We have fixed big problems in the past, we'll fix this one too", "AI just won't be capable enough; it's just hype at the moment and transformer-based systems still have many failure modes", "Improvements in AI are not that fast, so we have enough time to fix them". 

However, I think that most of the dismissive answers are based on vibes rather than sophisticated responses to the arguments made by AI safety folks. 

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2022-10-11T07:40:02.956Z · EA · GW

I don't think these conversations had as much impact as you suggest and I think most of the stuff funded by EA funders has decent EV, i.e. I have more trust in the funding process than you seem to have.  

I think one nice side-effect of this is that I'm now widely known as "the AI safety guy" in parts of the European AIS community and some people have just randomly dropped me a message or started a conversation about it because they were curious.

I was working on different grants in the past but this particular work was not funded. 

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2022-10-11T07:35:45.998Z · EA · GW

I think it's a process and just takes a bit of time. What I mean is roughly: people at some point agreed that there is a problem and asked what could be done to solve it. Then they often followed up with "I work on problem X, is there something I could do?". And then some of them tried to frame their existing research to make it sound more like AI safety. However, if you point that out, they might consider other paths of contributing more seriously. I expect most people not to make substantial changes to their research, though. Habits and incentives are really strong drivers.

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2022-10-10T21:10:19.792Z · EA · GW

I have talked to Karl about this and we both had similar observations. 

I'm not sure if this is a cultural thing but most of the PhDs I talked to came from Europe. I think it also depends on the actor in the government, e.g. I could imagine defense people being more open to treating existential risk as a serious threat. I have no experience in governance, so this is highly speculative and I would defer to people with more experience.

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2022-10-10T17:25:12.831Z · EA · GW

Reflects my experience!

The resources I was unaware of were usually highly specific technical papers (e.g. on some aspect of interpretability), so nothing helpful for a general audience.

Comment by mariushobbhahn on Lessons learned from talking to >100 academics about AI safety · 2022-10-10T14:52:41.781Z · EA · GW

Probably not in the first conversation. I think there were multiple cases in which a person thought something like "Interesting argument, I should look at this more" after hearing the X-risk argument and then over time considered it more and more plausible. 

But as I state in the post, I don't think it's reasonable to start from X-risks, so they weren't the primary focus of most conversations.

Comment by mariushobbhahn on Eliminate or Adjust Strong Upvotes to Improve the Forum · 2022-09-02T09:27:36.085Z · EA · GW

I thought about the topic a bit at some point and my thoughts were:

  • The strength of the strong upvote depends on the karma of the user (see other comment).
  • Therefore, the existence of a strong upvote implies that users who have gained more karma in the past, e.g. because they write better or more content, have more influence on new posts (see the sketch after this list).
  • Thus, the question of the strong upvote seems roughly equivalent to the question "do we want more active/experienced members of the community to have more say?"
  • Personally, I currently prefer this system over its alternatives because I think more experienced/active EAs have more nuanced judgment about EA questions. Specifically, I think there are some posts that fly under the radar because they don't look fancy to newcomers, and I want more experienced EAs to be able to strongly upvote those to get more traction.
  • I think strong downvotes are sometimes helpful but I'm not sure how often they are even used. I don't have a strong opinion about their existence.
  • I can also see that strong votes might lead to a discourse where experienced EAs just give other experienced EAs lots of karma due to personal connections, but most people I know cast their strong upvotes based on how important they think the content is, not on how much they like the author.
  • In conclusion, I think it's good that we give more say to experienced/active members who have produced high-quality content in the past. One can discuss the size of the difference, e.g. maybe the current scale is too liberal or too conservative.
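
To make the mechanic concrete, here is a minimal sketch of how karma-weighted strong votes could work. This is a hypothetical illustration, not the Forum's actual implementation; the thresholds and vote powers are invented.

```python
# Hypothetical sketch of karma-weighted voting -- NOT the Forum's real
# implementation; thresholds and vote powers are made up for illustration.

def strong_vote_power(karma: int) -> int:
    """Map a voter's accumulated karma to the weight of their strong vote."""
    # (minimum karma, vote power), in ascending order of karma
    thresholds = [(0, 2), (100, 4), (1000, 8)]
    power = thresholds[0][1]
    for min_karma, p in thresholds:
        if karma >= min_karma:
            power = p
    return power

# A newcomer's strong upvote moves a post's score by 2,
# a long-time contributor's by 8.
assert strong_vote_power(50) == 2
assert strong_vote_power(2500) == 8
```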

Comment by mariushobbhahn on The case for Green Growth skepticism and GDP agnosticism · 2022-08-14T13:21:33.042Z · EA · GW

OK, thanks for the clarification. Didn't know that. 

Comment by mariushobbhahn on The case for Green Growth skepticism and GDP agnosticism · 2022-08-14T08:02:13.663Z · EA · GW

I agree that wind and solar could lead to more land use if we base our calculations on current or past solar efficiency. But under the current trend, land use per unit of energy will decrease exponentially as efficiency increases exponentially, so I don't expect it to be a real problem.

I don't have a full economic model for my claim that the world economy is interconnected, but stuff like the supply-chain crisis or Evergreen provides some evidence in this direction. I think this was not true at the time of the industrial revolution but it is now.

I think it really depends on which kind of environmental constraint we talk about and how strongly it is linked to GDP in rich nations. If there is a convincing case, I'd obviously change my mind, but for now, I feel like we can address all these problems without having to decrease GDP.

Comment by mariushobbhahn on The case for Green Growth skepticism and GDP agnosticism · 2022-08-13T19:28:32.062Z · EA · GW

Thanks for the write-up. I upvoted because I think it lays out the arguments clearly and explains them well but I disagree with most of the arguments. 

I will write most of this up in more detail in a future post (some of the arguments can already be seen here), but here are the main disagreements:
1. We can decouple way more than we currently do: more value will be created through less resource-intensive activities, e.g. software, services, etc. Absolute decoupling seems impossible but I don't think the current rate of decoupling is anywhere near the realistically achievable limits. 
2. Renewables are the main bottleneck: The cost per unit of energy for solar has decreased exponentially over the last 10 years and there is no reason it should not continue to do so; the same is true for lithium-ion batteries. The technology is ready (or will be within the next decade) and it seems to be mostly a question of political will. Once renewable energy is abundant, most other problems seem much easier to solve, e.g. protecting biodiversity is easier if you don't need the space for coal mines.
3. The global economy is interconnected: It is very hard, if not impossible, to stop growth in developed countries while keeping growth in developing countries. Degrowth in the West most likely implies decreased growth in the developing world, which I oppose.
4. More growth is required for a stable future path: Most renewable technology has been developed by rich nations, and most efficiency gains in tech have been downstream effects of R&D in rich nations. If we want 1000x more efficient green tech, it will likely come from rich countries that pay their scientists out of public funds. In general, many solutions to the problems pointed out by degrowthers require a lot of money. A bigger pie means a bigger public R&D budget and more money to spend, e.g. on better education or national parks.
5. My vision of the future: I don't think we can scale to infinite value with finite resources; there clearly is a limit at some point, but I don't think we have reached it yet. I want to strive toward a world that could host 100B inhabitants powered by solar, hydrogen and nuclear. People live in dense cities with good public transport. People have mostly stopped eating meat, and vegetarianism has drastically reduced land use and pollution. Many problems that exist in the West today are solved in this future, e.g. the infant death rate is not 0.001 (as it is today in the West); it should be 0! I just can't see why the current level of GDP would be optimal, and I think we should aim to grow GDP AND solve other problems (the two are not mutually exclusive, and GDP growth may even be necessary for the rest).
6. GDP growth in the West is not a major goal for EA anyway: I agree that GDP growth in already rich countries should not be a major goal for EAs. We should aim to solve global problems, many of which are in less developed countries, and we should prevent x- and s-risks. These goals are mostly independent of GDP in rich countries. However, on the margin, I think more GDP in rich countries probably makes it easier to achieve EA goals, e.g. more GDP means a bigger budget for pandemic prevention. Furthermore, I think it would be bad for EAs to support degrowth, both because it seems less relevant than other problems and because I just don't think the arguments are true (as described above).

I will publish a slightly more detailed version of the above arguments and link it here so that you can engage with them more properly. Thank you, once again, for presenting the arguments for degrowth in this clear and non-judgemental way so that people can engage with them on the object level.

Comment by mariushobbhahn on AI safety starter pack · 2022-08-11T07:51:20.165Z · EA · GW

Added it. Thanks for pointing it out :) 

Comment by mariushobbhahn on I want to be replaced · 2022-07-04T16:31:45.964Z · EA · GW

I think that while this is hard, the person I want to be would want to be replaced in both cases you describe. 
a) Even if you stay single, you should want to be replaced because it would be better for all three people involved. Furthermore, you probably won't stay single forever and will find a new (potentially better-fitting) partner.
b) If you had very credible evidence that someone else who is much better than you was not hired, you should want to be replaced IMO. But I guess it's very implausible that you could make this judgment better than the university or employer, since you have way less information. So this case is probably not very applicable in real life.

Comment by mariushobbhahn on AI safety starter pack · 2022-07-01T06:39:36.965Z · EA · GW

Great. Thanks for sharing. I hope it increases accountability and motivation!

Comment by mariushobbhahn on What success looks like · 2022-06-29T06:16:58.932Z · EA · GW

No, it's a random order and does not have an implied ranking.

Comment by mariushobbhahn on What success looks like · 2022-06-29T06:04:20.124Z · EA · GW

I don't think "dealing with it when we get there" is a good approach to AI safety. I agree that bad outcomes could be averted in unstable futures but I'd prefer to reduce the risk as much as possible nonetheless. 

Comment by mariushobbhahn on What success looks like · 2022-06-28T21:10:04.515Z · EA · GW

I'm not sure why this should be reassuring. It doesn't sound clearly good to me. In fact, it sounds pretty controversial. 

Comment by mariushobbhahn on What success looks like · 2022-06-28T20:03:26.575Z · EA · GW

I think this is a very important question that should probably get its own post. 

I'm currently very uncertain about it but I imagine the most realistic scenario is a mix of a lot of different approaches that never feels fully stable. I guess it might be similar to nuclear weapons today but on steroids, i.e. different actors have control over the technology, there are some norms and rules that most actors abide by, there are some organizations that care about non-proliferation, etc. But overall, a small perturbation could still blow up the system. 

A really stable scenario probably requires either some very tough governance, e.g. preventing all but one actor from getting to AGI, or high-trust cooperation between actors, e.g. by working on the same AGI jointly. 

Overall, I currently don't see a realistic scenario that feels more stable than nuclear weapons seem today, which is not very reassuring.

Comment by mariushobbhahn on What success looks like · 2022-06-28T19:55:16.731Z · EA · GW

Yes, that is true. We made the decision to not address all possible problems with every approach because it would have made the post much longer. It's a fair point of criticism though. 

Comment by mariushobbhahn on What success looks like · 2022-06-28T16:38:04.718Z · EA · GW

We thought about including such a scenario but decided against it. We think it might give the EA community a bad rep, even if some people have already publicly talked about it.

Comment by mariushobbhahn on What is the right ratio between mentorship and direct work for senior EAs? · 2022-06-16T09:01:48.377Z · EA · GW

Thanks for the link!

Comment by mariushobbhahn on What is the right ratio between mentorship and direct work for senior EAs? · 2022-06-15T13:48:42.514Z · EA · GW

Agree. I guess most EA orgs have thought about this, some superficially and some extensively. If someone feels like they have a good grasp on these and other management/prioritization questions, writing a "Basic EA org handbook" could be pretty high-impact.

Something like "please don't repeat these rookie mistakes" would already save thousands of EA hours.

Comment by mariushobbhahn on What is the right ratio between mentorship and direct work for senior EAs? · 2022-06-15T13:46:52.556Z · EA · GW

Right. There are definitely some helpful heuristics and analogies, but I was wondering if anyone has taken a deep dive and looked at the research or conducted their own experiments. This seems like a potentially pretty big question for EA orgs, and if some strategies are 10% more effective than others (measured by output over time), it could make a big difference to the movement.

Comment by mariushobbhahn on AI safety starter pack · 2022-06-15T13:44:51.416Z · EA · GW

Nice. It looks pretty good indeed! I'll submit something in the near future. 

Comment by mariushobbhahn on Are there English-speaking meetups in Southeast Germany? · 2022-06-13T15:37:33.038Z · EA · GW

We have a chapter in Tübingen: https://eatuebingen.wordpress.com/

We speak English whenever one or more people have a preference for it, which is most of the time.

Comment by mariushobbhahn on New cause area: bivalve aquaculture · 2022-06-13T15:35:28.116Z · EA · GW

Why would you fund bivalve aquaculture rather than fully plant-based alternatives to meat? I guess you could also replace bivalves with plant-based alternatives, right?

Comment by mariushobbhahn on What's the causal effect of a PhD? · 2022-06-05T11:55:01.954Z · EA · GW

Since starting a Ph.D. myself, I have updated towards "a Ph.D. is much less useful than I thought" and I usually recommend that people not start one. However, I think there are some things that a Ph.D. teaches you.
a) Really deeply understanding some topic: Spending thousands of hours reading papers, doing some math or coding something means that you are one of only a few people globally who have a good understanding of a topic. This can be useful if your topic itself is useful, but also for instrumental reasons. For example, I find it much easier now to dive into a new topic because I feel like it is possible to learn it even if it will take some time.
b) Working on your own: This might not be true for every Ph.D. student but it is for a lot of them. Most of the time, you will work on your own. You will get some supervision and collaborate on some projects, but for your first-author papers, you will have to carry the responsibility and do most of the work. During the first year of my Ph.D., I got much more comfortable thinking about a problem even when I couldn't ask anyone for help. This seems like a good skill when you work at the frontier of a field.
c) A sad but probably true framing: I now think of PhDs as "We throw a smart person at a hard problem and see what happens". It will almost certainly feel bad and slow and insufficient. But the person will learn a bunch of things that might be valuable. The person might also break and burn out, so it's a tough trade-off. 
d) It's your only entry to academia: There are a few exceptions but most professors have a Ph.D. If you intend to become a professor, you probably need to do a Ph.D. 

The BIG PROBLEM with PhDs (at least in my opinion) is that you can learn most of these skills in other settings as well, but with less suffering. Therefore, I would always recommend applying to research jobs in industry unless you really want to take the hard route through the Ph.D. In general, I think you need a strong reason to do a Ph.D., and the default should be not doing one even if you intend to work in a research position eventually.

Comment by mariushobbhahn on EA needs to understand its “failures” better · 2022-05-24T14:54:22.608Z · EA · GW

Thanks for the pointer. I hadn't seen it at the time. Will link to it in the post.

Comment by mariushobbhahn on The biggest risk of free-spending EA is not optics or motivated cognition, but grift · 2022-05-14T09:08:21.190Z · EA · GW

I think I'm sympathetic to the criticism but I still feel like EA has sufficiently high hurdles to stop the grifters.
a) It's not like you get a lot of money just by saying the right words. You might be able to secure early funds or funds for a local group but at some point, you will have to show results to get more money.
b) EA funding mechanisms are fast but not loose. I think the meme that you can get money for everything now is massively overblown. A lot of people who are EA aligned didn't get funding from the FTX foundation, OpenPhil or the LTFF. The internal bars for funders still seem to be hard to cross and I expect this to hold for a while. 
c) I'm not sure how the grifters would accumulate power and steer the movement off the rails. Either they start as grifters but actually get good results and then rise to power (at which point they might not be grifters anymore), or they don't get any results and don't rise to power. Overall, I don't see a strong mechanism by which grifters rise to power without either ceasing to be grifters or blowing their cover. Maybe you could expand on that. I think the company analogy you are making is less plausible in an EA context because (I believe) people update more strongly on negative evidence. It's not just some random manager position that you're putting at risk; there are lives at stake. But maybe I'm too naive here.

Comment by mariushobbhahn on How many EAs failed in high risk, high reward projects? · 2022-04-26T18:28:39.448Z · EA · GW

Thanks for sharing. 
I think writing up some of these experiences might be really, really valuable, both for your own closure and for others to learn from. I can understand, though, that this is a very tough ask in your current position.

Comment by mariushobbhahn on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-26T06:59:26.491Z · EA · GW

That sounds very reasonable. Thanks for the swift reply.

Comment by mariushobbhahn on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-25T21:19:33.368Z · EA · GW

Hi, are PhD students also allowed to submit? I would like to submit a distillation and would be fine with not receiving any money in case I win a prize. In case this complicates things too much, I could understand if you don't want that. 

Comment by mariushobbhahn on EA Forum's interest in cause-areas over time and other statistics · 2022-04-10T20:41:00.151Z · EA · GW

Thanks for the write-up. If you still have the time, could you increase the font sizes of the labels and replace the figures? If not, don't worry but it's a bit hard to read. It should take 5 minutes or so. 

Comment by mariushobbhahn on AI safety starter pack · 2022-03-31T11:39:56.313Z · EA · GW

There is no official place yet. Some people might be working on a project board. See comments in my other post: https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board

Until then, I suggest you join the slack I linked in the post and ask if anyone is currently searching. Additionally, if you are at any of the EAGs and other conferences, I recommend asking around. 

Until we have something more official, projects will likely only be accessible through these informal channels. 

Comment by mariushobbhahn on Where would we set up the next EA hubs? · 2022-03-28T07:32:44.899Z · EA · GW

I think this is true for EA orgs but 
a) Some people want to contribute within the academic system
b) Even EA orgs can be constrained by weird academic legal constraints. I think FHI is currently facing some problems along these lines (low confidence, better ask them). 

Comment by mariushobbhahn on EA should learn from the Neoliberal movement · 2022-03-22T15:04:11.404Z · EA · GW

Thanks for pointing that out. Now updated!

Comment by mariushobbhahn on EA should learn from the Neoliberal movement · 2022-03-22T15:01:06.842Z · EA · GW

Fair, I'll just remove the first sentence. It's too confusing. 

Comment by mariushobbhahn on EA should learn from the Neoliberal movement · 2022-03-22T15:00:33.305Z · EA · GW

I think most EAs would agree with most of the claims made in the "what neoliberals believe in" post. Furthermore, the topics that are discussed on the neoliberal podcast often align with the broader political beliefs of EAs, e.g. global free trade is good, people should be allowed to make free choices as long as they don't harm others, one should look at science and history to make decisions, large problems should be prioritized, etc. 

There is a chance that this is just my EA bubble. Let me know if you have further questions. 

Comment by mariushobbhahn on EA should learn from the Neoliberal movement · 2022-03-22T14:55:21.162Z · EA · GW

Fair point. Just to clarify, my post is mostly about the NEOLIBERAL PROJECT and not about the neoliberal thinkers.