Posts

Why we're not founding a human-data-for-alignment org 2022-09-27T20:14:00.547Z
[April fool's post] Proposal to assign careers by birthdate 2022-04-01T04:49:58.953Z
What's the best machine learning newsletter? How do you keep up to date? 2022-03-25T14:36:45.543Z
So you want to be a charity entrepreneur. Read these first. 2022-01-26T15:53:11.925Z
An update in favor of trying to make tens of billions of dollars 2021-10-14T20:57:35.660Z
First vs. last name policies? 2021-09-11T10:37:41.874Z
Resources on the expected value of founding a for-profit start-up? 2021-04-05T20:38:59.717Z
How to share the basic concept of EA on social media? (Facebook in my case) 2020-04-16T18:43:48.353Z

Comments

Comment by Mathieu Putz on Open EA Global · 2022-09-01T14:19:37.388Z · EA · GW

I'm not sure what my general take on this is. I think it's quite plausible that keeping it exclusive is net good, maybe more likely good than not. But I want to add one anecdote of my own which pushes the other way.

Over the last two years, while I was a student, I made two career choices in part (though not only) to gain EA credibility:

  • I was a group organizer at EA Munich (~2 hours a week)
  • I did a part-time internship at an EA org (~10 hours a week)

Both of these were fun, but I think it's unlikely that they were good for my career or impact in ways other than gaining EA credibility. I think one non-trivial reason EA credibility was important to me was that I wanted to keep being admitted to things like EAG (maybe more than I admitted to myself in my explicit reasoning at the time).

Having said that, I think EA credibility has also been important to my career in other ways, notably to receive grants, so it's not clear that this was bad on net.

It might also be that these were unnecessary or ineffective ways of gaining EA credibility --- I don't know what the admissions team cares about. Regardless, I think it's an update that this is part of what led me to make choices that I otherwise might not have made (though quite plausibly I would have made them anyway).

Comment by Mathieu Putz on Stuff I buy and use: a listicle to boost your consumer surplus and productivity · 2022-06-01T05:37:03.019Z · EA · GW

This is so useful! I love this kind of post and will buy many things from this one in particular.

Probably a very naive question, but why can't you just take a lot of DHA **and** a lot of EPA to get both supplements' benefits? Especially if your diet means you're likely deficient in both (which is true of veganism? vegetarianism?).

Assuming the Reddit folk wisdom about DHA inducing depression is wrong (which it might not be; I don't want to dismiss it), I don't understand from the rest of what you wrote why this wouldn't work. Why is there a trade-off?

Comment by Mathieu Putz on Proposal: Impact List -- like the Forbes List except for impact via donations · 2022-05-30T21:24:54.835Z · EA · GW

This seems really exciting!

I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding were coming from OpenPhil and Dustin Moskovitz and Cari Tuna were then very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.

So I think, conditional on thinking this is a good idea at all, this may be an unusually good funding opportunity for smaller earning-to-givers. Unfortunately, the flip side is that fundraising for this may be somewhat harder than for other EA projects.

Comment by Mathieu Putz on Why Helping the Flynn Campaign is especially useful right now · 2022-05-11T19:37:39.712Z · EA · GW

Thanks for pointing this out, wasn't aware of that, sorry for the mistake. I have retracted my comment.

Comment by Mathieu Putz on Why Helping the Flynn Campaign is especially useful right now · 2022-05-11T19:37:21.455Z · EA · GW

Thanks for pointing this out, wasn't aware of that, sorry for the mistake. I have retracted my comment.

Comment by Mathieu Putz on Why Helping the Flynn Campaign is especially useful right now · 2022-05-11T06:30:55.030Z · EA · GW

Hey, interesting to hear your reaction, thanks.

I can't respond to all of it now, but do want to point out one thing.

And, of course, if elected he will very visibly owe his win to a single ultra-wealthy individual who is almost guaranteed to have business before the next congress in financial and crypto regulation.

I think this isn't accurate.

Donations from individuals are capped at $5,800, so whatever money Carrick is getting is not one giant gift from Sam Bankman-Fried, but rather many small ones from individual Americans. Some of them may work for organizations that get a lot of funding from big EA donors, but it's still their own salary which they are free to spend however they like. As an aside, probably in most cases the funding of these orgs will currently still come from OpenPhil (who give away Dustin Moskovitz's and Cari Tuna's wealth), rather than FTX Future Fund (who give away SBF's wealth among others).

I think it's important that for the most part, this is money that not-crazy-rich Americans could have spent on themselves, but chose to donate to this campaign instead.

Comment by Mathieu Putz on Why Helping the Flynn Campaign is especially useful right now · 2022-05-10T11:03:41.667Z · EA · GW

If you're wondering who you might know in Oregon, you can search your Facebook friends by location:

Search for Oregon (or Salem) in the normal FB search bar, then go to People. You can also select to see "Friends of Friends".

I assume that will miss a few people, so it's probably worth also actively thinking about your network, but this is a good low-effort first step.

Edit: Actually they need to live in district 6. The biggest city in that district is Salem as far as I can tell. Here's a map.

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2022-05-09T10:39:53.093Z · EA · GW

Very glad to hear this, thanks!!

Comment by Mathieu Putz on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2022-05-08T15:03:34.640Z · EA · GW

Thanks for writing this!

I believe there's a small typo here:

The expected deaths are N+P_nM in the human-combatant case and P_yM in the autonomous-combatant case, with a difference in fatalities of (P_y−P_n)(M−N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most) it only takes a small difference (P_y−P_n) for this to be a very poor exchange.

Shouldn't the difference be (P_y−P_n)M−N ?
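For reference, subtracting the two expected-death expressions from the quoted passage gives

    P_yM − (N + P_nM) = (P_y−P_n)M − N,

whereas the quoted form expands to (P_y−P_n)(M−N) = (P_y−P_n)M − (P_y−P_n)N.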

Comment by Mathieu Putz on New forum feature: Map of Community Members · 2022-05-04T15:58:09.857Z · EA · GW

This is *so* cool, thanks! Might be nice to have a feature where people can add a second location. E.g. I used to study in Munich but spend ~2 months per year in Luxembourg, and many friends stay in Luxembourg even longer. According to the EA survey, there are Luxembourgish EAs other than me, but I have so far failed to find them --- I'd expect many of them to be in a similar situation.

Comment by Mathieu Putz on Using TikTok to indoctrinate the masses to EA · 2022-05-01T20:33:33.179Z · EA · GW

Congrats on your success!

Comment by Mathieu Putz on Decomposing Biological Risks: Harm, Potential, and Strategies · 2022-04-30T22:16:35.211Z · EA · GW

I thought this was a great article raising a bunch of points I hadn't previously come across; thanks for writing it!

Regarding the risk from non-state actors with extensive resources, one key question is how competent we expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror: with similar resources, it should be possible to cause far more damage than they actually do. This suggests both that terrorist groups may not pursue bioterrorism even if it were the best way to achieve their goals, and that they may not be able to execute well on such a difficult task.

This has somewhat lowered my concern about bioterrorist attacks, especially considering that successfully causing a global pandemic worse than natural ones is not easy. (Lowered my concern in relative terms, that is --- I still think this risk is unacceptably high and prevention measures should be taken. I don't want to rely on terrorists being incompetent.)

Hence, without having thought about it too much, I think I might rate the risks from non-state actors somewhat lower than you do (though I'm not sure, especially since you don't give numerical estimates --- which is totally reasonable). For instance, I'm not sure whether we should expect risks of GCBRs caused by non-state actors to be higher than risks of GCBRs caused by state actors (as you suggest).

Comment by Mathieu Putz on Effectiveness is a Conjunction of Multipliers · 2022-04-11T19:35:20.323Z · EA · GW

Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.

I would just note that speed-sensitive considerations, in the broad sense you use the term, will be relevant to many (most?) people's careers, including researchers' to some extent (reputation helps with research: more funding, better opportunities for collaboration, etc.). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.

Comment by Mathieu Putz on Effectiveness is a Conjunction of Multipliers · 2022-04-11T15:59:00.559Z · EA · GW

I agree that superlinearity is way more pronounced in some cases than in others.

However, I still think there can be some superlinear terms for things that aren't inherently about speed. E.g. climbing seniority levels or getting a good reputation with ever larger groups of people.

Comment by Mathieu Putz on "Long-Termism" vs. "Existential Risk" · 2022-04-08T14:22:35.618Z · EA · GW

I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.

Comment by Mathieu Putz on [April fool's post] Proposal to assign careers by birthdate · 2022-04-01T14:28:42.547Z · EA · GW

Exactly my plan! Of course, this was 100% on purpose!

Comment by Mathieu Putz on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-30T15:08:05.366Z · EA · GW

Super helpful, thanks for your answer!

Comment by Mathieu Putz on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-30T15:07:06.313Z · EA · GW

Very glad to have helped!

Comment by Mathieu Putz on Effectiveness is a Conjunction of Multipliers · 2022-03-26T02:45:28.650Z · EA · GW

Great post, thanks for writing it! This framing appears a lot in my thinking and it's great to see it written up! I think it's probably healthy to be afraid of missing a big multiplier.

I'd like to slightly push back on this assumption:

If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours

First, I agree with other commenters and with you that it's important not to overwork and to look after your own happiness and wellbeing.

Having said that, I do think working harder can often have superlinear returns, especially if done right (otherwise it can have sublinear or negative returns). One way to think about this is that the last year of one's career is often the most impactful in expectation, since one will have built up seniority and experience. Working harder is effectively a way of "pulling that last year forward a bit" and adding another, even higher-impact year after it, i.e. a year that is much more impactful than your average year; hence the superlinearity.
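To make the superlinearity concrete, here's a toy model (the functional form and numbers are my own illustrative assumptions, not anything from the post): suppose impact per hour is proportional to accumulated experience, and effort scales how fast experience accumulates over a fixed-length career. Total impact then grows quadratically with effort.

```python
# Toy model of superlinear returns to effort (illustrative assumptions only):
# impact per hour is proportional to accumulated experience, and effort
# scales how fast experience accumulates over a fixed calendar window.

def total_impact(effort: float, years: float = 40.0, steps: int = 100_000) -> float:
    """Numerically integrate impact over a career of fixed calendar length."""
    dt = years / steps
    total = 0.0
    for i in range(steps):
        experience = effort * i * dt          # experience built up so far
        total += experience * effort * dt     # impact rate = experience * effort
    return total

baseline = total_impact(1.0)
for e in (0.2, 1.0, 1.2):
    print(f"effort {e:.1f}x -> {100 * total_impact(e) / baseline:5.1f}% of baseline impact")
# effort 0.2x -> ~4% (not 20%) of baseline; effort 1.2x -> ~144%.
```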

Another way to think about this is intuitively. If Sam Bankman-Fried had only worked 20% as hard, would he have made $4 billion instead of $20 billion? No. He would probably have made much, much less. Speed is rewarded in the economy, and working hard is one way to be fast.

This makes the multiplier from working harder bigger than you would intuitively expect and possibly more important relative to judgment than you suggest.

(I'm not saying everyone reading this should work harder. Some should, some shouldn't.)

Edited shortly after posting to add: There's also a more straightforward reason that the claim "judgment is more important than dedication" is technically true but potentially misleading: one way to get better judgment is investing time into researching thorny issues. That seems to be what Holden Karnofsky has been doing for a decent fraction of his career.

Comment by Mathieu Putz on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-25T18:20:29.364Z · EA · GW

This is great, thanks!

Comment by Mathieu Putz on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-25T14:45:10.122Z · EA · GW

(I accidentally asked multiple versions of this question at once.

This was because I got the following error message when submitting:

"Cannot read properties of undefined (reading 'currentUser')"

So I wrongly assumed the submission didn't work.

@moderators)

Comment by Mathieu Putz on $100 bounty for the best ideas to red team · 2022-03-23T12:16:24.457Z · EA · GW

Make the best case against: "Some non-trivial fraction of highly talented EAs should be part- or full-time community builders." The argument in favor would point to the multiplier effect. Assume you could attract the equivalent of one person as good as yourself to EA within one year of full-time community building. If this person is young and we assume the length of a career to be 40 years, then you have just invested 1 year and gotten 40 years in return. By the most naive / straightforward estimate then, a chance of about 1/40 of attracting one you-equivalent would be break-even. Arguably that's too optimistic and the true break-even point is somewhat bigger than 1/40, maybe 1/10. But that seems prima facie very doable in a full-time year. Hence, a non-trivial fraction of highly talented EAs should do community building.
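Spelled out, the naive break-even condition just equates expected career-years gained against the year invested:

    p × 40 years = 1 year  ⇒  p = 1/40 = 2.5%

(the 1/10 figure then corresponds to each you-equivalent being worth only ~10 effective career-years rather than 40).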

(I have a few arguments against the above reasoning in mind, but I believe listing them here would be against the spirit of this question. I still would be genuinely interested to see this question be red-teamed.)

Comment by Mathieu Putz on The Future Fund’s Project Ideas Competition · 2022-03-08T10:22:48.849Z · EA · GW

EA Hotel / CEEALAR except at EA Hubs

Effective Altruism

CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of which there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), an EA Hotel there would seem justified on the same grounds. (E.g. intercontinental flights can sometimes cost more than one month's rent in those cities.)

Comment by Mathieu Putz on The Future Fund’s Project Ideas Competition · 2022-03-02T19:41:49.622Z · EA · GW

Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)

Economic Growth, Effective Altruism

Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days when you're not taking them)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance and short-term productivity gains? What are the long-term health effects? Does it affect longevity?


Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.

My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medications unless "needed".

So I'd be interested to see a large-scale, long-term RCT (randomized controlled trial) investigating these issues. I'm unsure about exactly how to do this. One straightforward design would be to randomize participants into two groups, give the substance to one of them for X months/years, and see whether that group has higher earnings afterwards. Ideally, the study participants would perform office jobs rather than manual labor (since that is where most of the value would come from), perhaps even especially cognitively demanding work such as research or trading. In the case of research, metrics such as the number of published articles or number of citations would likely make more sense than earnings.

One could also check health outcomes, probably including mental health. Multiple substances or dosing regimens could be tested at once by adding study arms.
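For a rough sense of what "large-scale" means for the earnings endpoint, here's a back-of-envelope power calculation (all numbers are my illustrative assumptions, nothing from this proposal):

```python
# Back-of-envelope sample size for a two-arm RCT on earnings, using the
# standard normal-approximation formula. Illustrative assumptions: detect a
# 5% mean earnings difference, earnings SD = 60% of the mean, 80% power,
# 5% two-sided alpha.
from math import ceil

def n_per_arm(effect: float, sd: float,
              z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """n = 2 * (z_alpha + z_beta)^2 * (sd / effect)^2 per arm."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2)

mean_earnings = 60_000  # hypothetical average participant salary
print(n_per_arm(effect=0.05 * mean_earnings, sd=0.60 * mean_earnings))
# -> 2258 participants per arm, before accounting for attrition.
```

So even under fairly generous assumptions, this would need thousands of participants per arm, and more if the true effect is smaller or attrition over months or years is high.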

Notes:
- One of the reasons I care most about this is improving the effectiveness of people working to prevent x-risks, but I'm not sure whether that fits neatly into any of your categories (and whether that's intentional).
- I'm not at all sure whether this is a good idea, but tried to err on the side of over-including since that seems productive while brainstorming; I haven't thought about this much.
- It may be that such studies exist and I just don't know about them (pointers?).
- It may be impossible to get this approved by ethics boards, though hopefully in some country somewhere it could happen?

Comment by Mathieu Putz on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-11T07:44:58.910Z · EA · GW

Thanks for this! I think it's good for people to suggest new pitches in general. And this one would certainly allow me to give a much cleaner pitch to non-EA friends than rambling about a handful of premises and what they lead to and why (I should work on my pitching in general!). I think I'll try this.

I think I would personally have found this pitch slightly less convincing than current EA pitches, though. But one problem is that I and almost everyone reading this were selected for liking the standard pitch (though, to be fair, whatever selection mechanism EA currently has seems to be pretty good at attracting smart people and might be worth preserving). It would be interesting to see some experimentation; perhaps some EA group could try this?

Comment by Mathieu Putz on So you want to be a charity entrepreneur. Read these first. · 2022-01-27T21:23:50.586Z · EA · GW

Thanks for saying that!

Comment by Mathieu Putz on The phrase “hard-core EAs” does more harm than good · 2022-01-08T07:48:17.319Z · EA · GW

I like "(very or most) dedicated EA". Works well for (2) and maybe (4).

Comment by Mathieu Putz on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2022-01-06T21:05:23.189Z · EA · GW

From the perspective of a grant-maker, thinking about reduction in absolute basis points makes sense of course, but for comparing numbers between people, relative risk reduction might be more useful?

E.g. if one person thinks AI risk is 50% and another thinks it's 10%, it seems to me the most natural way for them to speak about funding opportunities is to say it reduces total AI risk by X% relatively speaking.

Talking about absolute risk reduction compresses these two numbers into one, which is more compact, but makes it harder to see where disagreements come from. (Concretely: a grant both agree cuts total AI risk by 1% in relative terms is worth 0.5 absolute percentage points to the first person but only 0.1 to the second, so the same absolute figure can reflect very different beliefs.)

It's a minor point, but with estimates of total existential risk sometimes more than an order of magnitude apart, I think it actually becomes quite important.

Also, given astronomical waste arguments etc., I'd expect most longtermists would not switch away from longtermism even once absolute risk reduction per dollar gets an order of magnitude smaller.

Edited to add: Having said that, I'm really glad this question was asked! I agree that it's in some sense the key metric to aim for and it makes sense to discuss it!

Comment by Mathieu Putz on List of EA funding opportunities · 2022-01-06T09:50:48.036Z · EA · GW

What about individual earning-to-givers?

Is there some central place where all the people doing earning to give are listed, potentially with some minimal info about their potential max grant size and the type of stuff they are happy to fund?

If not, how do earning-to-givers usually find non-standard funding opportunities? Just personal networks?

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-12-23T18:11:14.997Z · EA · GW

Hey Sean, thanks so much for letting me know this! Best of luck whatever you do!

Comment by Mathieu Putz on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T10:12:09.508Z · EA · GW

I assume those estimates are for current margins? So if I were considering whether to do earning to give, I should use lower estimates for how much risk reduction my money could buy, given that EA already has billions to be spent and, due to diminishing returns, your estimates would look much worse after those had been spent?

Comment by Mathieu Putz on What Small Weird Thing Do You Fund? · 2021-11-27T09:44:59.745Z · EA · GW

Great question! Guarding Against Pandemics does advocacy for pandemic prevention and, for legal reasons, needs many small donors for some of its work. Here's an excerpt from their post on the EA Forum:

While GAP’s lobbying work (e.g. talking to members of Congress) is already well-funded by Sam Bankman-Fried and others, another important part of GAP’s work is supporting elected officials from both parties who will advocate for biosecurity and pandemic preparedness. U.S. campaign contribution limits require that this work be supported by many small-to-medium-dollar donors.

I haven't donated yet myself, in part because I did my yearly donations before learning about them. I also know only very little about the organisation, so this is not an endorsement — it just seemed like a very good example of something where small donors could plausibly beat large ones.

https://forum.effectivealtruism.org/posts/Btm562wDNEuWXj9Gk/guarding-against-pandemics

Comment by Mathieu Putz on Announcing my retirement · 2021-11-25T23:19:22.228Z · EA · GW

Thanks so much for looking after possibly my favorite place on the internet!

Comment by Mathieu Putz on When to get off the train to crazy town? · 2021-11-24T23:27:39.986Z · EA · GW

Hey, thanks for writing this!

Strong +1 for this part:

I had conversations along the lines of “I already did a Bachelor’s in Biology and just started a Master’s in Nanotech, surely it’s too late for me to pivot to AI safety”. To which my response is “You’re 22, if you really want to go into AI safety, you can easily switch”.

I think this pattern is especially suspicious when used to justify some career that's impactful in one worldview over one that's impactful in another.

E.g. I totally empathize with people who aren't into longtermism, but the reasoning should not be "I have already invested in this cause area and so I should pursue it, even though I believe the arguments that say it's >>10x less impactful than longtermist cause areas".

I also get the impression that people sometimes use "personal fit for X" and "have already accumulated career capital for X" interchangeably, when I think the former is to a significant degree determined by innate talents. Thus the message "personal fit matters" is sometimes heard as a weaker version of "continue what you're already doing".

Comment by Mathieu Putz on What is most confusing to you about AI stuff? · 2021-11-24T23:05:57.907Z · EA · GW

Here are a couple that came to mind just now.

  1. How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League university? A potential famous professor at an Ivy League university? A potential Fields Medalist?

  2. Also, how hard should we expect alignment to be? Are we trying to throw resources at a problem we expect to be able to at least partially solve in most worlds (which is e.g. the superficial impression I get from biorisk), or are we attempting a Hail Mary because it might just work and it's important enough to be worth a try (not saying that would be bad)?

  3. The big labs in the West that most clearly target AGI are OpenAI and DeepMind. Others target AGI less explicitly but include e.g. Google Brain. Are there equivalents elsewhere? China? Do we know whether these exist? Am I missing labs that target AGI in the West?

  4. Finally, this one's kind of obvious, but how large is the risk? What's the probability of catastrophe? I'm aware of many estimates, but this is still definitely something I'm confused about.


I think on all these questions except (3), there's substantial disagreement among AI safety researchers, though I don't have a good sense of the distribution of views either.

Comment by Mathieu Putz on We need alternatives to Intro EA Fellowships · 2021-11-20T20:55:26.686Z · EA · GW

I agree it's fine if fellowships aren't interesting to already-engaged EAs and I also see why the question is asked --- I don't even have a strong view on whether it's a bad idea to ask it.

I do think though that the fellowship would have been boring to me at times, even if I had known much less about EA. But maybe I'm just not the type of person who likes to learn stuff in groups and I was never part of the target audience.

Comment by Mathieu Putz on We need alternatives to Intro EA Fellowships · 2021-11-19T13:05:43.787Z · EA · GW

Thanks for writing this, I think it's great you're thinking about alternatives!

The way I learned about EA was just by spending too much time on the forum and with the 80k podcast.

Then, I once attended one session of a fellowship and was a little underwhelmed. I remember the question "so can anybody name the definition of an existential risk according to Toby Ord" after we had been asked to read about exactly that — this just seemed like a waste of time. But to be fair, I was also much more familiar with EA at that point than an average fellow. It's very possible that other people had a better experience in the same session.

But I definitely agree there's room for experimentation and probably improvement!

Comment by Mathieu Putz on Can we influence the values of our descendants? · 2021-11-18T09:20:14.209Z · EA · GW

Thanks for writing this up, super interesting!

Intuitively I would expect persistence effects to be weaker now than e.g. 300 years ago. This is mostly because society changes much more rapidly today than back then. I would guess that it's more common now to live hundreds of kilometres from where you grew up, that the internet allows people to "choose" their culture more freely (my parents like EA less than I do), and that the same goes for bigger cities, etc. Generally, advice from my parents and grandparents sometimes feels outdated, which makes me less likely to listen to it — this may always have been true of young generations, but I feel the advice really is more outdated today than it would have been 300 years ago. In short, I would expect to be much more influenced by my grandparents if I were running their farm with basically the same methods.

This is all super speculative of course and I don't have any hard evidence (other than economic growth rates being higher). But do you agree that there may be reasons to expect this effect to have decreased by a nontrivial amount?

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-10-18T13:05:24.677Z · EA · GW

Hadn't seen that one, interesting!

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-10-18T12:56:32.649Z · EA · GW

I agree! I've added an edit to the post, referencing your comment.

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-10-18T12:51:45.497Z · EA · GW

Thanks for pointing this out! Hadn't known about this, though it totally makes sense in retrospect that markets would find some way of partially cancelling that inefficiency. I've added an edit to the post.

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-10-18T12:40:02.467Z · EA · GW

Thanks for pointing that out! I agree it's notable and have added it to the list. I don't have a strong opinion on how important this is relative to other things on there.

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-10-18T12:33:31.103Z · EA · GW

Thanks for your comment! Super interesting to hear all that.

And my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.

Amazing! Glory to you :) I've added this to the post.

Comment by Mathieu Putz on How to use the Forum · 2021-10-18T09:20:05.706Z · EA · GW

Thanks, it's probably better that way!

Comment by Mathieu Putz on An update in favor of trying to make tens of billions of dollars · 2021-10-15T06:35:32.368Z · EA · GW

Thanks a lot for saying this!

Yeah, I wonder about the flexibility as well. At least, "I have good reason to think I could've gotten into MIT / Jane Street..." should go a long way (if you're not delusional).

Comment by Mathieu Putz on How to use the Forum · 2021-10-14T21:29:05.795Z · EA · GW

Are upvotes anonymous or is there a way to view who upvoted your comments / posts? I'm not saying it should be one way or another, just curious.

Comment by Mathieu Putz on First vs. last name policies? · 2021-09-16T12:55:52.759Z · EA · GW

Thanks for adding your opinion!

Yeah, coming from Luxembourg and studying in Germany, I do get the feeling that the norms differ here. I prefer first name norms though, so that's great :)

Comment by Mathieu Putz on First vs. last name policies? · 2021-09-11T17:30:14.793Z · EA · GW

Right! Thanks, I've fixed it!

Comment by Mathieu Putz on First vs. last name policies? · 2021-09-11T17:28:25.654Z · EA · GW

Thanks for your answer! I agree it's strange that these kinds of formalities are still so much of a thing among otherwise egalitarian people.

Comment by Mathieu Putz on First vs. last name policies? · 2021-09-11T17:26:29.701Z · EA · GW

Thanks! That's super helpful.