Posts

Should we discount future people in proportion to the probability of them not existing? 2022-12-17T14:15:28.343Z
Where in the USA should an EA live? 2022-10-26T09:03:21.025Z
What are the best examples of "how to work with me" documents? 2022-10-12T07:08:54.356Z
Are there important things that aren't quantifiable? 2022-09-12T11:49:45.824Z
The (Allegedly) Best Business Books 2022-09-12T00:38:31.762Z
Should EA shift away (a bit) from elite universities? 2022-08-21T09:46:25.098Z
The Reluctant Prophet of Effective Altruism | The New Yorker 2022-08-09T02:11:31.390Z
Recommendations for non-technical books on AI? 2022-07-12T23:23:25.897Z
What is Operations Management? 2022-07-10T01:27:48.138Z
Hiring: How to do it better 2022-05-23T07:45:56.225Z
What should one do when someone insists on actively making life worse for many other people? 2022-05-14T05:59:24.941Z
Joseph Lemien's Shortform 2022-04-30T14:03:24.618Z
Does anything like a "resume book" exist in EA? 2022-01-11T13:34:42.808Z

Comments

Comment by Joseph Lemien (jlemien) on Reflections on applications, rejections and feedback · 2023-03-29T00:58:50.912Z · EA · GW

Hmmmm. I'm wondering what part of the "selecting people for a job" model is transferable and applicable to "selecting people for a research program, grant, etc."

In those circumstances, I'm guessing that there are specific criteria you are looking for, and it might just be a matter of shifting away from vibes & gut feelings and towards trying to verbalize/clarify what the criteria are. I'm guessing that even if you won't have performance reviews for these people (like you would with employees), you still have an idea as to what counts as success.

Here is a hypothetical that might be worth exploring (this is very rough and was written in only a few minutes fairly off the top of my head, so don't take it too seriously):

The next cohort for the AI Safety Camp is very large (large enough to be a good sample size for social science research), and X years in the future you look at all the individuals from that cohort to see what they are doing. The goals of AI Safety Camp are to provide people with both the motivation to work on AI safety and the skills to work on AI safety, so let's see A) how many people in the cohort are working on AI safety, and B) how much they are contributing or how much of a positive impact they are having. Then we look at the applications those people submitted X years ago to join AI Safety Camp, and see which criteria in those applications distinguish the people who are now having an impact.

I'm not good enough at data analysis to pull out much insight myself, but there likely would be differences (of course, in reality it would have to be a pretty big sample size for any effects not to be overwhelmed by the random noise of life in the intervening X years). So although this little thought experiment is a bit silly and simplistic, the general idea still stands.

Comment by Joseph Lemien (jlemien) on Reflections on applications, rejections and feedback · 2023-03-28T21:06:49.343Z · EA · GW

I think that focusing too much on refining the application process is hubris. I don’t believe anyone is good at this. The best we can do is to give as many people as possible the opportunity to participate and contribute. If you disagree and if you know how to set up a great application process, then please message me and teach me your magic.

Hi there! I think I disagree with you. :) I have some broad ideas about setting up a great application process. I guess a high-level summary would be something like:

  • know what you are looking for
  • know what criteria/traits/characteristics/skills/etc. predict what you are looking for
  • have methods you can use to assess/measure those criteria
  • assess the applicants using those methods

The implementation can be quite complicated and the details will vary massively depending on circumstances, but at a basic level that is what it is: know what you are looking for, and measure it. I think this is a lot harder in a small organization, but there are still aspects of this approach that can be used.

I don't want anyone to think that I am an expert who knows everything about applications. I'm just a guy who reads and thinks about this kind of thing, and in early 2023 I started to learn a bit about organizational behavior and industrial-organizational psychology. But I'd be happy to bounce around ideas if you'd like to have a call to explore this topic more.

Comment by Joseph Lemien (jlemien) on Some problems in operations at EA orgs: inputs from a dozen ops staff · 2023-03-17T22:03:55.046Z · EA · GW

I think there isn't a single term (although I'm certainly not an expert, so maybe someone with a PhD in business or a few decades of experience can come and correct me).

Finance, Marketing, Legal, Payroll, Compliance, and so on could all be departments, divisions, or teams within an organization, but I don't know of any term used to cover all of them with the meaning of "supporting the core work." I'm not aware of any label used outside of EA that is analogous to how "operations" is used within EA.

Comment by Joseph Lemien (jlemien) on Some problems in operations at EA orgs: inputs from a dozen ops staff · 2023-03-17T15:15:49.743Z · EA · GW

Agreed. One of the things I've struggled with is taking the time to interrogate the task rather than diving into it. Power dynamics and the desire to please certainly come into play. I suspect that this is common (although I might merely be falling victim to the typical mind fallacy).

It shouldn't be a surprise to anyone that having clarity about the task (priority, dependencies, etc.) allows better work to be done. But I think that many employees, especially people with relatively little work experience, struggle with it.

Comment by Joseph Lemien (jlemien) on Some problems in operations at EA orgs: inputs from a dozen ops staff · 2023-03-17T15:09:36.565Z · EA · GW

The term “Operations” is not used in the same way outside EA

I agree that this is weird. In EA, operations is something like "everything that supports the core work and allows other people to focus on the core work," while outside of EA operations is the core work of a company. Although I wish that EA hadn't invented its own definition of operations, at this point I don't see any realistic options for changing it.

Comment by Joseph Lemien (jlemien) on Write a Book? · 2023-03-16T02:35:50.552Z · EA · GW

I would love to read a book written by you. I've enjoyed many of your blog posts.

Aside from my own reading preferences, I think it would be very nice to have a book written about EA ideas (broadly described) by someone who is not a philosophy professor, and which focuses more on the mundane aspects of everyday life, rather than distant and abstract moral aspirations.

Comment by Joseph Lemien (jlemien) on Racial and gender demographics at EA Global in 2022 · 2023-03-14T19:04:52.824Z · EA · GW

I've felt something similar. I'm roughly thinking of it as being "actively welcoming" as opposed to being "passively welcoming."

Comment by Joseph Lemien (jlemien) on Joseph Lemien's Shortform · 2023-03-13T02:28:47.272Z · EA · GW

I've been reading a few academic papers on my "to-read" list, and The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else? has a section that made me think about epistemics, knowledge, and how we try to make the world a better place. I'll include the exact quote below, but my rough summary is that multiple studies found no relationship between the presence or absence of highway shoulders and accidents/deaths, and thus shoulders weren't built. Unfortunately, none of the studies had sufficient statistical power, so the conclusions drawn were inaccurate. I suppose "absence of evidence is not evidence of absence" is somewhat relevant here. Lo and behold, a later meta-analysis found that highway shoulders did reduce accidents/deaths. So my understanding is that inaccurate knowledge (shoulders don't help) led to choices (don't build shoulders) that led to accidents/deaths that wouldn't otherwise have happened.

I'm wondering if there are other areas of life facing similar issues. These wouldn't necessarily be new cause areas, but the general approach of identifying an area that involves life/death decisions, and then either making sure the knowledge is accurate or bringing accurate knowledge to the decision-makers, could be incredibly helpful. Hard though. Probably not very tractable.

For anyone curious, here is the relevant excerpt that prompted my musings:

A number of studies had been conducted to determine whether highway shoulders, which allow drivers to pull over to the side of the road and stop if they need to, reduce accidents and deaths. None of these inadequately powered studies found a statistically significant relationship between the presence or absence of shoulders and accidents or deaths. Traffic safety engineers concluded that shoulders have no effect, and as a result fewer shoulders were built in most states. Hauer’s (2004) meta-analysis of these studies showed clearly that shoulders reduced both accidents and deaths. In this case, people died as a result of failure to understand sampling error and statistical power.
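
As a side note, here is a minimal simulation sketch of the statistical point in that excerpt. The effect size, sample sizes, and number of studies are made-up numbers rather than the actual traffic-safety data; the sketch just shows how several underpowered studies can each come back "not significant" even though pooling their data reveals a real effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2   # hypothetical small reduction in accidents (standard-deviation units)
n_per_group = 50    # small samples -> low statistical power
n_studies = 8

pooled_with, pooled_without = [], []
for i in range(n_studies):
    with_shoulder = rng.normal(loc=-true_effect, scale=1.0, size=n_per_group)
    without_shoulder = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    t, p = stats.ttest_ind(with_shoulder, without_shoulder)
    print(f"study {i + 1}: p = {p:.2f} -> {'significant' if p < 0.05 else 'not significant'}")
    pooled_with.extend(with_shoulder)
    pooled_without.extend(without_shoulder)

# Pooling all the data (a crude stand-in for a proper meta-analysis) recovers the effect.
t, p = stats.ttest_ind(pooled_with, pooled_without)
print(f"pooled analysis: p = {p:.4f}")
```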

Comment by Joseph Lemien (jlemien) on Deconfusion Part 3 - EA Community and Social Structure · 2023-03-11T05:19:45.366Z · EA · GW

I read The Tyranny of Structurelessness because it was mentioned in this post, and I found it very applicable to EA groups and to other non-structured groups I've been a part of. I'm not a sociologist, but I enjoy adopting the lens of sociology to look at social psychology and group dynamics. So I wanted to thank you for sharing a reference to something that I found interesting and useful.

Comment by Joseph Lemien (jlemien) on Nick Bostrom should step down as Director of FHI · 2023-03-05T22:14:12.916Z · EA · GW

Peter Wildeford wrote a personal post criticizing the Apology

I want to flag that this link goes to a post written by Shakeel Hashim in his role managing communications for CEA, not a personal post by Peter Wildeford. Could you please either update the link or change the wording?

Comment by Joseph Lemien (jlemien) on A list of EA-relevant business books I've read · 2023-02-23T20:25:38.439Z · EA · GW

I'm so pleased to see people referring to my post. :) Thanks!

Comment by Joseph Lemien (jlemien) on A list of EA-relevant business books I've read · 2023-02-22T19:10:37.764Z · EA · GW

many business books offer strongly worded advice based on no empirical data, or second-hand outdated psychology studies, or cherrypicked pop statistics about particular products, ads, or markets

Echoing this, I've also found that many business books are simply variations of "here is what worked for me in this specific situation, which I am now proselytizing as a general rule." I do wish that there were more business books that explain business research or popularize academic papers.

Comment by Joseph Lemien (jlemien) on A list of EA-relevant business books I've read · 2023-02-22T19:07:50.519Z · EA · GW

This is the post that I was planning to write! Good job beating me to it. I've picked up some new recommendations today.

Comment by Joseph Lemien (jlemien) on Joseph Lemien's Shortform · 2023-02-11T05:50:20.715Z · EA · GW

I'm grappling with the question of how to schedule tasks/projects, how to prioritize, and how to set deadlines. I'm looking for advice, recommended readings, thoughts, etc.

The core question here is "how should we schedule and prioritize tasks whose result becomes gradually less valuable over time?" The rest of this post is just exploring that idea, explaining context, and sharing examples.


Here is a simple model of the world: many tasks that we do at work (or maybe also in other parts of life?) fall into one of two categories: a sharp decrease to zero in value, or a sharp reduction in value.

  • The sharp decrease to zero category. These have a particular deadline beyond which they offer no value, so you should really do the task before that point.
    • If you want to put me in touch with a great landlord to rent from, you need to do that before I sign a 12-month lease for a different apartment; at that point the value of the connection is zero.
    • If you want to book a hotel room prior to a convention, you need to do it before the hotel is fully booked; if you wait until the hotel is fully booked, calling to make that reservation is useless.
    • If you want to share the meeting agenda to allow attendees to prepare for a meeting, you have to share it prior to the meeting starting.
  • The sharp reduction in value category. You should do these tasks before the sharp reduction in value occurs. Thus, the deadline is the point at which value is about to sharply decrease.
    • Giving me food falls into the sharp reduction category, because if you wait until I'm already satiated from eating a full meal, the additional food that you give me has far less value than if you had given it to me before my meal.

Setting deadlines for these kinds of tasks is, in a certain sense, simple: do it at some point before the decrease in value. But what about tasks that decrease gradually in value over time?

  • We can label these as the gradual reduction category.
    • Examples include an advertisement for a product that launched today and will be sold for the next 100 days. If I do this task today I will get 100% of its value, if I do it tomorrow I will get 99% of its value, and so on, all the way to the last day on which it adds any value.
    • I could start funding my retirement savings today or tomorrow, and the difference is negligible. In fact, the difference between any two days is tiny. But if I delay for years, then the difference will be massive. This is kind of a "drops of water in a bucket" issue: a single drop doesn't matter, but all together they add up to a lot.
    • Should you start exercising today or tomorrow? Doesn't really matter. Or start next week? No problem. Start 15 years from now? That is probably a lot worse.
    • If you want to stop smoking, what difference does a day make?

Which sort of leads us back to the core question. If the value decreases gradually rather than decreasing sharply, then when do you do the task?

I suppose one answer is to do the task immediately, before it has any reduction in value. But that also seems like it isn't what we actually do. In terms of prioritizing, instead of doing everything immediately, people seem to push tasks back to the point just before they would cause problems. If I am prioritizing, I will probably try hard to do the sharp reduction in value task (orange in the below graph) before its value drops, and then I'll prioritize the sharp decrease to zero task (blue in the graph), finally starting on my lowest-priority task once the other two are finished. But that doesn't seem optimal, right?
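
To make the three categories a bit more concrete, here is a minimal sketch of the value curves I have in mind. The deadlines, residual values, and horizons are all made-up numbers, purely for illustration.

```python
# Three hypothetical shapes for how a task's value decays over time.

def sharp_decrease_to_zero(day, deadline=10):
    """Full value until the deadline, zero value afterwards."""
    return 1.0 if day <= deadline else 0.0

def sharp_reduction(day, drop_day=10, residual=0.2):
    """Full value until some point, then a much lower (but nonzero) value."""
    return 1.0 if day <= drop_day else residual

def gradual_reduction(day, horizon=100):
    """Value declines a little each day until the end of the horizon."""
    return max(0.0, 1.0 - day / horizon)

for day in (0, 5, 10, 11, 50, 100):
    print(day,
          sharp_decrease_to_zero(day),
          sharp_reduction(day),
          round(gradual_reduction(day), 2))
```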

Comment by Joseph Lemien (jlemien) on FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator · 2023-02-03T23:52:22.663Z · EA · GW

Thanks for the links. I'll explore those, and maybe even end up updating an overly conservative perspective on safe withdrawal rates and long retirements.  :)

Comment by Joseph Lemien (jlemien) on FIRE & EA: Seeking feedback on "Fi-lanthropy" Calculator · 2023-01-30T21:45:31.973Z · EA · GW

I think it looks really nice and is easy to use. I like this a lot, especially the emphasis on how investing early can enable you to give increasingly large amounts in the future.

Off the top of my head the only changes I would make would be to either

  • Tweak the safe withdrawal rate info. Either 
    • choose 3.5% as the default safe withdrawal rate rather than 4% (mainly because 4% is generally viewed as acceptable only for a retirement of about 30 years; more info at Early Retirement Now). (edit: 4% works pretty well for a 30-year retirement, but for retirements that last longer than 30 years a 4% rate becomes increasingly risky. Michael Kitces recommends a 3.5% rate for a retirement of 40 or 50 years (about 31 minutes in).)
    • or add a few words about the safe withdrawal rate, such as: 4% is commonly used in the U.S. for retirements lasting up to 30 years, meaning your FI number is equal to 25x your annual expenses. Lower rates are recommended for longer retirements. (There is a short arithmetic sketch of this after the list below.)
  • Clarify what the DONATION RATE IMPACT TO TOTAL FI NUMBER chart means.
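
Here is the short arithmetic sketch mentioned above; the annual-expenses figure is just a made-up example. It only shows how the FI number falls out of the safe withdrawal rate.

```python
# Illustrative arithmetic: FI number = annual expenses / safe withdrawal rate.

annual_expenses = 40_000  # hypothetical yearly spending in USD

for swr in (0.04, 0.035):
    fi_number = annual_expenses / swr  # portfolio needed to sustain this withdrawal rate
    multiple = 1 / swr                 # the same thing as a multiple of annual expenses
    print(f"SWR {swr:.1%}: FI number = ${fi_number:,.0f} ({multiple:.1f}x annual expenses)")
```
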
Comment by Joseph Lemien (jlemien) on Karma overrates some topics; resulting issues and potential solutions · 2023-01-30T21:26:35.702Z · EA · GW

I love that you have text describing the images, making this post more accessible to people who use screen readers. It makes me happy to see that accessibility was a consideration for you in writing this post.

Comment by Joseph Lemien (jlemien) on Native English speaker EAs: could you please speak slower? · 2023-01-27T23:03:46.798Z · EA · GW

For what it is worth, I've often had different experiences:

  • sometimes I simply didn't hear clearly. Maybe the person wasn't speaking loudly, I wasn't paying attention, or there was a lot of background noise. So the problem isn't one of sound reaching my ears and my brain struggling to process the information, but instead one of not enough sound reaching my ears.
  • sometimes I didn't understand the words or the phrasing that was being used, and rephrasing the same concept using different words allowed me to understand.

Off the top of my head I can't think of any heuristic (other than observing your conversation partner or asking for clarification) to figure out what the specific point of failure is when someone doesn't understand.

Comment by Joseph Lemien (jlemien) on Native English speaker EAs: could you please speak slower? · 2023-01-27T23:00:11.979Z · EA · GW

I'm a native English speaker who lived in a country with a different language for most of my adult life. There is a huge difference between someone who consciously adapts their speaking and a random person on the street. The difference can be as stark as me understanding 95-99% of what is said and me understanding 20-50% of what is said.

Adapting one's speaking for a non-native speaker is a distinct skill, and (my guess is that) most native English speakers have never even considered it. While things like accent are harder to be aware of and to control, word choice and speed are easy to adapt. For anyone looking for tips, here is what helped me:

  • Speaking at a normal pace or at 90-95% of a normal pace was very helpful, as opposed to speaking at an I'm-so-excited-about-this pace.
  • Using words that are common (such as "location" or "building" rather than "venue") was a big help for me. Often when I didn't understand something, it was simply because the speaker used a word I didn't know when I did know a more common/simple word for the same thing (such as using the word "residence" or "domicile" instead of the word "house" or "home").
  • Prefacing or giving a subject line. Instead of starting a new topic directly, first say "I'd like to talk about TOPIC."

It can be easy to accidentally appear patronizing if you slow down too much or over-enunciate, so try to be aware of the other person and of your social dynamics with them.

Comment by Joseph Lemien (jlemien) on Slightly against aligning with neo-luddites · 2022-12-30T15:34:22.205Z · EA · GW

I agree that in the abstract a 50:1 benefit:cost ratio sounds great. But it also strikes me as naïve utilitarianism (although maybe I am using that term wrong?). To make it more concrete:

  • If you have a book that you enjoy reading, can I steal it and copy it and share it with 50 of my friends?
  • Is you stealing $100 from me justified if it generates far greater value when you donate that $100 to other people?
  • If we can save 50 lives by killing John Doe and harvesting his organs, does that justify the act?
  • If I can funnel millions or billions of dollars toward highly effective charities by lying to or otherwise misleading investors, does that benefit justify the cost?

These are, of course, simplistic examples and analogies, rather than some sort of rock solid thesis. And this isn't a dissertation that I've thought out well; this is mostly impulse and gut feeling on my part, so maybe after a lot of thought and reading on the topic I'll feel very differently. So I'd encourage you to look at this as my fuzzy explorations/musings rather than as some kind of confident stance.

And maybe that 50:1 example is so extreme that some things that would normally be abhorrent do actually make sense. Maybe the benefit of pirating an ebook (in which one person has their property stolen and thousands of people benefit from it) is so large that it is morally justified. So perhaps for my example I should have chosen a more modest ratio, like 5:1. 😅

I'll also note that I think I tend to lean a bit toward negative utilitarianism, so I prioritize avoiding harm a bit more than I prioritize causing good. I think this gives me a fairly high bar for these kinds of "the ends justify the means" scenarios.

Comment by Joseph Lemien (jlemien) on Slightly against aligning with neo-luddites · 2022-12-30T03:13:12.005Z · EA · GW

In that case, perhaps instead of phrasing it as "utilitarianism should be bounded by deontology" I should have instead phrased it as something along the lines of "a large benefit from this system doesn't justify the harms of creating this system." The general idea that I am trying to gesture toward is that when the piece of art someone created is used in a way that they do not consent to, the use benefiting someone doesn't necessarily make it okay. So while the value might be -1 over here and +50 over there, I (as a layperson, rather than a law maker) don't think that should be used as justification. If the creator gives informed consent, then I think it sounds fine. I know that I would feel really shitty if I spent time making something, sold copies of it, and then found that someone had copied my creation and was distributing variations on it for free.

Perhaps one area where I wasn't clear is that rather than a profession simply fading away (such as those made obsolete by the invention of digital spreadsheets or automobiles), the "harm" I am referring to is an artist's work being copied without his/her permission (or stolen, or used without consent, or pirated). So perhaps I've misunderstood your perspective here. I understood your perspective to be "The value from a really good entertainment generation system would be so large that it would be justified to not pay the artists for their work." But perhaps when you referred to lost income you meant the future of their profession, rather than simply not paying for their work?

Such compensation is often recommended by economists as part of a package for turning Kaldor-Hicks improvements into Pareto improvements, but I have yet to hear of such a proposal from strict deontologists before. Have you? 

No, I have not heard of such a proposal from a strict deontologist. But to my knowledge I've also never had any interaction with a strict deontologist. 😅

EDIT: My views are probably quite influenced by recently learning about Lensa scraping artists' work without their consent. If I hadn't learned about that, then I probably wouldn't have even thought about the ethics of what goes into a content generation system.

Comment by Joseph Lemien (jlemien) on Slightly against aligning with neo-luddites · 2022-12-29T13:55:05.333Z · EA · GW

While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.

This seems like an area in which utilitarianism should be bounded by deontology.[1] The reasoning here seems to be roughly "the value of this scenario is so high that I am okay with the harm it causes."

There are options other than "artists lose their income by having their work pirated/stolen, copied, and altered" and "humanity never unlocks that value." For example, paying artists for their work.

  1. ^

    Vaguely parallel to how democracy should be bounded by liberalism: 9 people from group A voting to remove the rights of 1 person from group B is democratic, which is the simplistic example of why the liberal protection of rights matters.

Comment by Joseph Lemien (jlemien) on Some notes on common challenges building EA orgs · 2022-12-05T01:55:28.414Z · EA · GW

I've had vaguely similar thoughts about youth and levels of professional experience, but you have articulated this much better than I have. Thanks for writing this.

I'd be very happy to see an "EA managers" Slack (or some other forum/conversation space/community), and I would gladly join.

Comment by Joseph Lemien (jlemien) on Savings as donations · 2022-11-07T18:39:43.418Z · EA · GW

Oh, Australia. I fell prey to the common mistake of "assuming other people are like me." I know a good deal about personal finance in a USA context, but only parts of that are universal: good chunks of it are particular to a specific national context. The national context matters a lot in personal finance issues.

Your idea of "have a little money that's easily accessible and most of it in a trust" does make sense. Have an 'emergency fund' or 'support myself fund' with enough money for a year or two of expenses, and then have everything else in a fund that transfers X% into your 'support myself fund' each year (or 1/12th of X% each month). If you do it right, the trust should grow indefinitely, and the inflow to your 'support myself fund' will be larger than your expenses.

I don't think I have anything particularly wise or useful to write about the whole 'trusting your future self' topic. But I imagine that there are personal finance professionals who have done research about that type of thing. It might take some poking around to find it, though.

Comment by Joseph Lemien (jlemien) on tobyj's Shortform · 2022-11-07T17:49:24.074Z · EA · GW

I'd be interested to read a post from you regarding the illegibility of EA power structures. In my head I roughly view this as sticking to personal networks and resisting professionalism/standardization. In a certain sense, I want to see these systems/organizations modernize.

A quote from David Graeber's book, The Utopia of Rules, seems vaguely related: "The rise of the modern corporation, in the late nineteenth century, was largely seen at the time as a matter of applying modern, bureaucratic techniques to the private sector—and these techniques were assumed to be required, when operating on a large scale, because they were more efficient than the networks of personal or informal connections that had dominated a world of small family firms."

Comment by Joseph Lemien (jlemien) on Savings as donations · 2022-11-06T23:12:46.237Z · EA · GW

From my limited knowledge of finance and law, I think that a trust would do everything you are looking for: you put your money into the trust, and the trust follows particular rules that you set up. Rules such as "give me a 4% distribution annually" and "give all the money to X upon my death" would be pretty easy to set up. The idea of borrowing against it might be a bit trickier.

But I think that the advantage of having the money outside of your control is relatively minor, while the disadvantages seem a bit larger. If you really do not trust your future self, then it might be worth it to set up a trust. But in general I would simply recommend putting the money into an instrument like a 401k or an IRA so that you are able to access the money early in case of emergency, but with a financial penalty to motivate you to not touch it. Excess money can be invested into a target date retirement fund in a brokerage account. Overall, I'm not convinced that the idea is worth doing, although I do find the concept interesting.

A very nit-picky note: the study famous for suggesting that withdrawing 4% of your assets annually is sustainable only examined retirements of 30 years: the portfolio had a .95 probability of still holding 0 or more dollars after 30 years. If you plan to live off of your investments for more than 30 years, then 3.5% should serve you pretty well (all the normal caveats apply: allocation matters, sequence-of-returns risk matters, market performance matters, etc.).

Comment by Joseph Lemien (jlemien) on Recruiting Skilled Volunteers · 2022-11-03T23:45:14.330Z · EA · GW

The Intro EA Program might be a good way to get more familiar with some ideas and mental tools/models that are common in EA. Doing Good Better is an introduction to a lot of EA ideas that is fairly easy to read. Scout Mindset would also be a good book to read (less for understanding EA, and more for understanding an approach to the world of figuring out what is true, rather than fighting for what I believe to be true).

If you are in San Francisco (or the greater Bay Area) then it might be feasible for you to meet other EAs in person and get input on how to make your project/effort better.

If you want to adapt some EA-esque practices, then measuring your impact (such as lives saved per 10,000 dollars spent, or years of incarceration prevented per workshop, or job placements achieved per SOMETHING) could be a good start. It is hard to do monitoring and evaluation well, but I'd encourage you to not let the perfect be the enemy of the good. Once you know your impact and input per unit of impact, then you can compare, optimize, pivot, and so on.

Cause neutrality is a fairly important idea in EA. While I don't think any person is truly and absolutely neutral about causes (we all have some things that resonate with us more, or pet projects that we simply care more about), in my mind the Platonic ideal of an EA would do a pretty good job of setting aside personal biases/connections/preferences and simply doing what accomplishes the most. I'm certainly not there (I work in HR for crying out loud 😅), but it is an aspirational ideal to strive for.

In general the bar for EA projects is set pretty high. A lot of EAs might look at an electrical engineering training program and think something like:

It is great to help these kids, but for the same amount of money/time/effort as helping these ten kids each learn how to build a boombox, I could help ten other kids get an extra 15 years of healthy life. One of these needs is gonna go unmet regardless (because we have limited resources), so I'm gonna make the tough choice and put my resources in a project that will have a bigger impact (while at the same time desperately wishing that I could fully fund/support both of these projects, because from what I can tell they both make the world a better place).

Comment by Joseph Lemien (jlemien) on Recruiting Skilled Volunteers · 2022-11-03T15:42:35.337Z · EA · GW

The EA community might be an appropriate community for you, but it is hard to say this with any level of confidence since I know so little about your projects, goals, impacts, motives, etc.

A good question to ask yourself: what if you had some evidence showing that your project was having a negligible influence on the result that you wanted? Or even worse, what if it showed that your project had a negative impact? If you would change your behavior based on these facts, then that is an indicator that EA might be a good fit.

Another thing to ask yourself: if your goal is to give people more economic opportunities, is training teens in California the best way to do that? If you were instead to train teens in Dakar, or New Delhi, or Mexico City, would that get more "bang for your buck"? Or if you were to fund school uniforms for young girls in rural [insert country here], would each dollar spent generate more in lifetime earnings than your electrical engineering training program? These are the kinds of questions that an EA might ask himself/herself about a project like yours.

Comment by Joseph Lemien (jlemien) on Where in the USA should an EA live? · 2022-11-02T18:44:24.718Z · EA · GW

While there are definite downsides to living in the Bay Area, many of them are lessened/eliminated for my situation by the fact that I will be working remotely.

I'm a 35-year-old, so I am a bit cautious about moving to an area that is predominantly a "college town." But assuming that you would recommend Berkeley for working professionals (or for anyone else who isn't a student), are there any specific neighborhoods or areas of Berkeley that you would recommend?

Comment by Joseph Lemien (jlemien) on Where in the USA should an EA live? · 2022-11-01T22:29:56.389Z · EA · GW

I'll be working at Centre for Effective Altruism on the People Operations team.

Comment by Joseph Lemien (jlemien) on Where in the USA should an EA live? · 2022-10-31T16:17:40.615Z · EA · GW

Assuming that it's financially feasible to live in any of these four cities (San Francisco/Berkeley, New York, Boston, or Washington DC), how would you prioritize them? Any reasons a person should choose one over another?

Comment by Joseph Lemien (jlemien) on Cultural EA considerations for Nordic folks · 2022-10-14T02:50:42.484Z · EA · GW

This is useful for people outside of Nordic countries also. I'm an American who has spent the past decade outside of the US, and it is strange and interesting to read about the hype, the group living, and the polyamory. I'd love to see guides like this expanded (so that the guide covers different EA communities and their cultures) and deepened (so that it covers more details about each EA community and its culture).

Comment by Joseph Lemien (jlemien) on Ask (Everyone) Anything — “EA 101” · 2022-10-09T21:32:40.555Z · EA · GW

Why don't we discount future lives based on the probability of them not existing? These lives might end up not being born, right?

I understand the idea of not discounting lives due to distance (distance in time as well as distance in space). Knowing a drowning child is 30km away is different from hearing from a friend that there is an x% chance of a drowning child 30km away. In the former, you know something exists; in the latter, there is only a probability that it exists, and you apply a suitable level of confidence in your actions.
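
To make the arithmetic behind my question concrete, here is a minimal sketch; the probability and weighting are purely illustrative numbers, not a claim about actual population forecasts.

```python
# Purely illustrative: weighting a future life by the probability that it ever exists.

p_exists = 0.6          # hypothetical probability that this future person is ever born
value_if_exists = 1.0   # moral weight of one life, normalized

expected_weight = p_exists * value_if_exists
print(expected_weight)  # 0.6 -> the probability-discounted weight I have in mind
```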

Comment by Joseph Lemien (jlemien) on We all teach: here's how to do it better · 2022-10-03T06:51:04.336Z · EA · GW

It will take me some time to digest the information in this post, and I've only skimmed through part of it so far, but I want to express my appreciation for providing such a resource-dense post.

Comment by Joseph Lemien (jlemien) on Any recommendation for how to explain EA funds grants to friend? · 2022-09-26T22:32:33.487Z · EA · GW

I imagine that such a post could be quite helpful for other young people who are considering applying for funding, and it could also be helpful for other people to understand more of this "ecosystem." I, for one, would be interested to read your story.

Comment by Joseph Lemien (jlemien) on John Bridge's Shortform · 2022-09-25T22:52:53.462Z · EA · GW

I agree. It seems like a highly impactful thing, with a high level of uncertainty. The normal way of reducing uncertainty is to run small trials. My understanding of this concept from the business world is the idea of Fire Bullets, Then Cannonballs. But (as someone with zero technical competence in AI) I suspect that small trials might simply not be feasible.

Comment by Joseph Lemien (jlemien) on The (Allegedly) Best Business Books · 2022-09-12T11:51:17.137Z · EA · GW

Thanks for mentioning it. I've never heard of that, but it seems like it has some cool stories. I'll add it to my (ever growing) want-to-read list.

Comment by Joseph Lemien (jlemien) on My emotional reaction to the current funding situation · 2022-09-12T00:47:50.650Z · EA · GW

Thanks for writing this. I've had very little in-person interaction with EAs, but even having only read about it on the forum the whole class/wealth/money issue is something that has often made me feel weird, too.

Comment by Joseph Lemien (jlemien) on Who are some less-known people like Petrov? · 2022-09-06T22:06:30.556Z · EA · GW

This is much smaller and much more recent, but Li Wenliang seems to have been somewhat idolized for sharing information about COVID in December 2019. At that point the local government policy on COVID was roughly similar to what policy had been on the SARS epidemic: keep it quiet and don't let people know what is happening. Local police/government officials were not happy that he shared this information. He died from COVID in February 2020. I haven't crunched the numbers, but it seems reasonable that X number of severe illnesses and Y number of deaths ended up not happening because he shared information about COVID.

Comment by Joseph Lemien (jlemien) on Open EA Global · 2022-09-03T00:53:40.827Z · EA · GW

because you can't identify promising people

you'll only get the people who most legibly conform to your current model of what's promising

I want to state my strong agreement with these ideas. It isn't hard to come up with dozens of examples of people who didn't seem particularly impressive and then went on to be much more impressive than any reasonable observer would have expected.

I would also be surprised if EAs (a community of people who think about scope insensitivity, moral cluelessness, and similar ideas that I roughly categorize as "intellectual humility") are able to identify talent confidently in advance.

I'm a bit worried that current trends are making EA somewhat insular along the lines of class/socioeconomic status, as the legible things (attending Stanford, doing internships, networking) tend to be strongly correlated with growing up in a wealthy family, and are much harder to do/obtain if you grow up without money. I don't have enough information, nor have I put in enough thought, to expand upon this idea, but it is something I'm interested in exploring more.

Comment by Joseph Lemien (jlemien) on Hiring: The Ignored Resource of Rejected EA Job Candidates · 2022-08-30T12:53:18.645Z · EA · GW

I strongly agree that better rejection emails would be helpful. A handful of templates would likely suffice. Heck, "rejection email templates" could even be crowdsourced by the EA community, with a Google Doc listing a dozen or more of the most common reasons why applications are rejected, and each of those reasons having two or three polite, respectful, professional, firm messages available to plug and play. A rejection email for a candidate rejected for insufficient professional experience doesn't need to be very different coming from Organization A than from Organization B.

Last week I applied to a job for which I have several years of directly relevant experience, and I received nothing more than an email stating "Thank you very much for your interest... We appreciated the chance to learn more about you and your professional experience. Unfortunately, we won’t be moving forward with your application at this time." This would have stung a lot less if they had said any of the following:

  • although you seem to be qualified for this position, we have chosen to move forward with an applicant that has even stronger qualifications.
  • we want an applicant to start by DATE1, and you indicated that you are not able to start until DATE2.
  • you had typos in your resume, and based on that we chose to reject your application.
  • we would prefer a candidate that has more experience working with X.
  • we would prefer a candidate that has skill Y.
  • Z is important to us, and this was not demonstrated by your application materials.
  • we didn't see anything in your application to suggest that your commitment to CAUSE meets our expectation.
  • we spoke to some friends of ours about you, and it turns out that everyone we spoke to thinks that you suck and that you would be a horrible addition to our team.[1]

 

  1. ^

    This one is a joke, but only somewhat. Sometimes I think that maybe this is what happens. Considering the level of transparency there is in the process, for all I know this is what happens.

Comment by Joseph Lemien (jlemien) on Open Phil is seeking bilingual people to help translate EA/EA-adjacent web content into non-English languages · 2022-08-24T02:39:45.028Z · EA · GW

Strongly recommend changing the recommendation from "bilingual people" to "translators." While many bilingual people are easily able to do casual translation, someone who is actually trained as a translator and who works as a professional translator will generally do a much better job.

Merely being bilingual doesn't qualify someone to be a translator, any more than my ability to use a keyboard qualifies me as a stenographer or my ability to ride a bicycle qualifies me to be a bicycle messenger.

Comment by Joseph Lemien (jlemien) on Should EA shift away (a bit) from elite universities? · 2022-08-22T23:20:43.337Z · EA · GW

I agree that it is pretty sloppy/rough. Can you share any suggestions for better proxies?

Comment by Joseph Lemien (jlemien) on Perhaps the highest leverage meta-skill: an EA guide to hiring · 2022-08-22T11:44:33.575Z · EA · GW

If anyone is interested in using the topgrading method, the guy who invented/designed it wrote a whole book about it. It goes into more detail than Who, and if you really want to implement topgrading then it is very helpful.

Comment by Joseph Lemien (jlemien) on Should EA shift away (a bit) from elite universities? · 2022-08-21T23:08:54.486Z · EA · GW

I think that you make a good point. The narrative of "bigger = better" is a vast simplification. Perhaps there are other useful factors in addition to student population that we can look at, such as "% of students majoring in non-profit management, environmental studies, etc." as a rough proxy for the level of "proto-EA-ness" in a student population.

I wonder if there is some good-enough-to-be-useful way to evaluate the prevalence of Proto-EAs on a university campus. I'm trying to think of how to create a rough/toy function of: student population, prevalence of Proto-EAs (as measured by some proxy)... but what other factors would be useful?
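
As a very rough illustration of the kind of toy function I have in mind (every number and weight below is made up, and the factors are only placeholders):

```python
# Toy estimate: how many proto-EAs might a campus group plausibly reach?

def expected_proto_eas(student_population, proto_ea_prevalence, outreach_reach=0.5):
    """Crude product of population, a prevalence proxy, and how much of campus the outreach reaches."""
    return student_population * proto_ea_prevalence * outreach_reach

# Hypothetical comparison: a small elite school vs. a large state school.
print(expected_proto_eas(7_000, 0.02))    # smaller school, higher prevalence proxy -> 70.0
print(expected_proto_eas(50_000, 0.005))  # larger school, lower prevalence proxy -> 125.0
```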

If "eliteness" really is a useful metric, then maybe it would make sense to prioritize university outreach to the top X universities, but maybe X should be 30 or 50 or 80 rather than 10.

Comment by Joseph Lemien (jlemien) on A review of the Schwarzman Scholars program for EAs · 2022-08-21T02:54:10.571Z · EA · GW

Thanks for taking the time to type all that out. I really appreciate that you gave thoughtful responses. :)

Comment by Joseph Lemien (jlemien) on A review of the Schwarzman Scholars program for EAs · 2022-08-20T01:00:41.871Z · EA · GW

Sure, I'll share what I've heard and what I suspect. Each of these is either second-hand or conjecture about how the Schwarzman program functions. I don't have any personal experience of the program. I also think that as an American I am probably using American universities as my default point of comparison.

  • General China stuff
    • My first thought is broad and general, applying to all academic programs in China: academic programs in China are often poorly managed (at the administrative/management level) and of low quality. Does the Schwarzman program fall into this trend?
    • There are also general risks of being in China. COVID has been scary and challenging, most foreign students have left, and my impression is that most of them had their scholarships stopped. Many Chinese universities locked students on campus (or booted them off campus) at various times during 2020 or 2021. I haven't heard how Schwarzman handled the various waves of COVID lockdowns that Beijing has gone through.
    • Programs for foreign students in China often tend to be very insular and disconnected from the rest of the campus. It is common for 4 or 6 or 8 Chinese students to share a dormitory room, while foreign students are often perceived as getting special treatment when they have only 1 or 2 students in a room. A lot of foreign students in China mainly interact with other foreign students, with occasional cultural excursions. My guess is that students within the Schwarzman program are a bit more integrated with each other, but that there is relatively little interaction with people outside Schwarzman.
    • Are foreign students at Schwarzman able to open a bank account and use WeChat Pay, Alipay, and similar apps in China? I imagine that life in China would be quite a bit harder if you aren't able to use all the conveniences that come with mobile payments. Are foreign students able to get a Health Kit?
    • Learning Chinese language at universities in China tends to follow a very "traditional" model of language learning, in which a group of students works their way through a textbook with plenty of not-very-useful words, and in which a lot of class time is spent on the teacher speaking a sentence and the students all repeating it slowly. How were the Chinese language classes?
  • Academic freedom 
    • Academic freedom is an issue at pretty much every institution in China. There are simply some topics which are considered sensitive in China that you can't talk about, study, or discuss.
    • Students at Schwarzman didn't like Donald Trump, but Stephen A. Schwarzman liked Donald Trump a lot, and the staff asked students not to voice anti-Trump opinions while they were part of the Schwarzman program. I understand that it is kind of rude to accept someone's money and then insult their politics, but the institutional response (of just telling students they shouldn't express opinions that run counter to the funder's) strikes me as fairly poor.

Comment by Joseph Lemien (jlemien) on A review of the Schwarzman Scholars program for EAs · 2022-08-16T02:16:57.248Z · EA · GW

Could you also mention some of the negatives? It is very Chinese to write a review that only mentions the positive aspects.

I have a few suspicions of negative aspects of the Schwarzman program, but these are from my friends/contacts rather than from my own personal experience. Rather than repeating what I've heard from my own network, I'd like to ask what you think the downsides and the bad parts of this program are.

Comment by Joseph Lemien (jlemien) on EA Organizations Hiring Frustration · 2022-08-11T23:32:07.316Z · EA · GW

I would love to see hiring done better at EA organizations, and if there was some kind of "help EA orgs do hiring better" role I would jump at the chance. Judging and evaluating people is such a raw, human process, and it should be done with some compassion and care.

I've done quite a bit of hiring over the years for non-EA organizations. Generally speaking, any organization that systematically neglects applicants is losing potential talent and damaging their reputation. Part of hiring is convincing the candidate that this is a place they would want to work, and many organizations forget about that.[1]

While this is more conjecture than data, my impression is that many EA organizations are run by people who are young and relatively inexperienced, and who haven't spent a lot of time in the professional world.[2] I don't like how it feels kind of patronizing to write this, but I think a lot of people just don't know any better: they either haven't learned this, haven't spent the time to think about it, or haven't yet implemented it.[3]

Regarding the specific issues you mentioned:

  • Organizations not updating applicants on the status of the application is unfortunately common, and usually comes down to a mixture of "it isn't a priority because it doesn't add value" and "we are too busy." Ideally, as soon as a decision either way is made the applicant is informed.
  • Ignoring emails is rare, and I can't think of many situations where, as a hiring manager, I would consider that acceptable. I do know that when I ran hiring campaigns in the past I found it annoying to get multiple emails requesting status updates two days after I had already told people we would let them know by the following week, and it is quite possible that sometimes more appropriate emails get lost in the pile of inappropriate ones.
  • Showing up late to interviews is a serious problem, and if somebody on my team did that more than once I would have a serious talk with them about how important it is to make a good impression on the applicant. Often, the interviewer is representing the whole organization, so if the interviewer is unprepared or late or gives any other negative first impression to the candidate, then the candidate will develop a negative perception of the whole organization.
  1. ^

    I can't tell you the number of hiring processes I've gone through as an applicant in which there was minimal (or no) effort from the organization to learn about what I want in my next job, or to show me how this role would be enjoyable/fulfilling.

  2. ^

    I think that there are lots of foolish/bad/silly things in the professional world, but one thing that I think has some value is managing impressions and appearing professional: there is a combination of calm/relaxed, competent, attentive/engaged, and friendly/warm that makes for a great experience.

  3. ^

    I've had interviews with EA orgs in which the interviewer appeared slovenly and scatterbrained, in which the interviewer appeared both to not have a clear idea of what the role would be and to not be listening to what I was saying, and in which irrelevant questions were asked. As a guy whose research interests and professional knowledge focus on hiring processes (and especially on hiring interviews), I found it particularly disappointing.

Comment by Joseph Lemien (jlemien) on 🌏 A hack to solve all of humanity's major problems · 2022-08-11T10:46:41.400Z · EA · GW

I think this is making too strong a claim based on fairly simplistic arguments. I'd find the whole argument more convincing if, instead of the overarching and fairly "fuzzy" label of democracy, there were more specific traits. I suppose that the more nuanced (and less pithy) version of your argument might be that "the higher a country rates on Freedom and Prosperity Indexes, the less likely it is to have problems A, B, and C."

I'm sorry for being the annoying pedant, but remember that correlation is not causation. Reality is messy and complicated, and while the narrative of "freedom causes wealth" is appealing in its simplicity, there are many other factors that affect wealth, happiness, and the absence of problems.