Comments

Comment by Charles He on How to PhD · 2021-03-29T00:09:56.574Z · EA · GW

This is so well written, so thoughtful and so well structured.

BE VERY CAREFUL NOT TO GET SUCKED INTO HORRIBLE PUBLISHING INCENTIVES.

This theme or motif has come up a few times. It seems important but maybe this particular point is not 100% clear to the new PhD audience you are aiming for.

For clarity, do you mean:

  1. On an operational or "gears" level: avoid activity driven by (maybe distorted) publication incentives? E.g. do not chase trends, fads or undue authority, or perform busy work that produces publications, maybe because these produce bad habits, infantilization, and distractions.

    or
     
  2. Do not pursue publications because this tends to put you down an R1 research track in some undue way, perhaps because it's following the path of least resistance.

 

 

Also, note that "publications" can be so different between disciplines. 

A top publication in economics during a PhD is rare, but would basically be worth $1M in net present value over the author's career. It's probably totally optimal to land such a publication, even in business, because of the signaling value.
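As a rough sanity check on that $1M figure (my own back-of-the-envelope numbers, not anything from the post): a salary premium of roughly $50k/year over a 30-year career, discounted at 3%, comes to about $1M.

```python
# Back-of-the-envelope NPV of a publication's salary premium.
# All three numbers are illustrative assumptions.
premium, years, r = 50_000, 30, 0.03
npv = sum(premium / (1 + r) ** t for t in range(1, years + 1))
print(f"NPV of the salary premium: ${npv:,.0f}")  # roughly $980,000
```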

Note that my academic institution is way below yours in prestige/rank/productivity. It would be interesting to know more about your experiences at MIT and what it offers.

Comment by Charles He on How much does performance differ between people? · 2021-03-27T18:40:49.199Z · EA · GW

Heyo Heyo! 

C-dawg in the house!

I have concerns about how this post and research are framed and motivated.

This is because the methods imply a certain worldview, and the work is trying to inform hiring or recruiting decisions in EA orgs, so we should be cautious.

Star systems

Like, loosely speaking, I think "star systems" are a useful concept / counterexample to this post.

In this view of the world, a field is a "star system" if a small number of people get all the rewards, but not from what we would comfortably call productivity or performance.

So, like, for intuition, most Olympic athletes train near poverty but a small number manage to “get on a cereal box” and become a millionaire. They have higher ability, but we wouldn’t say that Gold medal winners are 1000x more productive than someone they beat by 0.05 seconds. 

You might view "star systems" negatively because they are unfair—yes, and in addition to inequality, they may have very negative effects: they promote echo chambers in R1 research, and also support abuse like that committed by Harvey Weinstein.

However, “star systems” might be natural and optimal given how organizations and projects need to be executed. For intuition, there can be only one architect of a building or one CEO of an org.

It’s probably not difficult to build a model where people of very similar ability work together and end up with a CEO model with very unequal incomes. It’s not clear this isn’t optimal, or that it’s even “unfair”.
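For intuition, here is a minimal toy simulation (entirely my own sketch, with arbitrary parameters): people whose abilities differ by about 1% compete each period for a single winner-take-most slot, and end up with incomes far more unequal than their abilities.

```python
# Toy "star system": near-identical abilities, winner-take-most rewards.
import random

random.seed(0)
N, PERIODS = 50, 1000
PROJECT_VALUE = 100.0
WINNER_SHARE = 0.5  # assumption: the single "star" captures half the value

ability = [1.0 + random.gauss(0, 0.01) for _ in range(N)]  # ~1% spread
income = [0.0] * N

for _ in range(PERIODS):
    # Noisy performance each period; a tiny ability edge still wins more often.
    perf = [a + random.gauss(0, 0.02) for a in ability]
    winner = max(range(N), key=lambda i: perf[i])
    for i in range(N):
        share = WINNER_SHARE if i == winner else (1 - WINNER_SHARE) / (N - 1)
        income[i] += PROJECT_VALUE * share

print(f"ability spread (max/min):   {max(ability) / min(ability):.2f}x")
print(f"income spread (max/median): {max(income) / sorted(income)[N // 2]:.2f}x")
```

The point is just that the reward distribution comes out much more skewed than the ability distribution, which is the gold-medal intuition above.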

So what?

Your paper is a study or measure of performance. 

But as suggested almost immediately above, it seems hard (frankly, maybe even harmful) to measure performance if we don't take into account structures like "star systems", and probably many other complex factors.

Your intro is well written and very clear: it suggests we care about productivity because 1) it seems like a small number of people are very valuable, and 2) this matters in the most direct and useful sense of how EA orgs should hire.

Honestly, I took a quick scan (it’s 51 pages long! I’m willing to do more if there's a specific need in the reply). But I know someone who is experienced in empirical economic research, including econometrics, history of thought, causality, and how various studies, methodologies and world-views end up being adopted by organizations. 

It’s hard not to pattern-match this to something reductive like “cross-country regressions”, which are basically inadequate (one might say an also-ran or a reductive dead end).

Overall, you are measuring things like finance, number of papers, and equity, and I don’t see you making a comment or nod to the “Star systems” issue, which may be one of several structural concepts that are relevant.

To me, getting into performance/productivity/production functions seems to be a deceptively strong statement. 

It would influence cultures and worldviews, and greatly worsen things, if, for example, this was an echo chamber.

Alternative / being constructive?

It's nice to try to end with something constructive. 

I think this is an incredibly important area.

I know someone who built multiple startups and teams. Choosing the right people, from a cofounder to the first 50 hires, is absolutely key. Honestly, it’s something akin to dating, for many of the same reasons.

So, well, like my 15 second response is that I would consider approaching this in a different way: 

I think if the goal is to help EA orgs, you should study successful and unsuccessful EA orgs and figure out what works. Their individual experience is powerful: start from interviews with successful CEOs and work upwards to which lessons are important and effective in 2021 and beyond in the specific area.

If you want to study exotic, super-star beyond-elite people and figure out how to find/foster/create them, you should study exotic, super-star beyond-elite people. Again, this probably involves huge amounts of domain knowledge, getting into the weeds and understanding multiple world-views and theories of change.

Well, I would write more, but it's not clear there are more than 5 people who will read to this point, so I'll end now.

Also, here's a picture of a cat:

Comment by Charles He on Starting a Nonprofit: What to Work on and Why? · 2021-03-22T17:12:24.836Z · EA · GW

Hi Rasmus,

This seems fantastic, both for doing the work itself and sharing it!

I know someone who built multiple orgs. Based on this, startups seem to be a dizzying mess: basically you need to do everything at 1000% quality (e.g. early hires and strategic decisions) while having 1000% more tasks than time/energy to do them.

This makes writing about the process difficult (it's a surreal situation where it's hard to know what reality is; writing itself can create ideas and structures that may not be Truth).  

It's impressive that you are writing about it!

Now, I'm writing this comment because of what you said here:

Most of the writings are going to be very basic for most of you guys here, as it really is intended for a non-EA audience, but it might still be interesting to take a look behind the scenes.  

I don't know what this means, or more honestly, I disagree with it.

Even if someone has seen 100 startups, they would still benefit from learning about your particular startup and your experiences, especially when shared as honestly as you seem to.

Also, in your post you say:

How looking for inspiration in other countries helps shape the idea, building an MVP, getting together the initial team (and how we find unexpected help). There will for sure also be an article on how our egos almost got in the way of the easiest solution to the problem (and how it still might) and what we do to get around it.

 

How do you start a nonprofit? Why should you start a nonprofit? How do you have the biggest impact you can? How do you raise donations? How do you get a bank account? How do you become a registered charity?

It's not clear why the EA community would have advanced skills for building startups, non-profits, MVPs or teams. 

Really, what I'm saying is that it could be the opposite: this knowledge is new and valuable (if so, an easy read would be particularly valuable). 

(But maybe I'm wrong, maybe I'm misrepresenting your views, and maybe someone else reading this will correct me.) 

Thanks for your great post!

Comment by Charles He on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-19T01:32:23.175Z · EA · GW

These seem like great points, and of course your question stands.

I wanted to say that most R1 research is problematic for new grads: success is difficult, career capital is low, and frankly the "impact" can also be dubious. It is also hard to get started: it typically requires a PhD and post-doc(s), all poorly paid—contrast with, say, software engineering.

My motivation for writing the above is for others, akin to the "bycatch" article—I don't think you are here to read my opinions.

Thanks for responding thoughtfully and I'm sure you will get an interesting answer from Holden.

Comment by Charles He on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-18T17:20:58.560Z · EA · GW

Hi Brian,

(Uh, I just interacted with you but this is not related in any sense.)

I think you are interpreting Open Phil's giving to "Scientific research" to mean it is a distinct cause priority, separate from the others. 

For example, you say:

... EA groups and CEA don't feature scientific research as one of EA's main causes - the focus still tends to be on global health and development, animal welfare, longtermism, and movement building / meta

To be clear, in this interpretation, someone looking for an altruistic career could go into "scientific research" and make an impact distinct from "Global Health and Development" and other "regular" cause areas.

However, instead, is it possible that "scientific research" mainly just supports Open Philanthropy's various "regular" causes? 

For example, a malaria research grant is categorized under "Scientific Research", but for all intents and purposes is in the area of "Global Health and Development". 

So in this interpretation, funding sits in "Scientific Research" sort of as an accounting thing, not because it is a distinct cause area.

In support of this interpretation, taking a quick look at the recent grants for "Scientific Research" (on March 18, 2021) shows that most are plausibly in support of "regular" cause areas:

Similarly, sorted by largest amount of grant, the top grants seem to be in the areas of "Global Health", and "Biosecurity".
 

Your question does highlight the importance of scientific research in Open Philanthropy.

Somewhat of a digression (but interesting) are secondary questions:

  • Theory-of-change questions, e.g. about institutions, credibility, knowledge, power and politics in R1 academia, and how these could be edited or improved by sustained EA-like funding.
  • There is also the presence of COVID-19 related projects. If we wanted to press, maybe unduly, we could express skepticism of these grants. This is an area immensely less neglected and smaller in scale (?)—many more people will die of hunger or poor sanitation in Africa, even just indirectly from the effects of COVID-19, than from the virus itself. The reason this would be undue is that I could see why people sitting on a board with a large amount of money to donate would not sit idle during a global crisis, in a time of great uncertainty.

Comment by Charles He on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-18T14:20:47.459Z · EA · GW

I don't have the energy to fully engage with these, but maybe we just misunderstand each other in terms of what we define as UI/UX design. To me, and many other UI/UX designers, the UI/UX design is the end-to-end experience of using a website, product, or service, so I think everything I pointed out still falls into the realm of UI/UX design. It's not just about better interactions. And I think content choices / tradeoffs still can be considered part of the UI/UX design.

It seems improbable that the control or selection of specific content, especially the choices you illustrated, falls under the purview of UX.

That unworkably expands UX into decisions that are basically always controlled by other parts of the organization (e.g. exec).

To see this another way with examples: we would not accept execs blaming their UX designers for racist or inappropriate content. Similarly, a board would find it ridiculous if a CEO said their "community groups" initiative failed because their UX designer decided it did not belong on the front page.

I know someone who worked adjacent to this space (e.g. hiring and working with the people who hire UX designers). 

Someone presenting a UX design comprising the choices in your upper-level comment would risk being perceived as advancing an agenda.

Comment by Charles He on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-18T02:06:56.201Z · EA · GW

Hi Brian,

Thanks for the thoughtful reply.

There are some comments below. They verge on debate, but I am not trying to be contentious.

Comments on #1-3:

I think your points #1-#3 are more along the lines of a specific "business choice". Importantly, choices have drawbacks: promoting one aspect or feature in a limited space is a choice to use a limited resource. 

Based on what you said, it seems like #1 and #2 are important and valuable. If one of EA's core activities is its communities, that should be emphasized, and adding it would be a huge improvement. If EA's contributors are substantially non-white, this can't be neglected in photos.

Now, I personally like the idea of promoting communities and genuinely reflecting on the population. However, it also verges into what I might call "politics" or at least non-UX improvements.

Comments on #4:

Overall I think the visual design of CEA and GWWC's current websites are better than the EA website. I think CEA's is really good currently, mainly because of their use of nice photos, especially of people.

Below are the tops of these pages, and maybe what you are referring to:

The pages are excellent, but they are also not what I would call "UX design" as I imagined it. 

They use visual principles that I see commonly on many websites made in the last 5 years. 

To try to emphasize this: for a side project, someone I know created a similar page (similar, I think, in every sense: performance, design, and high-quality photos) in a few hours and it took off. I might be brutalizing/offending UX designers here.

Also, the main difference in design is simplicity of the elements; in particular, the CEA design is an extremely simple and effective "landing page". GWWC, also simple, presents a strong narrative in a top-down scroll. (I might be messing up terms of art.) 

The current EA website is busier: it has a few more elements, does not really use scrolling, and has more words. Again, as in my previous comment, it's not clear this is a bad thing and I might prefer it.  
 

Theme

The theme of this comment is that your reply is different from what I expected. I might have expected to learn of a "UX improvement" as some strictly better design choice ("stop using garish colors"), or a better mode of use in some sense ("swiping right on Tinder").

I agree that design (e.g. "minimalism" or something) might help EA and I wanted to learn what this is.

But my bias is to avoid technological solutions unless it's clearly needed. 

Also, if you have a distinct goal, "we need more non-white people in photos as it better reflects and welcomes the actual community", I would prefer to just state it rather than risk conflating a distinct objective with UX.

Also, really going off topic here, I would like to know more about your experiences with your ethnicity if you have them (note that technically I might have the same ethnicity as you). 

You seem to have a lot of thoughtful content and this would be an interesting perspective.

Comment by Charles He on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-17T21:21:29.385Z · EA · GW

Hi Brian,

As a good faith question, can you elaborate on your UX interests or concerns about the website or forum, if any? 

It seems your background gives you strong knowledge here, and you have asked a similar question repeatedly.

Basically, what can the UX do better? What does your ideal UX, or UX improvements look like? On another level, what are the goals it would achieve over the current design?

I use a variety of websites across a number of industries. I think there are drawbacks to slick and trendy websites. To make a point of it, sometimes websites that look old but "just work", can be great. They signal confidence and longevity. EA is far from a brutalist style or something, but the design seems like good "packaging" for the resulting EA experience (which frankly is a lot of reading). 

It seems to work and be approachable, and it's not clear to me why it's not "basically optimal".
 

Comment by Charles He on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-17T21:16:20.300Z · EA · GW

Is there an "API" for this forum, in order to access comments, posts and meta data?

If not, what is your perspective about scraping (purposes of performing analysis, extracting content, or use in other ways that might be "near-EA")?

Answers can be operational, personal opinion, legal, etc.
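To make the scraping part of the question concrete, here is roughly the kind of thing I have in mind. This is only a sketch: the URL and the link pattern are placeholder assumptions, an official API (if one exists) would be preferable, and any real scraping should respect robots.txt and rate limits.

```python
# Minimal scraping sketch for pulling post titles/links for analysis.
import requests
from bs4 import BeautifulSoup

url = "https://forum.effectivealtruism.org/allPosts"  # assumed listing page
html = requests.get(url, headers={"User-Agent": "ea-analysis-example"}, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Collect anything that links to a post; the selector would need adjusting
# to the forum's actual markup.
for link in soup.select("a[href*='/posts/']")[:10]:
    print(link.get_text(strip=True), "->", link.get("href"))
```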

Comment by Charles He on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-16T04:56:54.109Z · EA · GW

Question: 

What would you personally want to do with 10x the current level of funds available? What would you personally like to do with 200x the current level of funds available?

Some context:

There is sentiment that funding is high and that Open Philanthropy is “not funding constrained”. 

I think it is reasonable to question this perspective.

This is because while Open Philanthropy may have access to $15 billion of funds from the main donors, annual potato chip spending in the US may be $7 billion, and bank overdraft fees may be about $11 billion.

This puts into perspective the scale of Open Philanthropy next to the economic activity of just one country.

Many systemic issues may only be addressable at this scale of funding. Also, billionaire wealth seems large and continues to grow.

Comment by Charles He on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-16T04:52:24.827Z · EA · GW

Consider the premise that the current instantiation of Effective Altruism is defective, and one of the only solutions is some action by Open Philanthropy.

By “defective”, I mean:

A. EA struggles to engage even a base of younger “HYPS” and “FAANG”, much less millions of altruistic people with free time and resources. Also, EA seems like it should have more acceptance in the “wider non-profit world” than it has.

B. The precious projects funded or associated with Open Philanthropy and EA often seem to merely "work alongside EA". Some constructs or side effects of EA, such as the current instantiation of Longtermism and "AI Safety", have negative effects on community development.

Elaborations on A:

  • Posts like “Bycatch”, “Mistakes on road” and “Really, really, hard” seem to suggest serious structural issues and underuse of large numbers of valuable and highly engaged people
  • Interactions in meetings with senior people in philanthropy indicates low buy-in: For example, in a private, high-trust meeting, a leader mentions skepticism of EA, and when I ask for elaboration, the leader pauses, visibly shifts uncomfortably in the Zoom screen, and begins slowly, “Well, they spend time in rabbit holes…”. While anecdotal, it also hints perhaps that widespread "data" is not available due to reluctance (to be clear, fear of offending institutions associated with large amounts of funding).

Elaboration on B:

Consider Longtermism and AI as either manifestations or intermediate reasons for these issues:

The value of the present instantiations of "Longtermism" and "AI" is far more modest than it appears. 

This is because they amount to rephrasing of existing ideas and their work usually treads inside a specific circle of competence. This means that no matter how stellar, their activities contribute little to execution of the actual issues.

This is not benign because these activities (unintentionally) are allowing backing in of worldviews that encroach upon the culture and execution of EA in other areas and as a whole. It produces “shibboleths” that run into the teeth of EA’s presentation issues. It also takes attention and interest from under-provisioned cause areas that are esoteric and unpopularized.

 

Aside: this question would benefit from sketches of solutions and sketches of the counterfactual state of EA. But this isn't workable, as this question is already lengthy, may be contentious, and contains flaws. Another aside: causes are not zero-sum, and it is not clear the question contains a criticism of Longtermism or AI as a concern; even stronger criticism can be consistent with, say, ten times current funding.

In your role in setting strategy for Open Philanthropy, will you consider the above premise and the three questions below:

  1. To what degree would you agree with the characterizations above or (maybe unfair to ask) similar criticisms?
  2. What evidence would cause you to change your answer to question #1? (E.g. if you believed EA was defective, what would disprove this in your mind? Or, if you disagreed with the premise, what evidence would be required for you to agree?)
  3. If there is a structural issue in EA, and in theory Open Philanthropy could intervene to remedy it, is there any reason that would prevent intervention? For example, from an entity/governance perspective or from a practical perspective?

Comment by Charles He on AMA: Lewis Bollard, Open Philanthropy · 2021-02-26T16:04:35.070Z · EA · GW

Since my comment yesterday at 10:14 AM PST, there have been changes to your top level comment.

For example, your question, asking about a safe, global space with three specific goals, did not exist, and you have added a caveat saying that you do not know if these issues are widespread or common.

Other comments have appeared, such as from Daniela Waldhorn, who has described appalling abuse and who has accused current management of illegal and discriminatory practices.

I think my comment was reasonable because it was hard to understand what if any changes could be effected in response to your comment, or frankly, what the underlying situation was/is.

Despite your caveat, based on the comments, this abuse seems appalling and widespread. This appears to be a public issue that affects everyone in this space.

I think, based on some of the things you said about a lack of discussion or engagement related to respected or powerful people, engaging with certain comments here might support the objectives that you may be aiming for.

Also, unfamiliarity with your or Daniela's experiences does not mean personal unfamiliarity with similar experiences outside the space of animal welfare.

Comment by Charles He on AMA: Lewis Bollard, Open Philanthropy · 2021-02-25T18:14:16.730Z · EA · GW

Hi Ula,

Can you describe the problems more specifically and consider the thoughts below?

I think abuse or exploiting someone’s gender, race, or outright sexual misconduct is an abomination.

But what about the perspective that “bad work environments” are a separate, distinct issue from this kind of abuse?

Work environments in any organization can become terrible and this comes from mismanagement or predatory management.

Unfortunately, these issues might be systemic at all nonprofit organizations, because management ability and resources are low and the "business model" is very performative, which can reduce intellectual honesty. Also, a key resource is a stream of passionate and pliable volunteers, who are both difficult to manage and less able to resist abuse.

If this perspective is correct, it could be difficult to solve because these are root causes. For example, even if you could richly fund and staff a few organizations with great difficulty, you cannot police all organizations that would pop up.

I think my thought in my comment is basic and I may lack knowledge of the specific events.

What do you think about what I said?

Comment by Charles He on How to discuss topics that are emotionally loaded? · 2021-02-03T03:30:54.998Z · EA · GW

So I wrote some practical advice below. 

I think the author of the post seems pretty thoughtful and sophisticated, so maybe this is too basic or not what they want. 

But it is hard to be informative, as there is a lot going on in any possible answer:

  • Are Bob and Alice friends or just acquaintances?
  • Are they discussing views, or are they working on a project together?
  • What are the power dynamics between the two?
  • Are they communicating 1on1, or is there a performative aspect, e.g. Alice or Bob's opinion is dominating a group discussion?

I think Example 4 seems tough, and I'll give one way I would approach this. 

Example 4 - Caring about Animals:

Bob loves animals and has been vegan for many years. He notices that he gets angry that Alice argues that way, even if he can't pinpoint, why he thinks she is wrong. He wishes that Alice would just go with "These were animals once. With experiences. So eating meat is obviously wrong".

I think it is important to avoid conflicts, confrontation or enforcing views about improving animal welfare with people in animal welfare.

There is a long history of conflict between approaches or viewpoints in animal welfare movements, which is wasteful. 

I think animal suffering is so abhorrent that it makes many points of view reasonable. Alice's "rationality" viewpoint can effect change, and Bob's "emotional" viewpoint is understandable if you take animal sentience seriously.

Maybe use a preamble? 

I think one way to begin any emotionally difficult presentation of a strong view is to use a preamble that genuinely accepts the opposing viewpoint before making your own point. 

For Alice, she could say:

"I think that animals are sentient. They have souls. I know Bob knows this.

"I want to talk about [ content such as impact] because I think it can make a bigger impact for these animals, this is...[ begin content ]"

Some comments: 

  • I don't think the example here is perfect. But even this simple preamble, as long as it's genuine, seems to help a lot in difficult conversations. Saying something like this is easy once you practice it a few times; because it's easy to say, you can do it under emotional strain.
  • When things are really emotional, using informal language and directly addressing the person are good strategies, if it can be done naturally and genuinely. In the example, the preamble tries to find common ground (helping animals) and specifically acknowledges what Bob wants or feels.
  • When things are difficult, saying that you are upset explicitly due to the immediate circumstance often works well, and senior and powerful people do this.  You can say, "Look, this is an emotional subject for me and I'm upset about it. Now...[begin preamble]". It's still important to preserve gravitas and decorum, so be brief and state this unemotionally.  It's sometimes OK to be more defensive, and weaken the emotional aspects and use formal words you wouldn't normally say: "I find this untoward and problematic...".

It's possible that giving a preamble or communicating is not practical. For example, you may not even get a chance to make a long comment. 

If this is the case, or if both Alice and Bob are highly emotional, the conversation probably is not going to work and it's better not to do this. 

If there's resentment from past actions or a sense of underhandedness by either party, this makes a preamble or any communication difficult. In these cases I find resolving the issues as a whole impractical, and it's not going to work.

Not everything works, and that's OK. You can walk away, or there are other approaches you can take.

I think modesty goes a long way here. This is both in not enforcing your views onto others unless necessary, and also accepting views. 

There's a lot of emotional labor and empathy involved. Your energy is a limited resource. You don't have to do this or other efforts if the other person isn't listening or simply doesn't get it.

Comment by Charles He on (Autistic) visionaries are not natural-born leaders · 2021-01-31T16:58:01.074Z · EA · GW

Uh,

There’s a large, high effort comment chain surrounding the wording of “autism” in the title, but the differences in opinion seemed modest, and both original positions seemed reasonable and humble.

On the other hand, there's a much more substantive issue: two of the leaders mentioned in the post have terrible traits that I don’t think any effective altruists like.

These traits make them bad people and also bad “leaders” in a way that undermines the post.

In fact, the issues are so severe that I have to temper my views and omit details in this comment because one of the people mentioned is manipulative and personally vindictive on social media.

These two people are indeed far more influential and successful than almost any other human being. They’re much smarter than me too.

However, I doubt they are more intelligent or technically able than many effective altruists, and their success does not make them good leaders or role models. This is because their success and fame come from the outcome of the tech boom, more specifically from financial engineering and an ability to use narratives to extract talented labor in this environment.

Even with these tailwinds, these people are so self destructive that their excesses could have crushed them in a slightly different realization of events, such as missing a single key deal.

I don’t know them personally, but I believe I have inside knowledge of their behavior that strongly supports the idea that they are systemically predatory. I also have other information, such as direct accounts from financiers who view the leadership of one as an absolute negative.

By the way, the speaking style of one of these people is so pronounced that it is suspected of being a ploy specifically to extract goodwill for "non-NT" people. This seems plausible to me and makes their use as an example of "autism" succeeding particularly misleading.

There are dozens of people, and popular figures such as Bill Gates or Jeff Bezos would be much better examples for this post (setting aside the apparently large effort needed to drag around social constructs).

I know my comment seems unbecoming, but it is relevant to the author’s point in more than one way.

I would be interested in a counterpoint, but it seems difficult to get an informed opinion except from someone senior in these industries who has experience with senior leadership.

Comment by Charles He on Money Can't (Easily) Buy Talent · 2021-01-25T00:46:50.310Z · EA · GW

Hey, yo, Mark, It’s me, Charles.

What’s up?

So I’ve read this post and there’s a lot of important thoughts you make here.

Focusing on your takeaways and conclusion, you seem to say that earning to give is bad because buying talent is impractical.

The reasoning is plausible, but I don’t see any evidence for the conclusion you make, and there seems to be direct counterpoints you haven’t addressed.

Here’s what I have to say:

It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA

There’s a very direct way to get a sense if earning to give is effective, and that’s by looking at the projects and funds where earning to give goes, such as in the Open Phil and EA Funds grants database.

Looking at these databases, I think it’s implausible for me, or most other people, to say a large fraction of projects or areas are poorly chosen. This, plus the fact that many of these groups probably can accept more money, seems to be an immediate response to your argument.

These funds are particularly apt because they are where a lot of earning to give funds go to.

It seems that your post directly implies that people who earn to give and have donated haven’t been very effective. This seems implausible, as these people are often highly skilled and almost certainly think carefully about their donations. Also, meta commentary and criticism are common.

It seems easy to construct EA projects that benefit from monies and purchasable talent

We know with certainty that millions of Africans will die of malnutrition and lack of basic running water. These causes are far greater than, say, COVID deaths. In fact, the secondary effects of COVID are probably more harmful to these people than the virus itself.

The suffering is so stark that projects like simply putting up buckets of water to wash hands would probably alleviate suffering. In addition to saving lives, these projects probably help with demographic transition and other systemic, longer run effects that EAs should like.

Executing these projects would cost pennies per person.

This doesn't seem like it needs unusual skills that are hard to purchase.

Similarly, I think we could construct many other projects in the EA space that require skills like administrative, logistic, standard computer programming skills, outreach and organizational skills. All of these are available, probably by most people reading this post.


It seems implausible that market forces are ineffective

I am not of the “Chicago school of economics”, but this video vividly explains how money coordinates activity.

While blind interpretations of this idea are stupid, it seems plausible that money causes effective altruistic activities in the same way that buying a pencil does.

Why wouldn’t we say that everyone in an organization and even the supply chain that provides clean water or malaria nets is doing effective altruism?

I also don’t get this section “Talent is very desirable”:

Not quite. Assuming market efficiency, the savings of supplementing an altruistic individual is balanced by lost counterfactual donations. Suppose that Alice currently makes $200,000 a year and is willing to do direct work for $100,000 a year. Hiring Alice seems like a good deal - we get $200,000 of talent for $100,000. However, if Alice would do direct work for half the salary, she should also be willing to donate half her salary. Our savings are balanced by lost donations.

But in my mind, the idea of earning to give is that we have a pool of money and a pool of ex-ante valuable EA projects. We take this money and buy labor (EA people or non-EA people) to do these projects.

The fact that this same labor can also earn money in other ways doesn’t create some sort of gridlock, or undermine the concept of buying labor.

So, when I read most of your posts, I feel dumb. 

I read your post “The Solomonoff Prior is Malign”. I wish I could also write awesome sentences like “use the Solomonoff prior to construct a utility function that will control the entire future”, but instead, I spent most of my time trying to find a wikipedia page simple enough to explain what those words mean.

Am I missing something here?


What is Mark’s model for talent?

I think one thing that would help clarify things is a clearly articulated model where talent is used in a cause area, and why money fails to purchase this.

You’re interested in AI safety, of like, the 2001 kind. While I am not right now, and not an expert, I can imagine models of this work where the best contributions would supersede slightly worse work, making even skilled people useless.

For these highest-tier contributors, the ones making sure that HAL doesn’t close the pod bay doors, perhaps all of your arguments apply. Their talent might be very expensive or require intrinsic motivation that doesn’t respond to money.

Also, maybe what you mean is another class: an exotic “pathfinder” or leader model. These people are like Peter Singer, Martin Luther King or Stacey Abrams. It’s debatable, but perhaps it may be true that these people cannot be predicted and cannot be directly funded.

However, in either of these cases, it seems that special organizations can find ways to motivate, mentor or cultivate these people, or the environment they grow up in. These organizations can be funded with money.