Comments

Comment by Charles He on Clarifying the Petrov Day Exercise · 2021-09-26T23:52:08.790Z · EA · GW

 

Hey, I got an email with a code and then I entered it. 

What does it do? Is there a prize?

Comment by Charles He on Why I am probably not a longtermist · 2021-09-25T09:12:59.013Z · EA · GW

If we do not reach existential security the future population is much smaller accordingly and gets less weight in my considerations. I take concerns around extinction risks seriously - but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from 'extinction risks are rising so much, we must prioritize them!' to 'there is lots of value in the long-term future'. The latter is only true if we manage to get rid of those extinction risks.

I don’t understand. It seems that you could see the value of the long-term future as unrelated to the probability of x-risk. Then, the more you value the long-term future, the more you value reducing x-risk.

I think a sketch of the story might go: let’s say your value for reaching the best final state of the long-term future is "V".

If there's a 5%, 50%, or 99.99% risk of extinction, that doesn’t affect V (but might make us sadder that we might not reach it).

Generally (and assuming that x-risk can be practically reduced), the higher your value of V, the more likely you are to work on x-risk.

It seems like this explains why the two views, “extinction risks are rising so much, we must prioritize them!” and “there is lots of value in the long-term future”, are correlated. So these views aren't a contradiction.

Am I slipping in some assumption or have I failed to capture what you envisioned?
 

Comment by Charles He on Resources for EA-Adjacent, not "EA-Affiliated" Organizations? · 2021-09-21T01:48:08.998Z · EA · GW

Big disclaimers: I have limited knowledge of the medical field. I do not and cannot represent EA or its community. I strongly encourage anyone to reply if I said anything wrong, if for no other reason than to usefully update the OP.

Hi OP,

I view your post and thoughtfulness really positively. 

This answer takes a very positive/motivated view of you and focuses on the aspect of getting you funding and support:

 

Short summary 

  1. I think you want to apply for movement-building funding through the EA Infrastructure Fund.
  2. You want to speak to other senior EAs, especially if they are in the medical field (try to do this and get some friendly buy-in before applying to the fund mentioned above).
  3. Your description sounds like you need and should apply for maybe $1,000-$5,000. Depending on the scale of your org and your apparent success, you can get more funding in the future. 
     

Long answer:

I think there is a source of funding for you from the EA Infrastructure Fund. 

To explain more about getting this and maybe other funding, I'll start by writing a zoomed out, “low resolution" answer. This says that your chance of funding hinges on a “theory of change” sort of argument with two components: 
 

1) Let’s say the medical group you are in moves to EA. So what? What is the value of bringing your medical group over? Do you actually influence policy, etc? Is there an EA group/intervention/cause area that needs and can benefit from medical knowledge? (The answer is almost certainly yes; for example, newish areas like mental health, lead exposure, and air pollution need strong experts.)

2) How will you use the funding to move the medical group to EA? Will you influence a team of senior docs and the medical profession in your region? Are you conscientious, and do you have a baseline of ability? (You almost certainly do, but no one knows who you are yet.)
 

Here's a more pragmatic, higher resolution, expansion of the above:

To answer 2) you want to present credentials/status (are you a senior resident or senior physician?). Talk about your work, both professionally and in other movement-building roles, especially highlighting a narrative of success (you are personally driving change, not just occupying volunteer positions). Another great way to establish 2) would be to mention other people, especially senior people in EA in the medical field whom you already know and work with. 

To answer 1) you need to present a narrative of the value of the medical group (maybe with a little bit of background on the medical field as a whole and how it connects to policy/decision makers) and why this is valuable for EA. You can read forum posts on medicine-related topics, and get ideas from conversations with other EAs in the medical discipline.

Finally, the way things are playing out, I think EA funders will really like "field leading" people, e.g. the expert in air pollution, fetal alcohol, etc. (or at least one of the experts). If you can give a sincere signal that you can even just increase access to these people for EA, that's probably a useful angle for you. 

 

Other thoughts:

— Reading the payout reports for EA Funds can help give a sense of what funding is given and why, and I think you can find ways to fit in there.

— A very useful pattern for you (even or especially if you literally don’t know any other EAs right now) is to get a friendly conversation with an existing EA who is known in the community and is in the medical field. I hope someone will drop by this post and offer to chat, but if not, reaching out with a cold email/message is OK. 

— Another thing to keep in mind is that people often fund projects as an investment in the person, i.e., you personally. This angle could easily work, especially for lower amounts that seem to be implied. Funders may not be sure they can sway a major medical organization, but if they think there is some chance that you can be a leader, they may fund you.

— Finally, for the amounts you implied, especially at the lower end ($1,000), there is a strong chance some other org or person (even one that doesn’t do grantmaking or isn't involved in infrastructure) might offer to cover this for you. People might see this as a very low amount and you might get this offer outright, after a warm conversation.
 

Again, anyone reading this, please correct me if I am wrong or misleading. (I wrote this super quickly by the way, so please respond if anything I said was wrong or unclear.)


 

Comment by Charles He on Promising ways to improve the lives of urban rats in South Africa · 2021-09-20T18:49:49.238Z · EA · GW

Fantastic post and research!

I don't know much about rats, but I had them as pets in elementary school (they are quite affectionate), and they shouldn’t suffer like this.

This may not be in the scope of your article, but here are some questions that might be valuable to answer:
 

1. What is the total population of urban rats at stake in your area of South Africa? E.g., how many animals are there, and how many of them die badly? 

2. In your area, what is the total size of vertebrate urban wildlife and what is the relative number of rats (compared to pigeons, mice, etc.)? 

3. Why choose rats? Any particular reason?

4. Any guesses on the cost effectiveness of interventions? (maybe especially compared to established EA animal welfare interventions?) Cost effectiveness is difficult and important, so even a sketch would be useful, e.g. saying something like "Private conversations with experts suggested there is a sense that policy campaigns can be low cost" is useful.

5. Is there any "flow-through"/learning/"capacity building" value from executing the mentioned interventions on other animal welfare projects, especially worldwide, outside of South Africa?

 

Maybe you know the answer to some of the above, or maybe you have a link to someone who does. Or if you don't think anyone has an answer right now, it’s good to know this too!

I think answers to questions 4 and 5 would be very interesting and helpful to a new org or someone in this space.

Great article, this seems like a really important area!

Comment by Charles He on EA needs consultancies · 2021-09-18T19:23:54.169Z · EA · GW

I can see Jacob's perspective and how Linch's statement is very strong. For example, in development econ, in just one or two top schools, the set of professors and their post-docs/staff might be larger and more impressive than the entire staff of Rethink Priorities and Open Phil combined. It's very, very far from PlayPumps. So saying that they are not truth-seeking seems at least somewhat questionable.

At the same time, from another perspective I find reasonable, I can see how academic work can be swayed by incentives and trends and become arcane and wasteful. Separately and additionally, the phrasing Linch used originally reduces the aggressive/pejorative tone for me, certainly viewed through "LessWrong"-style culture/norms. I think I understand and have no trouble with this statement, especially since it seems to be a personal avowal:

I'm probably not phrasing this well, but to give a sense of my priors: I guess my impression is that my interactions with approximately every entity that perceives themself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways.

Again, I think there are two different perspectives here and a reasonable person could take up both or either. 

I think a crux is the personal meaning of the statement being made.

Unfortunately, in the last response I'm replying to, it now comes off as if Jacob is pursuing a point. This is less useful. For example, looking at his responses, it seems like people are just responding to "EA is much more truth seeking than everyone else", which is generating responses like "Sounds crazy hubristic...". 

Instead, I think Jacob could have ended the discussion at Linch's comment here, or maybe asked for models and examples to get a "gears-level" sense of Linch's beliefs (e.g., what's wrong with development econ, can you explain?). 

I don't think impressing everyone into a rigid scout mentality is required, but it would have been useful here.
 

Comment by Charles He on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-09-18T18:48:15.308Z · EA · GW

This is a great and deep comment.

I think it’s extremely generous to call my little blurb above an “analysis”. I am not informed and I am not involved in this area of prison or justice reform. 

I’m writing this because I don’t want anyone to “wait” on me or anyone else.

If you are reading this and want to dedicate some time on this cause or intervention, you should absolutely do so!

Again, thanks for this comment.

Comment by Charles He on [Creative writing contest] Blue bird and black bird · 2021-09-17T03:22:39.143Z · EA · GW

This is beautiful.

I think you put so many ideas together so well.

This is incredible, fantastic.

Comment by Charles He on Database dumps of the EA Forum · 2021-09-16T23:56:25.538Z · EA · GW

I was trying out OpenAI Codex after getting access this morning and the task of getting EA Forum data seems ideal for testing it out. 

Using Codex, I got code that seems like it downloads plain text comments from the forum at the rate of thousands per minute. 

It's a lot faster than writing the script by hand, and does nice things quickly (see time zone fixing).

I thought it was neat!

(This isn't hardcore code, but in case the OP, or anyone else needs raw forum data, and doesn't feel like writing the snippets, feel free to comment or send a message.)

 

Here's a picture of the OpenAI interface with the "prompt":

Actual Code (bottom half generated by OpenAI Codex):

"""
1. Get the last 100 comments from this internet forum  - https://forum.effectivealtruism.org/graphql
2. Put the contents and createdAt results of the query into a list called comments
4. Clean up the 'plaintextMainText' field by removing all formatting extra line returns
5. Save to dataframe and adjust to PST (Pacific Time Zones)
"""

import pandas as pd
import json
import requests

# Get the last 100 comments from this internet forum  - https://forum.effectivealtruism.org/graphql

# Use this GraphQL Query
query = """
{comments(input: {terms: {view: "recentComments", limit: 100}}) {
    results {
      _id
      post {
        postedAt
        _id
        title
      }
      user {
        _id
        slug
      }
      contents {
        plaintextMainText
      }
    }
  }
}

"""

# Put the postedAt and plaintextMainText into a list called comments, with error handling for missing values

comments = []

for i in range(0,10):
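    # Note: each pass of this loop sends the identical query, so it re-fetches the same
    # most recent 100 comments; to page through older comments you would need to vary
    # the query terms (e.g. with an offset, if the API supports one).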
    print(i)
    url = 'https://forum.effectivealtruism.org/graphql'
    headers = {'Content-Type': 'application/json'}
    r = requests.post(url, json={'query': query}, headers=headers)
    data = json.loads(r.text)
    for comment in data['data']['comments']['results']:
        try:
            comments.append([comment['post']['postedAt'], comment['contents']['plaintextMainText']])
        except:
            pass

# Clean up the 'plaintextMainText' field by removing all formatting extra line returns

for comment in comments:
    comment[1] = comment[1].replace('\n', ' ')

# Save to dataframe and adjust to PST (Pacific Time Zones)

df = pd.DataFrame(comments, columns=['postedAt', 'plaintextMainText'])
df['postedAt'] = pd.to_datetime(df['postedAt'])
df['postedAt'] = df['postedAt'].dt.tz_convert('US/Pacific')
df['postedAt'] = df['postedAt'].dt.tz_localize(None)

# Save to csv

df.to_csv('effective_altruism_comments.csv', index=False)

Here's an example of the output.

Notes:

  1. You need to provide a GraphQL query to get Codex to work; it isn't going to figure it out on its own. A tutorial for creating GraphQL queries is here. I made a simple query to get comments, as you can see above. The interface for GraphQL seems pretty good and you can create queries to get posts and a lot of other information (a rough sketch of a posts query is below these notes).
  2. I haven't let this run to try to get the "entire forum". API/backend limits or other issues might block this.
  3. I can see there's at least one error in my GraphQL query: I think postedAt gives a timestamp for the post, not the comment. There are other typos (I spent 10x more time writing this forum comment than writing the instructions for Codex to generate the code!).
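
For anyone who wants to try the posts query mentioned in note 1, here is a minimal sketch following the same pattern as the comments script above. The view name "new" and the exact fields are assumptions based on that pattern, not verified against the API, so expect to tweak them:

import json
import requests
import pandas as pd

url = 'https://forum.effectivealtruism.org/graphql'
headers = {'Content-Type': 'application/json'}

# Assumed posts query, mirroring the structure of the comments query above
posts_query = """
{posts(input: {terms: {view: "new", limit: 50}}) {
    results {
      _id
      postedAt
      title
      user {
        slug
      }
    }
  }
}
"""

# Same pattern as the comments request: POST the query, then unpack the results
r = requests.post(url, json={'query': posts_query}, headers=headers)
posts = json.loads(r.text)['data']['posts']['results']
df_posts = pd.DataFrame(posts)

The GraphQL interface mentioned in note 1 is probably the easiest place to check which views and fields actually exist before running something like this.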
Comment by Charles He on More undergraduate or just-graduated students should consider getting jobs as research techs in academic labs · 2021-09-16T19:42:38.481Z · EA · GW

Thanks for the reply! 

I like and I agree with these ideas!

Comment by Charles He on The motivated reasoning critique of effective altruism · 2021-09-16T02:43:14.691Z · EA · GW

Hi,

Thank you for your thoughtful reply. I think you are generous here:

I perceive that many of the issues I've mentioned to be better explained by bias than error. In particular I just don't think we'll see equivalently many errors in the opposite direction. This is an empirical question however, and I'd be excited to see more careful followups to test this hypothesis.

I think you are pointing out that, when I said I think I have many biases and these are inevitable, I was confusing bias with error. 

What you are pointing out seems right to me.

Now, at the very least, this undermines my comment (and at worst suggests I am promoting/suffering from some other form of arrogance). I’m less confident about my comment now. I think I will now reread and think about your post a lot more.

Thanks again.

Comment by Charles He on The motivated reasoning critique of effective altruism · 2021-09-15T19:15:20.333Z · EA · GW

I agree with what you said, and I am concerned and genuinely worried, because I interpret your post as expressing sincere concerns of yours; I view your posts highly and update on them.

At the same time, I have different models of the underlying issue and these have different predictions.

Basically, have you considered the perspective that “some EA orgs aren’t very good” to be a better explanation for the problems? 

This model/perspective has very different predictions and remedies, and some of your remedies make it worse.

 

What does it mean to be "not motivated" or "unbiased"?

I can’t think of any strong, successful movement where there isn’t “motivated” reasoning.

I often literally say that “I am biased towards [X]” and “my ideology/aesthetics [say this]”. 

That is acceptable because that’s the truth. 

As far as I can tell, that is how all people, including very skilled and extraordinary people/leaders, reason. Ideally (often?) it turns out the “bias” is “zero”, or at least that the “leaders are right”. 

I rapidly change my biases and ideology/aesthetics (or at least I think I do) when updated.

In my model, for the biggest decisions, people rarely spend effort to be “unbiased” or "unmotivated".

It’s more like, what’s the plan/vision/outcomes that I will see fulfilled with my “motivated reasoning”? How will this achieve impact?

 

 

Impractical to fix things by “adding CEA” (cost-effectiveness analysis) or undergirding orgs with dissent and positivism 

My models of empiricism say it's hard to execute CEAs well. There isn't some CEA template/process that we can just apply reliably. Even the best CEAs or RCTs involve qualitative decisions that have methods/worldviews embedded. Think of the Science/AER papers that you have seen fall apart in your hands. 

Also, in my model, one important situation is that sometimes leaders and organizations specifically need this “motivated” reasoning. 

This is because, in some sense, all great new initiatives are going to lack a clear cost effectiveness formula. It takes a lot of “activation energy” to get a new intervention or cause area going. 

Extraordinary leaders are going to have perspectives and make decisions with models/insights that are difficult to verify, and sometimes difficult to even conceptualize.

CEA or dissent isn’t going to be helpful in these situations.

 

Promoting empiricism or dissent may forestall great initiatives and may create environments where mediocre empiricism supports mediocre leadership.

It seems like we should expect the supply of high quality CEA or dissent to be as limited as good leadership.

I interpret your examples as evidence for my model:

Back in the 90’s I did some consulting work for a startup that was developing a new medical device...Peer review did not discover any of this during the publication process, because each individual estimate was reasonable. When I wrote the paper, I was not in the least bit aware that I was doing this; I truly thought I was being “objective.”

How would we fix the above, besides "getting good"?

As another example, ALLFED may have gotten dinged in a way that demonstrates my concern:

It seems likely that the underlying issues that undermine success on the object level would also make “meta” processes just as hard to execute, or worse.

As mentioned at the top, this isn’t absolving or fixing any problems you mentioned. Again, I share the underlying concerns and also update toward your view. 

 

Maybe an alternative? Acknowledge these flaws? 

A sketch of a solution might be:

1) Choose good leaders and people
2) Have knowledge of the “institutional space” being occupied by organizations, and have extraordinarily high standards for those that can govern/interact/filter the community 
3) Allow distinct, separate cause areas and interventions to flourish and expect some will fail

This is just a sketch and there are issues (how do you adequately shut down and fairly compensate interests that fail, given that non-profits and especially meta-orgs often perpetuate their own interests, for good reasons? We can’t really create an "executioner org" or rely on orgs getting immolated on the EA Forum).

I think the value of this sketch is that it draws attention to the institutional space occupied by orgs and how it affects the community.

Comment by Charles He on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-14T18:42:05.901Z · EA · GW

I think this is a really well intentioned and thoughtful reply.

However, hiding publication on an internet forum seems technically dubious (archiving and forum scraping seem common). 

Also, this wasn't the intention, but it does seem to be the same as, well, fraud. My guess is that authors are required to state their story hasn't been published (and removing it after the fact doesn't alter this state).

I think there is some solution here that should be explored.

Comment by Charles He on Resources on Animal Ethics and Helping Animals · 2021-09-11T01:05:23.616Z · EA · GW

I haven't gone through all of it, but this looks great!

This is a thoughtful amount of material that suggests both compassion and ability to take action on this important cause.

Comment by Charles He on Inspiring others to do good · 2021-09-10T03:08:55.739Z · EA · GW

I don't know anything about this project or its space, but I just wanted to write that I think the positive energy and "just build it" vibe of you and the OP is awesome!

Fantastic!

Comment by Charles He on Buck's Shortform · 2021-09-08T19:03:08.492Z · EA · GW

You seem to be wise and thoughtful, but I don't understand the premise of this question or this belief:

One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it's hard to tell if someone is a good CEO apart from "does their business do well", so they are forced to have their compensation tied closely to business outcomes (instead of something like "does their manager think they are doing a good job"), which have high variance; as a result of this variance and people being risk-averse, expected returns need to be high in order to compensate these entrepreneurs.

It's not obvious to me that this information asymmetry exists in EA. E.g. I expect "Buck thinks X is a good group leader" correlates better with "X is a good group leader" than "Buck thinks X will be a successful startup" correlates with "X is a successful startup".

But the reasoning [that existing orgs are often poor at rewarding/supporting/fostering new (extraordinary) leadership] seems to apply:

For example, GiveWell was a scrappy, somewhat polemical startup, and the work done there ultimately succeeded and created Open Phil and to a large degree, the present EA movement. 

I don't think any of this would have happened if Holden Karnofsky and Elie Hassenfeld had had to, say, go into Charity Navigator (or a dozen other low-wattage meta-charities that we will never hear of) and try to turn it around from the inside. While I am being somewhat vague, my models of orgs and information from EA orgs do not suggest that they are any better at this (for mostly benign, natural reasons, e.g. "focus"). 

It seems that the main value of entrepreneurship is the creation of new orgs to have impact, both from the founder and from the many other staff/participants in the org. 

Typically (and maybe ideally) new orgs are in wholly new territory (underserved cause areas, untried interventions) and inherently there are fewer people who can evaluate them.

It seems like there might be a "market failure" in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.

It seems that the now-canonized posts Really Hard and Denise Melchin's experiences suggest this has exactly happened, extensively even. I think it is very likely that both of these people are not just useful, but are/could be highly impactful in EA and do not "deserve" the experiences they described.

[I think the main counterpoint would be that only the top X% of people are eligible for EA work or something like that and X% is quite small. I would be willing to understand this idea, but it doesn't seem plausible/acceptable to me. Note that currently, there is a concerted effort to foster/sweep in very high potential longtermists and high potential EAs in early career stages, which seems invaluable and correct. In this effort, my guess is that the concurrent theme of focusing on very high quality candidates is related to experiences of the "production function" of work in AI/longtermism. However, I think this focus does not apply in the same way to other cause areas.]

Again, as mentioned at the top, I feel like I've missed the point and I'm just beating a caricature of what you said.

Comment by Charles He on A Primer on the Symmetry Theory of Valence · 2021-09-08T00:18:04.946Z · EA · GW

I want to say something more direct:

Based on how the main critic Abby was treated, how the OP replies to comments in a way that selectively chooses what content to respond to, and the way they respond to direct questions with jargon, I place serious weight on the possibility that this isn't a good-faith conversation.

This is not a stylistic issue; in fact, it seems to be exactly the opposite: someone is taking the form of EA norms and styles (maintaining a positive tone, being sympathetic) while actively and odiously undermining someone.

I have been in several environments where this behavior is common.

At the risk of policing or adding to the noise (I am not willing to read more of this to update myself), I am writing this because I am concerned you and others who are conscientious are being sucked into this.

Comment by Charles He on A Primer on the Symmetry Theory of Valence · 2021-09-07T04:47:23.038Z · EA · GW

I know that this is the EA forum and it’s bad that two people are trading arch emoticons...but I know I’m not the only one enjoying Abby Hoskin's response to someone explaining her future journey to her. 

Inject this into my veins.

 

Maybe more constructively (?) I think the OP responses have updated others in support of Abby’s concerns.

In the past, sometimes I have said things that turned out not to be as helpful as I thought. In those situations, I think I have benefitted from someone I trust reviewing the discussion and offering another perspective to me.

Comment by Charles He on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-06T23:02:33.782Z · EA · GW

Hi, this is another great comment, thank you!

Note: I think what was meant here was "bearish", not bullish.

I am more bullish about this. I think for distill to succeed it needs to have at least two full time editors committed to the mission.

I think what you're saying is you're bearish or have a lower view of this intervention because the editor/founders have a rare combination of vision, aesthetic view and commitment. 

You point out this highly skilled management/leadership/labor is not fungible—we can't just hire 10 AI practitioners and 10 designers to equal the editors who may have left.

Comment by Charles He on How to get more academics enthusiastic about doing AI Safety research? · 2021-09-04T22:43:15.093Z · EA · GW

From your comment, I just learned that Distill.pub is shutting down and this is sad.

The site was beautiful. The attention to detail, and attention to the reader and presentation were amazing. 

Their mission seems relevant to AI safety and risk.

Relevant to the main post and the comment above, the issues with Distill.pub seem not to be structural/institutional/academic/social—but operational, related to resources and burnout.

This seems entirely fixable by money, maybe even a reasonable amount compared to other major interventions in the AI/longtermist space?

To explain, consider the explanation on the page:

But it is not sustainable for us to continue running the journal in its current form...We also think that it’s a lot healthier for us and frees up our energy to do new projects that provide value to the community.

...

We set extremely high standards for ourselves: with early articles, volunteer editors would often spend 50 or more hours improving articles that were submitted to Distill and bringing them up to the level of quality we aspired to. This invisible effort was comparable to the work of writing a short article of one’s own. It wasn’t sustainable, and this left us with a constant sense that we were falling short. A related issue is that we had trouble setting well-defined boundaries of what we felt we owed to authors who submitted to us.

...

But the truth is that, with us being quite burnt out, our review process has become much slower and more similar to a typical journal. It’s unclear to us whether the value added by our present review process is worth the time costs we impose on authors.

It seems that 50 hours of a senior editor’s time to work on a draft is pretty wild. 

The use of senior staff time like this doesn’t seem close to normal/workable with the resources and incentives on the publication market.

But this can be fixed by hiring junior/mid-level ML practitioners and visualization designers/frontend engineers. The salaries are going to be higher than at most non-profits, but there seems to be a thick market for these skills.

What doesn’t seem fungible is some of the prestige and vision of the people associated with the project. 

As just one example, Mike Bostock is the creator of D3.js and in visualization is a “field leader” by any standard. 


Maybe this can interest someone way smarter than me to consider funding/rebuilding/restoring Distill.pub as an AI safety intervention.

Comment by Charles He on EA Animal Welfare Fund: Request for Proposals Regarding Scoping Research/Project(s) on Neglected yet Large-Scale Animal Populations · 2021-09-03T21:17:05.835Z · EA · GW

This is fantastic! It seems like these are really important potential research areas. 

Also, if you’re a researcher who wants to help animals, I want to point out something that is not flagged and seems extraordinary:

Your research can be directly implemented by creating a new EA org, and this could happen in as little as 12-24 months. 

This is through the Charity Entrepreneurship org (and others too).

To be specific, talented EAs and other entrepreneurs are specifically recruited and incubated to create orgs to execute these ideas. In the past, these people included those with postgraduate degrees, significant work and startup experience (and also willing to work over 50 hours a week under significant uncertainty). They are given seed funding at 6 figures, and larger amounts are possible after a promising pilot suggests impact. 

This implementation of research ideas described above is not merely an idea, wish or dream—in the last 60 days, there have been teams created to implement animal welfare interventions for underserved animal populations.

Again, just to emphasize and be concrete, it’s possible that starting from now, from zero, you can research an underserved animal population, find a promising way to help them, and this can be implemented in as little as 12-24 months—an actual organization with founders and staff working to improve the lives of the underserved animals.

 

It's hard not to be in awe. This “pipeline” for research to create an org is incredible. It’s hard to think of any other place where anything like this pipeline exists in academic research, non-profits or other communities.

Comment by Charles He on Frank Feedback Given To Very Junior Researchers · 2021-09-03T00:53:50.521Z · EA · GW

Thanks for the reply. 

I think both your main post and Linch's comment are both very valuable, thoughtful contributions. 

I agree that such direct advice is undersupplied. Your experiences/suggestions should be taken seriously and are a big contribution.

I don't have anything substantive to add.

Comment by Charles He on Frank Feedback Given To Very Junior Researchers · 2021-09-01T23:11:24.662Z · EA · GW

This might be a cultural thing, but in the UK/US/Canada, a purely negative note from a superior/mentor/advisor (or even a friendly peer) feels really, really bad.

I really strongly suggest, if you are a leader or mentor, always ending a message on a sincerely positive note. 

So a version of this is also known as a "shit sandwich", and it's not clear to me that it is an effective pattern. In particular, it seems plausible that it only works a limited number of times before people start to notice and develop an aversion to it. I personally find it fairly irritating/annoying.

I think there's a pattern where being pro forma or insincere is really bad. 

But it seems low cost and valuable to add a sincere note saying: 

"I really liked your motivation and effort and I think there's potential from you. I like [this thing about you]...I think you can really help in [this way]."  

Which is what you want, right? And believe, right? Otherwise, why spend time writing feedback?

Mentees and junior people can be pretty fragile and it can really affect them. 

Like, it's not a high probability but there are letters or even phrases that someone will remember for years. 

Comment by Charles He on More undergraduate or just-graduated students should consider getting jobs as research techs in academic labs · 2021-09-01T18:05:09.914Z · EA · GW

I think there are questions about the premise of this post:

  • I’m uncertain about the compensation/signaling/networking value of the research tech role. It's not clear why it offers more returns than a few years in industry would, even for a non-prestigious, entry-level graduate.
  • In addition to the fact that many academic labs are exploitative (as the OP does touch on), I am concerned that even good and kind academic labs can give an off-color work experience/incentives/worldview, as I think they are not quite “real world environments”. I think a technician will get the worst of this while losing a lot of the positives?
  • I don’t have a strong model of how this approach can lead to research that many in the EA community think is most valuable—"lead researchers with field leading potential".

I think this post is great as it departs from previous patterns of EA advice

A lot of EA advice seems to presume that the reader is already a top-tier student or young professional who just needs to have their endless font of potential pointed in the right direction.

Yes, I think a lot of canonical career advice/strategy in EA follows this pattern.

Such advice often is missing a lot of content, and particularly lacks operational details.

This falls into the trap where such advice can be uninformative/unmotivating to these top candidates, while at the same time being useless or even harmful to the large majority of readers.

A great departure from the above pattern is Holden Karnofsky's points in the interview below:
https://80000hours.org/podcast/episodes/holden-karnofsky-building-aptitudes-kicking-ass/

There's a lot more context (very well presented in the link above) but a theme is:

“Be very careful about following career advice at all.”

 

Let’s say that you picked a skill that’s never going to get you a direct-work effective altruism job, but you kicked a bunch of ass, and you know a bunch of other people who kick ass.

So now you have this opportunity to affect people you know, and get them to do a lot of good.

And they are not the people you would know if you hadn’t kicked ass.

I think this post is great

As suggested above, it’s hard to communicate specific strategies/tactics to enter a career, yet this post gives a lot of detailed operational advice. These show a lot of thought, strong models of how to apply this approach, and a lot of real world experience. 

There is a lot of great advice, e.g. against repetitive cookie cutter emails, the value of informal chats, the reasoning against formal programs for the specific candidates involved. 

It has truth and honesty. It’s not afraid to give opinions and in doing so it exposes a lot of surface area, to make productive, object-level disagreements.

Most importantly, it focuses on the much harder challenge of making an impact for 95% of people who might be interested in EA.

Comment by Charles He on Essay Competition on Preparation for Global Food Catastrophes · 2021-09-01T03:28:32.894Z · EA · GW

Hey, this user appears to be a somewhat sophisticated bot that uses NLP and maybe some language model (GPT) functionality. 

They promote essay writing services, which might hint/point at functionality and ethics aligned with these purposes.

They 1) make internally coherent replies on topic that turn out to be empty, 2) maintain a positive, sympathetic and modest persona, 3) appear in emotionally charged topics where confrontation is awkward, and 4) progressively become more explicit in promotion of their services. 

They attracted few downvotes and even one upvote. 

 

Comment by Charles He on Can you control the past? · 2021-08-31T01:24:28.451Z · EA · GW

Ok, so I thought about this more and want to double down on my Objection 1:

Consider the following three scenarios for clarity:

Scenario 1: Two identical, self-interested agents play prisoner’s dilemma in their respective rooms, light years apart. These two agents are straight out of our econ 101 lecture. Also, they know they are identical and self-interested. Okie dokie. So we get the “usual” defect/defect single-shot result. Note that we can make these agents identical down to the last molecule and quantum effect, but it doesn’t matter. I think we all accept that we get the defect/defect result.

Scenario 2: We have your process or Omega create two identical agents, molecularly identical, quantum effect identical, etc. Again, they know they are identical and self-interested. Now, again they play the game in their respective rooms, light years apart. Again, once I point out that nothing has changed from Scenario 1, I think you would agree we get the defect/defect result.

Scenario 3: We have your process or Omega create one primary agent, and then create a puppet or slave of this primary agent that will do exactly what the primary agent does (and we put them in the two rooms with whiteboards, etc.). Now, it’s going to seem counterintuitive how this puppeting works, across the light-years, with no causation or information passing between agents. What’s going on is that, just as in Newcomb’s boxing thingy, Omega is exercising extraordinary agency or foresight, probably over both agent and copy; e.g. it has foreseen what the primary agent will do and imposes that on the puppet.

 

Ok. Now, in Scenario 3, indeed your story about getting the cooperate result works, because it’s truly mirroring and the primary agent can trust the puppet will copy as they do. 

However, I think your story is merely creating Scenario  2, and the copying doesn’t go through. 

There is no puppeting effect or Omega effect—that effect is what does the work in Scenario 3. 

To see why the puppeting doesn’t go through, it’s because Scenario 2 is the same as Scenario 1. 

Another way of seeing this: imagine, in the story in your post, your agent doing something horrific, almost unthinkable, like committing genocide or stroking a cat backwards. The fact that both the agent and the copy are able to do the horrific act, and that they would mirror each other, is not enough for this act to actually happen. Both agents need to do/choose this. 

You get your result by rounding this off. You point out how tempting cooperating looks, which is indeed true, and indeed human subjects would probably actually cooperate in this situation. But that’s not causality or control.

As a side note, I think this "Omega effect", or control/agency, is the root of the Newcomb’s box paradox thing. Basically, CDT agents refuse the idea that they are in the inner loop of Omega or in Omega's mind’s eye as they eye the $1,000 box, and think they can grab two boxes without consequence. But this rejects the premise of the whole story and doesn’t take Omega's agency seriously (which is indeed extraordinary and maybe very hard to imagine). This makes Newcomb’s paradox really uninteresting.

Also, I read all this Newcomb stuff over the last 24 hours, so I might be wrong.

Comment by Charles He on Announcing the Open Philanthropy Undergraduate Scholarship · 2021-08-30T02:36:11.406Z · EA · GW

Do you want to see a write up by me, not the OP, that gives some structure/rationalization/justification about why this set of universities was chosen?

This is an awkward subject and I think it’s unlikely that you will get a verbose response. 

I am worried that there will be a lack of response, and this might create a perception about the objective value of candidates outside this set of universities, despite these beliefs not actually being held strongly by anyone.

Instead of this situation, I'd rather produce a balanced write-up, one that can still be kicked around (and maybe get explicitly stomped on by the OP).

Comment by Charles He on Can you control the past? · 2021-08-28T21:15:26.612Z · EA · GW

As a non-decision theorist, here’s some thoughts, well, objections really. 

I think maybe my thoughts are useful to look at because they represent what a “layman” or non-specialist might think in response to your post.

But I am a scrub. If I am wrong, please feel free to just stomp all over what I write. That would be useful and illustrative. Stomp stomp stomp!

To start, I’ll quote your central example for context:

My main example is a prisoner’s dilemma between perfect deterministic software twins, exposed to the exact same inputs. This example that shows, I think, that you can write on whiteboards light-years away, with no delays; you can move the arm of another person, in another room, just by moving your own. This, I claim, is extremely weird...Nevertheless, I think that CDT is wrong. Here’s the case that convinces me most.

Perfect deterministic twin prisoner’s dilemma: You’re a deterministic AI system, who only wants money for yourself (you don’t care about copies of yourself). The authorities make a perfect copy of you, separate you and your copy by a large distance, and then expose you both, in simulation, to exactly identical inputs (let’s say, a room, a whiteboard, some markers, etc). You both face the following choice: either (a) send a million dollars to the other (“cooperate”), or (b) take a thousand dollars for yourself (“defect”).


But defecting in this case, I claim, is totally crazy. Why? Because absent some kind of computer malfunction, both of you will make the same choice, as a matter of logical necessity. If you press the defect button, so will he; if you cooperate, so will he. The two of you, after all, are exact mirror images. You move in unison; you speak, and think, and reach for buttons, in perfect synchrony. Watching the two of you is like watching the same movie on two screens.

To me, it’s an extremely easy choice. Just press the “give myself a million dollars” button! Indeed, at this point, if someone tells me “I defect on a perfect, deterministic copy of myself, exposed to identical inputs,” I feel like: really? 

Objection 1:

So let’s imagine an agent who shares exactly your thoughts up to exactly the moment above.

So this agent, at this moment, has just thought about this “extremely easy choice” to cooperate, and exactly all of the logic leading up to this moment, just before they are about to press the "cooperate" button.

But then at this moment, they break from your story. 

The agent (deterministic AI system, who only wants money) thinks, “Well I can get 1M + 1K by defecting, so I’m going to do that”. 

Being a clone, having every atom, particle, quantum effect and random draw being identical, does not stop this logic. There’s no causal control of any kind, right? 

So RIP cooperation. We get the same prisoner dilemma effect again.

(Notice that linked thinking between agents doesn’t solve this. They can just think “Well my clone must be having the same devious thoughts. Also, purple elephant spaghetti.”)

Objection 2:

Let’s say that the agents share exactly your strong views toward cooperation, and even in the strongest way, believe in the acausal control or mirroring that allows them to cooperate (or scratch their nose, perform weird mental gymnastics, etc.).

They cooperate. Great!

Ok, but here the agency/design/control was not exercised by the two agents, but instead by whatever device/process that linked/created them in the first place. 

That process  established this linkage with such certainty that it allows people to sync or cooperate over many light years.

It’s that process that created/pulled their strings, like a programmer writing a program. 

This is a pretty boring form of control and not what you envision. There’s no control, causal or otherwise, exercised at all by your agents. 

Objection 3:

Again, let’s have the agents do exactly as you suggest and cooperate.

Notice that you assert some sort of abhorrence against the astronomical waste of defecting. 

Frankly, you lay it on pretty thick:

To me, it’s an extremely easy choice. Just press the “give myself a million dollars” button! Indeed, at this point, if someone tells me “I defect on a perfect, deterministic copy of myself, exposed to identical inputs,” I feel like: really? 


Note that this doesn’t seem like a case where any idiosyncratic predictors are going around rewarding irrationality. Nor, indeed, does feel to me like “cooperating is an irrational choice, but it would be better for me to be the type of person who makes such a choice” or “You should pre-commit to cooperating ahead of time, however silly it will seem in the moment” (I’ll discuss cases that have more of this flavor later). Rather, it feels like what compels me is a direct, object-level argument, which could be made equally well before the copying or after. This argument recognizes a form of acausal “control” that our everyday notion of agency does not countenance, but which, pretty clearly, needs to be taken into account.

Ok. Your agents follow exactly your thinking, exactly as you describe, and cooperate.

But now, your two agents aren’t really playing “prisoner’s dilemma” at all. 

They do have the mechanical, physical payoffs of 1M and 1K in front of them, but as your own emphatic writing lays out, this doesn’t describe their preferences at all.

Instead, it’s better to interpret your agents’ preferences/“game”/“utility function” as having some term for the cost of defecting or an aversion to astronomical waste.

Please don't hurt me.
 

Comment by Charles He on Questions for Howie on mental health for the 80k podcast · 2021-08-27T19:02:03.238Z · EA · GW

 The case I would like to see made her is why EA orgs would benefit from getting mental health services from some EA provider rather than the existing ones available.  


My parent comment is a case for an organization that provides mental health services to EAs in general. 

I don't know why a case needs to be made that it would replace mental health services already available to EA orgs, which seems to be a major element in your objection. 

Replacing or augmenting mental health services in EA orgs is one aspect/form/subset of the services that could be provided. This isn't necessary for it to be successful; the case is broader. 

However, some of the points given might suggest how it could do this, and at least be helpful to EA orgs.

I'm not sure why you think current mental services, eg regular therapists are unapproachable and how having an 'EA' service would get around this

Ok, here's another response. In one of the comments here, someone brought up a navigator service (which may be fantastic, or it may not be that active). 

On the website it says:

I can imagine objections related to stats/validity with this one figure, but it's a member of a class of evidence that seems ample.

Separately and additionally, I have models that support the view further.

However, honestly, as indicated by your objection, I'm concerned it's not going to be practical/productive to try to lay them out. 

I view myself as "steel-manning" an intervention (which I have no intention of implementing and which offers no personal benefit to me), which makes my discourse acceptable to me.


 

Comment by Charles He on Questions for Howie on mental health for the 80k podcast · 2021-08-27T01:22:51.233Z · EA · GW

I think broadly what you're saying is "Well, if impact can be improved by mental health, then orgs can provision this without our help." 

I'm pattern matching this to a "free market" sort of argument, which I don't think this is right. 

Most directly, I argue that mental health services can be very unapproachable and are effectively underprovisioned. Many people do not have access to them, contrary to what you're saying. Secondly, there are large differences in quality and fit between services, and I suspect many EAs would benefit from a specific set of approaches that could be developed for them. 

More meta, I think a reasonable worldview is that mental health is a resource which normally gets depleted. Despite, or because, someone is a strong contributor, they can make use of mental health resources. In this worldview, mental health services should be far more common, since needing them is less of a defect to be addressed.

I suppose the intervention you have in mind is to improve the mental health of organisation as a whole - that is, change the system, rather than keep the system fixed but help the people in the system. This is a classic organizational psychology piece, and I'm sure there are consultants EAs orgs could hire to help them with this. 

No, this isn't what I'm thinking about. I don't understand what you're saying here.

Given my original comment, I think it's appropriate to give a broad view of the potential forms the intervention can take and what can be achieved by a strong founding team. 

These services can take forms that don't currently exist. I think it's very feasible to find multiple useful programs or approaches that could be implemented.

Comment by Charles He on Hits-based Giving · 2021-08-25T07:58:41.614Z · EA · GW

Holden Karnofsky says that:

We don’t: expect to be able to fully justify ourselves in writing.

We don’t: put extremely high weight on avoiding conflicts of interest, intellectual “bubbles” or “echo chambers.”

We don’t: avoid the superficial appearance — accompanied by some real risk — of being overconfident and underinformed.

Despite this, Open Phil's communications seem to show great honesty and thoughtfulness in this article and others from 2016. Immense attention is given to communicating their decisions and perspective and other meta issues, even on awkward, complex, or difficult-to-articulate topics.

For example, look at the effort given to explaining nuances in the "hits-based" decision of hiring Open Phil's first program officer:

https://www.openphilanthropy.org/blog/process-hiring-our-first-cause-specific-program-officer

In these conversations, a common pattern we saw was that a candidate would have a concrete plan for funding one broad kind of work (for example, ballot initiatives, alternative metrics for prosecutors, or research on alternatives to incarceration) but would have relatively little to say about other broad kinds of work. This was where we found the work that had gone into our landscape document particularly useful. When a candidate didn’t mention a major aspect of the criminal justice reform field, we would ask about it and see whether they were omitting it because they (a) had strong knowledge of it and were making a considered decision to de-prioritize this aspect of the field; (b) didn’t have much experience or knowledge of this area of the field.

There's a lot given to us in just this one paragraph: a peek into a failure of breadth (in probably very high-quality candidates), and a filter which helps explain how Open Phil finds someone "well-positioned to develop a good strategy". It also concretely shows the value of the structure in the "landscape document", of having a process, and of adhering to it.

The article even gives us a peek into the internal decision-making process:

Early on, Alexander Berger and I expect to work closely with her, asking many questions and having many critical discussions about the funding areas and grants she proposes prioritizing. That said, we don’t expect to understand the full case for her proposals, and we will see our role more as “spot-checking reasoning” than as “verifying every aspect of the case.” Over time, we hope to build trust and reduce the intensity of (while never completely eliminating) our critical questions. Ultimately, we hope that our grants in the space of criminal justice reform will be less and less about our view of the details, and more and more about the bet we’re making on Chloe.

This clearly shows us meta-awareness of the process of 1) how decisions are made, 2) how some control and validation of the program officer occurs, and 3) acknowledgement of the limitations about the executives. 

This is an impressive level of honesty and explicitness about a very sensitive process. 

It's hard to think of another organization that would write something like this.

Comment by Charles He on How much money donated and to where (in the space of animal wellbeing) would "cancel out" the harm from leasing a commercial building to a restaurant that presumably uses factory farmed animals? · 2021-08-24T21:02:30.008Z · EA · GW

Charles_Dillon had a great answer. 

Answering the question of "Where to donate" that you asked:

TL;DR: Consider donating to the EA Animal Welfare Fund, because I think they are more able to fund nimbler, high-impact projects, and the fund is well advised and connected to the entire farm animal welfare movement. 

Explanation:

I think one reason to donate to the Animal Welfare Fund is that they are more able to support nimbler projects. For example, The Humane League is a great organization to donate to and has a great track record of success. The funding landscape for farm animal welfare has some structure now, so an established organization such as The Humane League and their important initiatives seem to get funding somewhat reliably. In contrast, it's still hard for smaller projects to get started. The Animal Welfare Fund can make this happen.

Secondly, the job of the EA Animal Welfare Fund is to find and donate to good causes. They exist to serve you and use your donation money well. They grant to the Humane League and others. Their fund managers are extremely experienced, respected and aggressive in maximizing impact. Based on this, it seems to make sense to delegate granting to such an organization.

Comment by Charles He on How much money donated and to where (in the space of animal wellbeing) would "cancel out" the harm from leasing a commercial building to a restaurant that presumably uses factory farmed animals? · 2021-08-24T20:43:04.090Z · EA · GW

Charles_Dillon had a really great answer and I think his numerical calculations seem right. I encourage you to round up any answer because of the uncertainty in any calculation.

Since your question is great and you care about impact for animals here's another way to have an impact:

By far, the foods with the most suffering are chicken and especially eggs that come from caged, egg-laying chickens.

What this means is that, calorie for calorie, or bite for bite, eggs and chicken can be ten or a hundred times worse than other animal products.

E.g., see Peter Singer saying this:

So what this means is that you can further increase your impact by ensuring your restaurant tenants only serve "cage-free" eggs, which now account for about 20% of the US supply. 

Even better are pasture-raised eggs from a local farm (but these are much rarer). 

This shouldn't be very hard to do, since it sounds like your space can attract hip restaurants, and the cost of these nicer eggs is low. 

Comment by Charles He on AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA · 2021-08-20T23:00:55.137Z · EA · GW

In a technocracy, a small band of experts get lots of power to manipulate people, nudge them, or engage in social engineering. 

the limitations of technocrats and would be more in favor of civil liberties.

Is it possible for you to elaborate more on this or easily provide links to a writeup?

Technocracy to me just means having experts with decision making power or influence, maybe in some institution, presumably with good governance and other checks. 

This concept doesn't immediately lead to thoughts of manipulation, or social engineering. 

I'm trying to get educated here—I'm not being contentious; if this is the "Truth" about technocracy and the general mainstream belief, I want to learn about it.

Many democrats are technocrats--indeed, the people I argue with, like Christiano, Estlund, and so on, are pretty hardcore technocrats who have been in favor of letting alphabet agencies have lots of dictatorial power during this crisis. 
 

Do you mean Thomas Christiano and David Estlund?

I guess related to the above, it seems like the object level argument really depends on some assumptions. 

It is just not clear to me, and I guess to many other readers of your AMA, what is being debated in this subthread.

Again, is it possible for you to write just a little bit more on this or provide links to something to get novices up to speed?

Comment by Charles He on Questions for Howie on mental health for the 80k podcast · 2021-08-20T06:20:35.681Z · EA · GW

I can’t see that this would require a whole organisation to be set up. CEA could hire one person to work on this for example and that would seem to me to be sufficient.

This org would be set up to provide actual mental health services or programs to EAs for free or at low cost.

To be really concrete, maybe imagine a pilot with 1-2 EA founders and maybe 2-4 FTE practitioners or equivalent partnerships.
 

It would certainly be useful for someone to make a summary of available resources and/or to do a meta-review of what works for mental health,

There are perspectives under which reviews and compendium websites have limited value, and my first impression is that they may apply here.

My gut reaction is to think we should just make use of existing mental health resources out there which are abundant. I’m not sure why it would help for it to be EA specific.

This is a very strong statement. I have trouble relating to this belief. 

Your profile says you work in management consulting or economics. You also seem to live in the UK. You seem to have, and directly use, high human capital in management or highly technical fields. In totality, you probably enjoy substantial mental health services, and while such jobs can be stressful, they usually do not involve direct emotional trauma.

Not all EAs enjoy the same experiences. For example:

  • In animal welfare, people who love animals have to pore over graphic footage of factory farming, or physically enter factory farms (risking targeted legal retaliation from industry) to create movies like this. I assure you that funding for such work is low and there may be no mental health support.
  • Similar issues exist in global health and poverty, where senior leaders often take large pay cuts and reductions in benefits.
  • I know an EA-like person who had to work for free during 2020 for 3-4 months in a CEO role, where they worked 20-hour days. They regularly faced pleas for medical supplies from collapsing institutions, as well as personal attacks and fires inside and outside the org, for not doing enough.

Many of these roles are low status or have zero pay or benefits. 

Many of the people who do the work above have very high human capital and would otherwise enjoy high pay and status, but actively choose to do this work because no one else will do it, or will even understand it. 

Comment by Charles He on AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA · 2021-08-20T03:51:00.566Z · EA · GW

I think a downvoter's view is that:

It packs powerful claims that really need to be unpacked ("unsustainable...massive suffering") with a backhand against the community ("actually care...claim to") and extraordinary, vague demands ("large economic transition"), all in a single sentence.

It's hard to be generous, since it's so vague. If you tried to riff some "steelman" off it, you could work in almost any argument critical of capitalism, or even of EA in general, which isn't a good sign. 

  

Comment by Charles He on Questions for Howie on mental health for the 80k podcast · 2021-08-19T21:07:53.064Z · EA · GW

There seems to be an opportunity for founding an org for “EA mental health”:
 

  • It’s plausible that, even ignoring the wellbeing of the recipients, the cost effectiveness for impact could be enough. For  example, even if you had to pay the costs of treating a thousand EAs, if doing so pulled a dozen of those people out of depression, the impact might be worth it. Depression and burnout is terrible. Also, the preventative value from understanding/protecting burnout and other issues seems high.
     
  • Conditional on mental health being a viable cause area, there are probably spillover effects from founding and executing the org (normalizing mental health, gears-level knowledge for other interventions, partnerships and visibility).
     
  • More delicately/pragmatically, EA meta orgs tend to be attractive to founders, as such orgs tend to be high status and more stable in funding. There are probably many versions of this org where a competent generalist EA founder, without mental health experience, can execute this well by recruiting or partnering with established mental health practitioners.
     
  • On the other hand, I could see both the "ceiling" and the "floor" for success of such an org being high. For example, it may be cheap and highly effective to get virtually every senior EA leader to contribute snippets of their own experiences. I already see this happening in private, high-trust situations (or see Peter Wildeford right above my comment on the EA Forum). A founder probably needs to be trusted, established, and experienced to do the intervention well.
     
  • Mental health may be a unique case that helps overcome the normal reluctance of charities to spend on staff. There seems to be an opportunity for a skilled founder to play the historical reluctance around mental health and the reluctance around charity spending against each other, so that they cancel out.
     
  • There’s a sort of “virtue ethics” reason for having an org focusing on EA mental health. It just feels right. Long time EAs know a lot of people who have given a lot of their lives and income (which means giving up safety and ability to buy mental health services). This seems especially important in less technical cause areas.
     
  • EAs are different in how they view the world and why they think their choices have value. There are probably benefits to the focus that an EA-specific org would have.
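To make the cost-effectiveness point in the first bullet concrete, here is a minimal break-even sketch. Every number below is an invented placeholder, not an estimate:

```python
# Break-even sketch for the cost-effectiveness point above.
# Every number is an invented placeholder, not an estimate.
n_treated = 1_000          # EAs receiving support in a year (placeholder)
cost_per_person = 2_000    # USD per person per year (placeholder)
n_recoveries = 12          # people pulled out of depression/burnout (placeholder)

total_cost = n_treated * cost_per_person
break_even_value = total_cost / n_recoveries

print(f"Total cost: ${total_cost:,}")
print(f"Breaks even (ignoring recipient wellbeing) if each recovery is worth "
      f"at least ${break_even_value:,.0f} of counterfactual impact.")
```

With these placeholders the program costs $2M and breaks even at roughly $170k of counterfactual impact per recovery; whether that clears the bar depends entirely on the invented inputs, but the calculation itself is simple and the answer is not obviously prohibitive.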

Comment by Charles He on Why scale is overrated: The case for increasing EA policy efforts in smaller countries · 2021-08-18T23:17:05.136Z · EA · GW

This makes a lot of sense. Thank you!

Comment by Charles He on [deleted post] 2021-08-17T23:15:03.361Z

This looks like a very exciting project that can save lives!

Often, proposals asking for help or collaboration give more background about the poster. 

To increase the response rate to your question, maybe it would make sense to write a paragraph or two about your background or expand on your past with the project?

Here are some examples from recent posts, in no particular order and with no particular significance (while signaling success or status is a common pattern, I don't think it is needed to get responses, nor is it what I intend):

https://forum.effectivealtruism.org/posts/EwJuWKicdY76rtCsN/charity-teaching-people-to-learn-and-form-knowledge

My name's Alex, I'm a 25 year old data analyst from the UK with a Master's Degree in Bioengineering. I've previously worked with the EA-aligned & funded "Cellular Agriculture UK". For the past few years I've been iterating on a process of learning effectively, and have been thinking about how best to share this with people, as I think it's enormously powerful. Until recently I was planning on setting up a business (www.teachingyouhowtolearn.com is my current placeholder website), but I've recently rediscovered EA and think it could be a good candidate for an EA-aligned charity/ website akin to 80,000 hours.

https://forum.effectivealtruism.org/posts/hwTdJToSsb4DxW2a2/1h-volunteers-needed-for-a-small-ai-safety-related-research

I'm currently doing an AI Safety internship where we're carrying out a research study about "multimodal models: usefulness and safety". In this project, we want to analyse how people can directly use language models (and multimodal models in the future) to help them solve everyday tasks. This is conducted by researchers from the Centre for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), the University Complutense de Madrid (UCM), the Psychometrics Centre at the Cambridge Judge Business School (JBS) and the Valencian Institute for AI Research (VRAIN-UPV).
 

https://forum.effectivealtruism.org/posts/hjykgdqo2zDByqg6E/how-we-held-a-successful-1st-introductory-ea-fellowship-in

Hello! I'm Shen Javier, former president of Effective Altruism UP Diliman (EA UPD) for AY 2020-2021. This post summarizes how we organized our first introductory fellowship which we named the Utak at Puso Fellowship and was held last February to May 2021. We hope that EA group organizers, especially those who will soon run a fellowship, will find this helpful.


 

Comment by Charles He on Why scale is overrated: The case for increasing EA policy efforts in smaller countries · 2021-08-15T21:20:39.730Z · EA · GW

I am a total novice in this area, but this seems like a really great post!

In addition to great content, it is well written, with crisp, clear points. It's enjoyable to read.

I have a bunch of low to moderate quality comments below.

One motivation for writing this comment is that very high quality posts often don’t get comments for some reason. So please comment and criticize my thoughts!


The main point seems true!

I think the main point of scale and institutional advantages seems true (but I don't know much about the topic). 

Maybe another way of seeing this is that the 1% fallacy applies in reverse: in smaller countries you are much more able to effect change and the institutional lessons can then be transferred and scaled up in a second phase.

Critique: EA contributions are unclear?

I think a potential critique of your goal of attracting EA attention would be that the cause is not neglected. 

To flesh this out, I think a variant of this critique would be:

Nordic planners and experts both have high human capital and are funded by state wealth (vastly larger than any existing granter). Combined with their cultural/institutional knowledge, this could set an overwhelmingly high bar for an outsider trying to enter and make an impact.

Another way of looking at this is that the premise overvalues EA. EA is a great movement, but it's small, and its successes have been in neglected causes and meta-charity. 

So a premortem might look like: "Ok, so a lot of EAs came in, but generally, their initial help/advice amounted to noise. Ultimately, the most helpful pattern was that EAs ended up adding to the talent pool for our institutions (alongside the normal stream of Nordic candidates). This doesn't scale (since it adds people one by one) and doesn't really benefit from specific EA ideas."

This point could be responded to with some "vision" or scenarios of how EAs would "win". Maybe the EAF ballot initiative and Jan-Willem's work on workshops would be good examples.

Longtermist policies versus good governance

You mention longtermism being implemented in Nordic countries. 

Your examples included national saving withdrawals being codified at 3% per year, and programs giving explicit attention to time horizons of 40 years.

This looks a bit like "patient longtermism” (?)...but it mainly looks like "good governance" that does not require any attention to the astronomically large value of future generations.

Is it worth untangling this or is it reasonable to round off?

Comment by Charles He on Denise_Melchin's Shortform · 2021-08-12T19:24:06.211Z · EA · GW

My impression is that at best the top 3% of people in rich countries in terms of ability (intelligence, work ethic, educational credentials) are able to pursue such high impact options. What I have a less good sense of is whether other people agree with this number.


This seems "right", but really, I don't truly know.

One reason I'm uncertain is that I don't know the paths you are envisioning for these people.

Do you have a sense of what paths are available to the 3%? Maybe you could very briefly write out, say, two paths they could reliably succeed in, i.e. paths we would be comfortable advising them to work on today?

For more context, what I mean is, building on this point:

high impact job options like becoming a grantmaker directing millions of dollars or meaningfully influencing developing world health policy will not be realistic paths.

So while I agree that the top 3% of people have access to these options, my sense is that influencing policy and being a top grantmaker have this "central planner"-like aspect. We would probably only want a small group of people involved, for multiple reasons. I would expect the general class of such roles, and even their "support", to be a tiny fraction of the population.

So it seems that getting a sense of the roles (or even of some much broader process in an ideal world where 3% of people get involved) would be useful for answering your question.

Comment by Charles He on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-12T06:00:41.739Z · EA · GW

I think you should write this up as a full post or at least as a question. 

I don't think people will see this here, and you deserve reasonable attention if it's a full-time project.

Note that my knee-jerk reaction is caution. The value of RCTs is well known and they are coveted. So, in the mental models I use, I would discount the idea that they could be readily distributed. 

For example, something like the following logic might apply:

  • An RCT, or something that looks like it, with many of the characteristics/quality you want, will cost more than the seed grant or early funding for the new org doing the actual intervention.
  • Most smaller projects start with a pilot that gives credible information about effectiveness (by design, often much cheaper than an RCT).
  • Then "democratizing RCTs", as you frame it, will basically boil down to funding/subsidizing smaller projects rather than bigger ones.

I'm happy for this reasoning to be thoroughly destroyed and RCTs available for all!

Comment by Charles He on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-10T21:38:24.423Z · EA · GW

Ok, I see what you're saying now.

I might see this as creating a bounty program for altruistic successes, while at the same time creating a "thick", crowd-sourced market for bounties, hopefully with virtuous effects.

Comment by Charles He on [PR FAQ] Adding profile pictures to the Forum · 2021-08-10T18:29:58.484Z · EA · GW

I am worried academic studies might underestimate how bad looking I am.

I mean, what if I am four, five standard deviations off here?

Comment by Charles He on Open Philanthropy is seeking proposals for outreach projects · 2021-08-09T17:28:14.960Z · EA · GW

Hi Linda,

It says on the page on the website:

Applications are open until further notice and will be assessed on a rolling basis. If we plan to stop accepting applications, we will indicate it on this page at least a month ahead of time.

You're right, I don't immediately see it in the actual post, so it's unclear.

Comment by Charles He on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-09T17:12:21.893Z · EA · GW

Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you  in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)
 

 

But isn't the reason you would evaluate someone's pitch, as opposed to using hindsight, that nothing would get done without upfront funding?

I don't see what you mean by centralization here, or how it's a problem. 

I think I am using "centralization" in the same way that cryptocurrency designers/architects use it when talking about how cryptocurrency systems actually work ("centralization pressures").

The point of NFTs, as opposed to you, me, or a giant granter producing certificates, is that they are part of a decentralized system, not under any one entity's control. 

My understanding is that this is the only logical reason why NFTs have any value, and are not a gimmick. 

They don't have any magical power by themselves or have any special function or information or anything like that.

Under this premise, the benefit is undermined if decentralization is missing from any other structural component of the system.

For example, if the grantors or their decisions come from a central source, then the value of having a decentralized certificate is unclear.

Note that undermining decentralization is sort of like having a wrong step in a mathematical proof: it's existentially bad, as opposed to a mere reduction in quality.

As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.

I meant that you have written out two distinct promises here that seem to be necessary for this system to structurally work in this proposal. One of these promises seems to be high-quality evaluation:

Commit to buy $100M/year of these NFTs, and occasionally reselling them and using the proceeds to buy even more. 

Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.

Comment by Charles He on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-09T00:20:13.739Z · EA · GW

Thanks for pointing this out!

Comment by Charles He on Most research/advocacy charities are not scalable · 2021-08-09T00:19:56.136Z · EA · GW

Thank you for pointing this out.

You are right, and I think maybe even a reasonable guess is that CSET funding is starting out at less than $10M a year.

Comment by Charles He on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T21:58:26.378Z · EA · GW

I don't understand. Can you explain more what this project would do and how it would create change?

Also, this project seems to involve commitment of i) hundreds of millions of dollars of funding and ii) reliable guarantees that these will be used cost effectively.

These (extraordinarily) strong promises are structurally necessary and also seem achievable only through "centralization".

Given this centralization, what is the function or purpose of the NFT?

(Note that my question isn't about technical knowledge of "blockchain" or "NFTs"; you can assume gears-level knowledge of them and their instantiations up through 2020.)

Comment by Charles He on Most research/advocacy charities are not scalable · 2021-08-07T21:27:13.954Z · EA · GW

When Benjamin_Todd wanted to encourage new projects by mentioning $100M+ size orgs and CSET, my take was that he wanted to increase awareness of an important class of orgs that can now be built.

In this spirit, I think there might be some perspectives not yet mentioned in the ensuing discussions:

 

1. Projects requiring $100M+ of capital/talent have different patterns of founding and success

There may be reasons why building such $100M+ projects is different both from the many smaller, "hits-based" projects Open Phil funds (since a high chance of failure is unacceptable) and from GiveWell-style interventions.

One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved:

Here are examples of members of the founding team of OpenAI and CSET:

CSET - Jason Matheny - https://en.wikipedia.org/wiki/Jason_Gaverick_Matheny

OpenAI - Sam Altman - https://en.wikipedia.org/wiki/Sam_Altman

If you look at these profiles, I think you can infer that if you have an org that is capped at $10M, or has to internalize a GiveWell-style cost-effectiveness aesthetic, this wouldn't work and nothing would be founded. The people wouldn't be interested (as another data point, see the $1M salaries at OpenAI).

 

2. Skillset and training patterns might differ than previous patterns used in the EA movement

I think it's important to add nuance to an 80,000 Hours-style article of "get $100M+ org skills":

Being an executive at a charity that delivers a tangible product requires different skills to running a research or advocacy charity. A smaller charity will likely need to recruit all-rounders who are pretty good at strategy, finance, communications and more. In contrast, in a $100 million you will also need people with specialized skills and experience in areas like negotiating contracts or managing supply chains.

Note that being good at the most senior levels usually involves mastering, or being fluent in, many smaller, lower-status skills. 

As evidence, when you work together with senior leaders, you often see them flaunting or actively using these skills even when they apparently don't have to. 

This is because the gears-level knowledge improves judgement of all decisions (e.g. "kicking tires"/"tasting the soup"). 

Also, the most important skill of senior leaders is fostering and selecting staff and other leaders, and again, gears-level observation of these skills is essential to such judgement.

specialized skills and experience in areas like negotiating contracts or managing supply chains.

Note that in a $100M+ org, these specialized skills can be fungible in a way that "communication" or "strategy" is not.

If you want to start or join an EA charity that can scale to $100 million per year, you should consider developing skills in managing large-scale projects in industry, government or another large charity in addition to building relationships and experience within the EA community.

Starting from the primary motivation of impact, and under the premise in Benjamin_Todd's statement, I think we would expect the goal to be creating these big projects within 3 to 5 years. 

Some of these skills, especially those for founding a $100M+ org, would be extremely difficult to acquire within this time. 

There are other reasons to be cautious:

  • Note that approximately every ambitious person wants these skills and this profile, and this set of people is immensely larger than the set of people with the more specialized skill sets (ML, science, economics, policy) that have been encouraged in the past.
  • The skills are hard to observe (outputs like papers or talks are far less substantive, and blogging/internet discussion is often looked down on).
  • The skill sets and characters involved can be orthogonal or opposed to EA traits such as conscientiousness or truth seeking.
  • Related to the above, free-riding and other behavior that pools with altruism is often used to mask very conventional ambition (see Theranos and, from some points of view, approximately every SV startup).

I guess my point is that I don't want to see EAs get Rickon'd by running in a straight line as a consequence of these discussions.

 

Note that underlying all of this is a worldview that views founder effects/relationships/leadership as critical and the founders as not fungible. 

It's important to explicitly notice this, as this worldview may be very valid for some interventions but not for others. 

It is easy for these worldviews to spill over harmfully, especially if packaged with the high status we might expect to be associated with new EA megaprojects.

 

3. Pools of EA leaders already exist

I also think there exists a large pool of EA-aligned people (across all cause areas/worldviews) who have the judgement to lead such orgs but may not feel fully comfortable creating and carrying them from scratch. 

Expanding on this, I mean that, conditional on seeing an org with them in the top role, I would trust the org and its alignment. However, these people may not want to work with the required intensity or deal with the operational and political issues (e.g. putting down activist revolts, handling noxious patterns such as "let fires burn", and winning the two games of funding and impact).

This might leave open important opportunities related to training and other areas of support.

Comment by Charles He on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-06T22:55:13.632Z · EA · GW

Ok, probably more relevant is the OLPC project. Here is an extremely readable overview.

Honestly, many of the projects in the thread are even more susceptible to the same flaws that affected these infrastructure projects. Bridges and dams are far more tangible, and benefit from deep pools of experience.

Related to the bigger goal, I think few people here believe the value of this thread is in brainstorming a specific project proposal. 

Rather, there's lots of other value, e.g. in seeing whether any ideas or domains pop out that might help further discussion, and in surfacing knowledge of existing projects and experts.

(There's also a perspective that is a bit snobby and looks down on big, grandiose planning).