Posts

EA Funds donation platform is moving to Giving What We Can 2022-05-23T03:57:51.015Z
EA Librarian Update 2022-03-11T01:22:58.969Z
What does orthogonality mean? (EA Librarian) 2022-03-11T01:18:43.687Z
What are the strongest arguments against working on existential risk? (EA Librarian) 2022-03-11T01:14:49.854Z
What are the different types of longtermisms? (EA Librarian) 2022-03-11T01:09:48.750Z
EA Librarian: CEA wants to help answer your EA questions! 2022-01-17T17:22:07.404Z
calebp's Shortform 2021-12-19T12:31:29.542Z
Looking for more 'PlayPumps' like examples 2021-05-28T10:32:25.340Z
Why We Think Tobacco Tax Advocacy Could be More Cost-Effective than AMF 2020-03-05T15:03:47.642Z
Tackling the Largest Cause of Death Worldwide: Good Policies Update on Tobacco Taxes 2020-01-28T11:15:10.784Z
Introducing Good Policies: A new charity promoting behaviour change interventions 2019-11-18T15:08:56.793Z

Comments

Comment by calebp on Criticism of the 80k job board listing strategy · 2022-09-22T22:54:20.656Z · EA · GW

> If somebody can't evaluate jobs on the job board for themselves, I'm not that confident that they'll take a good path regardless. People have a tendency to try to offload thinking to places like 80k, and I actually think it could be bad if 80k made it easier to do that on extremely fine grained topics like individual jobs.

I often agree with something in this direction in specific cases, where some skill is present in both choosing a job and performing it. I think job choosing is often largely about 'having good judgement and caring a lot', whereas doing well in a job often does not rely on 'having good judgement'.

I think there are many examples of solid software engineers, operations staff, marketers, etc. where having good judgement does not seem to be particularly important for their role (although that's not to say good judgement and these roles are anticorrelated, or that good judgement isn't ever required here).

Comment by calebp on Criticism of the 80k job board listing strategy · 2022-09-22T22:46:00.006Z · EA · GW

I think it is kind of odd to say that the setup of the jobs board is fine but the perspective needs shifting, as 80k are by far the best positioned to change the perspective people have of the jobs board.

I am not confident that 80k are making a bad trade-off here; the current setup may well be close to optimal given the trade-offs (including the time cost of bothering to optimise these things). But I am a bit averse to attitudes of 'it's not this one org's problem, everyone else needs to change' when it seems more efficient to address issues at the source.

Comment by calebp on CEA Ops is now EV Ops · 2022-09-13T21:11:32.142Z · EA · GW

I know a few EAs amongst their fellows as well but I have never heard Emergent Ventures referred to as EV in practice, so it seems fine to me.

Comment by calebp on [Link post] Optimistic “Longtermism” Is Terrible For Animals · 2022-09-07T22:16:30.143Z · EA · GW

Before I read this, I took it mostly as a given that most people's mainline scenario for astronomical numbers of people involves predominantly digital people. If this is your mainline scenario, the arguments for astronomical amounts of animal suffering seem much weaker (I think).

Comment by calebp on 'Psychology of Effective Altruism' course syllabus · 2022-09-07T22:12:54.197Z · EA · GW

Could the syllabus be uploaded as an appendix to this post? 

Otherwise, you could make it a PDF/Google Doc to solve any rendering issues and ensure that people can read it on mobile devices.

Comment by calebp on EAs underestimate uncertainty in cause prioritisation · 2022-08-30T15:33:21.168Z · EA · GW

No, I mean roughly the total number of cause areas. 

It's a bit different from the total number of causes, as each cause area adds more diversity if it is uncorrelated with other cause areas. Maybe a better operationalisation is 'total amount of the cause area space covered'.

Comment by calebp on calebp's Shortform · 2022-08-26T16:42:03.971Z · EA · GW

One of my criticisms of criticisms

I often see public criticisms of EA orgs claiming poor execution on some object-level activity or falling short on some aspect of the activity (e.g. my shortform about the 80k jobs board). I think this is often unproductive.

In general I think we want to give feedback to change the organisation's policy (decision-making algorithm), and maybe the EA movement's policy. When you publicly criticise an org on some activity, you should be aware that you are disincentivising the org from generally doing stuff.

Imagine the case where the org was strategically choosing between scrappily running a project to get data and some of the upside value, as opposed to carefully planning and failing to execute fully. I think you should react differently in these two cases, and from the outside it is hard to know which situation the org was in.

If we also criticised orgs for not doing enough stuff I might feel differently, but this is an extremely hard criticism to make unless you are on the inside. I'd only trust a few people who didn't have inside information to do this kind of analysis.

Maybe a good idea would be to describe the amount of resources that would have had to go into the project for you to see the outcome as reasonably successful? Idk, it seems hard to be well calibrated.

I expect some people to react negatively to this and think that I am generally discouraging of criticism. I feel moderately about most criticism: it's neither particularly helpful nor particularly unhelpful. The few pieces of thoughtful criticism I see written up are, I think, very valuable, but thoughtful criticism in my view is hard to come by and requires substantial effort.

Comment by calebp on The EAIF is discontinuing most university group funding · 2022-08-26T09:46:43.378Z · EA · GW

Yes, I'll ask Max to clarify that in the first line.

Comment by calebp on calebp's Shortform · 2022-08-25T20:36:57.286Z · EA · GW

Oh this is a cool idea! I endorse this on the current margin and think it's cool that you are trying this out.

I think that ideally a high-context person/org could do the curation and split this into a bunch of different categories based on their view (ideally this would be pretty opinionated/inside-viewy).

Comment by calebp on calebp's Shortform · 2022-08-25T17:29:10.616Z · EA · GW

The 80k job board has too much variance.

(Quickly written, will probably edit at some point in future)

Jobs on the main 80k job board range from (in my estimation) negligible value to amongst the best opportunities I'm aware of. I have also seen a few jobs that I think are probably actively harmful (e.g. token alignment orgs who are trying to build AGI where the founders haven't thought carefully about alignment - based on my conversations with them).

I think a helpful orientation towards jobs on the jobs board is 'at least one person with EA values who happens to work at 80k thinks this is worth signal boosting', and NOT 'EA/80k endorses all of these jobs without a lot more thought from potential applicants'.

Jobs are also on the board for a few different reasons (e.g. building career capital vs direct impact vs ...), and there isn't much info about why a given job is there in the first place.

I think 80k does try to give off more of this vibe than people pick up on. I don't mean to imply that they are falling short in an obvious way.

I also think that the jobs board is more influential than 80k thinks. Explicit endorsements of organisations from core EA orgs are pretty rare, and I think they'd be surprised how many young EAs over-update on their suggestions (though I'm only medium confidence that it's this influential).

My concrete improvement would be to separate jobs into a few different boards according to the degree to which they endorse the organisation.

One thing I find slightly frustrating is that the response I have heard from 80k staff to this is that the main reason they don't do it is around managing relationships with the organisations (which could be valid). Idk if it's the right call, but I think it's a little sus; I think people are too quick to jump to the nice thing that doesn't make them feel uncomfortable over the impact-maximising thing (pin to write more about this in future).

One error that I think I'm making is criticising an org for doing a thing that is probably much better than not doing the thing, even if I think it's leaving some value on the table. I think that this is kind of unhealthy and incentivises inaction. I'm not sure what to do about this other than to flag that I think 80k is great, as is most of the stuff they do, and that I'd rather orgs had a policy of occasionally producing things that I feel moderately about, if this helps them do a bunch of cool stuff, than underperform and not get much done (pin to write more about this in future).

Comment by calebp on EAs underestimate uncertainty in cause prioritisation · 2022-08-25T06:47:33.491Z · EA · GW

Thanks for writing this!

I think you make some reasonable points in your post, but I don't think that you make a strong argument for what appears to be your central point, that more uncertainty would and should lead to greater diversity in cause areas.

I think I'd like to see your models for the following points to buy your conclusion:

  • How much EA uncertainty does the current amount of diversity predict that we have, and is this less than you think we 'should' have? My sense is that you're getting more of a vibe that we should have more causes, but

  • Why does more diversity fall out of more uncertainty? This seems to kind of be assumed, but I think the only argument made here was the timelines thing, which feels like the wrong way to think about this (at least to me).

  • A few concrete claims about uncertainty over some crux and why you think that means we are missing [specific cause area].

(You do point to a few reasons why this diversity of causes may not exist, which I think is very helpful - although I probably disagree with the object-level takes.)

Comment by calebp on calebp's Shortform · 2022-08-23T08:28:42.100Z · EA · GW

Why handing over vision is hard.

I often see projects of the form [come up with some ideas] -> [find people to execute on ideas] -> [hand over the project].

I haven't really seen this work very much in practice. I have two hypotheses for why.

  1. The skills required to come up with great projects are pretty well correlated with the skills required to execute on them. If someone wasn't able to come up with the idea in the first place, it's evidence against them having the skills to execute well on it.

  2. Executing well looks less like firing a cannon and more like deploying a heat-seeking missile. In reality most projects are a sequence of decisions that build on each other, and the executors need to have the underlying algorithm to keep the project on track. In general, when someone explains a project they communicate roughly where the target is and the initial direction to aim in, but it's much harder to hand off the algorithm that keeps the missile on track.

I'm not saying separating out ideas and execution is impossible, just that it's really hard, and good executors are rare and very valuable. Good ideas are cheap and easy to come by, but good execution is expensive.

A formula that I see work well more often is [person has idea] -> [person executes well on their own idea until they are doing something fairly repetitive or otherwise hand-over-able] -> [person hands over project to competent executor].

Comment by calebp on Long-Term Future Fund: December 2021 grant recommendations · 2022-08-21T18:49:43.915Z · EA · GW

Thanks for asking this - it didn't feel rude and I think it's a very reasonable question. I think that this report was released much later than we would have liked.

Firstly, I want to clarify that EA Funds is not part of CEA; it was spun out a few years ago and is now run by me, whereas CEA is run by Max Dalton. Asya Bergal chairs the LTFF.

Asya may want to add more information below, but my take is that EA Funds is bottlenecked on grantmaking capacity as well as good applications. Our goal is to make excellent grants, and writing these reports trades off against grantmaking capacity. If we had more time I expect we'd put out these reports more quickly, but I'm keen to try and protect the time of our part-time fund managers as much as possible. I would happily hire more part-time fund managers, but we have found it hard to find people who are at our current bar, and we have a reasonable amount of fund manager turnover (as our fund managers pursue other valuable projects).

We do have one assistant fund manager on the EAIF and are hiring some more, but I don't expect them to speed up this process very much (as the fund managers themselves need to write up why they decided to fund the project). We will soon have a public grants database listing each project we fund, but I'm less excited about just reporting our grantmaking as opposed to explaining our reasoning (as most of my theory of change for why these reports are useful is more around improving the EA community's project taste or being transparent in a high-fidelity way).

I'm a bit confused about why people aren't sure whether it's worth their time to apply when the form takes less than an hour and people can apply for arbitrarily large amounts of money; the EV/hour seems very high (based on acceptance rates in previous reports).

Another factor, which I expect to get pushback on, is that being transparent just ends up being very operationally costly, and it's not obvious that this is the best use of our time relative to supporting grantees, approving more grants, or trying to solicit better applications. Also, a large proportion of our funding comes from Open Phil, which in my view does decrease the requirement to be transparent, outside of trying to encourage good community norms and steer future EA projects.

Comment by calebp on Ok Doomer! SRM and Catastrophic Risk Podcast · 2022-08-20T20:37:32.794Z · EA · GW

Have you considered paying for the podcast to be transcribed? You may get more engagement and discussion here if it were.

I think there are digital devices that do a pretty good job for relatively little money but I haven't looked into it properly.

Comment by calebp on Important, actionable research questions for the most important century · 2022-08-18T14:27:17.197Z · EA · GW

If anyone ended up working on these questions as a result of this post, I would be interested in asking you a few questions about your experience. So far I haven't encountered many people who actually decided to put substantial effort into tackling these questions, but I have seen a lot of people who are supportive of others trying.

I am thinking about grantmaking programs that might support people trying out this kind of research, or encourage people to try it out.

You can message me on the forum or at caleb.parikh [at] centreforeffectivealtruism.org.

Comment by calebp on Public reports are now optional for EA Funds grantees · 2022-08-17T16:15:04.694Z · EA · GW

I think that this would probably be fully funged by other donors, as we have a very small number of grants that aren't publicly reported and a relatively small proportion of donors provide the majority of our funding.

That said, GWWC now manages the donations side of EA Funds, and I can request they add this feature if I see more demand for it (it will create some operational overhead on our side).

Comment by calebp on Public reports are now optional for EA Funds grantees · 2022-08-17T14:49:57.812Z · EA · GW

Thanks for sharing your concern!

The vast majority of projects do not opt out of public reporting, and as a charity, our trustees do have oversight over large grants that we make.

As Linch said, I do think that this change to our requirements does require you to place some more trust in our grantmakers, but I still think, due to the sensitive nature of some of our grants, that this is the right call.

Comment by calebp on EA Librarian: CEA wants to help answer your EA questions! · 2022-08-17T14:44:44.352Z · EA · GW

The project is no longer active. I deactivated the form but didn't update this post, so I have added this info at the top now.

Thanks for catching this!

Comment by calebp on Announcing the Longtermism Fund · 2022-08-12T03:55:11.595Z · EA · GW

I do agree with GWWC here and have been involved in some of the strategic decision-making that led to launching this new fund. I'm excited to have a donation option that is less weird than the LTFF for longtermists, but still (like GWWC) see a lot of value in both donation opportunities existing.

I think that excellent but illegible projects already have (in my probably biased opinion) good funding options through both the LTFF and the FTX regranting program.

Comment by calebp on calebp's Shortform · 2022-08-02T18:33:51.485Z · EA · GW

(crosspost of a comment on imposter syndrome that I sometimes refer to)

I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredibly useful. 

So one strategy is to just try and send lots of information that might help the community work out whether I can be useful, into the world (by doing my job, taking actions in the world, writing posts, talking to people ...) and trust the EA community to be tracking some of the right things. 

I find it helpful to sometimes be in a mindset of "helping people reject me is good, because if they reject me then it was probably positive EV, and that means that the EA community is winning, therefore I am winning (even if I am locally not winning)."

Comment by calebp on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-29T00:16:16.418Z · EA · GW

> the organizations you listed are also highly selective so only a few people will end up working at them.

Which organisations? I think I only mentioned CFAR, which I am not sure is very selective right now (due to not running hiring rounds).

Comment by calebp on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-29T00:14:47.285Z · EA · GW

> .... But the number of people we need working on them should probably be more limited than the current trajectory ....

> I'll therefore ask much more specifically, what are the most intellectually interesting topics in Effective Altruism, and then I'll suggest that we should be doing less work on them - and list a few concrete suggestions for how to do that.

I feel like the OP was mostly talking about direct work. Even if they weren't, I think most of the impact that EA will have will eventually cash out as direct work, so it would be a bit surprising if 'EA attention' and direct work were not very correlated AND we were losing a lot of impact because of problems in the attention bit and not the direct work bit.

Comment by calebp on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-28T14:08:57.697Z · EA · GW

(I think CSER has struggled to get funding for some of its work, but this seems like a special case so I don't think it's much of a counterargument.)

I think if this claim is true, it's less because of motivated reasoning/the status of interesting work, and more because object-level research is correlated with a bunch of things that make it harder to fund.

I still don't think I actually buy this claim though; it seems, if anything, easier to get funding to do prosaic alignment/strategy-type work than theory (for example).

Comment by calebp on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-28T13:17:13.679Z · EA · GW

I agree in principle with this argument but ....

> Here are some of my concrete candidates for most interesting work: infinite ethics, theoretical AI safety, rationality techniques, and writing high-level critiques of EA[1].

I really don't think there are many people at all putting substantial resources into any of these areas.

  • Theoretical AIS work seems really important, and depending on your definition of 'theoretical' there are probably 20-100 FTE working on this per year. I would happily have at least 10-100x this amount of work if the level of quality could be at least maintained.
  • Who is actually working on infinite ethics?? I don't think this has pulled a lot of EA talent; I'd guess fewer than 5 FTE.
  • Rationality techniques - CFAR basically doesn't exist and has fewer than 5 FTE. I'd guess most of this work is done on the side now, but it certainly does not seem to be overdone by EAs.
  • Criticisms - I think this is more reasonable. I don't think lots of time is spent here, but a lot of attention is, and I generally haven't found criticism posts to be particularly insight-generating (although this one is more interesting).

My suspicion is that this "people are generally overprioritising interesting things" claim sounds nice but won't hold up to empirical investigation (at least on my worldview).

Comment by calebp on Hiring Programmers in Academia · 2022-07-27T01:34:33.817Z · EA · GW


> That the money is coming from a grant doesn't resolve this: the university would still not let you pay a higher salary because you need to go through university HR and follow their approach to compensation.

Would the following solution work?
1. academic applies for funding but asks for it not to be paid out until they make a hire
2. academic finds a hire
3. uni pays the hire as normal
4. funder tops up the hire's salary to market rate (or whatever was agreed on)

Alternatively, you could just get rid of step 3, but maybe the hire loses benefits like a uni affiliation, pension contributions, etc.

Comment by calebp on calebp's Shortform · 2022-07-25T20:51:43.967Z · EA · GW

More EAs should give rationalists a chance

My first impression of rationalists came from meeting them at an AI safety retreat a few years ago. I had a bunch of conversations that were decidedly mixed and made me think that they weren't taking the project of doing a large amount of good seriously, weren't reasoning carefully (as opposed to just parroting rationalist memes), and weren't any better at winning than the standard EA types that I felt were more 'my crowd'.

I now think that I just met the wrong rationalists early on. The rationalists that I most admire:

  • Care deeply about their values
  • Are careful reasoners, and actually want to work out what is true
  • Are able to disentangle their views from themselves, making meaningful conversations much more accessible
  • Are willing to seriously consider weird views that run against their current views

Calling yourself a rationalist or EA is a very cheap signal, and I made an error early on (insensitivity to small sample sizes etc.) in dismissing their community. Whilst there is still some stuff that I would change, I think that the median EA could move several steps in a 'rationalist' direction.

Having a rationalist/scout mindset + caring a lot about impact are pretty correlated with me finding someone promising. It's not essential to having a lot of impact, but I am starting to think that EA is doing the altruism (A) part of EA super well and the rationalists are doing the effective (E) part of EA super well.

My go-to resources are probably:

  • The Scout Mindset - Julia Galef
  • The Codex - Scott Alexander
  • The Sequences Highlights - Eliezer Yudkowsky/LessWrong
  • The LessWrong highlights

Comment by calebp on calebp's Shortform · 2022-07-22T09:58:57.215Z · EA · GW

‘EA is too elitist’ criticisms seem to be more valid from a neartermist perspective than a longtermist one

I sometimes see criticisms around

  • EA is too elitist
  • EA is too focussed on exceptionally smart people

I do think that you can have a very outsized impact even if you're not exceptionally smart, dedicated, driven, etc. However, I think that from some perspectives, focussing on outliery talent seems to be the right move.

A few quick claims that push towards focusing on attracting outliers:

  • The main problems that we have are technical in nature (particularly AI safety)
  • Most progress on technical problems historically seems to be attributable to a surprisingly small set of the total people working on the problem
  • We currently don't have a large fraction of the brightest minds working on what I see as the most important problems

If you are more interested in neartermist cause areas I think it's reasonable to place less emphasis on finding exceptionally smart people. Whilst I do think that very outliery-trait people have a better shot at very outliery impact, I don't think that there is as much of an advantage for exceptionally smart people over very smart people.

(So if you can get a lot of pretty smart people for the price of one exceptionally smart person then it seems more likely to be worth it.)

This seems mostly true to me by observation, but I have some intuition that motivates this claim.

  • AIS is a more novel problem than most neartermist causes; there's a lot of work going into getting more surface area on the problem as opposed to moving down a well-defined path.
  • Being more novel also makes the problem more first-mover-y, so it seems important to start with a high density of good people to push it onto good trajectories.
  • The resources for getting up to speed on the latest stuff seem less good than in more established fields.

Comment by calebp on calebp's Shortform · 2022-07-21T08:59:57.623Z · EA · GW

I adjust upwards on EAs who haven't come from excellent groups

I spend a substantial amount of my time interacting with community builders and doing things that look like community building.

It's pretty hard to get a sense of someone's values, epistemics, agency .... by looking at their CV. A lot of my impression of people that are fairly new to the community is based on a few fairly short conversations at events. I think this is true for many community builders.

I worry that there are some people who were introduced to some set of good ideas first, and then people use this as a proxy for how good their reasoning skills are. On the other hand, it's pretty easy to be in an EA group where people haven't thought hard about different cause areas/interventions/... and come away with the mean take, which isn't very good despite being relatively good reasoning-wise.

When I speak to EAs I haven't met before I try extra hard to get a sense of why they think x and how reasonable a take that is, given their environment. This sometimes means I am underwhelmed by people who come from excellent EA groups, and impressed by people who come from mediocre ones.

You end up winning more Caleb points if your previous EA environment was 'bad' in some sense, all else equal.

(I don't defend why I think a lot of the causal arrow points from the quality of the EA environment to the quality of the EA - I may write something on this another time.)

Comment by calebp on Announcing the Center for Space Governance · 2022-07-10T18:29:03.777Z · EA · GW

Sounds exciting.

The main thing that I am interested in when I read announcement posts or websites from very young orgs is who is on the core team.

I don't know if this has been left out intentionally, but if you did want to add this to the post I'd be interested in seeing that.

Comment by calebp on Critiques of EA that I want to read · 2022-06-21T15:38:52.639Z · EA · GW

I found this helpful and I feel like it resolved some cruxes for me. Thank you for taking the time to respond!

Comment by calebp on Critiques of EA that I want to read · 2022-06-20T21:56:06.571Z · EA · GW

Thanks for writing this post, I think it raises some interesting points and I'd be interested in reading several of these critiques.

(Adding a few thoughts on some of the funding-related things, but I encourage critiques of these points if someone wants to write them.)

> Sometimes funders try to play 5d chess with each other to avoid funging each other’s donations, and this results in the charity not getting enough funding.

I'm not aware of this happening very much, at least between EA Funds, Open Phil and FTX (but it's plausible to me that this does happen occasionally). In general I think that funders have a preference to just try to be transparent with each other and cooperate. I think occasionally this will stop organisations being funded, but I think it's pretty reasonable to not want to fund org x for project y given that they already have money for it from someone else, or to take actions in this direction. I am aware of quite a few projects that have been funded by both Open Phil and FTX - I'm not sure whether this is much evidence against your position or is part of the 5d chess.

> Sometimes funders don’t provide much clarity on the amount of time they intend to fund organizations for, which makes it harder to operate the organization long-term or plan for the future. Lots of EA funding mechanisms seem basically based on building relationships with funders, which makes it much harder to start a new organization in the space if you’re an outsider.

This is a thing I've heard a few times from grantees, and I think there is some truth to it, although most funding applications that I see are time-bounded anyway: we tend to just fund for the lifetime of specific projects, or orgs will apply for x years' worth of costs and we provide funding for that with the expectation that they will ask for more if they need it. If there are better structures that you think are easier to implement, I'd be interested in hearing them; perhaps you'd prefer funding for a longer period of time conditional on meeting certain goals? I think relationships with funders can be helpful, but I think it is relatively rarely the difference between people receiving funding and not receiving it within EA (although this is pretty low confidence). I can think of lots of people that we have decided against funding who have pretty good professional/personal relationships with funders. To be clear, I'm just saying that pre-existing relationships are NOT required to get funding and they do not substantially increase the chances of being funded (in my estimation).

> Relatedly, it’s harder to build these relationships without knowing a large EA vocabulary, which seems bad for bringing in new people. These interactions seem addressable through funders basically thinking less about how other funders are acting, and also working on longer time-horizons with grants to organizations.

I think I disagree that the main issue is vocabulary; maybe there are cultural differences? One way in which I could imagine non-EAs struggling to get funding for good projects is if they over-inflate their accomplishments or set unrealistic goals, as might be expected when applying to other funders; I'd probably think they had worse judgement than people who are more transparent about their shortcomings and strengths, or worry that they were trying to con me in other parts of the application. This seems reasonable to me though; I probably do want to encourage people to be transparent.

Re: funder brain drain

I'm not super convinced by this. I do think grantmaking is impactful, and I'm not sure it's particularly high status relative to working at other EA orgs (e.g. I'd be surprised if people were turning down roles at Redwood or ARC to work at OPP because of status - but maybe you have similar concerns about these orgs?). Most grantmakers have pretty small teams, so it's plausibly not that big an issue anyway, although I agree that if these people weren't doing grantmaking they'd probably do useful things elsewhere.

Comment by calebp on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-10T22:02:26.460Z · EA · GW

I know this isn't the point of the thread but I feel the need to say that if people think a better laptop will increase their productivity they should apply to the EAIF.

https://funds.effectivealtruism.org/funds/ea-community

(If you work at an EA org, I think that your organisation normally should pay unless they aren't able to for legal/bureaucratic reasons)

Comment by calebp on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T13:58:41.460Z · EA · GW

I think that Holden assigns more than a 10% chance to AGI in the next 15 years; the post that you linked to says 'more than a 10% chance we'll see transformative AI within 15 years'.

Comment by calebp on Sam Bankman-Fried should spend $100M on short-term projects now · 2022-05-31T19:29:48.522Z · EA · GW

SBF/FTX already gives quite a lot to neartermist projects afaict. He's also pretty open about being vegan and living a frugal lifestyle. I'm not saying that this mitigates optics issues, just that I expect to see diminishing marginal returns on this kind of donation wrt optics gains.

https://ftx.com/foundation

Comment by calebp on Some unfun lessons I learned as a junior grantmaker · 2022-05-24T00:19:24.812Z · EA · GW

The policy that you referenced is the most up-to-date policy that we have, but I do intend to publish a polished version of the COI policy on our site at some point. I am not sure right now when I will have the capacity for this, but thank you for the nudge.

Comment by calebp on Some unfun lessons I learned as a junior grantmaker · 2022-05-23T22:08:01.049Z · EA · GW

My impression is that Linch's description of their actions above is consistent with our current COI policy. The fund chairs and I have some visibility over COI matters, and fund managers often flag cases when they are unsure what the policy should be; then I or the fund chairs can weigh in with our suggestion.

Often we suggest proceeding as usual or a partial but not full recusal (e.g. the fund manager should participate in discussion but not vote on the grant themselves).

Comment by calebp on Deferring · 2022-05-23T01:55:30.977Z · EA · GW

(I think that the pushing-towards-a-score thing wasn't a crux in downvoting; I think there are lots of reasons to downvote things that aren't harmful, as outlined in the 'how to use the forum' post/moderator guidelines.)

I think that karma is supposed to be a proxy for the relative value that a post provides.

I'm not sure what you mean by zero-sum here, but I would have thought that the control-system-type approach is better, as the steady-state values will be pushed towards the mean of what users see as the true value of the post. I think that this score + the total number of votes is quite easy to interpret.

The everyone-voting-independently thing performs poorly when some posts have many more views than others (so it seems to be tracking something more like how many people saw it and liked it, rather than whether the post is high quality).

I think I may be misunderstanding your concern, but the control-system approach seems, on the surface, much better to me; I am keen to find the crux here, if there is one.
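To illustrate the difference, here's a toy simulation (my own sketch with made-up numbers, not how the Forum's karma system actually works): under 'everyone votes independently' the score scales with how many people saw the post, whereas under 'vote to push the score towards what you think it should be' the score settles near the mean perceived value regardless of view count.

```python
import random

def independent_votes(true_value, n_viewers):
    # Rule (a): each viewer independently upvotes with probability `true_value`.
    return sum(1 for _ in range(n_viewers) if random.random() < true_value)

def target_seeking_votes(true_value, n_viewers):
    # Rule (b): each viewer nudges the score towards their (noisy) estimate of
    # what the post's karma "should" be, or abstains if it already looks right.
    score = 0
    for _ in range(n_viewers):
        perceived = true_value * 100 + random.gauss(0, 10)  # assumed ~100-point karma scale
        if score < perceived:
            score += 1
        elif score > perceived:
            score -= 1
    return score

random.seed(0)
for viewers in (100, 1000):
    print(viewers, independent_votes(0.6, viewers), target_seeking_votes(0.6, viewers))
# Rule (a) gives roughly 60 then 600 as views go 100 -> 1000; rule (b) stays near ~60 both times.
```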

Comment by calebp on Deferring · 2022-05-22T17:44:46.366Z · EA · GW

I don't think we should only downvote harmful things; we should instead look at the amount of karma and use our votes to push the score to the value we think the post should be at.

I downvoted the comment because:

  • Saying things like "... obviously push an agenda ..." and "I'm pretty sure anyone reading this ..." has persuasion-y vibes which I don't like.
  • Saying "this post says people should defer to authority" is a bit of a straw/weak man and isn't very charitable.

Comment by calebp on Deferring · 2022-05-13T16:46:10.325Z · EA · GW

I think I roughly agree, although I haven't thought much about the epistemic vs authority deferring thing before.

Idk if you were too terse; it seemed fine to me. That said, I would have predicted this would be around 70 karma by now, so I may be poorly calibrated on what is appealing to other people.

Comment by calebp on Deferring · 2022-05-13T13:49:33.047Z · EA · GW

Thanks for writing this, I thought it was great.

(Apologies if this is already included; I have checked the post a few times, but it's possible that I missed where it's mentioned.)

Edit: I think you mention this in social deferring (point 2).

One dynamic that I'm particularly worried about is belief double counting due to deference. You can imagine the following scenario:

Jemima: "People who's name starts with J are generally super smart."

Mark: [is a bit confused, but defers because Jemima has more experience with having a name that starts with J] "hmm, that seems right"

[Mary joins conversation]

Mary: [hmm, seems odd, but 2 people think this and I'm just 1 person, so I should update towards their position] "hmm, I can believe that"

Bill: [hmm, seems odd, but 3 people think this and I'm just 1 person, so I should update towards their position] "hmm, I can believe that"

From Bill's perspective it looks like there are 3 pieces of evidence pointing in the direction of the hypothesis, but really there is just one piece (Jemima's experience) and a bunch of parroting.

I don't think we often have these literal conversations, but sometimes I feel confused and I find myself doing belief-aggregation-type things in conversations to make progress on some question. I think it's helpful to stop and be careful when making moves like "hmm, most people here seem to think x, therefore I should update in that direction", and to first check how much people are individually deferring to each other (or to someone upstream of them), both to form better beliefs myself and to avoid polluting the epistemic environment for others.

Distinguishing between your 'impression' and 'all-things-considered view' is helpful for this too.

Another way of saying this is that it can be hard to distinguish "great minds think alike" from "highly correlated error sources".
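To put rough numbers on the double counting (my own illustrative odds, not anything from the post): if each genuinely independent observation favours the hypothesis with a 2:1 likelihood ratio, three independent observations would justify 8:1 odds, but three statements that all trace back to Jemima's single observation only justify 2:1.

```python
# Toy sketch of belief double counting (illustrative numbers only).
def posterior_odds(prior_odds, likelihood_ratios):
    # Bayes in odds form: multiply the prior odds by each independent likelihood ratio.
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0  # even prior odds (1:1)

# What Bill thinks he has: three independent pieces of evidence, 2:1 each.
naive = posterior_odds(prior, [2, 2, 2])   # 8:1 odds, ~89% credence

# What actually happened: one observation (Jemima's), parroted twice.
correct = posterior_odds(prior, [2])       # 2:1 odds, ~67% credence

print(f"naive {naive:.0f}:1 vs correct {correct:.0f}:1")
```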

Comment by calebp on EA Tours of Service · 2022-05-10T19:15:33.986Z · EA · GW

Thanks for writing this, it's a cool idea.

I'll consider doing this when I next run a hiring round!

Comment by calebp on Effective altruism’s odd attitude to mental health · 2022-04-29T09:32:27.189Z · EA · GW

I think I agree with the general thrust of your post (that mental health may deserve more attention amongst neartermist EAs), but I don't think the anecdote you chose highlights much of a tension.

>  I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs.

I am excited about improving the mental health of EAs, primarily because I think that many EAs are doing valuable work that improves the lives of others, and good mental health is going to help them be more productive (I do also care about EAs being happy as much as I care about anyone being happy, but I expect the value produced from this to be much less than the value produced from those EAs' actions).

I care much less about the productivity benefits that we'd see from improving the mental health of people outside of the EA community (although of course I do think their mental health matters for other reasons).

So the above claim seems pretty reasonable to me. 

As an illustration, I can care about EAs having good laptops much more than I care about random people having good laptops, because I am much more sceptical that giving random people good laptops produces impact than I am about giving EAs good laptops.

Comment by calebp on Three Reflections from 101 EA Global Conversations · 2022-04-25T23:48:57.766Z · EA · GW

I really liked this post, one of the best things that I have read here in a while.

+1 for taking weird ideas seriously and considering wide action spaces being underrated.

Comment by calebp on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T13:08:13.621Z · EA · GW

This is a bit weird and not really a framing that I expect to be helpful for most people here; I recommend that you probably don't internalise the following, or maybe even read it. I think that it is worth making this comment partly for transparency and in case it is useful to a few people.

I have recently found it helpful to think about how important and difficult the problems I care about are and recognise that on priors I won't be good enough to solve them. That said, the EV of trying seems very very high, and people that can help solve them are probably incredibly useful. 

So one strategy is to just try and send lots of information that might help the community work out whether I can be useful, into the world (by doing my job, taking actions in the world, writing posts, talking to people ...) and trust the EA community to be tracking some of the right things. I find it helpful to sometimes be in a mindset of "helping people reject me is good, because if they reject me then it was probably positive EV, and that means that the EA community is winning, therefore I am winning (even if I am locally not winning)."

Comment by calebp on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T12:59:25.303Z · EA · GW

I think this might be one of my current hypotheses for why I am doing what I am doing.

Or maybe I think it's ~60% likely I'm ok at my job, and 40% likely I have fooled other people into thinking I'm ok at my job.

Comment by calebp on FTX/CEA - show us your numbers! · 2022-04-22T12:30:54.306Z · EA · GW

I, as an individual, would endorse someone hiring an MEL consultant to do this for the information value, and would also bet $100 on this not providing much value due to the analysis being poor.

Terms to be worked out, of course, but if someone was interested in hiring the low-context consultant, I'd be interested in working out the terms.

Comment by calebp on FTX/CEA - show us your numbers! · 2022-04-22T12:28:56.253Z · EA · GW

Fwiw, I personally would be excited about CEA spending much more on this at their current level of certainty if there were ways to mitigate optics, community health, and tail risk issues.

Comment by calebp on FTX/CEA - show us your numbers! · 2022-04-22T12:25:39.786Z · EA · GW

Oh right, I didn't pick up on the 'FTX said they'd like to see if this was popular' thing. This resolves part of this for me (at least on the FTX side, as opposed to the CEA side).

Comment by calebp on Is the EA Librarian still a thing? If so, what is the current turnaround? · 2022-04-21T16:45:44.363Z · EA · GW

Hi Jeremy, I'm very sorry about how slow the turnaround has been recently.

I've had very low capacity to manage the project after being ill, and I'm sorry that we haven't gotten back to you yet. I also don't feel like I can give a turnaround time right now, as some of the librarians have left recently.

We will certainly aim to answer all submitted questions, but I expect that I will close the form this/next week, at least until I work out a more sustainable model.

Comment by calebp on FTX/CEA - show us your numbers! · 2022-04-21T08:29:28.393Z · EA · GW

Broken out into a different comment so people can vote on it more clearly.

> In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.

I think that there are likely different epistemic standards between cause areas, such that this is a pretty complicated question, and people underappreciate how much of a challenge this is for the EA movement.