Posts

Funds are available to fund non-EA-branded groups 2021-07-21T01:08:10.308Z
EA Infrastructure Fund: Ask us anything! 2021-06-03T01:06:19.360Z
EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
Thoughts on whether we're living at the most influential time in history 2020-11-03T04:07:52.186Z
Some thoughts on EA outreach to high schoolers 2020-09-13T22:51:24.200Z
Buck's Shortform 2020-09-13T17:29:42.117Z
Some thoughts on deference and inside-view models 2020-05-28T05:37:14.979Z
My personal cruxes for working on AI safety 2020-02-13T07:11:46.803Z
Thoughts on doing good through non-standard EA career pathways 2019-12-30T02:06:03.032Z
"EA residencies" as an outreach activity 2019-11-17T05:08:42.119Z
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA 2019-11-15T22:44:17.606Z
A way of thinking about saving vs improving lives 2015-08-08T19:57:30.985Z

Comments

Comment by Buck on Buck's Shortform · 2021-07-23T22:07:01.332Z · EA · GW

Ah, fair.

Comment by Buck on Buck's Shortform · 2021-07-23T19:20:38.680Z · EA · GW

Yeah, but this pledge is kind of weird for an altruist to actually follow instead of donating more than the 10%. (Unless you think that almost everyone believes that most of the reason for them to do the GWWC pledge is to enforce the norm, and that this causes them to donate 10%, which is more than they'd otherwise donate.)

Comment by Buck on Buck's Shortform · 2021-07-23T17:05:09.501Z · EA · GW

[This is an excerpt from a longer post I'm writing]

Suppose someone’s utility function is

U = f(C) + D

where U is what they're optimizing, C is their personal consumption, f is their selfish welfare as a function of consumption (log is a classic choice for f), and D is their amount of donations.

Suppose that they have diminishing utility wrt ("with respect to") consumption (that is, df(C)/dC is strictly monotonically decreasing). Their marginal utility wrt donations is constant, and their marginal utility wrt consumption is decreasing. So there has to be some level of consumption at which they are indifferent between donating a marginal dollar and consuming it. Below this level of consumption, they'll prefer consuming dollars to donating them, and so they will always consume them. And above it, they'll prefer donating dollars to consuming them, and so will always donate them. On this model, the GWWC pledge should just ask you to input the C such that df(C)/dC is 1, and you'd pledge to donate everything above it and nothing below it.
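
To make the threshold concrete, here is a minimal sketch (Python, not from the original post) assuming f(C) = a·log(C), where a is a hypothetical scale parameter chosen purely for illustration. Since f'(C) = a/C and a marginal donated dollar is worth 1, the model implies a single cutoff C* = a: consume everything below it, donate everything above it.

```python
# Toy model of U = f(C) + D with f(C) = a * log(C).
# `a` is a hypothetical scale parameter (not from the post); since
# f'(C) = a / C and a marginal donated dollar is worth 1, the
# indifference point is C* = a.
a = 50_000            # hypothetical: indifferent at $50k of consumption
C_star = a

def optimal_split(income):
    """The policy the model implies: consume up to C*, donate the rest."""
    consumption = min(income, C_star)
    donation = max(0, income - C_star)
    return consumption, donation

for income in (30_000, 60_000, 100_000):
    print(income, optimal_split(income))
# 30000 (30000, 0)
# 60000 (50000, 10000)
# 100000 (50000, 50000)
```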

This is clearly not what happens. Why? I can think of a few reasons.

  • The above is what you get if the selfish and altruistic parts of you "negotiate" once, before you find out how high your salary is going to be. If instead you negotiate every year over how to split that year's resources between altruistic and selfish ends, you get something like what we see.
  • People aren’t scope sensitive about donations, and so donations also have diminishing marginal returns (because small ones are disproportionately good at making people think you’re good).
  • When you’re already donating a lot, other EAs will be less likely to hold consumption against you (perhaps because they want to incentivize rich and altruistic people to hang out in EA without feeling judged for only donating 90% of their $10M annual expenditure or whatever).
  • When you’re high income, expensive time-money tradeoffs like business class flights start looking better. And it’s often pretty hard to tell which purchases are time-money tradeoffs vs selfish consumption, and if your time is valuable enough, it’s not worth very much time to try to distinguish between these two categories.
  • Early-career people want to donate in order to set themselves up for a habit of donating later (and in order to signal altruism to their peers, which might be rational on both a community and individual level).
  • As you get more successful, your peers will be wealthier, and this will push you towards higher consumption. (You can think of this as just an expense that happens as a result of being more successful.)

I think that it seems potentially pretty suboptimal to have different levels of consumption at different times in your life. Like, suppose you’re going to have a $60k salary one year and a $100k salary the next. It would be better from both an altruistic and selfish perspective to concentrate your donations in the year you’ll be wealthier; it seems kind of unfortunate if people are unable to make these internal trades.


EDIT: Maybe a clearer way of saying my main point here: Suppose you're a person who likes being altruistic and likes consuming things, and you don't know how much money you're going to make next year. You'll be better off in expectation, from both a selfish and an altruistic perspective, if you decide in advance how much you're going to consume and donate however much you end up having above that. Doing anything other than this is Pareto worse.
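
As a quick numerical check of the salary example above (a sketch in Python; the only assumption beyond the post is log selfish welfare, though any concave f gives the same qualitative result): holding total donations fixed at $16k, concentrating them in the $100k year gives higher selfish utility than donating 10% in each year, so the fixed-fraction policy is Pareto worse.

```python
import math

def selfish_utility(consumptions):
    # f(C) = log(C) summed over years; log is just one concave choice.
    return sum(math.log(c) for c in consumptions)

incomes = [60_000, 100_000]

# Policy 1: donate 10% of income each year.
donations_1 = [y // 10 for y in incomes]                 # $6k, then $10k
consumption_1 = [y - d for y, d in zip(incomes, donations_1)]

# Policy 2: donate the same $16k total, all of it in the high-income year.
donations_2 = [0, 16_000]
consumption_2 = [y - d for y, d in zip(incomes, donations_2)]

assert sum(donations_1) == sum(donations_2)              # altruism held equal

print(selfish_utility(consumption_1))   # ~22.304
print(selfish_utility(consumption_2))   # ~22.341 -- selfishly better as well
```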

Comment by Buck on Buck's Shortform · 2021-07-23T00:34:33.390Z · EA · GW

[epistemic status: I'm like 80% sure I'm right here. Will probably post as a main post if no-one points out big holes in this argument, and people seem to think I phrased my points comprehensibly. Feel free to leave comments on the google doc here if that's easier.]

I think a lot of EAs are pretty confused about Shapley values and what they can do for you. In particular, Shapley values are basically irrelevant to coordination problems among a bunch of people who all have the same values. I want to talk about why.

So Shapley values are a solution to the following problem. You have a bunch of people who can work on a project together, and the project is going to end up making some total amount of profit, and you have to decide how to split the profit between the people who worked on the project. This is just a negotiation problem. 

One of the classic examples here is: you have a factory owner and a bunch of people who work in the factory. No money is made by this factory unless there's both a factory there and people who can work in the factory, and some total amount of profit is made by selling all the things that came out of the factory. But how should the profit be split between the owner and the factory workers? The Shapley value is the most natural and mathematically nice way of deciding on how much of the profit everyone gets to keep, based only on knowing how much profit would be produced given different subsets of the people who might work together, and ignoring all other facts about the situation.

Let's talk about why I don't think it's usually relevant. The coordination problem EAs are usually interested in is: Suppose we have a bunch of people, and we get to choose which of them take which roles or provide what funds to what organizations. How should these people make the decision of what to do?

As I said, the input to the Shapley value is the coalition value function, which, for every subset of the people you have, tells you how much total value would be produced in the case where just that subset tried to work together.

But if you already have this coalition value function, you've already solved the coordination problem and there’s no reason to actually calculate the Shapley value! If you know how much total value would be produced if everyone worked together, in realistic situations you must also know an optimal allocation of everyone’s effort. And so everyone can just do what that optimal allocation recommended.

Another way of phrasing this is that step 1 of calculating the Shapley value is to answer the question “what should everyone do” as well as a bunch of other questions of the form “what should everyone do, conditioned on only this subset of EAs existing”. But once you’ve done step 1, there’s no reason to go on to step 2.

A related claim is that the Shapley value is no better than any other solution to the bargaining problem. For example, instead of allocating credit according to the Shapley value, we could allocate credit according to the rule “we give everyone just barely enough credit that it’s worth it for them to participate in the globally optimal plan instead of doing something worse, and then all the leftover credit gets allocated to Buck”, and this would always produce the same real-life decisions as the Shapley value.

--

So I've been talking here about what you could call global Shapley values, where we consider every action of everyone in the whole world. And our measure of profit or value produced is how good the whole world actually ends up being. And you might have thought that you could apply Shapley values in a more local sense. You could imagine saying “let's just think about the value that will be produced by this particular project and try to figure out how to divide the impact among the people who are working on this project”. But any Shapley values that are calculated in that way are either going to make you do the wrong thing sometimes, or rely on solving the same global optimization problem as we were solving before. 

Let's talk first about how purely local Shapley values sometimes lead you to make the wrong decision. Suppose there's a project that requires two people and will produce $10,000 worth of value if they cooperate on it. By symmetry, the Shapley value for each of them will be $5,000.

Now let's suppose that one of them has an opportunity cost: they could have made $6,000 doing something else. Clearly, the two people should still do the $10,000 project rather than take the $6,000 alternative. But if they just made decisions based on the "local Shapley value", the person with the outside option would only be credited $5,000, which is less than the $6,000 they could make elsewhere, so they'd end up not doing the project. And that would make things overall worse. The moral of the story here is that the coalition profit function needs to be measured in terms of opportunity cost, which you can't calculate without reasoning globally. So in the case where one of the people involved had this $6,000 other thing they could have done with their time, the total profit generated by the project is actually only $4,000. Probably the best way of thinking about this is that you had to pay a $6,000 base salary to the person who could have made $6,000 doing something else, and then you split the $4k profit equally. And so one person ends up getting $8k and the other one ends up getting $2k.
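
Here is a minimal sketch of that arithmetic (Python; the code and the player labels A and B are mine, not from the post). The first coalition value function ignores B's $6,000 outside option and gives the $5k/$5k split; the second builds the outside option into the coalition values and recovers the $2k/$8k split described above.

```python
from itertools import permutations

def shapley_values(players, v):
    """Standard Shapley value: average marginal contribution over all orderings.
    `v` maps frozensets of players to the value that coalition can produce."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            totals[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: totals[p] / len(orderings) for p in players}

players = ["A", "B"]

# "Local" values: only the joint project is counted.
local_v = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0,
           frozenset("AB"): 10_000}

# "Global" values: B alone could make $6,000 doing something else.
global_v = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 6_000,
            frozenset("AB"): 10_000}

print(shapley_values(players, local_v))   # {'A': 5000.0, 'B': 5000.0}
print(shapley_values(players, global_v))  # {'A': 2000.0, 'B': 8000.0}
```

The "global" numbers are the same as paying B a $6,000 base salary and splitting the remaining $4k of profit equally, which matches the framing above.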

--

I think a lot of EAs are hoping that you can use Shapley values to get around a lot of these problems related to coordination and figuring out counterfactual impact and all this stuff. And I think you just basically can't at all. 

I think Shapley values are more likely to be relevant to cases where people have different values, because then you have something more like a normal negotiation problem, but even here, I think people overstate their relevance. Shapley values are just a descriptive claim about what might happen in the world rather than a normative claim about what should happen. In particular, they assume that everyone has equal bargaining power to start with, which doesn't seem particularly true.

I think the main way that Shapley values are relevant to coordination between people with different values is that they're kind of like a Schelling fair way of allocating stuff. Maybe you want to feel cooperative with other people and maybe you don't want to spend a lot of time going back and forth about how much everyone has to pay, and Shapley values are maybe a nice, fair solution to this. I haven’t thought this through properly yet.

In conclusion, Shapley values are AFAICT not relevant to figuring out how to coordinate between people who have the same goals.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-07-06T04:41:00.789Z · EA · GW

I am not sure. I think it’s pretty likely I would want to fund after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-07-05T17:13:58.132Z · EA · GW

I would personally be pretty down for funding reimbursements for past expenses.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-08T17:54:15.613Z · EA · GW

This is indeed my belief about ex ante impact. Thanks for the clarification.

Comment by Buck on Buck's Shortform · 2021-06-06T23:42:33.862Z · EA · GW

That might achieve the "these might be directly useful" and "produce interesting content" goals, if the reviewers knew how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don't. It wouldn't achieve any of the other goals, though.

Comment by Buck on Buck's Shortform · 2021-06-06T18:12:20.458Z · EA · GW

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing but that’s probably too complicated).
  • If I don’t want to give them the money, they can do whatever with the review.

What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:

  • Things directly related to traditional EA topics
  • Things about the world more generally. Eg macrohistory, how do governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
  • I think that books about self-help, productivity, or skill-building (eg management) are dubiously on topic.

Goals:

  • I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
  • It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
  • I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
    • Conversely, sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
  • It might surface some talented writers and thinkers who weren’t otherwise known to EA.
  • It might produce good content on the EA Forum and LW that engages intellectually curious people.

Suggested elements of a book review:

  • One paragraph summary of the book
  • How compelling you found the book’s thesis, and why
  • The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
  • Optionally, epistemic spot checks
  • Optionally, “book adversarial collaborations”, where you actually review two different books on the same topic.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-06T18:08:08.128Z · EA · GW

I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.

Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-06T17:14:23.226Z · EA · GW

Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA, the limiting factor on a grant for me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".

I think that the funds' RFMF is only slightly real--I think that giving to the EAIF has some counterfactual impact but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that EAIF seeks applications and so I get to make grants I wouldn't have otherwise known about, and also that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.

And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.

  • Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?

I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.

Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

I think that increasing available funding basically won't help at all for causing interventions of the types you listed in your post--all of those are limited by factors other than funding.

(Non-longtermist EA is more funding constrained of course--there's enormous amounts of RFMF in GiveWell charities, and my impression is that farm animal welfare also could absorb a bunch of money.)

Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because I think that projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.

High Impact Athletes is an EAIF grantee that I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (eg influencing public feelings about animal agriculture, etc). And so I think it makes sense for them to initially focus on fundraising, but that's not where I expect most of their value to come from.

I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I'd rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-05T19:29:00.907Z · EA · GW

I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding.

I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary.

I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But empirically I don't, and my intuitions apparently aren't that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard and I feel like I'm going to have to spend more effort figuring out reasonable proxies than actually thinking about the question of whether this grant will be good, and so I feel drawn to a more "I'll know it when I see it" approach to evaluating my past grants.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-05T19:18:38.719Z · EA · GW

Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-05T19:17:03.187Z · EA · GW

Re 1: I don't think I would have granted more.

Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.

Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse; if you're reading this, please submit your EAIF application rather than worry that it is not worth our time to evaluate. 

Re 4: It varies. Mostly it isn't that the applicant lacks a specific skill.

Re 5: There are a bunch of things that have to align in order for someone to make a good proposal. There has to be a good project idea, and there has to be someone who would be able to make that work, and they have to know about the idea and apply for funding for it, and they need access to whatever other resources they need. Many of these steps can fail. Eg probably there are people who I'd love to fund to do a particular project, but no-one has had the idea for the project, or someone has had the idea for the project but that person hasn't heard about it or hasn't decided that it's promising, or doesn't want to try it because they don't have access to some other resource. I think my current guess is that there are good project ideas that exist, and people who'd be good at doing them, and if we can connect the people to the projects and the required resources we could make some great grants, and I hope to spend more of my time doing this in future.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-05T19:08:53.169Z · EA · GW

Re your 19 interventions, here are my quick takes on all of them:

Creating, scaling, and/or improving EA-aligned research orgs

Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.

Creating, scaling, and/or improving EA-aligned research training programs

I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests: by the output of their organization plus the output of neighboring organizations under their influence. That is, they should treat one of their key goals with their research interns as having the interns do things that the mentor actually thinks are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.

Increasing grantmaking capacity and/or improving grantmaking processes

Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.

My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.

I think that grantmaking capacity is more of a bottleneck for things other than research output.

Scaling Effective Thesis, improving it, and/or creating new things sort-of like it

I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.

I'm not confident.

Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.

The post doesn't seem to exist yet so idk

Increasing and/or improving research by non-EAs on high-priority topics

I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.

Creating a central, editable database to help people choose and do research projects

I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.

Using Elicit (an automated research assistant tool) or a similar tool

I feel pessimistic, but idk, maybe Elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.

Forecasting the impact projects will have

I think this is worth doing to some extent, obviously; my guess is that EAs aren't as into forecasting as they should be (including me, unfortunately). I'd need to know your specific proposal in order to have more specific thoughts.

Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)

I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.

Improving the vetting of (potential) researchers, and/or better “sharing” that vetting

I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.

Increasing and/or improving career advice and/or support with network-building

Seems cool. I think a major bottleneck here is people who are extremely extroverted, have lots of background, and are willing to spend a huge amount of time talking to a huge number of people. I think that the job "spend many hours a day talking, for 30 minutes each, to EAs who aren't as well connected as would be ideal, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.

I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.

Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers

I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.

Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.

Creating and/or improving relevant educational materials

I'm not sure; seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs

Creating, improving, and/or scaling market-like mechanisms for altruism

I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.

Increasing and/or improving the use of relevant online forums

Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".

Increasing the number of EA-aligned aspiring/junior researchers

I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).

I think that increasing the number of people who are, eg, in the top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between the 25th and 90th percentile of intellectual promise among students at top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers, and then they have more of a value proposition to those senior researchers later.

Increasing the amount of funding available for EA-aligned research(ers)

This seems almost entirely useless; I don't think this would help at all.

Discovering, writing, and/or promoting positive case studies

Seems like a good use of someone's time.

 

---------------

This was a pretty good list of suggestions. I guess my takeaways from this are:

  • I care a lot about access to mentorship
  • I think that people who are willing to talk to lots of new people are a scarce and valuable resource
  • I think that most of the good that can be done in this space looks a lot more like "do a long schlep" than "implement this one relatively cheap thing, like making a website for a database of projects".

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-05T18:20:54.265Z · EA · GW

I feel very unsure about this. I don't think my position on this question is very well thought through.

Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail around unsuccessfully rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism's last dollar.

I think this question feels less important to me because of the fact that the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make.  And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. And coming up with a more consistent answer to "where should the bar be" seems like a worse use of my time than those other activities.

I think I would rather make 30% fewer grants and keep the saved money in a personal account where I could disburse it later.

(To be clear, I am grateful to the people who apply for EAIF funding to do things, including the ones who I don't think we should fund, or only marginally think we should fund; good on all of you for trying to think through how to do lots of good.)

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-04T17:47:15.860Z · EA · GW

re 1: I expect to write similarly detailed writeups in future.

re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)

re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes more time. The secondary con is that if I wrote more detailed grant reports, I'd have to be a bit clearer about the advantages and disadvantages of the grants we made, and this would involve me having to be clearer about kind of awkward things (like my detailed thoughts on how promising person X is vs person Y); this would be a pain, because I'd have to try hard to write these sentences in inoffensive ways, which is a lot more time consuming and less fun.

re 4: Yes I think this is a good idea, and I tried to do that a little bit in my writeup about Youtubers; I think I might do it more in future.

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-04T17:43:13.118Z · EA · GW

I don't think this has much of an advantage over other related things that I do, like

  • telling people that they should definitely tell me if they know about projects that they think I should fund, and asking them why
  • asking people for their thoughts on grant applications that I've been given
  • asking people for ideas for active grantmaking strategies

Comment by Buck on EA Infrastructure Fund: Ask us anything! · 2021-06-04T17:39:32.989Z · EA · GW

A question for the fund managers: when the EAIF funds a project, roughly how should credit be allocated between the different involved parties, where the involved parties are:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics)
  • The grantee

Presumably this differs a lot between grants; I'd be interested in some typical figures.

This question is important because you need a sense of these numbers in order to make decisions about which of these parties you should try to be. Eg if the donors get 90% of the credit, then EtG looks 9x better than if they get 10%.

 

(I'll provide my own answer later.)

Comment by Buck on How much do you (actually) work? · 2021-05-21T22:55:59.623Z · EA · GW

Incidentally, I think that tracking work time is a kind of dangerous thing to do, because it makes it really tempting to make bad decisions that will cause you to work more. This is a lot of why I don't normally track it.

 

EDIT: However, it seems so helpful to track it some of the time that I overall strongly recommend doing it for at least a week a year.

Comment by Buck on How much do you (actually) work? · 2021-05-21T15:02:58.429Z · EA · GW

I occasionally track my work time for a few weeks at a time; by coincidence I happen to be tracking it at the moment. I used to use Toggl; currently I just track my time in my notebook by noting the time whenever I start and stop working (where by "working" I mean "actively focusing on work stuff"). I am more careful about time tracking my work on my day job (working on longtermist technical research, as an individual contributor and manager) than working on the EAIF and other movement building stuff.

The first four days this week, I did 8h33m, 8h15m, 7h32m, and 7h48m of work on my day job. I think I did about four hours of work on movement building. So that's an average of about 9 hours a day. Probably four of those hours are deep work on average.

My typical schedule is to do movement building stuff first thing in the morning, eg perhaps 7:30am to 8:30am, and then to do my day job between about 8:30am and 7pm, with a 30m break at 10am to hang out with my girlfriend after she's woken up, and a maybe 40m break for lunch at about 12:10. I occasionally do some calls in the evenings, or respond to people's messages about work things. (I usually go to bed between 10:10 and 11pm.)

So my efficiency is probably about two thirds, if you include my morning break and lunch break in the denominator, and 75% if you don't.

I normally work for a couple hours on the weekend, mostly doing calls, and I also usually do some kind of unstructured and unfocused work like walking around and thinking about lots of stuff which sometimes includes my work. So I guess my total work time per week is probably like 47 hours or something.

My efficiency is highest when I wake up unusually early and work uninterrupted for a long time. It's also much higher when I'm doing tasks that it's easy to do for a long time. The most obvious example of this is meetings--they require less concentration than e.g. programming, and so if my day includes a lot of meetings, my efficiency looks higher.

Of course, work efficiency is a dangerous thing to optimize--I actually want to optimize the value of my work output, which is related but importantly different. In particular, sometimes I fall into a trap where I spot some task which I can easily spend lots of time on, but which isn't actually the most valuable. I try hard in this kind of situation to catch myself and ask "what's actually the most important thing to do right now".

My efficiency and total work time have usually been somewhat lower in the past. When I worked at MIRI, I would get something like 37 hours of work done in a typical week (roughly 2/3 technical work and 1/3 recruiting work). I also had some bad fatigue problems at various points over the last few years; I think I worked more like 20 hours per week for like a third of 2019, which was very sad and unpleasant. I was kind of depressed for a while last year, which I think took my work down to maybe 30 hours per week. I work more at my current job for a few reasons: it feels more tractable than my MIRI work, I feel more responsibility because I have a more senior role, and I am working more as a manager and so I spend more time doing types of work that I find less tiring.

I think that my current work schedule is basically sustainable for me as long as I feel reasonably happy and satisfied with my life, which is pretty hard for me to ensure.

I can imagine taking other jobs that seemed equally impactful where I'd end up working many fewer hours. And there are a few jobs where I'd end up working more hours (eg jobs where I was constantly talking to people and rarely trying to think hard about stuff).

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-25T21:20:58.338Z · EA · GW

That seems correct, but doesn’t really defend Ben’s point, which is what I was criticizing.

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-19T14:44:13.395Z · EA · GW

I am glad to have you around, of course.

My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I'd be very interested to hear I was wrong about that.

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-19T02:50:59.951Z · EA · GW

I am not sure whether I think it's a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren't obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.)

But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it's pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it's worth putting some effort into not mocking religions or political views.

In cases like these, I mostly agree with "you need to figure out the exchange rate between welcomingness and unfiltered conversations".

I think that becoming more skillful at doing both well is an important skill for a community like ours to have more of. That's ok if it's not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will's comment was doing just that, and I upvoted it as a result. 

I guess I expect the net result of Will's comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to just push norms without trying to harm someone who was basically just saying true and important things, I think he should have made a different top level post, and he also shouldn't have made his other top level comment.

(Not saying you disagree with the content of his comment, you said you agreed with it in fact, but in my view, demonstrated you didn't fully grok it nevertheless).

There's a difference between understanding a consideration and thinking that it's the dominant consideration in a particular situation :) 

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T15:36:32.511Z · EA · GW

More generally, I think our disagreement here probably comes down to something like this:

There's a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome.  As you say, if we're skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.

But this comes at a cost. I personally feel much less excited about writing about certain topics because I'd have to be super careful about them. And most of the EAs I know, especially those who have some amount of authority among EAs, feel much more restricted than I do. I think that this makes EA noticeably worse, because it means that it's much harder for these EAs to explain their thoughts on things.

And so I think it's noticeably costly to criticise people for not being more careful and tactful. It's worth it in some cases, but we should remember that it's costly when we're considering pushing people to be more careful and tactful.

I personally think that "you shouldn't write criticisms of an org for doing X, even when the criticisms are accurate and X is bad, because criticising X has cultural connotations" is too far in the direction of "restrict people's ability to say true things, for the sake of making people feel welcome".

(Some context here is that I wrote a Facebook post about ACE with similar content to this post last September.)

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T15:36:15.573Z · EA · GW

(I'm writing these comments kind of quickly, sorry for sloppiness.)

With regard to

Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.

In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.

I would have no meta-level objection to a comment saying "I disagree that X is bad, I think it's actually fine".

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T04:45:28.091Z · EA · GW

I think that this totally misses the point. The point of this post isn't to inform ACE that some of the things they've done seem bad--they are totally aware that some people think this. It's to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T01:43:52.260Z · EA · GW

I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor,  but as an intuition pump imagine the following comment.

"On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem.  On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I'm worried about the second-order effects of talking about this misconduct."

I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.

More generally I am opposed to "Criticising people for doing bad-seeming thing X would put off people who are enthusiastic about thing X."

Another take here is that if a group of people are sad that their views aren't sufficiently represented on the EA Forum, they should consider making better arguments for them. I don't think we should try to ensure that the EA Forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content based on the popularity of the views it expresses (except for instrumental reasons like "it's more interesting to hear arguments you haven't heard before").)

EDIT: Also, I think your comment is much better described as meta level than object level, despite its first sentence.

Comment by Buck on Why do so few EAs and Rationalists have children? · 2021-03-14T19:43:07.155Z · EA · GW

I'd be interested to see comparisons of the rate at which rationalists and EAs have children compared to analogous groups, controlling for example for education, age, religiosity, and income. I think this might make the difference seem smaller.

Comment by Buck on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-28T05:42:47.388Z · EA · GW

Great post, and interesting and surprising result.

An obvious alternative selection criterion would be something like “how good would it be if this person got really into EA”; I wonder if you would be any better at predicting that. This one takes longer to get feedback on, unfortunately.

Comment by Buck on Buck's Shortform · 2021-01-11T02:01:11.566Z · EA · GW

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I think that I see a weak positive correlation between how altruistic people are and how good their epistemics seem.

----

I think the main reason for this is that striving for accurate beliefs is unpleasant and unrewarding. In particular, having accurate beliefs involves doing things like trying actively to step outside the current frame you’re using, and looking for ways you might be wrong, and maintaining constant vigilance against disagreeing with people because they’re annoying and stupid.

Altruists often seem to me to do better than people who instrumentally value epistemics; I think this is because valuing epistemics terminally has some attractive properties compared to valuing it instrumentally. One reason this is better is that it means that you’re less likely to stop being rational when it stops being fun. For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them. 

Another reason is that if you’re an altruist, you find yourself interested in various subjects that aren’t the subjects you would have learned about for fun--you have less of an opportunity to only ever think in the way you think in by default. I think that it might be healthy that altruists are forced by the world to learn subjects that are further from their predispositions. 

----

I think it’s indeed true that altruistic people sometimes end up mindkilled. But I think that truth-seeking-enthusiasts seem to get mindkilled at around the same rate. One major mechanism here is that truth-seekers often start to really hate opinions that they regularly hear bad arguments for, and they end up rationalizing their way into dumb contrarian takes.

I think it’s common for altruists to avoid saying unpopular true things because they don’t want to get in trouble; I think that this isn’t actually that bad for epistemics.

----

I think that EAs would have much worse epistemics if EA wasn’t pretty strongly tied to the rationalist community; I’d be pretty worried about weakening those ties. I think my claim here is that being altruistic seems to make you overall a bit better at using rationality techniques, instead of it making you substantially worse.

Comment by Buck on If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant · 2020-11-24T16:36:21.278Z · EA · GW

My main objection to this post is that personal fit still seems really important when choosing what to do within a cause. I think that one of EA's main insights is "if you do explicit estimates of impact, you can find really big differences in effectiveness between cause areas, and these differences normally swamp personal fit"; that's basically what you're saying here, and it's totally correct IMO. But I think it's a mistake to try to apply the same style of reasoning within causes, because the effectiveness of different jobs within a cause is much more similar, and so personal fit ends up dominating the estimate of which one will be better.

Comment by Buck on Where are you donating in 2020 and why? · 2020-11-23T22:12:25.467Z · EA · GW

I'd be curious to hear why you think that these charities are excellent; eg I'd be curious for your reply to the arguments here.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-12T19:58:55.882Z · EA · GW

Oh man, I'm so sorry, you're totally right that this edit fixes the problem I was complaining about. When I read this edit, I initially misunderstood it in such a way that it didn't address my concern. My apologies.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-10T19:02:43.628Z · EA · GW

How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong? 

This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over what model to be using to answer the "are we at HoH" question, and each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in it).  It seems like your outside view model assigns approximately zero probability to HoH, and so if now is the HoH, it's probably because we shouldn't be using your model, rather than because we're in the tiny proportion of worlds in your model where now is HoH.

I think this distinction is important because it seems to me that the probability of HoH given your beliefs should be almost entirely determined by the prior and HoH-likelihood of models other than the one you proposed--if your central model is the outside-view model you proposed, and you're 80% confident in that, then I suspect that the majority of your credence on HoH should come from the other 20% of your prior, and so the question of how much your outside-view-model updates based on evidence doesn't seem likely to be very important.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-10T03:07:34.881Z · EA · GW

Hmm, interesting. It seems to me that your priors cause you to think that the "naive longtermist" story, where we're in a time of perils and if we can get through it, x-risk goes basically to zero and there are no more good ways to affect similarly enormous amounts of value, has a probability which is basically zero. (This is just me musing.)

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-08T16:39:13.033Z · EA · GW

I agree with all this; thanks for the summary.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-07T20:42:14.981Z · EA · GW

Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-07T20:39:07.338Z · EA · GW

On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future.

 

This does make a lot more sense than what you wrote in your post. 

Do you agree that, as written, the argument in your EA Forum post is quite flawed? If so, I think you should edit it to indicate the mistake more clearly, given that people are still linking to it.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:58:09.314Z · EA · GW

The comment I'd be most interested in from you is whether you agree that your argument forces you to believe that x-risk is almost surely zero, or that we are almost surely not going to have a long future.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:57:07.531Z · EA · GW

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me.

So you are saying that you do think that the evidence for longtermism/x-risk is enough to push you to thinking you're at a one-in-a-million time?

EDIT: Actually I think maybe you misunderstood me? When I say "you're one-in-a-million", I mean "your x-risk is higher than 99.9999% of other centuries' x-risk"; "one in a thousand" means "higher than 99.9% of other centuries' x-risk".  So one-in-a-million is a stronger claim which means higher x-risk.

What I'm saying is that if you believe that x-risk is 0.1%, then you think we're at least one-in-a-million. I don't understand why you're willing to accept that we're one-in-a-million; this seems to me to force you to have absurdly low x-risk estimates.
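To spell out the arithmetic behind this (with my own illustrative numbers, not figures from the exchange): a per-century x-risk of 0.1% is only compatible with a long future if almost every other century is vastly safer, which is roughly what pushes you to the one-in-a-million level of unusualness.

```python
# Illustrative arithmetic only.
# If per-century x-risk stayed at 0.1%, long futures become very unlikely:
per_century_risk = 0.001
for n_centuries in (1_000, 10_000, 100_000):
    p_survive = (1 - per_century_risk) ** n_centuries
    print(f"{n_centuries:>7,} centuries at 0.1%/century: P(survival) ~ {p_survive:.2g}")

# Conversely, for a million-century future to have even a 50% chance of being
# reached, the average per-century risk has to be on the order of one in a million:
avg_risk = 1 - 0.5 ** (1 / 1_000_000)
print(f"average per-century risk for 50% survival over 1M centuries ~ {avg_risk:.1e}")
```

So a century with 0.1% x-risk has roughly a thousand times the risk that a typical century can have if a long future is on the table, which is why holding both "x-risk is 0.1%" and "we'll probably have a long future" commits you to this century being extraordinarily unusual.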

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:56:01.386Z · EA · GW

My claim is that patient philanthropy automatically makes the claim that now is a time when patient philanthropy does wildly more expected good than usual, because we're so early in history that the best giving opportunities are almost surely ahead of us.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:53:28.430Z · EA · GW

I've added a link to the article to the top of my post. Those changes seem reasonable.

Comment by Buck on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T20:56:54.735Z · EA · GW

This is indeed what I meant, thanks.

Comment by Buck on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T17:31:49.881Z · EA · GW

But if, as this talk suggests, it's not obvious whether donating to near-term interventions is good or bad for the world, why are you interested in whether you can pitch friends and family to donate to them?

Comment by Buck on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T17:30:36.153Z · EA · GW

I basically agree with the claims and conclusions here, but I think about this kind of differently.
 

I don’t know whether donating to AMF makes the world better or worse. But this doesn’t seem very important, because I don’t think that AMF is a particularly plausible candidate for the best way to improve the long-term future anyway: it would be a reasonably surprising coincidence if the top recommended way to improve human lives right now were also the most leveraged way to improve the long-term future.

So our attitude should be more like "I don’t know if AMF is good or bad, but it’s probably not nearly as impactful as the best things I’ll be able to find, and I have limited time to evaluate giving opportunities, so I should allocate my time elsewhere", rather than "I can’t tell if AMF is good or bad, so I’ll think about longtermist giving opportunities instead."

Comment by Buck on Existential Risk and Economic Growth · 2020-11-02T00:43:51.201Z · EA · GW

I think Carl Shulman makes some persuasive criticisms of this research here:

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and individual voters and donors constituting only a minute share of the affected parties. And conflict and bargaining problems are entirely responsible for war and military spending, central to the failure to overcome externalities with global climate policy, and core to the threat of AI accident catastrophe.

If those things were solved, and the risk-reward tradeoffs well understood, then we're quite clearly in a world where we can have very low existential risk and high consumption. But if they're not solved, the level of consumption is not key: spending on war and dangerous tech that risks global catastrophe can be motivated by the fear of competitive disadvantage/local catastrophe (e.g. being conquered) no matter how high consumption levels are.

I agree with Carl. I feel like other commenters are taking this research as a strong update, as opposed to a simple model that I'm glad someone has worked through the details of, but that we probably shouldn't let influence our beliefs very much.

Comment by Buck on [deleted post] 2020-10-24T22:42:03.847Z

My guess is that this feedback would be unhelpful and probably push the grantmakers towards making worse grants that were less time-consuming to justify to uninformed donors.

Comment by Buck on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-14T05:14:02.148Z · EA · GW

Inasmuch as you expect people to keep getting richer, it seems reasonable to hope that no generation has to be more frugal than the previous one.

Comment by Buck on In defence of epistemic modesty · 2020-10-11T19:27:11.541Z · EA · GW

when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field. I see similar experts on animal consciousness, quantum mechanics, free will, and so on similarly be deeply unimpressed with the sophistication of argument offered.

I would love to see better evidence about this. For example, it doesn't match my experience of talking to physicists.