Comments

Comment by howiel on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T16:24:52.561Z · score: 17 (6 votes) · EA · GW

Fair enough.

Fwiw, the forum explicitly discourages unnecessary rudeness (and encourages kindness). I think tone is part of that and the voting system is a reasonable mechanism for setting that norm. But there's room for disagreement.

If the original poster came back and edited in response to feedback or said that the tone wasn't intentional, I'd happily remove my downvote.

Comment by howiel on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T14:01:02.028Z · score: 28 (12 votes) · EA · GW

I downvoted this. "Please, if you disagree with me, carry your precious opinion elsewhere" reads to me as more than slightly rude and effectively an intentional insult to people who disagree with the OP and would otherwise have shared their views. I think it's totally reasonable to worry in advance about a thread veering away from the topic you want to discuss and to preempt that with a request to directly answer your question [Edited slightly], and I wouldn't have downvoted without the reference to other people's "precious views."

Comment by howiel on No More Pandemics: a lobbying group? · 2020-10-03T12:27:42.248Z · score: 14 (7 votes) · EA · GW

Lobbying v. grassroots advocacy

This is just semantic but I think you probably don't want to call what you're proposing a "lobbying group." Lobbying usually refers to one particular form of advocacy (face to face meetings with legislators) and in many countries[1] it is regulated more heavily than other forms of advocacy.

(It's possible that in the UK, "lobbying group" means something more general, but in the U.S. it refers to the narrower activity described above.)

[1] This is true in the U.S., which I know best. Wikipedia suggests it's true in the EU but appears less true in the UK.

Who else is working on this?

Here are a couple small examples of things being done along these lines, though I agree there is little overall:

-Resolve to Save Lives claims to do some advocacy for epidemic preparedness in low-income countries in collaboration with the Global Health Advocacy Incubator. The latter group seems to be hiring an Advocacy Director though the posting is old so I wouldn't be surprised if it's out of date.

-PATH has done some advocacy to encourage the U.S. government to invest in global health security.

Comment by howiel on 5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities · 2020-09-29T10:11:27.011Z · score: 34 (20 votes) · EA · GW

I didn't actually become a member until after the wording of the pledge changed but I do vividly remember the first wave of press because all my friends sent me articles showing that there were some kids in Oxford who were just like me.

Learning about Giving What We Can (and, separately, Jeff and Julia) made me feel less alone in the world and I feel really grateful for that.

Comment by howiel on 80,000 Hours user survey closes this Sunday · 2020-09-23T13:46:40.471Z · score: 1 (1 votes) · EA · GW

Hi RandomEA,

Thanks for pointing this out (and for the support).

We only update the 'Last updated' field for major updates, not small ones. I think we'll rename it 'Last major update' to make it clearer.

The edit you noticed wasn't intended to indicate that we've changed our view on the effectiveness of existential risk reduction work. That paragraph was only meant to demonstrate how it’s possible that x-risk reduction could be competitive with top charities from a present-lives-saved perspective. The author decided we could make this point better by using illustrative figures that are more conservative than 80k’s actual rough guess and made the edit. We’ve tinkered with the wording to make it clearer that they are not actual cost-effectiveness estimates.

Also, note that in both cases the paragraph was about hypothetical effectiveness if you only cared about present lives, which is very different from our actual estimate of cost effectiveness.

Hope this helps clear things up.

Comment by howiel on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-19T20:34:22.887Z · score: 13 (8 votes) · EA · GW

Not an expert but, fwiw, my impression is that this is more common in CS than philosophy and the social science areas I know best.

Comment by howiel on Some thoughts on EA outreach to high schoolers · 2020-09-17T10:02:38.736Z · score: 9 (7 votes) · EA · GW

I'm very worried that staff at EA orgs (myself included) seem to know very little about Gen Z social media, and I'm really glad you're learning about this.

Comment by howiel on Some thoughts on EA outreach to high schoolers · 2020-09-17T10:00:49.177Z · score: 16 (12 votes) · EA · GW

I think it's especially dangerous to use this word when talking about high schoolers, especially given the number of cult and near-cult groups that have arisen in communities adjacent to EA.

Comment by howiel on MichaelA's Shortform · 2020-09-11T18:38:34.831Z · score: 1 (1 votes) · EA · GW

Seems reasonable

Comment by howiel on MichaelA's Shortform · 2020-09-11T14:42:53.030Z · score: 9 (5 votes) · EA · GW

"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"

I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad summary to be biased against them).

That said, as you know, I think your summaries/collections are useful and underprovided.

Comment by howiel on Should surveys about the quality/impact of research outputs be more common? · 2020-09-11T14:38:43.074Z · score: 3 (2 votes) · EA · GW

This all seems reasonable to me though I haven't thought much about my overall take.

I think the details matter a lot for "Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys".

A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it's often possible to close a survey after a certain number of responses.

A counterargument is that the people who respond earliest might be unrepresentative. But for a lot of purposes, it's not obvious to me you need a representative sample. "Among the people who are making the most use of my research, how is it useful" can be pretty informative on its own.

Comment by howiel on Should surveys about the quality/impact of research outputs be more common? · 2020-09-09T16:42:43.899Z · score: 13 (5 votes) · EA · GW

[Not meant to express an overall view.] I don't think you mention the time of the respondents as a cost of these surveys, but I think it can be one of the main costs. There's also risk of survey fatigue if EA researchers all double down on surveys.

Comment by howiel on Asking for advice · 2020-09-09T16:36:04.955Z · score: 6 (4 votes) · EA · GW

I find it off-putting, though I don't endorse my reaction, and overall I think the time savings mean I'm personally net better off when other people use it.

I think for me, it's about taking something that used to be a normal human interaction and automating it instead. Feels unfriendly somehow. Maybe that's a status thing?

Comment by howiel on An argument for keeping open the option of earning to save · 2020-09-03T16:40:31.631Z · score: 1 (1 votes) · EA · GW

Though there's a bit of a tradeoff where putting the money into a DAF/trust might alleviate some of the negative effects Ben mentioned but also loses out on a lot of the benefits Raemon is going for.

Comment by howiel on An argument for keeping open the option of earning to save · 2020-09-01T11:51:06.527Z · score: 16 (7 votes) · EA · GW

[My own views here, not necessarily Ben’s or “80k’s”. I reviewed the OP before it went out but don’t share all the views expressed in it (and don’t think I’ve fully thought through all the relevant considerations).]

Thanks for the comment!

“You say you take (1) to be obvious, but I think that you’re treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system.”

I mostly agree with this. The argument’s force/applicability is much weaker because of this. Indeed, if EAs are spending a higher/lower proportion of their assets at some point in the future, that’s prima facie evidence that the optimal allocation is higher/lower at that time.

(I do think a literal reading of the post is consistent with the optimal percentage varying endogenously but agree that it had an exogenous 'vibe' and that's important.)

“So the argument really feels like:
Maybe in the future the community will give to some places that are worse than this other place [=saving]. If you’re smarter than the aggregate community then it will be good if you control a larger slice of the resources so you can help to hedge against this mistake. This pushes towards earning.
I think if you don’t have reason to believe you’ll do better than the aggregate community then this shouldn’t get much weight; if you do have such reason then it’s legitimate to give it some weight. But this was already a legitimate argument before you thought about saving! It applies whenever there are multiple possible uses of capital and you worry that future people might make a mistake. I suppose whenever you think of a new possible use of capital it becomes a tiny bit stronger?”

I think this is a good point but a bit too strong, as I do think there’s more to the argument than just the above. I feel pretty uncertain whether the below holds together and would love to be corrected but I understood the post to be arguing something like:

i) For people whose assets are mostly financial, it’s pretty easy to push the portfolio toward the now/later distribution they think is best. If this was also true for labour and actors had no other constraints/incentives, then I’d expect the community’s allocation to reflect its aggregate beliefs about the optimum so pushing away from that would constitute a claim that you know better.

ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil’s donors have pledged to do so). (Of course, it’s arguable whether this should be modeled as such a constraint or as a claim about optimal timing).

Other holders of financial capital may not have enough resources to realistically make up for that.

iii) In an idealised ‘perfect marketplace’ holders of human capital would “invest” their labour to make up for this. But they also face constraints:

  • Global priorities research, movement/community building, and ‘meta’ can only usefully absorb a limited amount of labour.
  • Human capital can’t be saved after you die and loses value each year as you age.
  • [I’m less sure about this one and think it’s less important.] As career capital opportunities dry up when people age, it will become more and more personally costly for them to stay in career capital mode to ‘invest’ their labour. This might lead reasonable behaviour from a self-interested standpoint to diverge from what would create a theoretically optimal portfolio for the community.

This means that for the community to maintain the allocation it thinks is optimal, people may have to convert their labour into capital so that it can be ‘saved/invested.’ But most people don’t even know that this is an option (ETA: or at least it's not a salient one) and haven’t heard of earning to save. So pointing this out may empower the community to achieve its aggregate preferences, as opposed to being a way to undermine them.

“But at present I’m worried that this isn’t really a new argument and the post risks giving inappropriate prominence to the idea of earning to save (which I think could be quite toxic for the community for reasons you mention), even given your caveats.”

I agree this is a reasonable concern and I was a bit worried about it, too, since I think this is overall a small consideration in favor of earning to save, which I agree could be quite toxic. But I do think the post tries to caveat a lot, and it overall seems good for there to be a forum where even minor considerations can be considered in a quick post, so I thought it was worth posting. (Fwiw, I think getting this reaction from you was valuable.)

I’m open to the possibility that this isn’t realistic, though. And something like “some considerations on earning to save” might have been a better title.

Comment by howiel on The academic contribution to AI safety seems large · 2020-07-30T15:05:39.050Z · score: 13 (6 votes) · EA · GW

If you want some more examples of specific research/researchers, a bunch of the grantees from FLI's 2015 AI Safety RFP are non-EA academics who have done some research in fields potentially relevant to mid-term safety.

https://futureoflife.org/ai-safety-research/

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-12T10:06:48.231Z · score: 8 (5 votes) · EA · GW

Fwiw, I think you're both right here. If you were to hire a reasonably good lawyer to help with this, I suspect the default is they'd say what Habryka suggests. That said, I also do think that lawyers are trained to do things like remove vagueness from policies.

Basically, I don't think it'd be useful to hire a lawyer in their capacity as a lawyer. But, to the extent there happen to be lawyers among the people you'd consider asking for advice anyway, I'd expect them to be disproportionately good at this kind of thing.

[Source: I went to two years of law school but haven't worked much with lawyers on this type of thing.]

Comment by howiel on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T15:28:22.258Z · score: 5 (3 votes) · EA · GW

You say no to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."


I feel like I'd much better understand what you mean if you were up for giving some probabilities here even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:24:29.929Z · score: 14 (7 votes) · EA · GW

[Note - I endorse the idea of splitting it into two much more strongly than any of the specifics in this comment]

Agree that you shouldn't be quite as vague as the GW policy (although I do think you should put a bunch of weight on GW's precedent as well as Open Phil's).

Quick thoughts on a few benefits of staying at a higher level (none of which are necessarily conclusive):

1) It's not obviously less informative.

If somebody clicks on a conflict of interest policy wanting to figure out whether they generally trust the LTF, and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them without necessarily making them more informed about what they actually cared about. It can actually just be a distraction.

Like, let's say analogous institutions also have psychedelic-related COIs but just group them under "important social relationships" or something. Now, the LTF looks like that fund where all the staff are doing psychedelics with the grantees. I don't think anybody became more informed. (This is especially the case if the info is available *somewhere* for people who care about the details).


2) Flexibility

It's just really hard to anticipate all of the relevant cases and the principles you're using are the thing you might actually want to lock in.


3) Giving lots of detail means lack of disclosure can send a lot of signal.

If you have enough detail about exactly what level of friends someone needs to be with someone else in order to trigger a disclosure, then you end up forcing members to send all sorts of weird signals by not disclosing things (e.g. I don't actually consider my friendship with person X that important). This just gets complicated fast.

---

All that said, I think a lot of this just has to be determined by the level of disclosure and type of policy LTF donors are demanding. I've donated a bit and would be comfortable trusting something more general but also am probably not representative.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:06:30.216Z · score: 2 (2 votes) · EA · GW

I guess I think a private board might be helpful even with pretty minimal time input. I think you mostly want some people who seem unbiased to avoid making huge errors, as opposed to trying to get the optimal decision in every case. That said, I'm sympathetic to wanting to avoid the extra bureaucracy.

The comparison to the for-profit sector seems useful but I wouldn't emphasize it *too* much. When you can't rely on markets to hold an org accountable, it makes sense that you'll sometimes need an extra layer.

When for-profits start to need to achieve legitimacy that can't be provided by markets, they seem to start looking towards these kinds of boards, too. (E.g. FB looking into governance boards.)

That said, I don't have a strong take on whether this is a good idea.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:00:29.580Z · score: 5 (3 votes) · EA · GW

Ah - whoops. Sorry I missed that.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:37:49.323Z · score: 4 (3 votes) · EA · GW

Having a private board for close calls also doesn't seem crazy to me.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:37:17.215Z · score: 6 (3 votes) · EA · GW

Hmm. Do you have to make it public every time someone recuses themself? If someone could nonpublicly recuse themself, that at least gives them the option to avoid biasing the result without having to stick their past romantic lives on the internet.

Comment by howiel on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-06T17:05:46.306Z · score: 2 (2 votes) · EA · GW

Thanks - this is helpful.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T15:17:22.442Z · score: 1 (1 votes) · EA · GW

(Note that I'm not saying that recusal would necessarily be bad)

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T14:48:28.020Z · score: 14 (9 votes) · EA · GW

Wanted to +1 this in general although I haven't thought through exactly where I think the tradeoff should be.

My best guess is that the official policy should be a bit closer to the level of detail GiveWell uses to describe their policy than to the level of detail you're currently using. If you wanted to elaborate, one possibility might be to give some examples of how you might respond to different situations in an EA Forum post separate from the official policy.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T14:42:38.492Z · score: 11 (5 votes) · EA · GW

+1 that requiring disclosure of past intimate relationships seems bad. Especially if the bar is lasting 2 weeks.

Comment by howiel on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T09:17:51.516Z · score: 12 (9 votes) · EA · GW

Fwiw, the "pleasure doing business" line was the only part of your tone that struck me as off when I read the thread.

Comment by howiel on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T15:33:12.759Z · score: 8 (5 votes) · EA · GW

FYI - a study of outcomes as of Jan 25 for all 99 2019-nCoV patients admitted to a hospital in Wuhan between Jan 1 and Jan 20.

Many caveats apply. Only includes confirmed cases, not suspected ones. People who end up at a hospital are selected for being more severely ill. 60% of the patients have not yet been discharged so haven't experienced the full progression of the disease. Etc.

https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30211-7/fulltext#%20

Comment by howiel on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T23:33:14.158Z · score: 8 (5 votes) · EA · GW

Here's a chart of odds of death by age that was tweeted by an epidemiology professor at Hopkins. I can't otherwise vouch for the reliability of the data, and caveat that mortality data sucks this early in an epidemic. https://twitter.com/JustinLessler/status/1222108497556279297

Comment by howiel on Love seems like a high priority · 2020-01-25T15:10:57.369Z · score: 1 (1 votes) · EA · GW

[Retracted]

Comment by howiel on Where are you donating this year and why – in 2019? Open thread for discussion. · 2020-01-08T18:34:43.420Z · score: 4 (5 votes) · EA · GW

Thanks for setting such a good example here, Nicole! Taking care of yourself like this is a really important community norm and sharing your example seems like a really good way to promote it.

Comment by howiel on In praise of unhistoric heroism · 2020-01-08T15:09:24.418Z · score: 11 (9 votes) · EA · GW

This riff from Eliezer seems relevant to me:

The rules say we have to use consequentialism, but good people are deontologists, and virtue ethics is what actually works.

https://www.facebook.com/yudkowsky/posts/10154965691294228

Thinking in terms of virtue ethics on a day to day basis seems like a good way for some people to internalize some of the things folks have brought up in this thread although I've never been able to do it successfully myself.

Comment by howiel on Is mindfulness good for you? · 2019-12-30T22:35:15.538Z · score: 22 (8 votes) · EA · GW

I briefly and informally looked into this several years ago and, at the time, had a few additional concerns. (Can't promise I'm remembering this perfectly and the research may have progressed since then).

1) Many of the best studies on mindfulness's effect on depression and anxiety were specifically on populations where people had other medical conditions (especially, I think, chronic pain or chronic illness) in addition to mental illness. But, most people I know who are interested in mindfulness aren't specifically interested in this population.

My impression is that Jon Kabat-Zinn initially developed Mindfulness-Based Stress Reduction (MBSR) for people with other conditions and my intuition from my experience with it is that it might be especially helpful for things like chronic pain. So I had some external validity concerns.

2) There were few studies of long-term effects and it seems pretty plausible the effects would fade over time. This is especially true if we care about intention-to-treat effects. The fixed cost of an MBSR course might only be justified if it can be amortized over a fairly long period. But it wouldn't be surprising if there are short-to-medium term benefits that fade over time as people stop practicing.

By contrast, getting a prescription for anti-depressants or anti-anxiety medication has a much lower fixed cost, and it's less costly and easier to take a pill every day (or as needed) than to keep up a meditation practice. (On the other hand, some meds have side effects for many people.)

3) You already mention that "many of those researching it seem to be true believers" but it seems worth reemphasizing this. When I looked over the studies included in a meta-analysis (I think it was the relevant Cochrane Review), I think a significant proportion of them literally had Jon Kabat-Zinn (the founder of MBSR) as a coauthor.

---

All that said, my personal subjective experience is that meditating has had a moderate but positive effect on my anxiety and possibly my depression when I've managed to keep it up.


Comment by howiel on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-04T08:32:47.276Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by howiel on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-03T13:01:31.931Z · score: 30 (16 votes) · EA · GW

I've read some useful stuff in the Bulletin as well as some stuff I really disagree with. I definitely don't think there's anything *wrong* with it.

Greg Lewis, an EA who works on biorisk at FHI, published an article I really like in the Bulletin called "Horsepox synthesis: A case of the unilateralist’s curse?" Here's a post on their critique of Open Phil's biorisk program.

I think there are a bunch of potential reasons the Bulletin doesn't appear much in EA discussions:

-It's a media/magazine/news organization so it mostly publishes articles on current events, which EAs tend not to focus on. [ETA: As cwgoes mentions, the journal has a longer time horizon but is still more focused on currentish stuff than most EAs. More like a policy journal than an academic one.]

-While it does have some content on biorisk and AI, the two potential x-risks EAs tend to focus on, it's still quite focused on nukes.

-EA can be a bit insular and a lot of EAs know a lot more about GCR-relevant orgs with some connection to EA than those without.


Comment by howiel on ALLFED 2019 Annual Report and Fundraising Appeal · 2019-11-26T01:31:40.045Z · score: 3 (3 votes) · EA · GW

Prolific means number of papers authored?

Comment by howiel on EA Mental Health Care Navigator Pilot · 2019-11-25T17:44:52.658Z · score: 3 (2 votes) · EA · GW

Hi Danica,

"Could I include your Bay Area list in a global EA mental health resource list?"

Hmm. I'd like to be helpful but also want to be sensitive to the privacy of people who gave me info to add to my list (and don't have the ability to go back and check with everyone who contributed).

I'm happy to have you link to my list and list the names of the practitioners on there. I feel a little funny about taking the descriptions/commentary and adding it to a second list. Do you think that would be sufficient?

"Alternatively, would you be interested in expanding the existing list to global EA-recommended mental health practitioners? Or collaborating to create a separate global list?"

This sounds like a great project and I'd love to help, but I unfortunately don't have the time. Good luck!

Comment by howiel on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T21:09:47.647Z · score: 15 (8 votes) · EA · GW

I thought this was great. Thanks, Buck.

Comment by howiel on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T23:57:07.571Z · score: 2 (2 votes) · EA · GW

"If I thought there was a <30% chance of AGI within 50 years, I'd probably not be working on AI safety."

Do you have a guess at what you would be working on?

Comment by howiel on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T23:52:38.609Z · score: 45 (16 votes) · EA · GW

Hi EarlyVelcro,

Howie from 80k here.

As Ben said in his comment, the key ideas page, which is the most current summary of 80k’s views, doesn't recommend that “EA should focus on AI alone”. We don't think the EA community's focus should be anything close to that narrow.

That said, I do see how the page might give the impression that AI dominates 80k’s recommendations since most of the other paths/problems talked about are ‘meta’ or ‘capacity building’ paths. The page mentions that “we’d be excited for people to explore [our list of problems we haven’t yet investigated] as well as other areas that could foreseeably have a positive effect on the long-term future” but it doesn’t say anything about what those problems are (other than a link to our problem profiles page, which has a list).

I think it makes sense that people end up focusing on the areas we mention directly and the page could do a better job of communicating that our priorities are more diverse.

The good news is that we’re currently putting together a more thorough list of areas that we think might be very promising but aren't among our priority paths/problems.[1] Unfortunately, it didn’t quite get done in time to add it to this version of key ideas.

More generally, I think 80k’s content was particularly heavy on AI over the last year and, while it will likely remain our top priority, I expect it will make up a smaller portion of our content over the next few years.

[1] Many of these will be areas we haven't yet investigated or areas that are too niche to highlight among our priority paths.

Comment by howiel on EA Mental Health Care Navigator Pilot · 2019-10-31T17:05:21.428Z · score: 11 (7 votes) · EA · GW

Here's a list of mental health practitioners in the Bay Area (mostly SF/East Bay) that I or someone I know has at least minimally vetted. https://docs.google.com/document/d/1KKwe1bAagI7FOInrkcnENCcQsFrKc5K3hXPG0dALWWI/edit

Unfortunately, it's increasingly out of date and I don't live in the Bay anymore.

I'd be happy to add to it if anybody has practitioners they do (or don't) like or sees one of the people on the list and wants to tell me how it went. The list is publicly viewable and I'd include as much or as little information as you'd like. Just DM me if you'd like to add someone.

Alternatively, you can make an anonymous request for me to add to the list here. https://www.admonymous.co/howie

(Also consider adding to SSC's Psychiat-list, mentioned by Milan above - https://psychiat-list.slatestarcodex.com)

Comment by howiel on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-12T07:55:27.806Z · score: 1 (1 votes) · EA · GW

Huh. Ok, I think you're onto something since if I go to Audible.co.uk in Incognito, the book seems to be there. But I don't totally follow you.

It's right that my Audible/Amazon accounts were registered in the US and I'm now in the UK. Do I need to reregister my account in the UK somehow so it's consistent with where I live? Why would this make certain audiobooks unbuyable for me but not others?

Comment by howiel on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-11T23:46:44.031Z · score: 6 (2 votes) · EA · GW

That's great!

I ended up getting one from the US Audible store so doesn't matter for me personally anymore. But just FYI since it's at least possible the problem isn't limited to me not knowing how to use technology:

That link for the Kindle version works for me but when I search for "Human Compatible" on my phone (in the Android Kindle App's store), it doesn't appear.

When I follow the Audible link, I get the message "Title Not For Sale In This Country/Region." Same happens when I search in my phone's Audible store.


Comment by howiel on X-risk dollars -> Andrew Yang? · 2019-10-11T23:05:04.216Z · score: 10 (6 votes) · EA · GW

Cool. That's a bit more distinctive, although not more than what Hillary Clinton said in her book:

Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.

https://lukemuehlhauser.com/hillary-clinton-on-ai-risk/

Comment by howiel on X-risk dollars -> Andrew Yang? · 2019-10-11T22:59:55.625Z · score: 3 (2 votes) · EA · GW

Thanks

Comment by howiel on X-risk dollars -> Andrew Yang? · 2019-10-11T22:45:19.721Z · score: 11 (9 votes) · EA · GW

[I am not an expert on any of this.]

Is that tweet the only (public) evidence that Andrew Yang understands/cares about x-risk?

A cynical interpretation of the tweet is that we learned that Yang has one (maxed out) donor who likes Bostrom.

My impression is that: 1) it'd be very unusual for somebody to understand much about x-risk from one phone call; 2) sending out an enthusiastic tweet would be the polite/savvy thing to do after taking a call that a donor enthusiastically set up for you; 3) a lot of politicians find it cool to spend half an hour chatting with a famous Oxford philosophy professor with mind blowing ideas. I think there are a lot of influential people who'd be happy to take a call on x-risk but wouldn't understand or feel much different about it than the median person in their reference class.

I know virtually nothing about Andrew Yang in particular and that tweet is certainly *consistent* with him caring about this stuff. Just wary of updating *too* much.


Comment by howiel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T13:07:46.192Z · score: 4 (3 votes) · EA · GW

Ah, sorry. Was writing quickly and that was kind of sloppy on my part. Thanks for the correction!

Edited to be clearer.

Comment by howiel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T03:55:24.513Z · score: 12 (5 votes) · EA · GW

For anybody who wants to look more into CSER, Sean provided me with his quick take on a few articles he thinks are representative and that he's proud of.

https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison?commentId=cwgFuMEc55i3w3wyf

[Edited to more accurately describe the list as just Sean's quick take]

Comment by howiel on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-11T03:49:34.458Z · score: 2 (2 votes) · EA · GW

Will it be available on Kindle/Audible in the UK? If so, do you know when?