Comment by matthewp on Why is the EA Hotel having trouble fundraising? · 2019-04-29T20:57:29.621Z · score: 1 (1 votes) · EA · GW

"People don’t want to be associated with something low status and are likely to subject anything they perceive as low status to a lot of scrutiny."

Ouch! Alas, it is true in general. However, I think it's a dangerous heuristic when not backed by the kinds of substantive comments made in 1-6.

I do think toning down 5 might foster a better culture. Perhaps there is more information here that I don't know. But this kinda sounds like someone tried something, it didn't work out, and they don't get a second chance. That's not a great precedent to set if you want people to take risks.

Comment by matthewp on Reasons to eat meat · 2019-04-29T05:51:40.880Z · score: 15 (5 votes) · EA · GW

Ego depletion is quite a narrow psychological effect. Even if the idea that moment-to-moment fatigue saps moment-to-moment willpower has been debunked, that is far from showing that akrasia isn't a thing in general.

In a world where general-sense akrasia was not a thing, there would be a far higher rate of people being ripped like movie stars, a far lower rate of smoking, a much higher rate of personal savings, etc., than in the world we inhabit.

[Link] Freedom Week

2019-04-25T17:30:44.324Z · score: 2 (1 votes)
Comment by matthewp on Reasons to eat meat · 2019-04-24T06:45:30.061Z · score: 7 (2 votes) · EA · GW

The willpower argument is actually quite good. There are ways to reduce the amount of willpower required, but the kernel of the argument applies.

My prediction for people who constantly feel bad about not living up to an exacting standard is that a majority will fall off the wagon entirely.

Comment by matthewp on Ben Garfinkel: How sure are we about this AI stuff? · 2019-03-06T22:24:55.058Z · score: 1 (1 votes) · EA · GW

Maximising paperclips is a misunderstood human value. Some lazy factory owner says: gee, wouldn't it be great if I could get an AI to make my paperclips for me? He then builds an AGI and asks it to make paperclips, and it turns everything into paperclips, its utility function failing to reflect its owner's true desire to also have a world left over.

If there is a flaw here, it's probably in thinking that AGI will get built as some sort of intermediate tool, and that it will be easy to rub the lamp and ask the genie to do something in easy-to-misunderstand natural language.
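
A toy sketch of that gap (my illustration, not anything from the talk; all names and numbers are hypothetical): the stated objective counts only paperclips, while the owner's true utility also requires a world left over, so the optimiser's best outcome is catastrophic by the true measure.

```python
# Hypothetical illustration of a misspecified objective, not a real system.

def stated_utility(state):
    # What the owner asked for: more paperclips is strictly better.
    return state["paperclips"]

def true_utility(state):
    # What the owner actually wants: paperclips AND a world left over.
    return state["paperclips"] if state["world_intact"] else float("-inf")

def optimise(utility, state):
    # Greedy "genie": keeps taking the action that raises the given utility.
    # Nothing in the stated objective ever tells it to stop.
    while True:
        candidate = dict(state)
        candidate["paperclips"] += 1
        candidate["resources"] -= 1
        candidate["world_intact"] = candidate["resources"] > 0
        if candidate["resources"] < 0 or utility(candidate) <= utility(state):
            return state
        state = candidate

final = optimise(stated_utility, {"paperclips": 0, "resources": 10, "world_intact": True})
print(stated_utility(final))  # 10: optimal by the stated objective
print(true_utility(final))    # -inf: catastrophic by the owner's true values
```

The failure mode isn't that the agent misbehaves; it does exactly what was asked, which is the point of the comment above.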

Comment by matthewp on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T19:43:57.616Z · score: 15 (10 votes) · EA · GW

Nice point.

'I also wish we didn't accidentally make donating to AMF or GiveDirectly so uncool.'

This reminds me of the pattern where we want to do something original, so we don't take the obvious solution.

Tech volunteering: market failure?

2019-02-17T16:51:33.851Z · score: 17 (11 votes)
Comment by matthewp on List of possible EA meta-charities and projects · 2019-02-10T21:23:23.539Z · score: 1 (1 votes) · EA · GW

"Making rationality more accessible."

Sounds great, and I've thought about this too. But what does it look like?

  • Seminar series. Probably in the workplace - this would not be so scalable, but for me it would be highly targeted.
  • Video lectures. Costly, but would probably get wide reach. Maybe better done in short form, slick and well marketed.
  • Podcast. IMHO hard to beat Rationally Speaking. However, this content should be more introductory, so perhaps more of an audio series than a podcast.

How should one assess what the main topics should be, though? I feel the pedagogy for rationality is lacking, because many of the people who are interested picked up the basics by osmosis before approaching the subject in a more organised way. I.e., what is the first thing someone should learn, the second, etc.? For me, everything revolves around an understanding of probability - but that's a long and somewhat indirect road to walk.
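
As a concrete example of the kind of starting point meant here (my illustration, with numbers I chose, not anything from the comment): Bayes' rule on the classic medical-test puzzle, where the base rate dominates naive intuition.

```python
# A worked Bayes' rule example with illustrative, made-up numbers.

def posterior(prior, sensitivity, false_positive_rate):
    # P(hypothesis | positive evidence) by Bayes' rule.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A 1% base rate with a 90%-sensitive, 9%-false-positive test:
print(round(posterior(0.01, 0.90, 0.09), 3))  # 0.092 -- far from the intuitive 0.9
```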