Posts

New Top EA Causes for 2020? 2020-04-01T07:39:59.687Z · score: 23 (9 votes)
April Fool's Day Is Very Serious Business 2020-03-13T09:16:37.023Z · score: 67 (41 votes)
Open Thread #46 2020-03-13T08:01:31.342Z · score: 8 (3 votes)
Should you familiarize yourself with the literature before writing an EA Forum post? 2019-10-06T23:17:09.317Z · score: 32 (15 votes)
[Link] How to contribute to the psychedelics ecosystem 2019-09-28T01:55:14.267Z · score: 10 (6 votes)
How to Make Billions of Dollars Reducing Loneliness 2019-08-24T01:49:45.629Z · score: 26 (17 votes)
New Top EA Cause: Flying Cars 2019-04-01T20:56:47.829Z · score: 21 (12 votes)
Open Thread #43 2018-12-08T05:39:37.672Z · score: 8 (4 votes)
Open Thread #41 2018-09-03T02:21:51.927Z · score: 4 (4 votes)
Five books to make you super effective 2015-04-02T02:31:48.509Z · score: 6 (6 votes)

Comments

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-02T23:57:16.960Z · score: 2 (1 votes) · EA · GW

I won't stop you! :)

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-02T21:22:18.505Z · score: 2 (1 votes) · EA · GW

Thanks!

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-02T21:21:57.489Z · score: 2 (1 votes) · EA · GW

Yep!

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-01T08:07:47.059Z · score: 35 (14 votes) · EA · GW

Get Joe Biden To Take Nootropics.

For a while, the 2020 American presidential contest was down to three men: The man with heart trouble, the man with brain trouble, and the man with ego trouble. But now that the man with heart trouble is ranking below Andrew Cuomo, who is not even running, in prediction markets for the Democratic nomination, it is looking increasingly likely that the American public will have to decide whether brain trouble or ego trouble is less disqualifying.

What they should be asking is which of brain trouble or ego trouble is more easily fixed.

It's possible you've heard some buzz in your social circle about brain-enhancing nootropic drugs. One thing you might not know is that in some cases, although the drug appears to be something of a dud for younger folks, it works in oldsters:

While it is known that the human brain endures diverse insults in the process of ageing, food-based nootropics are likely to go a long way in mitigating the impacts of these insults. Further research is needed before we reach a point where food-based nootropics are routinely prescribed.

From a lit review.

...According to a meta-analysis on human studies, piracetam improves general cognition when supplemented by people in a state of cognitive decline, such as the kind that comes with aging. Though piracetam may be a useful supplement for improving longevity, it offers limited benefits for healthy people.

Healthy people supplementing piracetam do experience little to no cognitive benefit. Though piracetam supplementation in healthy people is understudied, preliminary evidence suggests that piracetam is most effective for older people...

...

In persons with cognitive decline, supplementation of Piracetam was able to reduce aggression and agitation symptoms.

From Examine.com. (Remember from a few news cycles ago: Joe Biden tells factory worker ‘you’re full of shit’ during a tense argument over guns.)

Why might this be a Top EA Cause? In addition to the usual massive responsibilities of being POTUS, America is currently suffering from a pandemic. A 1% improvement in the intelligence of the actions taken by the chief executive could directly and immediately save thousands of lives.

Is it tractable? Yes, but it's not talent- or money-limited. It's memetics-limited. We need to figure out if we have any connections to the Biden campaign who can start planning his meals to keep the bulb as bright as possible. Failing that, we could suggest that a nootropics company form a marketing initiative around this. Or Kelsey could write about it in Vox. Or something.

Don't forget the importance of regular Super Mario 64 play either.

Comment by john_maxwell on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-22T03:29:16.916Z · score: 8 (2 votes) · EA · GW

What are the most important new ideas in your book for someone who's already been in the EA movement for quite a while?

Comment by john_maxwell on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-22T03:16:02.361Z · score: 3 (2 votes) · EA · GW

My best guess is no, but feel like I should throw this question out there in case anyone can think of plausible candidates.

Can you explain your thinking behind this? My model is that COVID-19 will spread to developing countries before too long, and once there, it will quickly become a much bigger problem than malaria etc. So the highest-impact global health intervention would appear to be "beta testing" of anti-COVID-19 interventions that we think can be transferred to a developing country context.

Anyway, this recent post on far-ultraviolet light looks pretty interesting. I'm pretty optimistic about ideas like this which could use the momentum of COVID-19 to overcome regulatory hurdles etc. and then end up being super valuable for other problems going forward.

Comment by john_maxwell on Advice for getting the most out of one-on-ones · 2020-03-21T05:19:06.043Z · score: 2 (3 votes) · EA · GW

If everyone records their 1-on-1s and rates their value on a scale of 1 to 10, along with various features that might be predictive of 1-on-1 value (e.g. how junior/senior they are, whether you're working on similar problems, whether you are from the same/different countries, your general conversational prompts/questions/conversation topic, etc.) then we can assemble a dataset and develop a predictive model of how valuable a 1-on-1 is likely to be. That helps with choosing who to meet with, and also persuading people to meet with you (if the predictive model says you should meet, that increases the odds they respond), and also knowing what to talk about (check to see which questions/conversation topics are predictive of a valuable 1-on-1).
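To make this concrete, here's a minimal Python sketch of such a model; the feature names, ratings, and data are hypothetical stand-ins, and a real version would need far more rows and proper validation:

```python
# Minimal sketch: predicting 1-on-1 value from meeting features.
# All feature names and data below are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Each row is one recorded 1-on-1; "rating" is the self-reported 1-10 value.
meetings = pd.DataFrame({
    "seniority_gap":   [0, 2, 1, 3, 0, 2],  # difference in years of experience
    "similar_problem": [1, 0, 1, 0, 1, 1],  # working on similar problems?
    "same_country":    [1, 1, 0, 0, 1, 0],
    "rating":          [8, 4, 9, 3, 7, 6],
})

X = meetings.drop(columns="rating")
y = meetings["rating"]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a prospective meeting before agreeing to it.
prospect = pd.DataFrame({"seniority_gap": [1], "similar_problem": [1], "same_country": [0]})
print("predicted value:", model.predict(prospect)[0])

# Feature importances hint at which features of a 1-on-1 actually matter.
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```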

Actually, for the questions/conversation topics part, if I were an EAG attendee, I would start a thread of question/conversation ideas on Facebook or somewhere for people to brainstorm in, and then use some kind of approval voting so people can figure out over time which prompts are best. If you have a good conversation, try to figure out in retrospect what prompt could have created it, then add that prompt to the list.

Comment by john_maxwell on April Fool's Day Is Very Serious Business · 2020-03-13T22:03:03.730Z · score: 11 (4 votes) · EA · GW

Makes sense, I'll do that.

Comment by john_maxwell on Open Thread #46 · 2020-03-13T21:54:19.762Z · score: 4 (2 votes) · EA · GW

That occurred to me, but I've noticed myself feeling more willing to post in an Open Thread than post as shortform. LW also has shortform, but despite that, their monthly Open Threads are seeing a lot of activity:

https://www.lesswrong.com/s/yai5mppkuCHPQmzpN

Comment by john_maxwell on What are the key ongoing debates in EA? · 2020-03-13T06:51:22.514Z · score: 6 (3 votes) · EA · GW

Apology accepted, thanks. I agree on point 2.

Comment by john_maxwell on Insomnia with an EA lens: Bigger than malaria? · 2020-03-12T06:36:32.881Z · score: 4 (2 votes) · EA · GW

Since insomnia is apparently a high-impact topic, I might as well share some anecdotes from my own battle with sleep difficulties.

I've had some success with behavioral solutions to insomnia ("don't use screens after 11 PM" type stuff). But the problem with behavioral solutions, in my view, is that they are too brittle. Life always happens and your habit breaks at some point. So in the spirit of Nassim Nicholas Taleb's comments on fragility, I've instead recently focused on finding "robust" or "antifragile" solutions to the problem of getting enough sleep. These tend to be technological. Right now I'm stacking a bunch of different technologies for better sleep:

  • Ebb forehead cooler device
  • Weighted blanket
  • f.lux
  • White noise machine
  • Eye mask
  • Glycine
  • Airway expansion. Note: I haven't gotten a sleep study, and I doubt I would strictly meet the criteria for sleep apnea diagnosis, but I still seem to be benefiting a lot from this.
  • Lying on an acupressure mat. Note: I think the most common explanations for why acupuncture works are pseudoscience. I recommend this book.
  • If I have to get up in the middle of the night, I wear orange glasses to block blue light. I also colored the night lights in our house with a red marker so they emit less blue light.

It might sound like a lot, but the nightly overhead of maintaining this is not high--less than 1% of the time I spend asleep. In aggregate this all seems to improve my sleep considerably in a way that doesn't depend on fragile behavioral interventions. (Some of the most valuable-seeming additions have been pretty recent, so we'll see how things work long term.)

Note: I suspect my sleep problems are more "physiological" than "psychological" in nature. CBT-i might work better for someone whose problems are more psychological.

Comment by john_maxwell on What are the key ongoing debates in EA? · 2020-03-12T04:24:44.943Z · score: 23 (8 votes) · EA · GW

I just want to note that in principle, large & weird or small & welcoming movements are both possible. 60s counterculture was a large & weird movement. Quakers are a small & welcoming movement. (If you want to be small & welcoming, I guess it helps to not advertise yourself very much.)

I think you are right that there's a debate around whether EA should be sanitized for a mass audience (by not betting on pandemics or whatever). But e.g. this post mentions that caution around growth could be good because growth is hard to reverse; I don't see anyone explicitly advocating for weirdness.

Comment by john_maxwell on What are the key ongoing debates in EA? · 2020-03-12T04:11:36.726Z · score: 8 (3 votes) · EA · GW

"View X is a rare/unusual view, and therefore it's not a debate." That seems a little... condescending or something?

How are we ever supposed to learn anything new if we don't debate rare/unusual views?

Comment by john_maxwell on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-09T05:29:14.393Z · score: 0 (2 votes) · EA · GW

I guess there's an interesting argument here for making casual gambling illegal--based on this thread, it seems like "Bets are serious & somber business, not for frivolous things like horse races" could be a really high-value meme to spread.

Comment by john_maxwell on What should EAs interested in climate change do? · 2020-01-15T08:15:17.248Z · score: 3 (2 votes) · EA · GW

In terms of plant-based alternatives, I think nutrition research could be high-impact and neglected. It seems like people are focused on trying to replicate the taste of meat, but when I experimented with veganism, I found myself wanting meat more the longer I'd gone without it, and finding it unusually satisfying if I hadn't had it in a long while -- which seems more compatible with a nutritional issue. The same pattern doesn't seem to manifest for other foods I find tasty.

I'm imagining a study which feeds participants a vegan diet along with some randomly chosen nutritional supplements to see which are correlated with reduced desire to eat meat or something like that. Or maybe just better publicizing already known nutrition research / integrating it into plant based meat substitutes -- for example, I just found this article which says iron from red meat is absorbed much more easily -- I do think I was craving red meat specifically relative to other animal products. (Come to think of it, I was also experiencing more fatigue than normal, which seems compatible with mild anemia?)

Comment by john_maxwell on What should EAs interested in climate change do? · 2020-01-11T07:37:12.768Z · score: 18 (9 votes) · EA · GW

Some related links:

More speculative questions (my own personal uninformed thoughts):

  • Regarding the tree planting option, can we breed trees which are less vulnerable to wildfires?
  • Regarding the marine cloud brightening option - could you make it doubly useful by going to areas which experience periodic flooding and spraying floodwaters up into the air? Maybe you could even get municipalities to pay you and make a business out of it.
  • Kelly and Zach Weinersmith wrote a book called Soonish which says (among other interesting things) that robots which automatically build buildings are on the horizon. To what extent could easy, cheap construction of new buildings and cities help mitigate sea-level rise and other global warming effects?
  • My brother has a physics degree and finds this to be a bit implausible: http://superchimney.org But it does make me wonder if there's a way to make money by buying land, terraforming it in a way that's good from a climate perspective, and selling the land after it's increased in value.

Comment by john_maxwell on Space governance is important, tractable and neglected · 2020-01-11T06:39:21.423Z · score: 7 (3 votes) · EA · GW

This is challenging because vast distances in space will likely be an obstacle to effective enforcement. Space is, in a nutshell, an endless desert with oases that are extremely far apart from each other. The closest star to Earth is 4.3 light years away, resulting in a round-trip latency of 8.6 years even if light-speed communication and transport are possible. The closest galaxy is approximately 2.5 million light years away, rendering conventional enforcement impossible.

Reminds me of an interesting article which appeared in Scientific American recently.

Anyway I thought this was a good post. With regard to tractability, I think it's possible that as we start to colonize space, the necessity of space governance may become apparent -- perhaps in a sudden & unexpected way. If political leaders are looking for solutions at that time, it's probably a good thing if there are proposals available which have been forged through an extensive & lively debate (as opposed to some kind of hastily composed emergency measure which ends up locking us into a suboptimal trajectory).

Another thought: If you think high quality political conversations are unusually difficult to have right now, but this situation might improve in the future, that could be an argument for delaying widespread public discussion of high-impact political topics to some future time when the situation has improved. (No reason not to think about such topics privately though.)

Comment by john_maxwell on Response to recent criticisms of EA "longtermist" thinking · 2020-01-11T03:29:53.247Z · score: 8 (2 votes) · EA · GW

It seems weird that longtermism is being accused of white supremacy given that population growth is disproportionately happening in countries that aren't traditionally considered white? As you can see from the map on this page, population growth is concentrated in places like Africa, the Middle East, and South Asia. It appears to me that it's neartermist views of population ethics ("only those currently alive are morally relevant") that place greater moral weight on white folks? I wonder how a grandmother from one of those places, proud of her many grandchildren, would react if a childless white guy told her that future generations weren't morally relevant... It also seems weird to position climate change as a neartermist cause.

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T08:42:30.156Z · score: 18 (7 votes) · EA · GW

The reversal test doesn't mean 'if you don't think a charity for X is promising, you should be in favour of more ¬X'. I may not find homeless shelters, education, or climate change charities promising, yet not want to move in the direction of greater homelessness, illiteracy, or pollution.

Suppose you're the newly appointed director of a large charitable foundation which has allocated its charitable giving in a somewhat random way. If you're able to resist status quo bias, then usually, you will not find yourself keeping the amount allocated for a particular cause at exactly the level it was at originally. For example, if the foundation is currently giving to education charities, and you don't think those charities are very effective, then you'll reduce their funding. If you think those charities are very effective, then you'll increase their funding.

Now consider "having EAs live alone in apartments in expensive cities" as a cause area. Currently, the amount we're spending on this area has been set in a somewhat random way. Therefore, if we're able to resist status quo bias, we should probably either be moving it up or moving it down. We could move it up by creating a charity that pays EAs to live alone, or move it down by encouraging EAs to move to the EA Hotel. (Maybe creating a charity that pays EAs to live alone would be impractical or create perverse incentives or something; this is more of an "in principle" intuition pump sort of argument.)

Edit: With regard to the professionalism thing, my personal feelings on this are something like the last paragraph in this comment -- I think it'd be good for some of us to be more professional in certain respects (e.g. I'm supportive of EAs working to gain institutional legitimacy for EA cause areas), but the Hotel culture I observed feels mostly acceptable to me. Probably some mixture of not seeing much interpersonal drama while I was there, and expecting the Hotel residents will continue to be fairly young people who don't occupy positions of power (grad student housing comes to mind). FWIW, my personal experience is that the value of professionalism comes up more often in Blackpool EA conversations than Bay Area EA conversations. With the Bay Area, you may very well be paying more rent for a less professional culture. Just my anecdotal impressions.

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T01:05:25.267Z · score: 14 (10 votes) · EA · GW

I'm not convinced community health issues are uniquely problematic when you have people living together. I feel like one could argue just as easily that conferences are risky for community health. If something awkward happens at EA Global, you'll have an entire year to chew on that before running into the person next year. (Pretty sure that past EA Global conferences have arranged shared housing in e.g. dormitories for participants, by the way.) And there is less shared context at a conference because it happens over a brief period of time. One could also argue that having the community be mostly online runs risks for community health (for obvious reasons), and it's critical for us to spend lots of time in person to build stronger bonds. And one could argue that not having much community at all, neither online nor in person, runs risks for community health due to value drift. Seems like there are risks everywhere.

If people really think there are significant community health risks with EA roommates, then they could start a charity which pays EAs who currently live with EA roommates to live alone. To my knowledge, no one has proposed a charity like that. It doesn't seem like a very promising charity to me. If you agree, then by the reversal test, it follows that as a community we should want to move a bit further in the direction of EAs saving money by living together.

Comment by john_maxwell on Institutions for Future Generations · 2019-11-22T20:39:10.016Z · score: 4 (2 votes) · EA · GW

Interesting point re: savings rate. It wouldn't surprise me if economists have done research into what factors cause an increase in the savings rate. (If no research has been done so far, it seems like such research would fill a valuable gap in the literature.) Anyway, it seems plausible to me that some things which cause an increase in the savings rate also increase longtermism more generally. (This is another topic which we could gather information about by checking to see if people who save a lot of money are more longtermist generally.) My personal guess would be that economic and political stability predicts savings rate better than equality. I suspect drastic efforts to mitigate present-day inequality would probably decrease the savings rate if anything. What's the point in saving money if the government might randomly take it at some point in the future? [Edit: If you replace "reducing inequality" with "ensuring more people have the lower levels of Maslow's hierarchy met" then I'd be more convinced.]

More broadly, I'd be interested to see people tackling longtermism as a psychological rather than a political project--what are the correlates of longtermist outlook that could feasibly be affected through interventions?

Comment by john_maxwell on Institutions for Future Generations · 2019-11-22T20:24:41.498Z · score: 3 (2 votes) · EA · GW

Can I sell my security? Why not just sell right before doing whatever it is I want to do that is going to screw the future over?

Comment by john_maxwell on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-22T05:17:26.451Z · score: 12 (4 votes) · EA · GW

Thanks for the aggregate position summary! I'd be interested to hear more about the motivation behind that wish, as it seems likely to me that doing shallow investigations of very speculative causes would actually be the comparative advantage of people who aren't employed at existing EA organizations. I'm especially curious given the high probability people assigned to the existence of a Cause X that should be getting so many resources. It seems like having people who don't work at existing EA organizations (and are thus relatively unaffected by existing blind spots) do shallow investigations of very speculative causes would be just the thing for discovering Cause X.

For a while now I've been thinking that the crowdsourcing of alternate perspectives ("breadth-first" rather than "depth-first" exploration of idea space) is one of the internet's greatest strengths. (I also suspect "breadth-first" idea exploration is underrated in general.) On the flip side, I'd say one of the internet's greatest weaknesses is the ease with which disagreements become unnecessarily dramatic. So I think if someone were to do a meta-analysis of recent literature on, say, whether remittances are actually good for developing economies in the long run (critiquing GiveDirectly -- btw, I couldn't find any reference to academic research on the impact of remittances in GiveWell's current GiveDirectly profile; maybe they just didn't think to look it up -- a case study in the value of an alternate perspective?), or whether usage of malaria bed nets for fishing is increasing or not (critiquing AMF), there's a sense in which we'd be playing against the strengths of the medium. Anyway, if organizations wanted critical feedback on their work, they could easily request that critical feedback publicly (solicited critical feedback is less likely to cause drama / bad feelings than unsolicited critical feedback), or even offer cash prizes for the best critiques, and I see few cases of organizations doing that.

Maybe part of what's going on is that shallow investigations of very speculative causes only rarely amount to something? See this previous comment of mine for more discussion.

Comment by john_maxwell on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T22:14:44.584Z · score: 19 (8 votes) · EA · GW

Not Buck, but one possibility is that people pursuing different high-level agendas have different intuitions about what's valuable, and those kind of disagreements are relatively difficult to resolve, and the best way to resolve them is to gather more "object-level" data.

Maybe people have already spent a fair amount of time having in-person discussions trying to resolve their disagreements, and haven't made progress, and this discourages them from writing up their thoughts because they think it won't be a good use of time. However, this line of reasoning might be mistaken -- it seems plausible to me that people entering the field of AI safety are relatively impartial judges of which intuitions do/don't seem valid, and the question of where new people in the field of AI safety should focus is an important one, and having more public disagreement would improve human capital allocation.

Comment by john_maxwell on EA syllabi and teaching materials · 2019-11-19T03:36:09.018Z · score: 2 (1 votes) · EA · GW

Here's another one: https://forum.effectivealtruism.org/posts/8i3Wdy4FuJbSDQr5k/a-semester-long-course-in-ea

Comment by john_maxwell on What areas of maths are useful across disciplines? · 2019-11-19T03:28:18.032Z · score: 2 (1 votes) · EA · GW

multiobjective optimization theory

Can you say something about why you feel this is especially useful?

Comment by john_maxwell on How to find EA documents on a particular topic · 2019-11-19T02:56:43.091Z · score: 9 (7 votes) · EA · GW

I assembled a huge list of domains like this and created a custom search engine using this tool from Google. Unfortunately, despite it being Google, the search results are really terrible, so I never posted it. (Example: a search for "capacity-building" returns 5 results, none of which are this page. I know it's picking up concepts.effectivealtruism.org because when I search for "moral uncertainty" the #2 result is from concepts.effectivealtruism.org. BTW, I included quite a number of domains in the search engine, so not all the results are necessarily EA-related.)

https://searchstack.co is a nice little tool which makes use of the site:A OR site:B mechanism, but unfortunately I believe Google caps the number of distinct domains you can search using that trick? But maybe we could use multiple searchstacks for different EA subtopics. I think if there are search companies that actually do a good job of allowing you to create a custom search engine, that would be the ideal solution, even if it requires paying a monthly fee. If someone else wants to take initiative on this, I'd love to collaborate.
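For concreteness, here's a rough Python sketch of the multiple-queries workaround; the cap below is a guess, since I don't know Google's actual limit on OR'd site: operators:

```python
# Sketch: splitting a long domain list into several site:-restricted Google
# queries. MAX_SITES_PER_QUERY is a guess -- the real cap is unknown to me.
from urllib.parse import quote_plus

MAX_SITES_PER_QUERY = 30  # hypothetical limit on OR'd site: operators

domains = [
    "forum.effectivealtruism.org",
    "lesswrong.com",
    "concepts.effectivealtruism.org",
    # ... the rest of the EA domain list ...
]

def search_urls(query, domains):
    """Return one Google search URL per chunk of domains."""
    urls = []
    for i in range(0, len(domains), MAX_SITES_PER_QUERY):
        chunk = domains[i:i + MAX_SITES_PER_QUERY]
        site_filter = " OR ".join("site:" + d for d in chunk)
        urls.append("https://www.google.com/search?q="
                    + quote_plus(query + " (" + site_filter + ")"))
    return urls

for url in search_urls("capacity-building", domains):
    print(url)
```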

It'd be especially cool if a search engine could search Facebook group archives, since there's so much EA discussion in those.

Comment by john_maxwell on Applying EA to climate change · 2019-11-18T23:14:14.205Z · score: 2 (1 votes) · EA · GW

Surely some foods emit much more carbon than others. Maybe we could just tax food based on how much carbon it emits? Then people won't want to throw it away, because they don't want to waste their money. (And they'll also substitute low-emission food for high-emission food.)

Comment by john_maxwell on Applying EA to climate change · 2019-11-18T07:06:49.871Z · score: 7 (2 votes) · EA · GW

35% of food is thrown away in high-income economies.

That number seems pretty high. I wonder where most of the waste happens? Somewhat contrived scenario here, but suppose a drug store buys a new food product. Customers aren't buying it, so the store throws it away. But then, due to this awareness campaign, next time the store keeps it on the shelf--which means it doesn't have room for something customers do want to buy, so the customers drive to a different store, cancelling out the alleged food-waste benefit. Again, contrived; I just feel like we should know why the waste is happening before working to stop it. There's a clear financial incentive not to waste food. Maybe it's mostly food with a short shelf life, like fresh vegetables, that people intend to eat but never do?

Instead of a public campaign against food waste, maybe run a public campaign that shows the decarbonization benefit of everyday lifestyle changes. Which is better from an individual perspective: stopping driving and taking the bus to work, or cutting food waste from 35% to 0%?

Comment by john_maxwell on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-12T07:57:59.332Z · score: 33 (17 votes) · EA · GW

Thanks for this post!

One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and also excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?

There isn't necessarily a contradiction in expressing both positions. For example, perhaps there's an intellectual center and it's too weird. (Though, if the weirdness comes in the form of "People saying crazy stuff online", this explanation seems less likely.) You could also argue that we are open to weird ideas, just not the right weird ideas.

But I think it could be interesting to try & make this tradeoff more explicit in future surveys. It seems plausible that the de facto result of announcing survey results such as these is to move us in some direction along a single coarse intellectual centralization/decentralization dimension. (As I said, there might be a way to square this circle, but if so I think you want a longer post explaining how, not a survey like this.)

Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension--but perceptions may differ. Maybe one leader says "we need more talk and less action", and another leader says "we need less talk and more action", but they both agree on the ideal talk/action balance, they just disagree about the current balance (because they've made different observations about the current balance).

One way to address this problem in general for some dimension X is to have a rubric with 5 written descriptions of levels of X the community could aim for, and ask each leader to select the level of X that seems optimal to them. Another advantage of this scheme: if there's a fair amount of community variation in levels of X, the community could be below the optimal level of X on average, yet if leaders publicly announce that levels of X should move up (without specifying a target level), people who are already above the ideal level of X might move even further above it.
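Here's a toy simulation of the contrast, in Python, with every number invented:

```python
# Toy simulation (all numbers invented): the community mean for X is below
# the ideal, but a blanket "more X!" nudge also pushes people who were
# already above the ideal even further above it.
import numpy as np

rng = np.random.default_rng(0)
ideal = 5.0
levels = rng.normal(loc=4.0, scale=2.0, size=1000)  # mean below ideal, lots of variation

print("before nudge: mean =", round(levels.mean(), 2),
      "| fraction above ideal =", round((levels > ideal).mean(), 2))

nudged = levels + 1.5  # everyone shifts up by the same amount
print("blanket nudge: fraction above ideal =", round((nudged > ideal).mean(), 2))

# Rubric version: each person moves partway toward the stated ideal instead.
rubric = levels + 0.5 * (ideal - levels)
print("rubric nudge: mean =", round(rubric.mean(), 2),
      "| fraction above ideal =", round((rubric > ideal).mean(), 2))
```

The point is just that a stated target lets the overshooters come down while the undershooters come up; a purely directional announcement can't do both.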

Comment by john_maxwell on Assumptions about the far future and cause priority · 2019-11-12T06:06:28.884Z · score: 3 (2 votes) · EA · GW

It seems extremely unlikely to me that we will come remotely close to discovering the utility-maximizing pattern of matter that can be formed even just here on Earth. There are about 10^50 atoms on Earth. In how many different ways could these atoms be organized in space? To keep things simple, suppose that we just want to delete some of them to form a more harmonious pattern, and otherwise do not move anything. Then there are already 2^(10^50) possible patterns for us to explore.

One direction you could take this: It's probably not actually necessary for us to explore 2^(10^50) patterns in a brute-force manner. For example, once I've tried Brussels sprouts, I can be reasonably confident that I still won't like them if you move a few atoms over microscopically. A Friendly AI programmed to maximize a human utility function it has uncertainty about might offer incentives for humans to try new matter configurations that it believes offer high value of information. For example, before trying a dance performance which lasts millions of years, it might first run an experimental dance performance which lasts only one year and see how humans like it. I suspect a superintelligent Friendly AI would hit diminishing returns on experiments of this type within the first thousand years.

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-01T04:13:48.518Z · score: 38 (15 votes) · EA · GW

The fact that there are only 18 total donations totaling less than $10k is concerning

If you are well-funded, they'll say: "You don't need my money. You're already well-funded." If you aren't well-funded, they'll say: "You aren't well-funded. That seems concerning."

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-01T04:12:49.077Z · score: 27 (11 votes) · EA · GW

This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data. That's why GiveWell spun out GiveWell Labs, which eventually became the Open Philanthropy Project. It's why CEA started the EA Funds to fund more speculative early-stage EA projects. Lots of EA projects, from cultured meat to x-risk reduction to global priorities research, are speculative and hard to rigorously measure or forecast. As a quick concrete example, the Open Philanthropy Project gave $30 million to OpenAI, much more money than the EA Hotel has received, with much less public justification than has been put forth for the EA Hotel, and without much in the way of numerical measurements or forecasts.

If you really want to discuss this topic, I suggest you create a separate post laying out your position - but be warned, this seems to be a fairly deep philosophical divide within the movement which has been relatively hard to bridge. I think you'll want to spend a lot of time reading EA archive posts before tackling this particular topic. The fact that you seem to believe EAs think contributing to the sort of relatively undirected, "unsafe" AI research that DeepMind is famous for should be a major priority suggests to me that there's a fair amount you don't know about positions & thinking that are common to the EA movement.

Here are some misc links which could be relevant to the topic of measurability:

And here's a list of lists of EA resources more generally speaking:

Comment by john_maxwell on Notes on 'Atomic Obsession' (2009) · 2019-10-27T20:56:37.811Z · score: 2 (1 votes) · EA · GW

Seems like some form of Pascal's Wager is valid in this case -- it's hard to know for sure what the impact of nukes will be, especially without the benefit of hindsight, so it's better to err on the side of caution.

Comment by john_maxwell on Resource Generation: Inheriting-to-give, for systemic change · 2019-10-15T15:50:28.931Z · score: 13 (5 votes) · EA · GW

Do they have thoughts on GiveDirectly? Looks like although they mention the Global South, they're asking members to donate to first-world political advocacy groups. Was Giridharadas one of the critics who say rich people have too much influence on US public policy?

BTW this group recently got a grant from the EA Meta fund: https://generationpledge.org

Comment by john_maxwell on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-12T06:01:14.743Z · score: 14 (5 votes) · EA · GW

I'd be interested to know how people think long-range forecasting is likely to differ from short-range forecasting, and to what degree we can apply findings from short-range forecasting to long-range forecasting. Could it be possible to, for example, ask forecasters to forecast at a variety of short-range timescales, fit a curve to their accuracy as a function of time (or otherwise try to mathematically model the "half-life" of the knowledge powering the forecast--I don't know what methodologies could be useful here, maybe survival analysis?) and extrapolate this model to long-range timescales?
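To illustrate the curve-fitting idea, here's a minimal sketch in Python; the accuracy numbers and the exponential-decay functional form are both assumptions for illustration, not anything from the Open Phil piece:

```python
# Sketch: fit a decay curve to forecast accuracy at short horizons and
# extrapolate to longer ones. The accuracy numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

horizon_years = np.array([0.25, 0.5, 1.0, 2.0, 3.0])
accuracy      = np.array([0.80, 0.74, 0.68, 0.60, 0.55])  # hypothetical scores

def decay(t, floor, amplitude, half_life):
    # The forecaster's edge decays toward `floor` (chance level) with some half-life.
    return floor + amplitude * 0.5 ** (t / half_life)

params, _ = curve_fit(decay, horizon_years, accuracy, p0=[0.5, 0.3, 2.0])
print("estimated half-life of the forecasting edge: %.1f years" % params[2])
print("extrapolated accuracy at 10 years: %.2f" % decay(10, *params))
```

Whether the extrapolation is trustworthy depends heavily on whether the decay really follows the assumed functional form, which is itself an empirical question.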

I'm also curious why there isn't more interest in presenting people with historical scenarios and asking them to forecast what will happen next in the historical scenario. Obviously if they already know about that period of history this won't work, but that seems possible to overcome.

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-12T05:43:08.805Z · score: 2 (1 votes) · EA · GW

Another thought: even if the original post had a weak epistemic status, if it becomes popular, receives widespread scrutiny, and survives that scrutiny, it could be reasonable to believe its "de facto" epistemic status is higher than what's posted at the top. But yes, I guess in that case there's the risk that none of the people who scrutinized it had familiarity with relevant literature that contradicted the post.

Maybe the solution is to hire someone to do lit reviews to carefully examine posts with epistemic status disclaimers that nonetheless became popular and seem decision relevant.

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-09T20:28:31.097Z · score: 4 (2 votes) · EA · GW

Interesting thought, upvoted!

Is there particular evidence for source amnesia you have in mind? The abstract for the first Wikipedia citation says:

Experiment 2 demonstrated that when normal subjects' level of item recall was equivalent to that of amnesics, they exhibited significantly less source amnesia: Normals rarely failed to recollect that a retrieved item derived from either of the two sources, although they often forgot which of the two experimenters was the correct source. The results are discussed in terms of their implications for theories of normal and abnormal memory.

So I guess the question is whether the epistemic status disclaimer falls into the category of source info that people will remember ("an experimenter told me X") or source info that people often forget ("Experimenter A told me X"). (Or whether it even makes sense to analyze epistemic status in the paradigm of source info at all--for example, including an epistemic status could cause readers to think "OK, these are just ideas to play with, not solid facts" when they read the post, and have the memory encoded that way, even if they aren't able to explicitly recall a post's epistemic status. And this might hold true regardless of how widespread a post is shared. Like, for all we know, certain posts get shared more because people like playing with new ideas more than they like reading established facts, but they're pretty good at knowing that playing with new ideas is what they're doing.)

I think if you fully buy into the source amnesia idea, that could be considered an argument for posting anything to the EA Forum which is above average quality relative to a typical EA information diet for that topic area--if you really believe this source amnesia thing, people end up taking Facebook posts just as seriously as papers they read on Google Scholar.

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:57:27.352Z · score: 6 (4 votes) · EA · GW

site:forum.effectivealtruism.org on Google has been working OK for me.

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:36:59.291Z · score: 6 (3 votes) · EA · GW

Clever :)

However, I'm not sure that post follows its own advice, as it appears to be essentially a collection of anecdotes. And it's possible to marshal anecdotes on both sides, e.g. here is Claude Shannon's take:

...very frequently someone who is quite green to a problem will sometimes come in and look at it and find the solution like that, while you have been laboring for months over it. You’ve got set into some ruts here of mental thinking and someone else comes in and sees it from a fresh viewpoint.

[Edit: I just read that Shannon and Hamming, another person I cited in this thread, apparently shared an office at Bell Labs, so their opinions may not be 100% independent pieces of evidence. They also researched similar topics.]

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:31:37.344Z · score: 17 (8 votes) · EA · GW

One possible synthesis comes from Turing award winner Richard Hamming's book The Art of Doing Science and Engineering. He's got chapters at the end on Creativity and Experts. The chapters are somewhat rambly and I've quoted passages below. My attempt to summarize Hamming's position: Having a deep intellectual toolkit is valuable, but experts are often overconfident and resistant to new ideas.

Chapter 25: Creativity

...Do not be too hasty [in refining a problem], as you are likely to put the problem in the conventional form and find only the conventional solution...

...

...Wide acquaintance with various fields of knowledge is thus a help—provided you have the knowledge filed away so it is available when needed, rather than to be found only when led directly to it. This flexible access to pieces of knowledge seems to come from looking at knowledge while you are acquiring it from many different angles, turning over any new idea to see its many sides before filing it away. This implies effort on your part not to take the easy, immediately useful “memorizing the material” path, but prepare your mind for the future.

...

Over the years of watching and working with John Tukey I found many times he recalled the relevant information and I did not, until he pointed it out to me. Clearly his information retrieval system had many more “hooks” than mine did. At least more useful ones! How could this be? Probably because he was more in the habit than I was of turning over new information again and again so his “hooks” for retrieval were more numerous and significantly better than mine were. Hence wishing I could similarly do what he did, I started to mull over new ideas, trying to make significant “hooks” to relevant information so when later I went fishing for an idea I had a better chance of finding an analogy. I can only advise you to do what I tried to do—when you learn something new think of other applications of it—ones which have not arisen in your past but which might in your future. How easy to say, but how hard to do! Yet, what else can I say about how to organize your mind so useful things will be recalled readily at the right time?

...

...Without self-confidence you are not likely to create great, new things. There is a thin line between having enough self-confidence and being over-confident. I suppose the difference is whether you succeed or fail; when you win you are strong willed, and when you lose you are stubborn!...

Chapter 26: Experts

...

In an argument between a specialist and a generalist the expert usually wins by simply: (1) using unintelligible jargon, and (2) citing their specialist results which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts are both necessary, and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.

...

Experts in looking at something new always bring their expertise with them as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You can not blame them too much since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.

All things which are proved to be impossible must obviously rest on some assumptions, and when one or more of these assumptions are not true then the impossibility proof fails—but the expert seldom remembers to carefully inspect the assumptions before making their “impossible” statements. There is an old statement which covers this aspect of the expert. It goes as follows: “If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.”

...

...It appears most of the great innovations come from outside the field, and not from the insiders... examples occur in most fields of work, but the text books seldom, if ever, discuss this aspect.

...the expert faces the following dilemma. Outside the field there are a large number of genuine crackpots with their crazy ideas, but among them may also be the crackpot with the new, innovative idea which is going to triumph. What is a rational strategy for the expert to adopt? Most decide they will ignore, as best they can, all crackpots, thus ensuring they will not be part of the new paradigm, if and when it comes.

Those experts who do look for the possible innovative crackpot are likely to spend their lives in the futile pursuit of the elusive, rare crackpot with the right idea, the only idea which really matters in the long run. Obviously the strategy for you to adopt depends on how much you are willing to be merely one of those who served to advance things, vs. the desire to be one of the few who in the long run really matter. I cannot tell you which you should choose; that is your choice. But I do say you should be conscious of making the choice as you pursue your career. Do not just drift along; think of what you want to be and how to get there. Do not automatically reject every crazy idea, the moment you hear of it, especially when it comes from outside the official circle of the insiders—it may be the great new approach which will change the paradigm of the field! But also you cannot afford to pursue every "crackpot" idea you hear about. I have been talking about paradigms of Science, but so far as I know the same applies to most fields of human thought, though I have not investigated them closely. And it probably happens for about the same reasons; the insiders are too sure of themselves, have too much invested in the accepted approaches, and are plain mentally lazy. Think of the history of modern technology you know!

...

...In some respects the expert is the curse of our society with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do”. Especially in the areas where you are so sure you know; the area of the paradigms of your field.

Hamming shares a number of stories from the history of science to support his claims. He also says he has more stories which he didn't include in the chapter, and that he looked for stories which went against his position too.

A couple takeaways:

  • Survivorship bias regarding stories of successful contrarians - most apparent crackpots actually are crackpots.

  • Paradigm shifts - if an apparent crackpot is not actually a crackpot, their idea has the potential to be extremely important. So shutting down all the apparent crackpots could have quite a high cost even if most are full of nonsense. As Jerome Friedman put it regarding the invention of bagging (coincidentally mentioned in the main post):

The first time I saw this-- when would that have been, maybe the mid '90s-- I knew a lot about the bootstrap. Actually, I was a student of Brad Efron, who invented the bootstrap. And Brad and I wrote a book together on the bootstrap in the early '90s. And then when I saw the bag idea from Leo, I thought this looks really crazy. Usually the bootstrap is used to get the idea of standard errors or bias, but Leo wants to use bootstrap to produce a whole bunch of trees and to average them, which sounded really crazy to me. And it was a reminder to me that you see an idea that looks really crazy, it's got a reasonable chance of actually being really good. If things look very familiar, they're not likely to be big steps forward. This was a big step forward, and took me and others a long time to realize that.

However, even if one accepts the premise that apparent crackpots deliver surprisingly high expected value, it's still not obvious how many we want on the Forum!

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T02:11:14.029Z · score: 4 (3 votes) · EA · GW

More thoughts re: the wisdom of the crowds: I suppose the wisdom of the crowds works best when each crowd member is in some sense an "unbiased estimator" of the quantity to be estimated. For example, suppose we ask a crowd to estimate the weight of a large object, but only a few "experts" in the crowd know that the object is hollow inside. In this case, the estimate of a randomly chosen expert could beat the average estimate of the rest of the crowd. I'm not sure how to translate this into a more general-purpose recommendation though.
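Here's a quick simulation of the hollow-object example, with all numbers invented, showing how a single informed expert can beat the crowd average:

```python
# Toy version of the hollow-object example (numbers invented): the crowd is
# centered on the weight the object *would* have if solid; a few experts
# know it's hollow and are centered on the true weight.
import numpy as np

rng = np.random.default_rng(0)
true_weight = 60.0     # the object is hollow
solid_weight = 100.0   # what the crowd assumes

crowd = rng.normal(loc=solid_weight, scale=15.0, size=500)
experts = rng.normal(loc=true_weight, scale=15.0, size=5)

print("crowd-average error:", round(abs(crowd.mean() - true_weight), 1))
print("single-expert error:", round(abs(experts[0] - true_weight), 1))
```

Averaging shrinks the crowd's noise but leaves its shared bias fully intact, which is why the noisier but unbiased expert wins here.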

Comment by john_maxwell on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T02:01:15.782Z · score: 5 (5 votes) · EA · GW

(Upvoted)

Maybe it's possible to develop more specific guidelines here. For example, your comment implies that you think it's essential to know all the key considerations. OK... but I don't see why ignorance of known key considerations would prevent someone from pointing out a new key consideration. And if we discourage them from making that post, that could be very harmful, because as you say, it's important to know all the key considerations.

In other words, maybe it's worth differentiating the act of generating intellectual raw material, and the act of drawing conclusions.

Comment by john_maxwell on Long-term Donation Bunching? · 2019-09-28T13:21:24.496Z · score: 3 (2 votes) · EA · GW

Another argument against extreme donation bunching: Because marginal tax rates get higher as your income increases, being able to deduct $40K is not necessarily twice as valuable as being able to deduct $20K.
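To see why, here's a small illustration in Python with invented brackets (not any real tax schedule):

```python
# Illustration with invented brackets (not real tax law): deductions come off
# your highest bracket first, so the second $20K saves less than the first.
BRACKETS = [(0, 0.10), (40_000, 0.20), (80_000, 0.30)]  # (threshold, marginal rate)

def tax(income):
    owed = 0.0
    thresholds = BRACKETS + [(float("inf"), 0.0)]
    for (lo, rate), (hi, _) in zip(BRACKETS, thresholds[1:]):
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

income = 100_000
first_20k = tax(income) - tax(income - 20_000)
second_20k = tax(income - 20_000) - tax(income - 40_000)
print("tax saved by the first $20K deducted: ", first_20k)   # all at the 30% rate
print("tax saved by the second $20K deducted:", second_20k)  # all at the 20% rate
```

With these invented brackets the first $20K of deductions saves $6,000 but the second saves only $4,000, so two $20K deductions in separate years can beat one bunched $40K deduction.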

Comment by john_maxwell on Some personal thoughts on EA and systemic change · 2019-09-27T02:47:40.215Z · score: 45 (22 votes) · EA · GW

I wish the systemic change discussion was less focused on cost-effectiveness and more focused on uncertainty regarding the results of our actions. For example, in 2013 Scott Alexander wrote this post on how military strikes are an extremely cheap way to help foreigners ("at least potentially"). I'm glad he included the disclaimer, because although Scott's article works off the premise that "life is ~10% better in Libya after Gaddafi was overthrown", Libya isn't looking too hot right now - Obama says Libya is the biggest regret of his presidency. Scott also failed to mention that American intervention in Libya may have reduced North Korea's willingness to negotiate regarding its nuclear weapons program.

To me, uncertainty means it's valuable to research systemic changes well in advance of trying to make them. If systemic changes aren't cost-effective now, but might be cost-effective in the future, we should consider starting to theorize, debate, and run increasingly large experiments now anyway. (Disclaimer: Having productive disagreements about systemic changes is in itself a largely unsolved institution design problem, I'd argue! Maybe we should start by trying to solve that.)

Comment by john_maxwell on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-27T02:05:16.083Z · score: 3 (2 votes) · EA · GW

Maybe it'd be helpful to build the charter city somewhere like here?

Comment by john_maxwell on Movement Collapse Scenarios · 2019-09-21T22:02:07.285Z · score: 2 (1 votes) · EA · GW

Thanks to everyone who entered this contest! I decided to split the prize money evenly between the four entries. Winners, please check your private messages for payment details!

Comment by john_maxwell on How much EA analysis of AI safety as a cause area exists? · 2019-09-19T02:49:22.869Z · score: 3 (2 votes) · EA · GW

This critique is quite lengthy :-) Is there a summary available?

Comment by john_maxwell on What things do you wish you discovered earlier? · 2019-09-19T01:33:00.410Z · score: 3 (2 votes) · EA · GW

http://painscience.com saved my career from a disabling repetitive strain injury. I'll never get back the 1-2 years of misery I spent before finding that website.

Comment by john_maxwell on How to Make Billions of Dollars Reducing Loneliness · 2019-09-17T01:05:25.243Z · score: 2 (1 votes) · EA · GW

It looks like this report is from 2018, and doesn't incorporate the 2019 YouGov research I linked. (I doubt pre-2004 data will give us insight into modern loneliness. Facebook and Twitter didn't exist back then, for instance.) This bit is interesting though:

More recently, some media outlets have misinterpreted the results of a 2018 Cigna survey to argue that loneliness has increased. The survey indicated that loneliness was higher for younger Americans than for older ones. A mistaken interpretation of this finding would be that older Americans were less likely to be lonely when they were younger than today's younger Americans are. This interprets life-course changes in loneliness as reflecting a change over time for Americans whatever their stage in the life course. While USA Today reported the age-based results as "surprising," the research on the relationship between age and loneliness suggests that the "[p]revalence and intensity of lonely feelings are greater in adolescence and young adulthood (i.e., 16-25 years of age)," decline with age, and then increase again in the very old.33 The Cigna survey does not support the claim that loneliness has increased over time, nor is the increased loneliness of adolescents a new revelation.

It's not clear to me how to reconcile this with e.g. the research YouGov cites to attribute loneliness among current youth to social media use. I guess a natural first step would be to see whether the magnitude of historical effects in the Handbook of Individual Differences in Social Behavior can explain what YouGov saw. I think you'd have to analyze data carefully to figure out if it supports the hypothesis "young people just tend to be lonelier" or the hypothesis "social ties get weaker with every passing generation + elderly people get lonely as their friends die".
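For what it's worth, here's a sketch (in Python, with invented survey numbers) of the comparison I have in mind:

```python
# Sketch with invented survey numbers: distinguishing "young people are
# just lonelier" (age effect) from "each generation is lonelier" (cohort
# effect) requires comparing the same cohorts at different ages.
import pandas as pd

surveys = pd.DataFrame({
    "year":       [2000, 2000, 2010, 2010, 2020, 2020],
    "birth_year": [1950, 1980, 1950, 1980, 1950, 1980],
    "loneliness": [3.0,  5.0,  2.8,  4.4,  3.4,  4.0],  # invented means, 1-7 scale
})
surveys["age"] = surveys["year"] - surveys["birth_year"]

# An age effect predicts similar loneliness at similar ages across cohorts;
# a cohort effect predicts the 1980 column stays higher at every age.
print(surveys.pivot(index="year", columns="birth_year", values="loneliness"))
```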

In any case, I think loneliness could be a problem worth tackling even if it isn't rising. (And you will notice I didn't technically claim it was rising :P) The point is also somewhat moot as only one person expressed interest as a result of me posting here.