Posts

New Top EA Causes for 2020? 2020-04-01T07:39:59.687Z · score: 24 (10 votes)
April Fool's Day Is Very Serious Business 2020-03-13T09:16:37.023Z · score: 64 (44 votes)
Open Thread #46 2020-03-13T08:01:31.342Z · score: 8 (3 votes)
Should you familiarize yourself with the literature before writing an EA Forum post? 2019-10-06T23:17:09.317Z · score: 32 (15 votes)
[Link] How to contribute to the psychedelics ecosystem 2019-09-28T01:55:14.267Z · score: 10 (6 votes)
How to Make Billions of Dollars Reducing Loneliness 2019-08-24T01:49:45.629Z · score: 26 (17 votes)
New Top EA Cause: Flying Cars 2019-04-01T20:56:47.829Z · score: 21 (12 votes)
Open Thread #43 2018-12-08T05:39:37.672Z · score: 8 (4 votes)
Open Thread #41 2018-09-03T02:21:51.927Z · score: 4 (4 votes)
Five books to make you super effective 2015-04-02T02:31:48.509Z · score: 7 (7 votes)

Comments

Comment by john_maxwell on A Case and Model for Aggressively Funding Effective Charities · 2020-09-28T02:00:03.261Z · score: 2 (1 votes) · EA · GW

Sure, but if you only award prizes for the latter, I think people will gradually recognize the difference.

Maybe your point is that the opinions of loudmouths like myself will be overrepresented in such a scheme? Allowing for private submissions could help address that.

Comment by john_maxwell on A Case and Model for Aggressively Funding Effective Charities · 2020-09-25T09:42:32.166Z · score: 12 (5 votes) · EA · GW

In terms of hearing diverse perspectives, I suspect there are more effective ways to accomplish that goal than having diverse funders. For example, a funder could require that a nonprofit lay out its thinking publicly in detail, and offer prizes for the best critiques other people write in response. That way you're optimizing for hearing from people who think they have something to add.

Comment by john_maxwell on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-14T11:53:28.269Z · score: 3 (2 votes) · EA · GW

I thought this recent Netflix documentary which talks a lot about Bill Gates' charity work was fairly inspiring (and informative). I haven't tried watching videos of suffering... I doubt it would be very motivating for the sort of study/brainstorm/write EA work I most want myself to do.

Comment by john_maxwell on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T09:37:22.886Z · score: 7 (4 votes) · EA · GW

Why not just have the people who need mentorship serve as "research personal assistants" to improve the productivity of people who are qualified to provide mentorship? (This describes something which occurs between professors and graduate students, right?)

Comment by john_maxwell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T06:23:28.329Z · score: 4 (2 votes) · EA · GW

At time T0, someone suggests X as a joke.

Telling jokes as an EA cause.

Comment by john_maxwell on EA Cameroon - COVID-19 Awareness and Prevention in the Santa Division of Cameroon Project Proposal · 2020-07-23T03:30:47.896Z · score: 2 (1 votes) · EA · GW

I have no idea; I already shared my notes above! :) Perhaps the team could reach out to e.g. the author of the Johns Hopkins article?

BTW, I did find this article which argues for knitted masks:

https://stringking.com/face-masks/knit-vs-woven-fabric/

However, I'm more inclined to trust Johns Hopkins. But maybe the author of the Johns Hopkins article would have interesting opinions on the above link.

Edit: Here's more info

https://med.stanford.edu/news/all-news/2020/06/stanford-scientists-contribute-to-who-mask-guidelines.html

Comment by john_maxwell on EA Cameroon - COVID-19 Awareness and Prevention in the Santa Division of Cameroon Project Proposal · 2020-07-22T03:42:54.148Z · score: 2 (1 votes) · EA · GW

Hm, socks are knitted, not woven, right?

Comment by john_maxwell on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T07:25:59.659Z · score: 3 (3 votes) · EA · GW

My guess would be that if you play with GPT-3, it can talk about human values (or AI alignment, for that matter) about as well as it can talk about anything else. In that sense, stronger capabilities for GPT-3 also potentially help solve the alignment problem.

Edit: More discussion here:

https://www.lesswrong.com/posts/BnDF5kejzQLqd5cjH/alignment-as-a-bottleneck-to-usefulness-of-gpt-3?commentId=vcPdcRPWJe2kFi4Wn

Comment by john_maxwell on EA Cameroon - COVID-19 Awareness and Prevention in the Santa Division of Cameroon Project Proposal · 2020-07-20T06:56:51.977Z · score: 2 (1 votes) · EA · GW

Since you requested feedback, here are some quick thoughts:

While I very much hope Cameroon is able to bring COVID under control, it seems like this could be difficult based on what we've seen in other countries. So the part of your plan that I'm most optimistic about is the mask making, because I think that could save lives even if COVID is not brought under control. Somewhere I read (can't remember where, unfortunately) that if you wear a mask, you'll inhale a smaller number of viral particles if you're exposed to an infected person, and a smaller viral dose tends to give you a milder case, which means you're more likely to acquire immunity without putting your life at risk.

So I'd encourage you to think about questions like: After we practice explaining the mask-making process in our workshops, can we find a way to explain mask-making via radio / flyers / newspaper articles? Or can we tell everyone at our mask-making workshops that they should run their own mask-making workshops for their family and friends and so on, so the mask-making knowledge spreads through the population that way?

Additionally, from what I've read, some homemade masks are much more effective than others. Some snippets from my notes on mask effectiveness:

The test produced a few clear winners. While droplets from the average cough traveled around eight feet from an uncovered face, they went only 2.5 inches when produced behind a mask made of two layers of simple cotton quilting fabric. A mask made from a folded handkerchief produced droplets that traveled a bit over a foot. A loose, single-ply cotton bandana didn’t fare as well: While it prevented some fluid release, the cough’s plume still traveled nearly four feet.

“Whenever you have the option, use tightly woven fabric that has minimal leakage,” Verma concludes. “Any sort of covering is better than none.”

https://www.nationalgeographic.com/science/2020/07/how-to-make-coronavirus-masks-that-everyone-will-want-to-wear-cvd/

Thicker, more densely woven cotton fabrics are best, such as quilting cotton or cotton sheets. Stretchy knits aren’t ideal. Hold the fabric up to the light: The fewer tiny holes you can see, the better it will work to filter droplets.

https://www.hopkinsmedicine.org/health/conditions-and-diseases/coronavirus/coronavirus-face-masks-what-you-need-to-know

The fabric should be a woven fabric, not a knitted fabric. What’s the difference? Woven fabrics don’t stretch much, so when you tie it around your face, the tiny holes between the threads don’t get bigger and let in more viruses.

https://www.sleepphones.com/Comfortable-Coronavirus-Face-Mask-Beard-Glasses-Fogging

In recent tests, HEPA furnace filters scored well, as did vacuum cleaner bags, layers of 600-count pillowcases and fabric similar to flannel pajamas. Stacked coffee filters had medium scores. Scarves and bandanna material had the lowest scores, but still captured a small percentage of particles.

If you don’t have any of the materials that were tested, a simple light test can help you decide whether a fabric is a good candidate for a mask.

“Hold it up to a bright light,” said Dr. Scott Segal, chairman of anesthesiology at Wake Forest Baptist Health who recently studied homemade masks. “If light passes really easily through the fibers and you can almost see the fibers, it’s not a good fabric. If it’s a denser weave of thicker material and light doesn’t pass through it as much, that’s the material you want to use.”

...

Dr. Wang’s group tested two types of air filters. An allergy-reduction HVAC filter worked the best, capturing 89 percent of particles with one layer and 94 percent with two layers. A furnace filter captured 75 percent with two layers, but required six layers to achieve 95 percent. To find a filter similar to those tested, look for a minimum efficiency reporting value (MERV) rating of 12 or higher or a microparticle performance rating of 1900 or higher.

The problem with air filters is that they potentially could shed small fibers that would be risky to inhale. So if you want to use a filter, you need to sandwich the filter between two layers of cotton fabric. Dr. Wang said one of his grad students made his own mask by following the instructions in the C.D.C. video, but adding several layers of filter material inside a bandanna.

Dr. Wang’s group also found that when certain common fabrics were used, two layers offered far less protection than four layers. A 600 thread count pillow case captured just 22 percent of particles when doubled, but four layers captured nearly 60 percent. A thick woolen yarn scarf filtered 21 percent of particles in two layers, and 48.8 percent in four layers. A 100 percent cotton bandanna did the worst, capturing only 18.2 percent when doubled, and just 19.5 percent in four layers.

...

The best-performing designs were a mask constructed of two layers of high-quality, heavyweight “quilter’s cotton,” a two-layer mask made with thick batik fabric, and a double-layer mask with an inner layer of flannel and outer layer of cotton.

...

Bonnie Browning, executive show director for the American Quilter’s Society, said that quilters prefer tightly woven cottons and batik fabrics that stand up over time. Ms. Browning said most sewing machines can handle only two layers of fabric when making a pleated mask, but someone who wanted four layers of protection could wear two masks at a time.

https://www.nytimes.com/article/coronavirus-homemade-mask-material-DIY-face-mask-ppe.html

...adding a layer of nylon stocking over the masks minimized the flow of air around the edges of the masks and improved particle filtration efficiency for all masks, including all commercial products tested. Use of a nylon stocking overlayer brought the particle filtration efficiency for five of the ten fabric masks above the 3M surgical mask baseline...

https://www.medrxiv.org/content/10.1101/2020.04.17.20069567v2.full.pdf

So if you haven't already done this, it's probably worth doing some research to figure out the best mask design in terms of effectiveness, ease of explaining how to make it, and the likelihood that people in the Santa Division will be able to acquire the necessary materials.

Comment by john_maxwell on Concern, and hope · 2020-07-20T05:37:51.168Z · score: 14 (6 votes) · EA · GW

Something I've been doing just a bit lately which seems to be working surprisingly well so far: If I see a polarizing discussion on EA Facebook, and someone writes a comment in a way which seems needlessly combative/confrontational to me, I add them as a friend and private-message them, trying to persuade them to rewrite their comment.

My general model here is that private 1-on-1 communication is much higher-bandwidth, less ego-driven, and more amenable to the resolution of misunderstandings. However, it's not nearly as scalable (in terms of the size of the audience reached) as a forum discussion. But private 1-on-1 communication where you try to persuade someone to change their forum writing gets you the best of both worlds.

Another model is that combativeness tends to beget combativeness, so it's high-leverage to try & change the tone of the conversation as early as possible.

Comment by john_maxwell on The 80,000 Hours podcast should host debates · 2020-07-13T00:42:10.737Z · score: 4 (3 votes) · EA · GW

Another way to accomplish something similar would be to post the podcasts to the EA Forum and have this be the official place for people to comment on them.

Comment by john_maxwell on EA Cameroon - COVID-19 Awareness and Prevention in the Santa Division of Cameroon Project Proposal · 2020-07-10T03:29:28.066Z · score: 8 (5 votes) · EA · GW

This looks like a great initiative!

I noticed that based on the link in your EA Forum profile, you're located in the Czech Republic--would you mind talking a bit about your relationship with EA Cameroon?

Comment by john_maxwell on EA Survey 2019 Series: How many people are there in the EA community? · 2020-06-30T00:59:20.281Z · score: 5 (3 votes) · EA · GW

Coming together weekly to meet in person as a movement, like megachurches do, is an interesting thought experiment. Post-COVID, if remote work is the new norm, it might be feasible to locate all of EA in a single city with low cost of living. Would this be a positive change? My intuition says yes, but with high uncertainty. Maybe it's just me being extroverted.

Comment by john_maxwell on Request for proposal - EA Animal Welfare Fund · 2020-05-29T08:24:18.120Z · score: 10 (5 votes) · EA · GW

Here's a project idea that someone might want to take on.

It's been argued (response) that bivalves have a much-reduced capacity for suffering compared to other commonly eaten animals. And according to this recent NY Times article, "Mollusks like clams, oysters and scallops are also great low-carbon choices."

What could get people to substitute bivalves for other animal products? Lowering the price should help. A quick search on Amazon suggests that canned mussels (one of the bivalves highest in Omega-3s, according to this chart) are 2-3x as expensive per ounce as canned salmon.

How could the price be reduced? On this US government website, you can see a picture of a guy culling and grading oysters by hand. Google has a case study on its website of a Japanese cucumber farm which used deep learning to sort cucumbers. Could similar technology be developed for bivalves? As a bonus, you could develop AI skills along the way, and potentially make a decent amount of money. It might be best to partner with/be hired by existing efforts in the bivalve automation space... here is one I found on Google.

Comment by john_maxwell on Tips for overcoming low back pain · 2020-04-18T07:49:40.959Z · score: 3 (2 votes) · EA · GW

Painscience.com has been amazing for all my chronic pain problems. Looks like he has a comprehensive guide for low back pain which might be worth checking out:

https://www.painscience.com/tutorials/low-back-pain.php

Comment by john_maxwell on April Fool's Day Is Very Serious Business · 2020-04-08T09:05:15.231Z · score: 3 (2 votes) · EA · GW

Yes!

https://forum.effectivealtruism.org/posts/GdsEF95LobbSEGuBM/new-top-ea-causes-for-2020

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-02T23:57:16.960Z · score: 2 (1 votes) · EA · GW

I won't stop you! :)

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-02T21:22:18.505Z · score: 2 (1 votes) · EA · GW

Thanks!

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-02T21:21:57.489Z · score: 2 (1 votes) · EA · GW

Yep!

Comment by john_maxwell on New Top EA Causes for 2020? · 2020-04-01T08:07:47.059Z · score: 38 (17 votes) · EA · GW

Get Joe Biden To Take Nootropics.

For a while, the 2020 American presidential contest was down to three men: The man with heart trouble, the man with brain trouble, and the man with ego trouble. But now that the man with heart trouble is ranking below Andrew Cuomo, who is not even running, in prediction markets for the Democratic nomination, it is looking increasingly likely that the American public will have to decide whether brain trouble or ego trouble is less disqualifying.

What they should be asking is which of brain trouble or ego trouble is more easily fixed.

It's possible you've heard some buzz in your social circle about brain-enhancing nootropic drugs. One thing you might not know is that in some cases, although the drug appears to be something of a dud for younger folks, it works in oldsters:

While it is known that the human brain endures diverse insults in the process of ageing, food-based nootropics are likely to go a long way in mitigating the impacts of these insults. Further research is needed before we reach a point where food-based nootropics are routinely prescribed.

From a lit review.

...According to a meta-analysis on human studies, piracetam improves general cognition when supplemented by people in a state of cognitive decline, such as the kind that comes with aging. Though piracetam may be a useful supplement for improving longevity, it offers limited benefits for healthy people.

Healthy people supplementing piracetam do experience little to no cognitive benefit. Though piracetam supplementation in healthy people is understudied, preliminary evidence suggests that piracetam is most effective for older people...

...

In persons with cognitive decline, supplementation of Piracetam was able to reduce aggression and agitation symptoms.

From Examine.com. (Remember from a few news cycles ago: Joe Biden tells factory worker ‘you’re full of shit’ during a tense argument over guns.)

Why might this be a Top EA Cause? In addition to the usual massive responsibilities of being POTUS, America is currently suffering from a pandemic. A 1% improvement in the intelligence of the actions taken by the chief executive could directly and immediately save thousands of lives.

Is it tractable? Yes, but it's not talent- or money-limited. It's memetics-limited. We need to figure out if we have any connections to the Biden campaign who can start planning his meals to keep the bulb as bright as possible. Failing that, we could suggest that a nootropics company form a marketing initiative around this. Or Kelsey could write about it in Vox. Or something.

Don't forget the importance of regular Super Mario 64 play either.

Comment by john_maxwell on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-22T03:29:16.916Z · score: 8 (2 votes) · EA · GW

What are the most important new ideas in your book for someone who's already been in the EA movement for quite a while?

Comment by john_maxwell on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-22T03:16:02.361Z · score: 3 (2 votes) · EA · GW

My best guess is no, but feel like I should throw this question out there in case anyone can think of plausible candidates.

Can you explain your thinking behind this? My model is that COVID-19 will spread to developing countries before too long, and once there, it will quickly become a much bigger problem than malaria etc. So the highest-impact global health intervention would appear to be "beta testing" of anti-COVID-19 interventions that we think can be transferred to a developing country context.

Anyway, this recent post on far-ultraviolet light looks pretty interesting. I'm pretty optimistic about ideas like this which could use the momentum of COVID-19 to overcome regulatory hurdles etc. and then end up being super valuable for other problems going forward.

Comment by john_maxwell on Advice for getting the most out of one-on-ones · 2020-03-21T05:19:06.043Z · score: 2 (3 votes) · EA · GW

If everyone records their 1-on-1s, rating each one's value on a scale of 1 to 10 along with various features that might predict that value (e.g. how junior/senior the other person is, whether you're working on similar problems, whether you're from the same or different countries, the conversational prompts/questions/topics you used, etc.), then we can assemble a dataset and develop a predictive model of how valuable a 1-on-1 is likely to be. That helps with choosing who to meet with, with persuading people to meet with you (if the model says you should meet, that increases the odds they respond), and with knowing what to talk about (check which questions/topics are predictive of a valuable 1-on-1).
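
As a minimal sketch of what that modeling step might look like once such a dataset exists (the file name and every column name below are hypothetical stand-ins):

    # Minimal sketch: predict 1-on-1 value from self-reported features.
    # The CSV file and all column names here are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("one_on_ones.csv")

    # One-hot encode categorical features (prompt used, country match, etc.).
    X = pd.get_dummies(df[["seniority_gap", "similar_problems", "same_country", "prompt"]])
    y = df["value_rating"]  # the self-reported 1-10 rating

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print("Mean cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())

Inspecting such a model's feature importances would also answer the "what to talk about" question: just see which prompts load on high ratings.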

Actually, for the questions/conversation topics part: if I were an EAG attendee, I would start a thread of question/conversation ideas on Facebook or somewhere for people to brainstorm in, and then use some kind of approval voting so people can figure out over time which prompts are best. If you have a good conversation, try to figure out in retrospect what prompt could have created it, then add that prompt to the list.

Comment by john_maxwell on April Fool's Day Is Very Serious Business · 2020-03-13T22:03:03.730Z · score: 11 (4 votes) · EA · GW

Makes sense, I'll do that.

Comment by john_maxwell on Open Thread #46 · 2020-03-13T21:54:19.762Z · score: 4 (2 votes) · EA · GW

That occurred to me, but I've noticed myself feeling more willing to post in an Open Thread than post as shortform. LW also has shortform, but despite that, their monthly Open Threads are seeing a lot of activity:

https://www.lesswrong.com/s/yai5mppkuCHPQmzpN

Comment by john_maxwell on What are the key ongoing debates in EA? · 2020-03-13T06:51:22.514Z · score: 6 (3 votes) · EA · GW

Apology accepted, thanks. I agree on point 2.

Comment by john_maxwell on Insomnia with an EA lens: Bigger than malaria? · 2020-03-12T06:36:32.881Z · score: 4 (2 votes) · EA · GW

Since insomnia is apparently a high-impact topic, I might as well share some anecdotes from my own battle with sleep difficulties.

I've had some success with behavioral solutions to insomnia ("don't use screens after 11 PM" type stuff). But the problem with behavioral solutions, in my view, is that they are too brittle. Life always happens and your habit breaks at some point. So in the spirit of Nassim Nicholas Taleb's comments on fragility, I've instead recently focused on finding "robust" or "antifragile" solutions to the problem of getting enough sleep. These tend to be technological. Right now I'm stacking a bunch of different technologies for better sleep:

  • Ebb forehead cooler device
  • Weighted blanket
  • f.lux
  • White noise machine
  • Eye mask
  • Glycine
  • Airway expansion. Note: I haven't gotten a sleep study, and I doubt I would strictly meet the criteria for sleep apnea diagnosis, but I still seem to be benefiting a lot from this.
  • Lying on an acupressure mat. Note: I think the most common explanations for why acupuncture works are pseudoscience. I recommend this book.
  • If I have to get up in the middle of the night, I wear orange glasses to block blue light. I also colored the night lights in our house with a red marker so they emit less blue light.

It might sound like a lot, but the nightly overhead of maintaining this is not high--less than 1% of the time I spend asleep. In aggregate this all seems to improve my sleep considerably in a way that doesn't depend on fragile behavioral interventions. (Some of the most valuable-seeming additions have been pretty recent, so we'll see how things work long term.)

Note: I suspect my sleep problems are more "physiological" than "psychological" in nature. CBT-i might work better for someone whose problems are more psychological.

Comment by john_maxwell on What are the key ongoing debates in EA? · 2020-03-12T04:24:44.943Z · score: 24 (9 votes) · EA · GW

I just want to note that in principle, large & weird or small & welcoming movements are both possible. 60s counterculture was a large & weird movement. Quakers are a small & welcoming movement. (If you want to be small & welcoming, I guess it helps to not advertise yourself very much.)

I think you are right that there's a debate around whether EA should be sanitized for a mass audience (by not betting on pandemics or whatever). But e.g. this post mentions that caution around growth could be good because growth is hard to reverse; I don't see anyone actually advocating weirdness.

Comment by john_maxwell on What are the key ongoing debates in EA? · 2020-03-12T04:11:36.726Z · score: 8 (3 votes) · EA · GW

"View X is a rare/unusual view, and therefore it's not a debate." That seems a little... condescending or something?

How are we ever supposed to learn anything new if we don't debate rare/unusual views?

Comment by john_maxwell on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-09T05:29:14.393Z · score: 0 (2 votes) · EA · GW

I guess there's an interesting argument here for making casual gambling illegal--based on this thread, it seems like "Bets are serious & somber business, not for frivolous things like horse races" could be a really high value meme to spread.

Comment by john_maxwell on What should EAs interested in climate change do? · 2020-01-15T08:15:17.248Z · score: 3 (2 votes) · EA · GW

In terms of plant-based alternatives, I think nutrition research could be high-impact and neglected. It seems like people are focused on trying to replicate the taste of meat. But when I experimented with veganism, I found myself wanting meat more the longer I'd gone without it, and experiencing it as unusually satisfying if I hadn't had it in a long while -- which seems more compatible with a nutritional issue. The same pattern doesn't seem to manifest for other foods I find tasty.

I'm imagining a study which feeds participants a vegan diet along with some randomly chosen nutritional supplements to see which are correlated with reduced desire to eat meat or something like that. Or maybe just better publicizing already known nutrition research / integrating it into plant based meat substitutes -- for example, I just found this article which says iron from red meat is absorbed much more easily -- I do think I was craving red meat specifically relative to other animal products. (Come to think of it, I was also experiencing more fatigue than normal, which seems compatible with mild anemia?)

Comment by john_maxwell on What should EAs interested in climate change do? · 2020-01-11T07:37:12.768Z · score: 18 (9 votes) · EA · GW

Some related links:

More speculative questions (my own personal uninformed thoughts):

  • Regarding the tree planting option, can we breed trees which are less vulnerable to wildfires?
  • Regarding the marine cloud brightening option - could you make it doubly useful by going to areas which experience periodic flooding and spraying floodwaters up into the air? Maybe you could even get municipalities to pay you and make a business out of it.
  • Kelly and Zach Weinersmith wrote a book called Soonish which says (among other interesting things) that robots which automatically build buildings are on the horizon. To what extent could easy, cheap construction of new buildings and cities help mitigate sealevel rises and other global warming effects?
  • My brother has a physics degree and finds this to be a bit implausible: http://superchimney.org. But it does make me wonder if there's a way to make money by buying land, terraforming it in a way that's good from a climate perspective, and selling the land after it's increased in value.

Comment by john_maxwell on Space governance is important, tractable and neglected · 2020-01-11T06:39:21.423Z · score: 8 (4 votes) · EA · GW

This is challenging because vast distances in space will likely be an obstacle to effective enforcement. Space is, in a nutshell, an endless desert with oases that are extremely far apart from each other. The closest star to Earth is 4.3 light years away, resulting in a round-trip latency of 8.6 years even if light-speed communication and transport are possible. The closest galaxy is approximately 2.5 million light years away, rendering conventional enforcement impossible.

Reminds me of an interesting article which appeared in Scientific American recently.

Anyway I thought this was a good post. With regard to tractability, I think it's possible that as we start to colonize space, the necessity of space governance may become apparent -- perhaps in a sudden & unexpected way. If political leaders are looking for solutions at that time, it's probably a good thing if there are proposals available which have been forged through an extensive & lively debate (as opposed to some kind of hastily composed emergency measure which ends up locking us into a suboptimal trajectory).

Another thought: If you think high quality political conversations are unusually difficult to have right now, but this situation might improve in the future, that could be an argument for delaying widespread public discussion of high-impact political topics to some future time when the situation has improved. (No reason not to think about such topics privately though.)

Comment by john_maxwell on Response to recent criticisms of EA "longtermist" thinking · 2020-01-11T03:29:53.247Z · score: 8 (2 votes) · EA · GW

It seems weird that longtermism is being accused of white supremacy given that population growth is disproportionately happening in countries that aren't traditionally considered white? As you can see from the map on this page, population growth is concentrated in places like Africa, the Middle East, and South Asia. It appears to me that it's neartermist views of population ethics ("only those currently alive are morally relevant") that place greater moral weight on white folks? I wonder how a grandmother from one of those places, proud of her many grandchildren, would react if a childless white guy told her that future generations weren't morally relevant... It also seems weird to position climate change as a neartermist cause.

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T08:42:30.156Z · score: 18 (7 votes) · EA · GW

The reversal test doesn't mean 'if you don't think a charity for X is promising, you should be in favour of more ¬X'. I may not find homeless shelters, education, or climate change charities promising, yet not want to move in the direction of greater homelessness, illiteracy, or pollution.

Suppose you're the newly appointed director of a large charitable foundation which has allocated its charitable giving in a somewhat random way. If you're able to resist status quo bias, then usually, you will not find yourself keeping the amount allocated for a particular cause at exactly the level it was at originally. For example, if the foundation is currently giving to education charities, and you don't think those charities are very effective, then you'll reduce their funding. If you think those charities are very effective, then you'll increase their funding.

Now consider "having EAs live alone in apartments in expensive cities" as a cause area. Currently, the amount we're spending on this area has been set in a somewhat random way. Therefore, if we're able to resist status quo bias, we should probably either be moving it up or moving it down. We could move it up by creating a charity that pays EAs to live alone, or move it down by encouraging EAs to move to the EA Hotel. (Maybe creating a charity that pays EAs to live alone would be impractical or create perverse incentives or something; this is more of an "in principle" intuition-pump sort of argument.)

Edit: With regard to the professionalism thing, my personal feelings on this are something like the last paragraph in this comment -- I think it'd be good for some of us to be more professional in certain respects (e.g. I'm supportive of EAs working to gain institutional legitimacy for EA cause areas), but the Hotel culture I observed feels mostly acceptable to me. Probably some mixture of not seeing much interpersonal drama while I was there, and expecting the Hotel residents will continue to be fairly young people who don't occupy positions of power (grad student housing comes to mind). FWIW, my personal experience is that the value of professionalism comes up more often in Blackpool EA conversations than Bay Area EA conversations. With the Bay Area, you may very well be paying more rent for a less professional culture. Just my anecdotal impressions.

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T01:05:25.267Z · score: 14 (10 votes) · EA · GW

I'm not convinced community health issues are uniquely problematic when you have people living together. I feel like one could argue just as easily that conferences are risky for community health. If something awkward happens at EA Global, you'll have an entire year to chew on that before running into the person next year. (Pretty sure that past EA Global conferences have arranged shared housing in e.g. dormitories for participants, by the way.) And there is less shared context at a conference because it happens over a brief period of time. One could also argue that having the community be mostly online runs risks for community health (for obvious reasons), and it's critical for us to spend lots of time in person to build stronger bonds. And one could argue that not having much community at all, neither online nor in person, runs risks for community health due to value drift. Seems like there are risks everywhere.

If people really think there are significant community health risks with EA roommates, then they could start a charity which pays EAs who currently live with EA roommates to live alone. To my knowledge, no one has proposed a charity like that. It doesn't seem like a very promising charity to me. If you agree, then by the reversal test, it follows that as a community we should want to move a bit further in the direction of EAs saving money by living together.

Comment by john_maxwell on Institutions for Future Generations · 2019-11-22T20:39:10.016Z · score: 4 (2 votes) · EA · GW

Interesting point re: savings rate. It wouldn't surprise me if economists have done research into what factors cause an increase in the savings rate. (If no research has been done so far, it seems like such research would fill a valuable gap in the literature.) Anyway, it seems plausible to me that some things which cause an increase in the savings rate also increase longtermism more generally. (This is another topic which we could gather information about by checking to see if people who save a lot of money are more longtermist generally.) My personal guess would be that economic and political stability predicts savings rate better than equality. I suspect drastic efforts to mitigate present-day inequality would probably decrease the savings rate, if anything. What's the point in saving money if the government might randomly take it at some point in the future? [Edit: If you replace "reducing inequality" with "ensuring more people have the lower levels of Maslow's hierarchy met" then I'd be more convinced.]

More broadly, I'd be interested to see people tackling longtermism as a psychological rather than a political project--what are the correlates of longtermist outlook that could feasibly be affected through interventions?

Comment by john_maxwell on Institutions for Future Generations · 2019-11-22T20:24:41.498Z · score: 3 (2 votes) · EA · GW

Can I sell my security? Why not just sell right before doing whatever it is I want to do that is going to screw the future over?

Comment by john_maxwell on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-22T05:17:26.451Z · score: 12 (4 votes) · EA · GW

Thanks for the aggregate position summary! I'd be interested to hear more about the motivation behind that wish, as it seems likely to me that doing shallow investigations of very speculative causes would actually be the comparative advantage of people who aren't employed at existing EA organizations. I'm especially curious given the high probability respondents assigned to the existence of a Cause X that should be getting so many resources. It seems like having people who don't work at existing EA organizations (and are thus relatively unaffected by existing blind spots) do shallow investigations of very speculative causes would be just the thing for discovering Cause X.

For a while now I've been thinking that the crowdsourcing of alternate perspectives ("breadth-first" rather than "depth-first" exploration of idea space) is one of the internet's greatest strengths. (I also suspect "breadth-first" idea exploration is underrated in general.) On the flip side, I'd say one of the internet's greatest weaknesses is the ease with which disagreements become unnecessarily dramatic. So I think if someone were to do a meta-analysis of recent literature on, say, whether remittances are actually good for developing economies in the long run (critiquing GiveDirectly -- btw, I couldn't find any reference to academic research on the impact of remittances in GiveWell's current GiveDirectly profile; maybe they just didn't think to look it up -- a case study in the value of an alternate perspective?), or whether usage of malaria bed nets for fishing is increasing or not (critiquing AMF), there's a sense in which we'd be playing against the strengths of the medium. Anyway, if organizations wanted critical feedback on their work, they could easily request that critical feedback publicly (solicited critical feedback is less likely to cause drama / bad feelings than unsolicited critical feedback), or even offer cash prizes for the best critiques, and I see few cases of organizations doing that.

Maybe part of what's going on is that shallow investigations of very speculative causes only rarely amount to something? See this previous comment of mine for more discussion.

Comment by john_maxwell on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T22:14:44.584Z · score: 19 (8 votes) · EA · GW

Not Buck, but one possibility is that people pursuing different high-level agendas have different intuitions about what's valuable, that those kinds of disagreements are relatively difficult to resolve, and that the best way to resolve them is to gather more "object-level" data.

Maybe people have already spent a fair amount of time in in-person discussions trying to resolve their disagreements without making progress, and this discourages them from writing up their thoughts because they think it won't be a good use of time. However, this line of reasoning might be mistaken -- it seems plausible to me that people entering the field of AI safety are relatively impartial judges of which intuitions do and don't seem valid, that the question of where newcomers to the field should focus is an important one, and that more public disagreement would improve human capital allocation.

Comment by john_maxwell on EA syllabi and teaching materials · 2019-11-19T03:36:09.018Z · score: 2 (1 votes) · EA · GW

Here's another one: https://forum.effectivealtruism.org/posts/8i3Wdy4FuJbSDQr5k/a-semester-long-course-in-ea

Comment by john_maxwell on What areas of maths are useful across disciplines? · 2019-11-19T03:28:18.032Z · score: 2 (1 votes) · EA · GW

multiobjective optimization theory

Can you say something about why you feel this is especially useful?

Comment by john_maxwell on How to find EA documents on a particular topic · 2019-11-19T02:56:43.091Z · score: 9 (7 votes) · EA · GW

I assembled a huge list of domains like this and created a custom search engine using this tool from Google. Unfortunately, despite it being Google, the search results are really terrible, so I never posted it. (Example: a search for "capacity-building" returns 5 results, none of which are this page. I know it's picking up concepts.effectivealtruism.org because when I search for "moral uncertainty" the #2 result is from concepts.effectivealtruism.org. BTW, I included quite a number of domains in the search engine, so not all the results are necessarily EA-related.)

https://searchstack.co is a nice little tool which makes use of the site:A OR site:B mechanism, but unfortunately I believe Google caps the number of distinct domains you can search using that trick? But maybe we could use multiple searchstacks for different EA subtopics. I think if there are search companies that actually do a good job of allowing you to create a custom search engine, that would be the ideal solution, even if it requires paying a monthly fee. If someone else wants to take initiative on this, I'd love to collaborate.
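
As a minimal sketch of the site:A OR site:B mechanism such tools rely on, here's how one might build that kind of query programmatically (the domain list is just an illustration):

    # Minimal sketch: build a Google search URL restricted to a set of domains
    # via the site:A OR site:B operator. The domain list is illustrative.
    from urllib.parse import quote_plus

    domains = [
        "forum.effectivealtruism.org",
        "lesswrong.com",
        "concepts.effectivealtruism.org",
    ]

    def search_url(query, domains):
        site_filter = " OR ".join("site:" + d for d in domains)
        return "https://www.google.com/search?q=" + quote_plus(query + " " + site_filter)

    print(search_url("capacity-building", domains))

If Google does cap the number of domains per query, one workaround would be to partition a long domain list into several such queries, which is essentially the multiple-searchstacks idea above.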

It'd be especially cool if a search engine could search Facebook group archives, since there's so much EA discussion in those.

Comment by john_maxwell on Applying EA to climate change · 2019-11-18T23:14:14.205Z · score: 2 (1 votes) · EA · GW

Surely some foods emit much more carbon than others. Maybe we could just tax food based on how much carbon its production emits? Then people won't want to throw it away, because they don't want to waste their money. (And they'll also swap high-emission foods for low-emission ones.)
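
As a back-of-the-envelope sketch of how sharply such a tax would separate foods (the per-kg emission factors below are rough illustrative values, and the $50/tonne carbon price is likewise just an assumption):

    # Back-of-the-envelope sketch of a carbon tax on food.
    # Emission factors are rough illustrative values (kg CO2e per kg of food),
    # and the $50/tonne carbon price is an assumption, not a proposal.
    CARBON_PRICE_PER_KG_CO2E = 50 / 1000  # $50 per tonne CO2e, in $/kg

    emission_factors = {
        "beef": 60.0,
        "chicken": 6.0,
        "lentils": 1.0,
    }

    for food, kg_co2e in emission_factors.items():
        tax = kg_co2e * CARBON_PRICE_PER_KG_CO2E
        print(f"{food}: ~${tax:.2f} tax per kg")

Even at that modest carbon price, the tax spans more than an order of magnitude across foods, which is what would drive both the waste-aversion and substitution effects.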

Comment by john_maxwell on Applying EA to climate change · 2019-11-18T07:06:49.871Z · score: 7 (2 votes) · EA · GW

35% of food is thrown away in high-income economies.

That number seems pretty high. I wonder where most of the waste happens? Here's a somewhat contrived scenario: suppose a drug store stocks a new food product. Customers aren't buying it, so the store throws it away. But then, due to this awareness campaign, next time the store keeps it on the shelf--which means it doesn't have room for something customers do want to buy, so the customers drive to a different store, cancelling out the alleged food-waste benefit. Again, contrived, but I just feel like we should know why the waste is happening before working to stop it. There's a clear financial incentive not to waste food. Maybe it's mostly food with a short shelf life, like fresh vegetables, that people intend to eat but never do?

Instead of a public campaign against food waste, maybe run a public campaign showing the decarbonization benefits of everyday lifestyle changes. Which is better from an individual perspective: stop driving and take the bus to work, or cut your food waste from 35% to 0%?

Comment by john_maxwell on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-12T07:57:59.332Z · score: 33 (17 votes) · EA · GW

Thanks for this post!

One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?

There isn't necessarily a contradiction in expressing both positions. For example, perhaps there's an intellectual center and it's too weird. (Though, if the weirdness comes in the form of "People saying crazy stuff online", this explanation seems less likely.) You could also argue that we are open to weird ideas, just not the right weird ideas.

But I think it could be interesting to try & make this tradeoff more explicit in future surveys. It seems plausible that the de facto result of announcing survey results such as these is to move us in some direction along a single coarse intellectual centralization/decentralization dimension. (As I said, there might be a way to square this circle, but if so I think you want a longer post explaining how, not a survey like this.)

Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension--but perceptions may differ. Maybe one leader says "we need more talk and less action", and another leader says "we need less talk and more action", but they both agree on the ideal talk/action balance, they just disagree about the current balance (because they've made different observations about the current balance).

One way to address this problem in general for some dimension X is to have a rubric with 5 written descriptions of levels of X the community could aim for, and ask each leader to select the level of X that seems optimal to them. This scheme also avoids a failure mode of purely directional advice: if there's a fair amount of variation in levels of X across the community, the community could be below the optimal level on average, yet if leaders publicly announce that levels of X should move up (without specifying a target level), people who are already above the ideal level might move even further above it.

Comment by john_maxwell on Assumptions about the far future and cause priority · 2019-11-12T06:06:28.884Z · score: 4 (3 votes) · EA · GW

It seems extremely unlikely to me that we will come remotely close to discovering the utility-maximizing pattern of matter that can be formed even just here on Earth. There are about 10^50 atoms on Earth. In how many different ways could these atoms be organized in space? To keep things simple, suppose that we just want to delete some of them to form a more harmonious pattern, and otherwise do not move anything. Then there are already 2^(10^50) possible patterns for us to explore.

One direction you could take this: It's probably not actually necessary for us to explore 2^(10^50) patterns in a brute-force manner. For example, once I've tried brussels sprouts, I can be reasonably confident that I still won't like them if you move a few atoms over microscopically. A Friendly AI programmed to maximize a human utility function it has uncertainty about might offer incentives for humans to try new matter configurations that it believes offer high value of information. For instance, before trying a dance performance which lasts millions of years, it might first run an experimental dance performance which lasts only one year and see how humans like it. I suspect a superintelligent Friendly AI would hit diminishing returns on experiments of this type within the first thousand years.

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-01T04:13:48.518Z · score: 41 (16 votes) · EA · GW

The fact that there are only 18 total donations totaling less than $10k is concerning

If you are well-funded, they'll say: "You don't need my money. You're already well-funded." If you aren't well-funded, they'll say: "You aren't well-funded. That seems concerning."

Comment by john_maxwell on EA Hotel Fundraiser 5: Out of runway! · 2019-11-01T04:12:49.077Z · score: 30 (12 votes) · EA · GW

This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data. That's why GiveWell spun out GiveWell Labs, which eventually became the Open Philanthropy Project. It's why CEA started the EA Funds to fund more speculative early-stage EA projects. Lots of EA projects, from cultured meat to x-risk reduction to global priorities research, are speculative and hard to rigorously measure or forecast. As a quick concrete example, the Open Philanthropy Project gave $30 million to OpenAI, much more money than the EA Hotel has received, with much less public justification than has been put forth for the EA Hotel, and without much in the way of numerical measurements or forecasts.

If you really want to discuss this topic, I suggest you create a separate post laying out your position - but be warned, this seems to be a fairly deep philosophical divide within the movement which has been relatively hard to bridge. I think you'll want to spend a lot of time reading EA archive posts before tackling this particular topic. The fact that you seem to believe EAs think contributing to the sort of relatively undirected, "unsafe" AI research that DeepMind is famous for should be a major priority suggests to me that there's a fair amount you don't know about positions & thinking that are common to the EA movement.

Here are some misc links which could be relevant to the topic of measurability:

And here's a list of lists of EA resources more generally speaking:

Comment by john_maxwell on Notes on 'Atomic Obsession' (2009) · 2019-10-27T20:56:37.811Z · score: 2 (1 votes) · EA · GW

Seems like some form of Pascal's Wager is valid in this case -- it's hard to know for sure what the impact of nukes will be, especially without the benefit of hindsight, so it's better to err on the side of caution.