Comment by john_maxwell_iv on Can the EA community copy Teach for America? · 2019-02-22T06:29:23.411Z · score: 9 (4 votes) · EA · GW

Another question related to Task Y: supposing Task Y does exist, would you rather people working on Task Y think of themselves as "Soft EAs", or as people who are part of the "Task Y community"? For example, if eating a vegan diet is Task Y, would you like vegans to start thinking of themselves as EAs due to their veganism? If veganism didn't exist already, and it was an idea that originated from within the EA community, would it be best to spin it off or keep it internal?

I can think of arguments on both sides:

  • Maybe there's already a large audience of people who have heard about EA and think it's really cool but don't know how to contribute. If these people already exist, we might as well figure out the best things for them to do. (Note this isn't necessarily an argument for expanding EA; it's not totally clear which direction this consideration points.)
  • If Task Y is a task where the argument for positive impact is abstruse & hard to follow, then maybe a "Task Y Movement" isn't ever going to get off the ground because it lacks popular appeal. Maybe the EA movement has more popular appeal, and the EA movement's popular appeal can be directed into Task Y.
  • Some find the EA movement uninviting in its elitism. Even on this forum, reportedly the most elitist EA discussion venue, a highly upvoted post says: "Many of my friends report that reading 80,000 Hours’ site usually makes them feel demoralized, alienated, and hopeless." There have been gripes about the difficulty of getting grant money for EA projects from grantmaking organizations after it became known that "EA is no longer funding-limited". (I might be guilty of this griping myself.) Do we want average Janes and Joes reading EA career advice that Google software engineers find "very depressing"? How will they feel after learning that some EAs are considered 1000x as impactful as them?
  • Expansion of the EA movement itself could be hard to reverse and destroy option value.
Comment by john_maxwell_iv on Three Biases That Made Me Believe in AI Risk · 2019-02-16T04:53:51.414Z · score: 10 (5 votes) · EA · GW

If people are biased towards believing their actions have cosmic significance, does this also imply that people without math & CS skills will be biased against AI safety as a cause area?

Comment by john_maxwell_iv on The Need for and Viability of an Effective Altruism Academy · 2019-02-15T23:29:38.809Z · score: 13 (5 votes) · EA · GW

The EA Hotel hosted an EA Retreat which sounds a bit similar. Here's a report from a Czech EA retreat.

Comment by john_maxwell_iv on The Need for and Viability of an Effective Altruism Academy · 2019-02-15T23:27:55.256Z · score: 14 (5 votes) · EA · GW

The Pareto Fellowship even more so for me. Here CEA explains why they discontinued it.

Comment by john_maxwell_iv on Three Biases That Made Me Believe in AI Risk · 2019-02-14T04:24:54.480Z · score: 3 (3 votes) · EA · GW
I changed the sentence you mention to "If you want to understand present-day algorithms, the "pre-driven car" model of thinking works a lot better than the "self-driving car" model of thinking. The present and past are the only tools we have to think about the future, so I expect the "pre-driven car" model to make more accurate predictions." I hope this is clearer.

That is clearer, thanks!

I think that it is a hopeless endeavour to aim for such precise language in these discussions at this point in time, because I estimate that it would take a ludicrous amount of additional intellectual labour to reach that level of rigour. It's too high of a target.

Well, it's already possible to write code that exhibits some of the failure modes AI pessimists are worried about. If discussions about AI safety switched from trading sentences to trading toy AI programs, which operate on gridworlds and such, I suspect the clarity of discourse would improve.
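For instance, here's a minimal toy program (my own illustration, not drawn from any existing gridworld suite) exhibiting the classic "negative side effect" failure mode: an agent rewarded only for reaching a goal has no reason to avoid breaking a vase on the way, because the vase never appears in its objective.

```python
# A one-row gridworld: A = agent start, V = vase, G = goal, . = empty.
# Reward is +1 for reaching the goal and 0 otherwise, so the
# reward-maximizing path happily destroys the vase along the way.

GRID = [list("A.V.G")]

def best_path(grid):
    """Compute the reward-maximizing action sequence (move left/right)."""
    row = grid[0]
    start, goal = row.index("A"), row.index("G")
    step = 1 if goal > start else -1
    path, broken = [], []
    pos = start
    while pos != goal:
        pos += step
        path.append(pos)
        if row[pos] == "V":
            broken.append(pos)  # side effect, invisible to the objective
    return path, broken

path, broken_vases = best_path(GRID)
print(f"Reached goal in {len(path)} steps, breaking {len(broken_vases)} vase(s)")
```

Trading programs like this one, rather than sentences, would at least pin down exactly which failure mode is under discussion.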

I might post some scraps of arguments on my blog soonish, but those posts won't be well-written and I don't expect anyone to really read those.

Cool, let me know!

Comment by john_maxwell_iv on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-14T00:56:19.715Z · score: 5 (4 votes) · EA · GW

Presumably if the argument is for why the weight should be higher, then kbog will pay attention?

Comment by john_maxwell_iv on Three Biases That Made Me Believe in AI Risk · 2019-02-14T00:52:12.794Z · score: 19 (10 votes) · EA · GW

Some thoughts:

The "language" section is the strongest IMO. But it feels like "self-driving" and "pre-driven" cars probably exist on some kind of continuum. How well do the system's classification algorithms generalize? To what degree does the system solve the "distribution shift" problem and tell a human operator to take control in circumstances that the car isn't prepared for? (You call these circumstances "unforeseen", but what about a car that attempts to foresee likely situations it doesn't know what to do in and ask a human for input in advance?) What experiment would let me determine whether a particular car is self-driving or pre-driven? What falsifiable predictions, if any, are you making about the future of self-driving cars?

I was confused by this sentence: "The second pattern is superior by wide margin when it comes to present-day software".

I think leaky abstractions are a big problem in discussions of AI risk. You're doubtless familiar with the process by which you translate a vague idea in your head into computer code. I think too many AI safety discussions are happening at the "vague idea" level, and more discussions should be happening at the code level or the "English that's precise enough to translate into code" level, which seems like what you're grasping at here. I think if you spent more time working on your ontology and the clarity of your thought, the language section could be really strong.

(Any post which argues the thesis "AI safety is easily solvable" is both a post that argues for de-prioritizing AI safety and a post that is, in a sense, attempting to solve AI safety. I think posts like these are valuable; "AI safety has this specific easy solution" isn't as within the Overton window of the community devoted to working on AI safety as I would like it to be. Even if the best solution ends up being complex, I think in-depth discussion of why easy solutions won't work has been neglected.)

Some responses to the remaining sections:

  • Re: the anchoring section, psychologists have documented that humans are overconfident in their probabilistic judgments. Even if humans tend to anchor on a 50% probability and adjust from there, this doesn't seem to be enough to counter our overconfidence bias.
  • Regarding the "Discounting the future" section of your post, see the "Multiple-Stage Fallacy". If a superintelligent FAI gets created, it can likely make humanity's extinction probability almost arbitrarily low through sufficient paranoia.
  • Regarding AI accidents going "really really wrong", see the instrumental convergence thesis.
  • AI safety work could be helpful even if countermeasures aren't implemented universally, through the creation of a friendly singleton.

Comment by john_maxwell_iv on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-13T03:53:46.962Z · score: 1 (1 votes) · EA · GW

Presumably the programmer will make some effort to embed the right set of values in the AI. If this is an easy task, doom is probably not the default outcome.

AI pessimists have argued human values will be difficult to communicate due to their complexity. But as AI capabilities improve, AI systems get better at learning complex things.

Both the instrumental convergence thesis and the complexity of value thesis are key parts of the argument for AI pessimism as it's commonly presented. Are you claiming that they aren't actually necessary for the argument to be compelling? (If so, why were they included in the first place? This sounds a bit like justification drift.)

Comment by john_maxwell_iv on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-12T06:29:13.219Z · score: 0 (2 votes) · EA · GW
the original texts are very clear that the massive jump in AI capability is supposed to come from recursive self-improvement, i.e. the AI helping to do AI research

...because that AI research is useful for some other goal the AI has, such as maximizing paperclips. See the instrumental convergence thesis.

At any rate, though, what does it matter whether the goal is put in after the capability growth, or before/during? Obviously, it matters, but it doesn't matter for purposes of evaluating the priority of AI safety work, since in both cases the potential for accidental catastrophe exists.

The argument for doom by default seems to rest on a default misunderstanding of human values as the programmer attempts to communicate them to the AI. If capability growth comes before a goal is granted, it seems less likely that misunderstanding will occur.

Comment by john_maxwell_iv on Open Thread #43 · 2019-02-09T07:35:35.436Z · score: 2 (2 votes) · EA · GW

OK. I went ahead and removed it now, so the next person to create an open thread will copy/paste the correct message.

Comment by john_maxwell_iv on Open Thread #43 · 2019-02-09T06:13:45.164Z · score: 1 (1 votes) · EA · GW

Great idea! I don't think mass requests are the way to go, though. I'll bet if someone like Peter Singer, Will MacAskill, or Toby Ord sent them a proposal to write an article about EA, they'd accept. I sent Will a Facebook message to ask him what he thinks.

Comment by john_maxwell_iv on What Courses Might Be Most Useful for EAs? · 2019-02-03T03:27:12.200Z · score: 8 (6 votes) · EA · GW

I think more people should be studying statistics, machine learning, and data science, especially Bayesian methods and causal inference. Not only do these skills offer a chance to contribute to AI safety, they're also critical for evaluating scientific papers (important for any field given the replication crisis), doing predictive modeling, and generally thinking in a data-driven and evidence-based way. Math is apparently 80k's #1 recommendation, but when I was a student, I went to an event where math majors talked about their experiences in industry. Most of them said they didn't use the math they learned much and they wish they had studied more statistics. So I would suggest applied math with a statistics emphasis.

Comment by john_maxwell_iv on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-30T01:40:11.736Z · score: 6 (4 votes) · EA · GW
If we're choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.

Tractability matters. Scott Alexander has been critiquing Vox for years. It might be that improving Vox is a less tractable goal than getting EAs to share their articles less.

they went out on a limb to hire Piper, and they've sacrificed some readership to maintain EA fidelity.

My understanding is that Future Perfect is funded by the Rockefeller Foundation. Without knowing the terms of their funding, I think it's hard to ascribe either virtue or vice to Vox. For example, if the Rockefeller Foundation is paying them per content item in the "Future Perfect" vertical, I could ascribe vice to Vox by saying that they are churning out subpar EA content in order to improve their bottom line.

Comment by john_maxwell_iv on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-29T10:52:17.024Z · score: 10 (4 votes) · EA · GW

This is an interesting essay. My thinking is that "coalition norms", under which politics operate, trade off instrumental rationality against epistemic rationality. I can argue that it's morally correct from a consequentialist point of view to tell a lie in order to get my favorite politician elected so they will pass some critical policy. But this is a Faustian bargain in the long run, because it sacrifices the epistemology of the group, and causes the people who have the best arguments against the group's thinking to leave in disgust or never join in the first place.

I'm not saying EAs shouldn't join political coalitions. But I feel like we'd be sacrificing a lot if the EA movement began sliding toward coalition norms. If you think some coalition is the best one, you can go off and work with that coalition. Or if you don't like any of the existing ones, create one of your own, or maybe even join one & try to improve it from the inside.

Comment by john_maxwell_iv on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-29T10:40:57.498Z · score: 14 (5 votes) · EA · GW
But if we actually want EA to go mainstream, we can't rely on econbloggers and think-tanks to reach most people. We need easier explanations, and I think Vox provides that well.

Is "taking EA mainstream" the best thing for Future Perfect to try & accomplish? Our goal as a movement is not to maximize the number of people who have the "EA" label; our goal is to do the most good. If we garble the ideas or epistemology of EA in an effort to maximize the number of people who self-label as "EA", that's potentially an example of Goodhart's Law.

Instead of "taking EA mainstream", how about "spread memes to Vox's audience that will cause people in that audience to have a greater positive impact on the world"?

Comment by john_maxwell_iv on Vocational Career Guide for Effective Altruists · 2019-01-29T10:27:12.379Z · score: 1 (1 votes) · EA · GW

I don't have stats, it's just something I hear from vegans when I suggest an organization to provide welfare standards for meat providers. They say it has been tried before and the organization always gets co-opted by the industry. I'm actually kinda skeptical.

Comment by john_maxwell_iv on Vocational Career Guide for Effective Altruists · 2019-01-27T10:18:46.516Z · score: 4 (3 votes) · EA · GW

If you work as an agricultural inspector and err on the side of making recommendations which happen to improve animal welfare, that seems like it could be high-impact. Also: An argument I hear from vegans is that we can't have happy meat because any organization which purports to enforce some standard of animal welfare will essentially get bribed by factory farms. If this is true, a way to address it would be to funnel un-bribable people with a passion for animal welfare into those roles.

WRT earning to give, the US Bureau of Labor Statistics maintains an Occupational Outlook Handbook with info on wages and job growth for loads of different jobs. Air traffic controller looks pretty good, although the BLS seems to think you typically need a 2-year degree, so maybe it doesn't count as "vocational".

I also think it is worth specifically thinking in terms of jobs which aren't on the radar of other people, because lower supply is going to mean a higher salary. These reddit threads might be worth checking out. Finally, it might be worthwhile to try to get access to publicly available salary data in order to determine which municipalities pay a lot of money for jobs like being a police officer. (You probably also want to take a careful look at the pension plan in that municipality to ensure that it's on solid ground fiscally.) BTW, Tyler Cowen likes to argue that hiring more cops and imprisoning fewer people would be good for the USA on both crime reduction and humanitarian grounds; here is one presentation of the argument.

Comment by john_maxwell_iv on A Concrete Model for Running an EA Group · 2019-01-27T09:20:46.309Z · score: 1 (1 votes) · EA · GW

Here's another

Comment by john_maxwell_iv on Combination Existential Risks · 2019-01-16T08:00:43.245Z · score: 1 (1 votes) · EA · GW

Has there been any research into ways to address Allee effects? Seems like that could address a range of combination existential risks simultaneously.

Comment by john_maxwell_iv on Critique of Superintelligence Part 1 · 2019-01-14T06:15:28.468Z · score: 1 (1 votes) · EA · GW
To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips.

Seems a little anthropomorphic. A possibly less anthropomorphic argument: If we possess the algorithms required to construct an agent that's capable of achieving decisive strategic advantage, we can also apply those algorithms to pondering moral dilemmas etc. and use those algorithms to construct the agent's value function.

Comment by john_maxwell_iv on EA orgs are trying to fundraise ~$10m - $16m · 2019-01-08T02:15:41.980Z · score: 7 (3 votes) · EA · GW

I think many organizations try to raise money at the end of the tax year ("giving season"), because many people put off their giving until then. If they are still trying to raise money in 2019, that suggests that they didn't hit their fundraising targets for 2018's giving season.

Maybe for each organization, we could gather info on how long that organization has been doing fundraising for its latest round, and how successful it has been so far?

Comment by john_maxwell_iv on EA Hotel Fundraiser 1: the story · 2018-12-31T07:27:40.618Z · score: 1 (1 votes) · EA · GW

Switzerland also has a sizable EA community.

Comment by john_maxwell_iv on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-23T22:43:38.528Z · score: 10 (5 votes) · EA · GW

Another problem with this article: implicitly making inferences about populations based on what you read on Twitter. There's no particular reason to believe that Twitter users are a representative sample of the population (or that your Twitter feed is a representative sample of Twitter users, even).

Comment by john_maxwell_iv on Why You Should Invest In Upgrading Democracy And Give To The Center For Election Science · 2018-12-17T03:28:22.081Z · score: 2 (2 votes) · EA · GW

Congrats on your win in Fargo! Here are some thoughts.

Like you, I favor approval voting over IRV (based on a cursory assessment). However, I actually like the idea of some states using approval voting and some states using IRV. My feeling is that every voting method, including approval voting/score voting, likely has its own set of flaws. Some flaws may only become apparent when the method is used in a sequence of large elections over an extended period of time, as strategic voting, polling behavior, party politics, and election coverage reach an equilibrium.

In machine learning, a common strategy to improve performance is ensembling: combining the outputs of a collection of different algorithms. The idea is that any individual algorithm makes mistakes, but by averaging across all of them, individual mistakes often cancel out. In the same way, if different states use different voting methods, hopefully the flaws in each individual method will tend to cancel each other out. The best strategy becomes being the best candidate possible, rather than gaming an array of different voting systems simultaneously.
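The error-cancellation intuition behind ensembles can be demonstrated with a quick simulation (a toy sketch; the 30% error rate and five voters are arbitrary choices of mine, and it assumes the classifiers' mistakes are independent — correlated flaws, like correlated flaws across voting methods, would cancel much less):

```python
import random

random.seed(0)

def noisy_classifier(truth, error_rate):
    """Return the true binary label, flipped with probability error_rate."""
    return truth if random.random() > error_rate else 1 - truth

def majority_vote(votes):
    return 1 if sum(votes) > len(votes) / 2 else 0

trials = 10_000
individual_errors = 0
ensemble_errors = 0
for _ in range(trials):
    truth = random.randint(0, 1)
    votes = [noisy_classifier(truth, 0.3) for _ in range(5)]
    individual_errors += votes[0] != truth
    ensemble_errors += majority_vote(votes) != truth

print(individual_errors / trials)  # ~0.30: any single classifier's error rate
print(ensemble_errors / trials)    # noticeably lower: independent errors partly cancel
```

(With five independent voters at a 30% error rate, the majority is wrong only when three or more err at once, which works out to roughly 16%.)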

(This doesn't diminish the urgency of making sure that a decent proportion of the electorate makes use of approval voting. Just saying it might not be a tragedy if IRV gets a decent fraction of the vote share, and IRV/approval voting advocates should work together to expand use of better voting systems, especially in swing states! Actually, speaking of swing states, is there any technique that could be used by a state to allocate its electoral votes strategically? Suppose Gary Johnson gets the most approvals in Florida. Seems like Florida residents could be pretty upset by their electoral votes being "thrown away" if Johnson isn't given electoral votes by any other state. Is there any legal provision for Florida waiting until other states have had their electoral votes allocated before they get allocated in Florida? Though, even if there is, this gets complicated if multiple states were using such a procedure...)


Another random thought. Has anyone studied the properties of the following "instant elimination voting" procedure:

  • Figure out which candidate was ranked last by the largest number of voters.
  • Eliminate that candidate. For every voter who ranked that candidate last, their second-to-last choice is now treated as their last choice.
  • Repeat until all but one candidate has been eliminated.

I presented it as IRV in reverse, but one can imagine variants. The objective here is to minimize our probability of electing a heavily disliked candidate, as opposed to maximizing our probability of electing a heavily liked candidate. IMO this is a reasonable objective: I suspect it's generally easier for a bad candidate to do a lot of harm than for a good candidate to do a lot of good.
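The elimination procedure above can be sketched in a few lines (a toy implementation of my own; the ballot format and first-come tie-breaking are assumptions, and real proposals would need to handle incomplete rankings):

```python
from collections import Counter

def instant_elimination(ballots):
    """Each ballot ranks the same candidates from most to least preferred.
    Repeatedly eliminate whichever candidate the most voters rank last
    (among those still standing), until one remains."""
    remaining = set(ballots[0])
    while len(remaining) > 1:
        # A voter's current last choice is their lowest-ranked
        # candidate that hasn't been eliminated yet.
        last_choices = Counter(
            max((c for c in ballot if c in remaining), key=ballot.index)
            for ballot in ballots
        )
        eliminated = last_choices.most_common(1)[0][0]
        remaining.discard(eliminated)
    return remaining.pop()

# A polarizing candidate (C) ties for the most first-choice votes but is
# ranked last by a majority, so C is eliminated first and the broadly
# acceptable A wins.
ballots = [
    ["C", "A", "B"],
    ["C", "A", "B"],
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "B", "C"],
]
print(instant_elimination(ballots))  # A
```

Note that under plain IRV on the same ballots, C would survive the first round on first-choice strength, which is exactly the behavior this reversed procedure is meant to avoid.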

I know "disapproval voting" is mathematically equivalent to approval voting, but it seems like from a behavioral economics standpoint, they might not be equivalent. If disapproval voting causes voters to "default approve" candidates they're unfamiliar with, and approval voting causes voters to "default disapprove" candidates they're unfamiliar with (see: research into opt-in vs opt-out organ donation), that suggests obscure 3rd party or independent compromise candidates will do better in disapproval voting. Not sure if this is a good or bad thing. And I suppose a reddit style upvote/downvote scheme would average out the biases of disapproval and approval voting.

Comment by john_maxwell_iv on Open Thread #43 · 2018-12-09T08:22:22.164Z · score: 3 (3 votes) · EA · GW

Here are some resources I've been consuming:

Comment by john_maxwell_iv on Open Thread #43 · 2018-12-08T05:51:49.905Z · score: 5 (5 votes) · EA · GW

I came across a quote from biostatistician Andrew Vickers that I really like:

A mistake in the operating room can threaten the life of one patient; a mistake in statistical analysis or interpretation can lead to hundreds of early deaths. So it is perhaps odd that, while we allow a doctor to conduct surgery only after years of training, we give [the SPSS statistical software] to almost anyone.

I can think of a number of caveats. For example, if you're an amateur trying to conduct a statistical analysis of some phenomenon where no statistical analysis currently exists, maybe you should not let the perfect be the enemy of the good by suppressing your analysis altogether, if it means people will continue thinking about the phenomenon in an intuitive and non-data-driven way.

But I do think it'd be valuable for the EA community to connect with or create more EAs with a deep understanding of statistics. Improving my own statistical skills has been a big project of mine recently, and the knowledge I've gained feels very generally useful.

Open Thread #43

2018-12-08T05:39:37.672Z · score: 8 (4 votes)
Comment by john_maxwell_iv on Earning to Save (Give 1%, Save 10%) · 2018-11-27T06:28:35.847Z · score: 7 (6 votes) · EA · GW

I'm inclined to agree with this post, but...

And... this was just the wrong way to go about it. If you have a million dollars, one of the whole points of being able to donate that much is you can direct it to seed fund early stage projects. If you are an early stage project, you can just fund yourself.

Steelmanning the opposite case:

  • If you set up a charity, donations to that charity are tax-deductible. If you're the only person donating to it, the government might consider that a suspicious tax dodge. (I Am Not An Accountant)
  • Getting funding usually means that senior EAs/domain experts are willing to sign off on your project. People who self-fund projects may not have checked whether this is the case.
Comment by john_maxwell_iv on Why we have over-rated Cool Earth · 2018-11-27T01:31:48.279Z · score: 17 (9 votes) · EA · GW
We all knew that the Cool Earth recommendation was low-confidence.

I just glanced at the part of Doing Good Better that discusses Cool Earth. It doesn't seem that low-confidence to me, and does seem a bit self-congratulatory/hubristic.

We haven't solved it, but I feel like that's because it's hard, not because nobody's thought about it.

I think I've observed information cascades in Effective Altruism relating to both global poverty and AI risk. The thinking seems to go something like: EA is a great community full of smart people. If a lot of smart people agree on something, it's probably right. It's hard to beat a big group of smart people, so it's probably not worth my time personally to consider this issue in detail, so I will use the opinion of a big group of smart people instead. Then when I offer arguments against some position, the person is like: "I know you're wrong because a big group of smart people disagrees with you", without considering my arguments.

Big groups of smart people are great, but only because they have intelligent disagreements in order to come to correct opinions. You should trust an idea because it has survived a lot of diverse challenges, not because a lot of people profess it (and especially not because a lot of people in your social circle profess it, since social circles are self-selected). IMO, if you aren't personally knowledgeable regarding a diverse set of challenges an idea has survived, you shouldn't confidently say that idea is correct, otherwise you are part of the problem. Instead you can say: "some people I trust believe X".

If I'm right, fixing this problem just requires talking about it more. I'm not saying it is a huge problem, but I think I'd prefer that people discussed it a bit more on the margin.

Comment by john_maxwell_iv on Effective Altruism Making Waves · 2018-11-20T02:38:56.775Z · score: 2 (2 votes) · EA · GW
(If you decide to set up your own IFTTT rule for Twitter or anywhere else, my personal opinion is that it's better to avoid jumping into random conversations with strangers, especially if your goal is to "correct" a criticism they made. It won't work.)

Depending on the context, there could be many more people reading the conversation than the person who had the misconception. (IIRC, research into lurker:participant ratios in online conversations often comes up with numbers like 10:1 or 100:1.) If the misconception goes uncorrected then many more people could acquire it. I think correcting misconceptions online can be a really good use of time.

Comment by john_maxwell_iv on Burnout: What is it and how to Treat it. · 2018-11-13T00:41:32.436Z · score: 3 (3 votes) · EA · GW

(I originally posted this comment in a subthread that got deleted)

At the EA Hotel we eat dinner together as a house every evening. Sometimes we play board games after. I think this has worked really well for providing social support/'regularly being together', and I highly recommend experimenting with shared meals if you live in a group house/work at the same organization and don't already do it. The key prerequisites are: a system for figuring out who's cooking, and a way to notify everyone when the food is ready (we ring a bell and put a message in the house Facebook chat).

Elizabeth responded with:

"The following is presented as an example of how organizing social support is hard, not that this is a bad idea: I find shared meals really stressful, and I know other people do as well. The fact that the default bonding activity is shared meals seems really bad to me. Between eating disorders, medical food restrictions, and simple preferences, just choosing a restaurant or menu can be really fraught."

Comment by john_maxwell_iv on Thoughts on short timelines · 2018-11-08T14:20:22.254Z · score: 2 (2 votes) · EA · GW

Greg Brockman makes a case for short timelines in this presentation: Can we rule out near-term AGI? My prior is that deep neural nets are insufficient for AGI, and the increasing amounts of compute necessary to achieve breakthroughs reveals limits of the paradigm, but Greg's outside view argument is hard to dismiss.

Comment by john_maxwell_iv on Burnout: What is it and how to Treat it. · 2018-11-08T10:26:09.044Z · score: 3 (3 votes) · EA · GW

At the EA Hotel we eat dinner together as a house every evening. Sometimes we play board games after. I think this has worked really well for providing social support/'regularly being together', and I highly recommend experimenting with shared meals if you live in a group house/work at the same organization and don't already do it. The key prerequisites are: a system for figuring out who's cooking, and a way to notify everyone when the food is ready (we ring a bell and put a message in the house Facebook chat).

Comment by john_maxwell_iv on Additional plans for the new EA Forum · 2018-09-09T23:31:29.445Z · score: 1 (1 votes) · EA · GW

Another risk is demoralizing anyone who is encouraged to make a post based on the presence of the awards but doesn't actually end up winning.

Comment by john_maxwell_iv on Good news that matters · 2018-09-03T19:36:28.328Z · score: 1 (1 votes) · EA · GW

A friend of mine also wrote a good post with this theme:

Comment by john_maxwell_iv on Open Thread #41 · 2018-09-03T02:49:46.891Z · score: 10 (10 votes) · EA · GW

[Intercultural online communication]

The EA Hotel recently hosted EA London's weeklong retreat, and I got a chance to meet lots of EAs in Europe, which was great! One of the many interesting discussions I had was about intercultural communication differences in online discussion. Apparently my habit of spending a few minutes thinking about someone's post and writing the first thing that comes into my head as a comment is "very American". It seems that some EAs in the UK like to be fairly certain about their ideas before sharing them online, and when they do share their ideas, they put more effort into hedging their statements to communicate the correct level of confidence. I thought this was important for forum readers to know; I would hate for people to think that the thoughts I have off the top of my head are carefully considered, and similarly, it seems worth knowing that some forum users comment infrequently because they want the thoughts they do share to carry more weight. This is plausibly more of a UK vs US cultural difference than a cultural difference between the UK & US EA communities specifically, but it still seems worth knowing.

Open Thread #41

2018-09-03T02:21:51.927Z · score: 4 (4 votes)
Comment by john_maxwell_iv on EA Hotel with free accommodation and board for two years · 2018-08-27T00:29:34.999Z · score: 2 (2 votes) · EA · GW

(I'm assuming that the counterfactual here is someone who wants to do unpaid direct work full time, has some funds available that could be used to either support themselves or could be donated to something high impact, and could either live in SF or Blackpool.)

If you have a high income, though, you can pay other people to do them: for example, instead of cooking you could buy frozen food, buy restaurant food, or hire a cook.

These options don't go away if you move to Blackpool. But your rent does get a lot cheaper.

It seems like maybe there are two questions here which are more or less orthogonal: the value of hiring a very talented full-time manager for your group house (someone who is passing up a job that pays $75K+ in order to be manager), and the value of moving to Blackpool. I think the value of having a very talented full-time manager for your group house is not about reducing expenses, it's about creating a house culture that serves to multiply the impact of all the residents. If that's not possible then it probably makes less sense to hire a manager whose opportunity cost is high.

Comment by john_maxwell_iv on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-08-27T00:20:11.376Z · score: 0 (0 votes) · EA · GW

Well if we believe 80K that talent gaps matter more than funding gaps, maybe it's good for excited EAs to worry less about donating and more about direct impact?

Comment by john_maxwell_iv on EA Hotel with free accommodation and board for two years · 2018-08-21T21:30:15.711Z · score: 2 (2 votes) · EA · GW

Whether you live in a hotel or not, certain chores need to be done for your life to run smoothly: grocery shopping, cooking, laundry, etc. These chores don't go away if you live in an expensive housing market or earn a high income. But if you live with roommates, you can coordinate with them to achieve economies of scale in these tasks. Right now at the EA Hotel, we rotate so that each of us cooks for the entire hotel (currently ~6 people) one night per week. This creates economies of scale because cooking for 6 people is much less than 6 times as hard as cooking for one. I expect these effects will become even more valuable as the number of people in the hotel grows.

Comment by john_maxwell_iv on Bangladesh is in Desperate Need For Effective Altruists · 2018-08-09T00:04:16.372Z · score: 0 (0 votes) · EA · GW

You're welcome to message me, but I don't feel like I have much to offer beyond what I wrote in my comment. Maybe try emailing some researchers in this field asking for advice?

Comment by john_maxwell_iv on Bangladesh is in Desperate Need For Effective Altruists · 2018-08-07T17:43:48.363Z · score: 1 (3 votes) · EA · GW

Man, sounds like a tough situation, I'm so sorry you are going through this.

In addition to all the other stuff in this thread, it might be valuable to read some history in order to try & acquire perspective. I don't know very much history myself, but perhaps a good analogy would be the recent Arab Spring protests. My vague understanding is that a lot of the Arab Spring countries ended up worse off than they started, despite the good intentions of the people protesting. "Color revolutions" in the former Soviet Union could be another analogy--here is an article I found on Google. Perhaps you could gather examples of countries which did/did not succeed in peacefully reforming their governments, and try to understand what separates the successful countries from the unsuccessful ones. (Or see if some academic has already attempted this.)

This is a really fascinating video which attempts to show that the bad behavior of autocratic governments is simply a matter of all the individuals involved following their incentives. Maybe the book that inspired the video has some solutions to the problem? This post might have ideas? Chapter 14 of this book? Paying higher salaries is another interesting idea for tackling corruption. If getting books is hard, you might try this free online course created by some prominent economists. The sections on corruption & democracy could be relevant, and maybe the "people" section?

Comment by john_maxwell_iv on Problems with EA representativeness and how to solve it · 2018-08-04T14:30:39.388Z · score: 26 (26 votes) · EA · GW

Maybe this is off topic, but can any near future EAs recommend something I can read to understand why they think the near future should be prioritized?

As someone focused on the far future, I'm glad to have near future EAs: I don't expect the general public to appreciate the value of the far future any time soon, and I like how the near future work makes us look good as a movement. In line with the idea of moral trade, I wish there was something that the far future EAs could do for the near future EAs in return, so that we would all gain through cooperation.

Comment by john_maxwell_iv on EA Forum 2.0 Initial Announcement · 2018-07-20T23:37:29.086Z · score: 4 (4 votes) · EA · GW

Yeah maybe they could just select whatever karma tweaks would require the minimum code changes while still being relatively sane. Or ask the LW2.0 team what their second choice karma implementation would look like and use it for the EA forum.

Comment by john_maxwell_iv on EA Forum 2.0 Initial Announcement · 2018-07-20T03:16:31.390Z · score: 6 (6 votes) · EA · GW

Great point. I think it's really interesting to compare the comments on Scott's blog to the reddit comments on /r/slatestarcodex. It's a relatively good controlled experiment because both communities are attracted by Scott's writing, and slatestarcodex has a decent amount of overlap with EA. However, the character of the two communities is pretty different IMO. A lot of people avoid the blog comments because "it takes forever to find the good content". And if you read the blog comments, you can tell that they are written by people with a lot of time on their hands--especially in the open threads. The discussion is a lot more leisurely, and people don't seem nearly as motivated to grab the reader's interest. The subreddit is a lot more political, maybe because reddit's voting system facilitates mobbing.

Digital institution design is a very high leverage problem for civilization as a whole, and should probably receive EA attention on those grounds. But maybe it's a bad idea to use the EA forum as a skunk works?

BTW there is more discussion of the subforums thing here.

Comment by john_maxwell_iv on Open Thread #40 · 2018-07-20T02:48:53.316Z · score: 1 (1 votes) · EA · GW

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

Lol, like I said, I'm not completely sure. Posts & comments seem to go into greater depth, posts sometimes get referenced long after they are written?

I'm not certain subfora are a terrible idea; I just wanted this risk to be on people's radar. One possible compromise is to let people tag their posts (perhaps restricted to a set of tags chosen by moderators) and allow users to subscribe to RSS feeds associated with particular tags.
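To make the compromise concrete, here is a minimal sketch (all tag names and post data are hypothetical, not the actual forum's schema): posts carry moderator-approved tags, and a "subforum" is just the stream of posts filtered by one tag, which could then be exposed as a per-tag RSS feed.

```python
# Hypothetical data model for tag-based feeds as a lightweight
# alternative to full subforums.

ALLOWED_TAGS = {"careers", "cause-prioritization", "community"}

posts = [
    {"title": "Talent gaps vs funding gaps", "tags": ["careers"]},
    {"title": "Open Thread #41", "tags": ["community"]},
    {"title": "Representativeness", "tags": ["cause-prioritization", "community"]},
]

def feed_for_tag(tag, posts):
    """Return titles of posts carrying the given moderator-approved tag."""
    if tag not in ALLOWED_TAGS:
        raise ValueError(f"unknown tag: {tag}")
    return [p["title"] for p in posts if tag in p["tags"]]

print(feed_for_tag("community", posts))
```

The design point is that a post can carry several tags and therefore appear in several feeds, which avoids the fragmentation risk of hard subforum boundaries.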

Comment by john_maxwell_iv on Open Thread #40 · 2018-07-17T22:43:12.389Z · score: 4 (4 votes) · EA · GW

Yeah. I feel like the EA community already has a discussion platform with very granular topic divisions in Facebook, and yet here we are. I'm not exactly sure why the EA Forum seems to me like it's working better than Facebook, but I figure if it's not broken, don't fix it. Also, I think something like the EA Forum is inherently a bit more fragile than Facebook... any Facebook group is going to benefit from Facebook's ubiquity as a communication tool/online distraction.

You made a list of posts that we’re missing out on now... those kinda seem like the sort of posts I see on EA facebook groups, but maybe you disagree?

Comment by john_maxwell_iv on Open Thread #40 · 2018-07-16T01:02:38.912Z · score: 0 (0 votes) · EA · GW

Do you think you're in significant disagreement with this Givewell blog post?

Comment by john_maxwell_iv on Open Thread #40 · 2018-07-16T00:47:29.831Z · score: 0 (0 votes) · EA · GW

It's surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we've managed to document regularities in how the world works. It's true that as you move "up the stack", say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.

Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.

Comment by john_maxwell_iv on Open Thread #40 · 2018-07-16T00:30:48.435Z · score: 0 (0 votes) · EA · GW

I'm not sure I understand the distinction you're making. In what sense is this compatible with your contention that "Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately"? Is this "chain of theoretical reasoning" a "model that includes far-future effects"?

We do have a fair amount of documentation regarding successful forecasters, see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (foxes rather than hedgehogs, to use Phil Tetlock's terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to "the math of forecasting".)
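The ensemble point can be illustrated with a toy forecasting example (all numbers below are made up for illustration): for a proper scoring rule like the Brier score, the simple average of several probability forecasts is guaranteed by convexity to score no worse than the average individual forecaster, and in practice it often beats most of them.

```python
def brier(forecast, outcome):
    """Brier score for one binary event: squared error of the probability."""
    return (forecast - outcome) ** 2

# Three forecasters' probabilities for five events (hypothetical numbers),
# plus the actual binary outcomes.
forecasts = [
    [0.9, 0.2, 0.7, 0.4, 0.8],
    [0.6, 0.1, 0.9, 0.6, 0.5],
    [0.8, 0.4, 0.6, 0.2, 0.9],
]
outcomes = [1, 0, 1, 0, 1]

# Mean Brier score for each individual forecaster (lower is better).
individual = [
    sum(brier(f, o) for f, o in zip(fs, outcomes)) / len(outcomes)
    for fs in forecasts
]

# The "fox" strategy: average the three probabilities per event,
# then score the averaged forecast.
ensemble = [sum(col) / len(col) for col in zip(*forecasts)]
ensemble_score = sum(brier(f, o) for f, o in zip(ensemble, outcomes)) / len(outcomes)

# By Jensen's inequality, ensemble_score can never exceed the mean of
# the individual scores.
print(individual, ensemble_score)
```

With these made-up numbers the ensemble beats two of the three individual forecasters and the mean of all three; the convexity guarantee only promises the latter.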

Comment by john_maxwell_iv on Open Thread #40 · 2018-07-13T19:40:21.957Z · score: 4 (4 votes) · EA · GW

Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all."

Comment by john_maxwell_iv on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-13T19:27:33.211Z · score: 2 (2 votes) · EA · GW

Hmmm... One thought is that if projects are half-baked due to a shortage of work hours being thrown at them, consolidating all the work hours into a single project might help address the problem. I also think having more people on the project could help from a motivation perspective, if any given project worker feels responsible for fulfilling their delegated responsibilities and is motivated by a shared vision. But ultimately it's the people who are doing any given project who will figure out how to organize themselves.

Five books to make you super effective

2015-04-02T02:31:48.509Z · score: 6 (6 votes)