Posts

Concrete project lists 2017-03-25T18:12:50.765Z · score: 38 (37 votes)
Announcing the Good Technology Project 2016-01-12T07:44:30.920Z · score: 15 (15 votes)
Effective altruism reading list 2013-10-28T19:10:43.000Z · score: 6 (6 votes)

Comments

Comment by richard_batty on Concrete project lists · 2017-03-27T14:33:33.616Z · score: 3 (1 votes) · EA · GW

Not sure; it's really hard to make volunteer-run projects work, and often a small core team does all the work anyway.

This half-written post of mine contains some small project ideas: https://docs.google.com/document/d/1zFeSTVXqEr3qSrHdZV0oCxe8rnRD8w912lLw_tX1eoM/edit

Comment by richard_batty on Concrete project lists · 2017-03-27T13:37:27.397Z · score: 4 (4 votes) · EA · GW

A lot of these would be good for a small founding team, rather than individuals. What do you mean by 'good for an EA group?'

Comment by richard_batty on What Should the Average EA Do About AI Alignment? · 2017-03-25T18:18:18.928Z · score: 11 (8 votes) · EA · GW

No tasty money for you: http://effective-altruism.com/ea/18p/concrete_project_lists/

Comment by richard_batty on CEA's strategic update for February 2017 · 2017-03-19T11:52:19.467Z · score: 8 (8 votes) · EA · GW

I was just looking at the EA Funds dashboard. To what extent do you think the money coming into EA Funds is EA money that was already going to be allocated to similarly effective charities?

I saw the EA Funds post on Hacker News. Are you planning to continue promoting EA Funds outside the existing EA community?

Comment by richard_batty on Introducing CEA's Guiding Principles · 2017-03-16T20:51:45.264Z · score: 8 (8 votes) · EA · GW

You can understand some of what people are downvoting you for by looking at which of your comments are most downvoted - ones where you're very critical without much explanation and where you suggest that people in the community have bad motives: http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah7 http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah6 http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8p9

Well-explained criticisms won't get downvoted this much.

Comment by richard_batty on What Should the Average EA Do About AI Alignment? · 2017-03-02T18:52:02.617Z · score: 3 (3 votes) · EA · GW

See http://effective-altruism.com/ea/174/introducing_the_ea_funds/a2m?context=1#a2m

Comment by richard_batty on What Should the Average EA Do About AI Alignment? · 2017-03-02T09:56:16.632Z · score: 13 (13 votes) · EA · GW

This is really helpful, thanks.

Whilst I could respond in detail, I think it would be better to take action instead. I'm going to put together an 'open projects in EA' spreadsheet and publish it on the EA forum by March 25th, or I owe you £100.

Comment by richard_batty on What Should the Average EA Do About AI Alignment? · 2017-02-28T16:04:46.769Z · score: 8 (8 votes) · EA · GW

I think we have a real problem in EA of turning ideas into work. There have been great ideas sitting around for ages (e.g. Charity Entrepreneurship's list of potential new international development charities, OpenPhil's desire to see a new science policy think tank, Paul Christiano's impact certificate idea) but they just don't get worked on.

Comment by richard_batty on Some Thoughts on Public Discourse · 2017-02-24T16:44:22.966Z · score: 13 (12 votes) · EA · GW

Yes! The conversations and shallow reviews are the first place I start when researching a new area for EA purposes. They've saved me lots of time and spared me many blind alleys.

OpenPhil might not see these benefits directly themselves, but without information sharing, individual EAs and EA orgs would keep re-researching the same topics over and over again and not be able to build on each other's findings.

It may be possible to have information sharing through people's networks but this becomes increasingly difficult as the EA network grows, and excludes competent people who might not know the right people to get information from.

Comment by richard_batty on Anonymous EA comments · 2017-02-11T11:25:09.020Z · score: 7 (7 votes) · EA · GW

Even simpler than fact posts and shallow investigations would be skyping experts in different fields and writing up the conversation. Total time per expert is about 2 hours - 1 hour for the conversation, 1 hour for writing up.

Comment by richard_batty on Introducing the EA Funds · 2017-02-10T00:00:52.824Z · score: 6 (6 votes) · EA · GW

Thanks, that clarifies things.

I think I was confused by 'small donor' - I was including in that category friends who donate £50k-£100k and who fund small organisations in their network after a lot of careful analysis. If the fund is targeted more at <$10k donors that makes sense.

Using OpenPhil officers makes sense for an MVP.

On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects but even they had trouble generating funder interest? So are you saying that even for high-quality new projects, funder interest was low, suggesting risk-aversion? If so, that seems to be an important problem to solve if we want a pipeline of new potentially high-impact projects.

On creating promising new projects, Michael Peyton Jones and I have been thinking a lot about this recently. This thinking is for the Good Technology Project: how can we create an institution that helps technology talent to search for and exploit new high-social-impact startup opportunities? But a lot of our thinking will generalise to working out how to help EA get better at exploration and experimentation.

Comment by richard_batty on Introducing the EA Funds · 2017-02-09T10:50:59.880Z · score: 12 (12 votes) · EA · GW

Small donors have played a valuable role by providing seed funding to new projects in the past. They can often fund promising projects that larger donors like OpenPhil can't, because they have special knowledge of them through their personal networks and the small projects aren't established enough to get through a large donor's selection process. These donors therefore act like angel investors. My concerns with the EA fund are:

  • By pooling donations into a large fund, you increase the minimum grant that it's worth their time to make, thus making it unable to fund small opportunities
  • By centralising decision-making in a handful of experts, you reduce the variety of projects that get funded, because those experts have more limited networks, knowledge, and variety of values than the population of small donors.

Also, what happened to EA Ventures? Wasn't that an attempt to pool funds to make investments in new projects?

Comment by richard_batty on Anonymous EA comments · 2017-02-08T01:52:10.974Z · score: 8 (8 votes) · EA · GW

What communities are the most novel/talented/influential people gravitating towards? How are they better?

Comment by richard_batty on EA should invest more in exploration · 2017-02-06T10:04:12.513Z · score: 7 (7 votes) · EA · GW

This is really exciting, looking forward to these posts.

The Charity Entrepreneurship model is interesting to me because you're trying to do something analogous to what we're doing at the Good Technology Project - cause new high impact organisations to exist. Whereas we started meta (trying to get other entrepreneurs to work on important problems) you started at the object level (setting up a charity and only later trying to get other people to start other charities). Why did you go for this depth-first approach?

Comment by richard_batty on EA should invest more in exploration · 2017-02-06T09:53:08.637Z · score: 4 (4 votes) · EA · GW

Exploration through experimentation might also be neglected because it's uncomfortable and unintuitive. EAs traditionally make a distinction between 'work out how to do the most good' and 'do it'. We like to work out whether something is good through careful analysis first, and once we're confident enough of a path we optimise for exploitation. This is comforting because we then only do work when we're fairly confident it's the right path. But perhaps we need to get more psychologically comfortable with mixing the two together in an experimental approach.

Comment by richard_batty on Changes in funding in the AI safety field · 2017-02-03T16:17:41.089Z · score: 6 (6 votes) · EA · GW

Is there an equivalent to 'Concrete Problems in AI Safety' for strategic research? If I were a researcher interested in strategy, I'd have three questions: 'What even is AI strategy research?', 'What sort of skills are relevant?', and 'What are some specific problems I could work on?' A 'concrete problems'-like paper would help with all three.

Comment by richard_batty on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-17T16:01:09.289Z · score: 0 (0 votes) · EA · GW

What sort of discussion of leadership would you like to see? How was this done in the Army?

Comment by richard_batty on Effective Altruism is Not a Competition · 2017-01-05T23:16:56.684Z · score: 13 (13 votes) · EA · GW

"I know some effective altruists who see EAs like Holden Karnofsky or what not do incredible things, and feel a little bit of resentment at themselves and others; feeling inadequate that they can't make such a large difference."

I think there's a harmful belief that people often have when looking at successful people: "I am fundamentally not like them - not the type of person who can be successful." I've regularly had this thought, sometimes explicitly and sometimes as a hidden assumption behind other thoughts and behaviours.

It's easy to slip into believing it when you hear the bios of successful people. For example, William MacAskill's bio includes being one of the youngest associate professors of philosophy in the world, co-founder of CEA, co-founder of 80,000 Hours, and a published author. Or you can read profiles of Rhodes Scholars and come across lines like "built an electric car while in high school and an electric bicycle while in college".

When you hear these bios it's hard to imagine how these people achieved these things. Cal Newport calls this the failed simulation effect - we feel someone is impressive if we can't simulate the steps by which they achieved their success. But even if we can't immediately see the steps they're still there. They achieved their success through a series of non-magic practical actions, not because they're fundamentally a different sort of person.

So a couple of suggestions:

If you're feeling like you fundamentally can't be as successful as some of the people you admire, start by reading Cal Newport's blog post. It gives the backstory behind a particularly impressive student, showing the exact (non-magical) steps he took to achieve an impressive bio. Then, when you hear an impressive achievement, remind yourself that there is a messy practical backstory that you're not hearing. Maybe read full biographies of successful people to see their gradual rise. Then go work on the next little increment of your plan, because that's the only consistent way anyone achieves success.

If you're a person others look up to as successful, start communicating some of the details of how you achieved what you did. Show the practicalities, not just the flashy bio-worthy outcomes.

Comment by richard_batty on Tell us how to improve the forum · 2017-01-03T12:20:00.301Z · score: 2 (2 votes) · EA · GW

An EA stackexchange would be good for this. There is one being proposed: http://area51.stackexchange.com/proposals/97583/effective-altruism

But it needs someone to take it on as a project and do all that's necessary to make it a success. Oli Habryka has been thinking about how to do this, but he needs someone to run with it.

Comment by richard_batty on Using a Spreadsheet to Make Good Decisions: Five Examples · 2016-11-28T10:15:42.599Z · score: 1 (1 votes) · EA · GW

Is it worth cross-posting this to LessWrong? Anna Salamon is leading an effort to get LessWrong used again as a locus of rationality conversation, and this would fit well there.

Comment by richard_batty on Is the community short of software engineers after all? · 2016-09-24T16:59:40.845Z · score: 1 (1 votes) · EA · GW

Apart from 80k, do you know if the other organisations have had few applicants to these jobs or lots of applicants but no-one good enough?

Comment by richard_batty on Is the community short of software engineers after all? · 2016-09-24T16:55:48.826Z · score: 2 (2 votes) · EA · GW

In response to b, I think that's true for the 80k job. I decided not to apply because the job involved WordPress, which is horrible to work with and bad for career capital as a developer. Other developers I spoke to about it felt similarly.

But this isn't true of all of the jobs.

For example, the GiveDirectly advert says "GiveDirectly is looking for a full-stack developer who is ready to own, develop, and refine a broad portfolio of products, ranging from mobile and web applications to backend data integrations. As GiveDirectly’s only full-time technologist they will be responsible for developing solutions to the organization's most challenging technical problems, and owning the resolution from end to end."

When I unsuccessfully applied to Wave, it similarly sounded like a standard backend web development job, not WordPress or tying together Google Sheets.

Comment by richard_batty on Review of EA Global 2016 · 2016-09-23T22:13:14.972Z · score: 3 (3 votes) · EA · GW

In addition to AGB's point about the forum data, the EA Hub map in its default zoom state shows 746 in Europe, 669 in Eastern US, and 460 in Western US.

For the EA survey in its default zoom state, you get 298 in Europe, 377 in Eastern US, and 289 in Western US.

Comment by richard_batty on Improving the Effective Altruism Network · 2016-09-01T20:44:59.450Z · score: 0 (0 votes) · EA · GW

I agree that changing the framing away from meetings would be good, I'm just not sure how to do that.

Do you fancy running a virtual party?

Comment by richard_batty on Improving the Effective Altruism Network · 2016-09-01T00:21:17.721Z · score: 2 (2 votes) · EA · GW

Video calls could help overcome geographic splintering of EAs. For example, I've been involved in EA for 5 years and I still haven't met many Bay Area EAs, because I've always been put off going by the cost of flights from the UK.

I've considered skyping people but here's what puts me off:

  • Many EAs defend their time against meetings because they're busy. I worry that I'd be imposing by asking for a skype
  • I feel bad asking for a skype without a clear purpose
  • Arranging and scheduling a meeting feels like work, not social

However, at house parties I've talked to the very same people I'd feel awkward about asking to skype with because house parties overcome these issues.

The ideal would be to somehow create the characteristics of a house party over the internet:

  • Several people available
  • You can join in with and peel off from groups
  • You can overhear other conversations and then join in
  • There are ways to easily and inoffensively end a conversation when it's run its course
  • You can join in with a group that contains some people you know and some people you don't
  • The start time and end time are fuzzy so you can join in when you want
  • You can randomly decide to go without much planning, and can back out without telling anyone

Some things that have come closer to this than a normal skype:

  • The Complice EA study hall: this has chat in every pomodoro break. It's informal, optional, doesn't require arranging, and involves several people. It's really nice, but it's only in pomodoro breaks and is via text chat rather than voice.
  • Phone calls and skypes with close friends and family where it's not seen as weird to randomly phone them

Maybe an MVP would be to set up a Google Hangouts party with multiple hangouts. Or I wonder if there's some better software out there designed for this purpose.

Comment by richard_batty on Ideas for Future Effective Altruism Conferences: Open Thread · 2016-08-28T14:27:37.770Z · score: 3 (3 votes) · EA · GW

I'm not sure if this discussion has changed your view on using deceptive marketing for EA Global, but if it has, what do you plan to do to avoid it happening in future work by EA Outreach?

Also, it's easy for EAs with mainly consequentialist ethics to justify deception and non-transparency for the greater good, without considering consequences like the ones discussed here about trust and cooperation. Would it be worth EAO attempting to prevent future deception by promoting the idea that we should be honest and transparent in our communications?

Comment by richard_batty on Promoting EA in Russia: Barriers and opportunities · 2016-08-21T02:04:54.934Z · score: 5 (5 votes) · EA · GW

This may just be the way you phrased it, but you talk about spreading "EA and earning-to-give" as if earning-to-give is the primary focus of EA. I'm not sure if this is your view, but if it is, it's worth reading 80,000 Hours' arguments on why only a small proportion of people should earn to give in the long term.

Given these arguments and the low salaries in Russia, it might be better to concentrate on encouraging other sorts of effective altruist activity such as direct work, research, or advocacy. And there may be some altruistic work that is easier to do in Russia than in other countries. Unfortunately I don't know enough about Russia to suggest anything, but I'm sure you'd have some good ideas.

Comment by richard_batty on Starting a conversation about Effective Environmentalism · 2016-08-08T19:10:19.012Z · score: 9 (9 votes) · EA · GW

I can understand why we should care about climate change (because of its impact on humans), but I'm confused about the purpose of environmentalism that focusses on preventing the destruction of natural habitats. Here are some possibilities:

  1. Ecosystems with less human interference are intrinsically good, so we should save and increase them
  2. Biodiversity (whether that's species diversity, genetic diversity, ecological diversity) is intrinsically good and so we should prevent reductions in biodiversity through e.g. species extinction
  3. The welfare of wild animals matters, so we shouldn't harm them by e.g. destroying their habitat
  4. Relatively undisturbed natural areas provide humans with beneficial things - i.e. ecosystem services

These are very different purposes that would lead to us optimising for very different things, so I think it's important to clarify what the end goal of an effective environmentalist would be.

If I were to evaluate these different possible end goals, I would think:

1 and 2 don't make much sense to me because I mainly value the happiness (and avoidance of suffering) of humans and animals. 3 could actually go against environmentalism because of wild animal suffering. 4 seems to fit in well with the rest of EA, and could have implications for poverty and global catastrophic risks.

Comment by richard_batty on Update on the New EA Hub · 2016-05-10T10:18:31.869Z · score: 1 (1 votes) · EA · GW

Here are a few data sources for finding cities with a culture or sub-culture that has EA-potential:

Comment by richard_batty on Update on the New EA Hub · 2016-05-10T07:38:41.902Z · score: 2 (2 votes) · EA · GW

That makes sense - you're not holding up your own move by doing the analysis, as you have other reasons for not moving yet.

Can I suggest an amalgamation of our approaches, then?

Phase 1: Exploration. In this phase, those that can move in the next 4 months move to a location that would be good for them and try to join together with other EAs in doing this. They also try to explore more than one location and report back their findings to the whole group. Those that can't move that soon but are interested in the idea can contribute through online research. Everyone can help those who are interested in moving with location choice.

Phase 2: Clumping. In this phase, we take the findings from phase 1 and choose one (or a few) standout locations to concentrate on. We encourage more people to move there, including EAs that have gone to other locations.

Phase 3: Community-building. Once we've got a group of > 15 people we can start to invest in community-building projects such as coliving and coworking spaces and outreach to the local community.

Each of these phases is useful even if it doesn't progress to the next phase.

This approach gets the early adopters moving and gathering useful information whilst also creating the seed group effect that could attract more people in the future.

Comment by richard_batty on Update on the New EA Hub · 2016-05-08T21:44:49.103Z · score: 1 (1 votes) · EA · GW

I agree that you have to do some thinking in advance - you have to choose at least one place to go. However, I don't think this is a very hard choice for someone to make, because the digital nomad scene has already identified a handful of good places. From my reading of recommended places in digital nomad forums, here are the places that stand out for cutting your living costs whilst doing remote internet-based work if you are from a Western country:

  • Chiang Mai, Thailand
  • Ubud, Bali, Indonesia
  • Medellín, Colombia
  • Prague, Czech Republic
  • Budapest, Hungary
  • Las Palmas, Canary Islands, Spain

There are only a few of them, and personal preferences will play a big role in which of them each person would prefer. Each person who is seriously interested in being part of this project can choose the location that's best for them from this or similar lists, and then report back about how it's going once they're there.

My plan is to go to one of the European locations this summer. And if it doesn't work out, I can always go somewhere else.

Comment by richard_batty on Update on the New EA Hub · 2016-05-08T21:23:32.951Z · score: 2 (2 votes) · EA · GW

https://teleport.org is another source of data on which cities to move to, similar to nomadlist.

Comment by richard_batty on Update on the New EA Hub · 2016-05-06T16:27:33.372Z · score: 7 (7 votes) · EA · GW

I'm glad you're doing work on this - it's a potentially very valuable project. I think we could go about it in a different way though. There's a risk of analysis paralysis in trying to find the optimal location in advance so that we can commit to something as big as buying and converting property. Instead we could just find the people who are likely to move somewhere cheaper in the next few months (I'm one of those people) and see if we can do it together. We might also want to drop the framing of it as 'A new EA hub' at this stage because that makes the task seem big, important, and intimidating. Let's just experiment with some locations and see how it goes. We'll learn something about living abroad and we'll be able to observe existing coworking and coliving setups to see what works.

Comment by richard_batty on Effective Altruism London – a request for funding · 2016-02-06T15:13:03.487Z · score: 5 (7 votes) · EA · GW

Yes, Sam is very good at meeting new people and getting them excited about EA. And already in his spare time he's achieved a great deal with EA London.

Comment by richard_batty on Effective Altruism Prediction Registry · 2016-01-29T20:22:59.043Z · score: 2 (2 votes) · EA · GW

Augur (http://www.augur.net/) - a decentralised prediction market.

Comment by richard_batty on Announcing the Good Technology Project · 2016-01-17T13:47:39.513Z · score: 2 (2 votes) · EA · GW

Your suggestions are good and we can imagine doing them in the future, but I think we should prioritise the research problem for reasons I'll explain.

Your scenarios for matching developers with projects (e.g. a conference or prizes) would make sense if:

  • We already knew what the most effective software projects were
  • There was an undersupply of software developers taking them up, perhaps because they didn't know about them

We think that there is some truth in this - it's hard to find lists of tech orgs of any type, and there aren't many lists of tech orgs that plausibly have a high positive impact. However, I don't think we're anywhere close to knowing what the most high impact software projects or organisations are. We are planning to publish a list of altruistic tech organisations, although we'll be unable to prioritise them until we have made more progress on research.

There's an analogy with early 80,000 Hours or GiveWell here. Early 80,000 Hours could have put all its effort into promoting what it thought at the time was the best way to have impact - earning to give. As we've found out, this would have been a mistake. By focussing on research they've developed much better advice than 'everyone should do earning to give'.

Similarly GiveWell or Giving What We Can could have just picked a few charities that on the face of it seemed high impact and then worked on finding donors for them. If they'd done this and then stopped researching, they probably wouldn't have found the options that they have now, nor would they be as credible for donors.

On running informal meetings:

This could have a couple of purposes:

  1. Matching people with orgs or other people so they can work on important projects
  2. Getting people talking about high impact tech so that we can make progress on working out what tech is high impact

I've addressed point 1 already. On point 2, I don't think meetups or conferences are the best way to make progress. The questions we are trying to answer are very difficult and I don't think people informally talking will cause much progress to happen.

Imagine if EA had started with some people asking 'How can we have the most impact?' and then instead of setting up organisations like GiveWell and 80,000 Hours, they had immediately concentrated on community, running conferences and meetups. I think we might have ended up like the conventional ethical sector - lots of people doing things and lots of ideas, but not much progress on prioritisation.

A stronger version of this option would be a more formal structure. There could be a forum (in person or online) for dedicated people to try to make progress on these questions. I think this could be a good option although we'd need to think about how to keep quality high.

I read GiveWell's 'Science policy and infrastructure' proposal but I don't see how it relates to our project. What kinds of software regulation might we lobby politicians to change?

Comment by richard_batty on Announcing the Good Technology Project · 2016-01-16T16:52:03.578Z · score: 1 (1 votes) · EA · GW

I'm a little unclear on what your project involves - could you email me at richard@goodtechnologyproject.org so we can talk further?

Comment by richard_batty on Announcing the Good Technology Project · 2016-01-13T20:34:51.094Z · score: 4 (4 votes) · EA · GW

I agree that this can be a problem. I've previously found myself demoralised after suggesting ideas for projects only to be immediately met with questions like 'Why you, not someone else?', 'Wouldn't x group do this better?' I think having a cofounder helps greatly with handling this. It's also something that founders just have to learn to deal with.

In this case though, I think Gleb_T's question was good. We explicitly asked for feedback and we wanted to get questions like this so that we were forced to think through things we may not have properly considered. On a post like this, I'd rather have lots of feedback and criticism so that we know where the potential weaknesses of the project are.

I'd suggest the heuristic: if your friend is enthusiastically telling you about a new idea, hold off on criticism for a while whilst you help them develop it. If someone asks for feedback, or if you've been discussing the project for a bit longer, give the most useful feedback you can, even if it's negative.

Thanks for your comments about the benefits of staying independent.

Comment by richard_batty on Announcing the Good Technology Project · 2016-01-13T13:27:43.334Z · score: 2 (2 votes) · EA · GW

Thanks for asking this as it's made me think more carefully about it.

Partly it's separate just because of how we got started. It's a project that Michael and I thought up because we needed it ourselves, and so we just got going with it. Given that we don't work for 80,000 Hours, it wasn't part of it.

But the more important question is 'Should it become part of 80,000 Hours in the future?' We talked to Ben Todd from 80,000 Hours and asked him what he thought of the potential for overlap. He thought it wasn't an issue as 80,000 Hours doesn't have time to go in depth into technology for good. I think if we became a subproject of 80,000 Hours, it would harm them because they'd have to spend management time on it and they should focus instead on their core priorities. It's costly to build our own brand, but I think it's better than disrupting an existing organization with an experimental project outside their own priorities. We can also find other ways of cooperating short of merging. I imagine 80,000 Hours will want to use our research if it becomes good enough, and we will want to talk to advisees of theirs who are interested in tech for good. We'll also be looking for ways to collaborate with other EA orgs like .impact and the London Good Code meetup.

There are also advantages to being independent of an existing project. We can target our brand more precisely at technologists and prioritize building relationships with people and orgs in the tech community. There's also value in thinking and researching independently of existing EA orgs because we might be able to come up with different ideas and ways of doing things.

I think there's a good chance that we'll look less and less like 80,000 Hours as we go on. I used to work for them, which means I'm prone to copy their way of doing things. As we go on, we might find that it's better to have a strategy less like 80,000 Hours than it is now.

Do you think it would be better if we were part of 80,000 Hours? What would that look like?

Comment by richard_batty on An embarrassment of riches · 2015-11-22T11:34:50.280Z · score: 1 (1 votes) · EA · GW

This is a really interesting and well-written post. I particularly liked the Jane Addams story and the Paul Farmer quote - they really drive home the message.

Comment by richard_batty on Permanent Societal Improvements · 2015-09-07T11:43:17.754Z · score: 2 (2 votes) · EA · GW

Have you seen Nick Beckstead's slides on 'How to compare broad and targeted attempts to shape the far future'?

He gives a lot of ideas for broad interventions, along with ways of thinking about them.

Comment by richard_batty on Permanent Societal Improvements · 2015-09-07T11:36:35.093Z · score: 2 (2 votes) · EA · GW

I'm not sure I understand your argument. Could you help me out with some examples of:

  • Effective Altruists "wandering in a vast intellectual wasteland wearing blindfolds and talking about how great it would be if you could direct humanity to the Great Lakes"
  • How an understanding of human evolution would help us to find out what we ought to do.

Comment by richard_batty on The first .impact Workathon · 2015-07-10T11:59:04.860Z · score: 3 (3 votes) · EA · GW

And if Australia joins in then the sun will never set on the .impact workathon :)

I'll join in for a bit this week and then for the next one I'll publicise to more people and set up a London location for it.

Comment by richard_batty on The first .impact Workathon · 2015-07-09T14:14:53.061Z · score: 3 (3 votes) · EA · GW

We held a similar thing in London a few weeks ago and I was planning to organise another one. But we could join in with your .impact workathon instead.

The only difficult thing is that Pacific time is quite far from UK time. We could at least have some overlap though.

Comment by richard_batty on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-02-24T19:10:12.982Z · score: 6 (6 votes) · EA · GW

Do you have any examples of successful 'broad tent' social movements that we can learn from?

One example would be science, which is like effective altruism in that it is defined more by questions and methods than by answers.

One counterexample might be liberal Christianity, which is more accepting of a diversity of views but has grown much more slowly than churches with stricter theology. This phenomenon has been studied by sociologists; one paper is here: http://www.majorsmatter.net/religion/Readings/RationalChoice.pdf

Comment by richard_batty on EA housemates: a good idea? Plus how to find them · 2015-01-21T19:38:13.127Z · score: 8 (8 votes) · EA · GW

I started the ball rolling on a London EA house just by posting to the London EA facebook group asking if anyone was interested. Lots were, and we ended up with two houses. It's been a huge boost to my happiness and productivity.

One piece of advice: don't overanalyse things, just get going. I watched a huge email thread develop on the London LessWrong google group about setting up a rationalist house. It never got anywhere because they just spent ages arguing about the best way to handle housework, resolve disputes etc. With the London EA house we just worked out which locations we wanted to live in and then we started looking.

Comment by richard_batty on Supportive scepticism in practice · 2015-01-16T21:14:35.904Z · score: 10 (9 votes) · EA · GW

This is nice and practical - it's good that it focusses on specific behaviours that people can practice rather than saying anything that could come across as "you're alienating people and you should feel bad".

One thing I'd add to this is to try to debate less and be curious more. Often discussions can turn into person A defending one position and person B rebutting that position and defending their own. I've found that it is often more helpful for both people to collaborate on analysing different models of the world in a curious way. Person A proposes a model of how the world works, and person B then starts trying to understand person A's model - what its assumptions are, where it applies, and where it doesn't. They can then contrast this with other models of the world and work together to find out which is best. If you want to get really into it, drawing diagrams can help, both because it helps you think and because it increases the sense that you are working together on a problem rather than arguing against one another. But it doesn't have to be this formal - it could just be a friendly discussion about the strengths and weaknesses of different ideas.

On a related note, I think it's important to realise that people don't always believe the positions they're arguing for. I'll often tell my friends an idea because I'm interested in working out its strengths, weaknesses, and implications. If they're dismissive and try to argue against it, I feel that they're missing the point - it would be more helpful to explore the idea's strengths and weaknesses together rather than turning it into a debate. This would also help us to be more accepting of new ideas that don't come from the usual EA sources.