Posts

The Values-to-Actions Decision Chain: a lens for improving coordination 2018-06-30T09:26:44.363Z · score: 23 (22 votes)
The first AI Safety Camp & onwards 2018-06-07T18:49:06.249Z · score: 20 (18 votes)
The Values-to-Actions Decision Chain: a rough model 2018-03-02T14:54:30.803Z · score: 3 (3 votes)
Proposal for the AI Safety Research Camp 2018-02-02T08:07:31.869Z · score: 11 (9 votes)
Reflections on community building in the Netherlands 2017-11-02T22:01:17.922Z · score: 10 (12 votes)
Effective Altruism as a Market in Moral Goods – Introduction 2017-08-06T02:29:28.683Z · score: 2 (2 votes)
Testing an EA network-building strategy in the Netherlands 2017-07-03T11:28:33.393Z · score: 11 (11 votes)

Comments

Comment by remmelt on Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) · 2019-09-08T12:57:33.109Z · score: 1 (1 votes) · EA · GW

First off, I really appreciate the straight-shooting conclusion – 'QC is unlikely to be helpful to address current bottlenecks in AI alignment.' – even though you both spent many hours looking into it.


Second, I'm curious to hear any thoughts on the amateur speculation I threw at Pablo in a chat at the last AI Safety Camp:

Would quantum computing afford the mechanisms for improved prediction of the actions that correlated agents would decide on?

As a toy model, I'm imagining hundreds of almost-homogeneous reinforcement learning agents within a narrow distribution of slightly divergent maps of the state space, probability weightings/policies, and environmental inputs. Would current quantum computing techniques, assuming the hardware to run them on is available, be able to derive more quickly and precisely what percentages of those agents at, say, State1 would take Action1, Action2, or Action3?

I have a broad, vague sense that if that set-up works out, you could leverage it to create a 'regulator agent' for monitoring some 'multi-agent system' composed of quasi-homogeneous autonomous 'selfish agents' (e.g. each negotiating on behalf of its respective human interest group) that has a meaningful influence on our physical environment. This regulator would interface directly with a few of the selfish agents. If that subset of selfish agents is about to select Action1, the regulator predicts what percentage of the other, slightly divergent algorithms would also decide on Action1. If it forecasts that an excessive number of Action1s will be taken – leading to reduced rewards to, or robustness of, the collective (e.g. a Tragedy of the Commons case of over-utilisation of local resources) – it overrides that decision by commanding a compensating number of the agents to instead select the collectively-conservative Action2.
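
For concreteness, here's a minimal classical sketch of that toy set-up (all numbers and names are hypothetical illustrations of my speculation; whether quantum techniques could speed up the prediction step is exactly the open question):

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 300      # quasi-homogeneous 'selfish agents'
N_ACTIONS = 3       # Action1, Action2 (collectively conservative), Action3
CAP = 0.4           # max tolerated fraction of agents taking Action1

# Shared base policy at State1, plus slight per-agent divergence.
base_logits = np.array([2.0, 1.0, 0.5])
noise = rng.normal(0.0, 0.3, size=(N_AGENTS, N_ACTIONS))
logits = base_logits + noise
policies = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Each agent samples the action it intends to take.
intended = np.array([rng.choice(N_ACTIONS, p=p) for p in policies])

# The 'regulator agent' estimates the fraction about to take Action1 and,
# if it exceeds the cap, commands a compensating number to take Action2.
frac = (intended == 0).mean()
if frac > CAP:
    excess = int((frac - CAP) * N_AGENTS)
    override = rng.choice(np.flatnonzero(intended == 0), size=excess, replace=False)
    intended[override] = 1  # collectively-conservative Action2

print(f"Action1 fraction before: {frac:.2f}, after: {(intended == 0).mean():.2f}")
```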

That's a lot of jargon, half of which I feel I have little clue about... But curious to read any arguments you have on how this would (not) work.

Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-08-31T12:55:04.284Z · score: 3 (2 votes) · EA · GW

Thanks for clarifying the 'similar wins' point. You seem to imply that these coaching/software/ops support/etc. wins compound on each other.


On the shared Asana space, I'll keep checking in with the EA Netherlands/Rethink/CE coaches working with EA groups/charity start-ups on how time-efficient and convenient (or not) it is to keep track of team tasks with the leaders they are mentoring.

From my limited experience, a shared coaching GDoc already works reasonably well for that:

  • Upsides: Everyone uses GDocs. It's easy to co-edit texts and comment-assign questions and tasks that pop up in your email inbox. By contrast, the attentional burden of one party switching over to the other's task management system to track, say, biweekly check-ins over half a year doesn't seem worth it.
  • Downsides: GDocs easily suck away the first ten minutes of a call when you need to update each other on two weeks of progress in one swoop. They also rely on the leader and coach actively reminding each other to check medium-term outcomes and key results. This 'update/remind factor' felt like a demotivating drag in my coaching and accountability check-ins – all with people I didn't see day to day and therefore lacked shared context with.

The way you arrange the format together seems key here. Also, you'd want to be careful about sharing internal data – for Asana, I recommend that leaders invite coaches comment-only to specific projects, rather than to entire teams.


On other software or services, curious if any 'done deals' come to mind for you.


Regarding your forecasting platform, I'm curious if anything comes to mind on how forecasts there could fit with EA project planning over the next few years.


Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-08-20T17:28:58.646Z · score: 1 (1 votes) · EA · GW

Good to hear your thoughts on this!

What do you mean here with a ‘portfolio of similar wins’? Any specific example of such a portfolio that comes to mind?

Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-08-15T11:07:56.697Z · score: 5 (2 votes) · EA · GW

Hey, I never finished my reply to you.

First of all, I thought those four items make a useful list of what you referred to as infrastructure for small projects.


On offering Asana Business:

  • We are now offering Asana Business teams at a 90% discount (€120/team/month) versus the usual minimum cost. This is our cost price, since we're using a 50% nonprofit discount and assigning one organisation member slot per team facilitator. The lower cost is a clear benefit to the organisations and groups that decide to move to Asana Business.
  • I'm working with ops staff from RethinkCharity and Charity Entrepreneurship (and possibly Charity Science Health) to move to a shared Asana space called 'Teams for Effective Altruism' (along with EA Netherlands and EA Cambridge). Not set in stone but all preparations are now in place.
  • This doesn't yet answer your question of why I thought of Asana in particular. Here are some reasons for building up a shared Asana Business space together:
    • Online task management is useful: I think at least half of the EA teams of more than 5 people running small projects would benefit from tracking their tasks online for remote check-ins – for instance, when it's hard to travel to, say, a meeting room once a week, or when nitty-gritty ops tasks need to be carried out reliably and it feels burdensome for a manager to ask 'Have you done this and this and this?'. At EA Netherlands, a lot of the project delays and wasted time seemed to emerge from someone feeling unclear about what was expected or endorsed of their role, not being aware of update X, waiting for person Y to confirm, or forgetting/having to remind others about task Z. It seems common sense to avoid that by creating a 'single source of truth' where team members can place requests and update each other on progress asynchronously.
    • Facilitate onboarding of teams: Leaders of small projects seem to have difficulty getting volunteers to build the habit of updating online tasks in the first months, even if most would agree on reflection that it's worth the switching cost. In surveying EA regional groups in northern Europe, the one reason organisers kept mentioning for not using task software was that they had previously tried tracking tasks online with initial excitement, only for volunteers to stop updating their tasks a few weeks later. Both EA Netherlands and EA Oxford flopped twice at using Trello. My sense is they would more likely than not have succeeded if someone had taken up the role of facilitating team members to use the platform in ways that were useful to them, and of reminding them to update their tasks weeks down the line. Part of the Asana team application process is assigning a facilitator, whom I can guide from within our shared space.
    • Asana Business is top-notch: I personally find Asana Business' interface intuitive and well-ordered, striking a balance between powerful features and simplicity. External reviews rate Asana around 4-4.5 out of 5. Having said that, some EA teams seem to have different work styles or preferences that fit other platforms better – I've heard of people using Trello, Nozbe, Notion, GSheets, or even just NextCloud's basic task board.
    • Asana is an unexploited Schelling point for collaboration: A surprising number of established EA organisations use Asana: the Centre for Effective Altruism, RethinkCharity, Founders Pledge, the Center for Human-Compatible AI, Charity Entrepreneurship, Charity Science Health, 80,000 Hours(?), and probably a few I haven't discovered yet. That's an implicit endorsement of Asana's usefulness for 'EA work' (bias: Dustin Moskovitz co-founded it). Asana staff are now making their way into the enterprise market, and intend to develop features that enable users to smoothly start collaborations across increasingly large organisational units (Teams... Divisions... Organisations).
    • Passing on institutional knowledge to start-ups: In a call I had with a key Asana manager, he offhandedly mentioned how it would be great to enable organisations to coordinate across spaces. I don't think we have to wait for that, though. EA Hub staff could offer Asana teams to local EA groups in our shared space, coach them by commenting on projects and scheduling check-in calls, and stay up to date on what's going on. Likewise, Charity Entrepreneurship could offer Asana teams to the charities they incubate and continue checking in with and supporting the start-up leaders coming out of the incubation program. People could also share project templates (e.g. conference/retreat organiser checklists), share standardised data from custom fields, etc.
    • So of your infrastructure suggestions, that seems to cover operations support and coaching/advice.
    • To make sharing the space work, we'd have to close off short-term human error/malice failure modes, as well as tend to the long-term culture we create. A downside of connecting up software to discuss work smoothly is that it also becomes easier for damaging ideas and intentions to cross boundaries, for people to jostle for admin positions, and for a resulting homogeneous culture to be built on fragile assumptions about how the world works and what the systematic approaches to improving it are.


Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-06-26T12:35:05.294Z · score: 1 (1 votes) · EA · GW

@Ozzie, I'm curious what kinds of infrastructures you think would be worth offering.

(I'm exploring offering Asana Business + coaching to entrepreneurs starting on projects)


Comment by remmelt on What are some neglected practices that EA community builders can use to give feedback on each other's events, projects, and efforts? · 2019-06-07T06:37:44.867Z · score: 2 (2 votes) · EA · GW

I also find the idea of recording meetings interesting. I'd worry about it not working out because of bandwidth limitations – asking an overseas organiser to watch passively for an hour and then collect their thoughts on what happened seems to ask more of them than interacting with, querying, and coaching in the moment.

I wonder if there are any ways to circumvent that bottleneck. Perhaps calling in the person through Zoom and letting them respond at some scheduled moment helps somewhat? Any other ideas?

Another way for giving feedback might be to give people access to your task planning. I just emailed Asana about whether they’d be willing to offer a free Business/Enterprise team for people to run projects on.

Text: “We would like to pilot one Asana Business team for community start-ups to collaborate on tasks, link with coaches and advisors, collect feedback from the groups we service, and to be more transparent to charity seed funders.”

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-21T11:00:55.100Z · score: 2 (2 votes) · EA · GW

Better description of a grantmaker's scope: the 'problem-skill intersections' they focus on evaluating. Staff of funds should share these with other larger funders, and publish summaries of them on their websites.

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-21T09:45:49.488Z · score: 9 (3 votes) · EA · GW

Been messaging with Brendon and others. I thought I'd copy-paste the – hopefully – non-inflammatory, non-personal parts of the considerations I last wrote about, so we can continue having collaborative truth-seeking discussions on those here as well.

To Brendon

I would clearly keep stating that you’re focused on funding early start-ups in the pilot/testing stages who are working with clearly delineated minimum viable target groups.

That cuts out a bunch of funding categories, like funding AI safety researchers, funding biotech work, or funding entire established national EA groups, and I think that's good! (actually [...], the [...] from EA Netherlands might not like me saying that... anyway)

Those are things people at EA Grants, EA Community Building Grants, EA Funds or OpenPhil (of course!) might be focused on right now.

The Community Building Grants programme has some definite problems in the limited time its staff have to assess and give feedback to national and regional EA organisers, and in its restrictive career-plan-change criteria. Harri from CEA and I had a productive conversation about [that] [...] But in my opinion, funding by the Angel Group should focus on specific projects for specific target groups by the organisers. I think national and local group members should play a more active role in sharing feedback on how much the organisers' work has helped them come to better-reflected decisions for doing good and stick to them – and in offering funding to extend the organisers' runway. Which I hope makes clear what kind of area I see the crowdfunding platform Heroes & Friends coming in.”

And in the WhatsApp group exploring that crowdfunding platform:

On specialisation between funders

@[...], I think it's important for funding platforms and grantmakers to clearly communicate, in a few paragraphs, what they're specialised in.

Especially:

  • scope in terms of cause/skill intersections
  • bright spots (funding areas where their batting average is high)
  • blind spots (where they miss promising funding opportunities, i.e. false negatives)
  • traps (failure modes in how they conduct their processes)

[added later: To “traps”, I should also add failure modes that grantmakers could see other, less experienced funders running into (so a newcomer funder can plan e.g. a Skype call with the grantmaker around that)]

This is something most grantmakers in the EA community are doing a piss-poor job at right now, IMO (e.g. see our earlier Messenger exchange on the online communication of EA Funds).

There's a lot of progress to be made there. I expect that building consensus around funding scopes and specialisation will significantly reduce the distraction and fracturing of groups that we might each add to by scaling up the Angel Group or [possibly] collaborating with Heroes & Friends.

I’ve tried to clearly delineate with you guys what EA RELEASE (for lack of a better name for now) would be about.

Regarding the Angel Group, here is the suggestion I just shared with Brendon: [...]

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-21T08:10:08.176Z · score: 2 (2 votes) · EA · GW

Thanks, that clarifies a bunch of things for me.

I realise now I was actually confused by your sentence myself.

I took

Rather than hiding opportunities from other funders like venture capitalists in the for-profit world, I believe that EA funders such as EA Grants, BERI Grants...”

to mean

“EA Grants, BERI Grants, etc. should not hide opportunities from funders like VCs from the for profit sector”.

The rest of your article can be coherently read with that interpretation. To prevent that, I'd split it into shorter sentences:

“Venture capitalists in the for-profit sector hide investment opportunities from others for personal monetary gain. EA grantmakers have no such reason for hiding funding opportunities from other experienced funders. Therefore, ...

Or at the very least, make it “Rather than hiding opportunities from other funders like venture capitalists in the for-profit world DO, I believe that...”

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-20T23:18:40.068Z · score: 3 (3 votes) · EA · GW

John Maxwell wrote an analysis on your initial post of how most platform initiatives in the EA community seem to fail, and how the ones that did last seemed to result from a long stretch of consensus building (plus attentive refinement and execution, in my opinion). This was useful for me in considering that more deeply as an issue in coordinating funding in the EA community. It at least led me to take smaller, tentative steps in trying things out while incorporating the advice/goals/perspectives/needs of people with a deep understanding of aspects of the problem or a clear stake in using the final product.

https://forum.effectivealtruism.org/posts/io6yLz6GtF6kvXt99/ideas-for-improving-funding-for-individual-eas-ea-projects#48ReFmNG5Zf3yhwk9

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-20T22:51:11.857Z · score: 1 (1 votes) · EA · GW

Another question I'm curious about: has a grantmaker from an EA-affiliated organisation you've been in touch with been open to the idea of sourcing ideas or incorporating applications coming in through the Angel Group form? Or have they raised any worries or reservations you can pass on?

I think, for example, that a 'just-another-universal-protocol' worry would be very reasonable to have here. This is something I'm probably not helping with, since I'm exploring an idea for a crowdfunding + feedback-gathering platform for early-stage community entrepreneurs in the EA community to extend their runways (I've recently been in touch with Brendon on that).

To avoid that, I think we need to do the hard work of reaching out to involved parties and having many conversations to incorporate their most important considerations and start mutually useful collaborations. I.e. consensus building.

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-20T22:03:25.040Z · score: 7 (5 votes) · EA · GW

+1 Something I could imagine being the case is that people wanted to downvote after seeing this paragraph:

Rather than hiding opportunities from other funders like venture capitalists in the for-profit world, I believe that EA funders such as EA Grants, BERI Grants, and the EAF Fund should all use a shared central application so that each funder can discover and fund promising opportunities that they otherwise may not have encountered.

A possible concern of people who downvoted might be that if, e.g., a venture capital funder new to the EA community had free access to all applications within it, they might try to fund something complex like national effective altruism groups without understanding well how the organisers on the ground are communicating certain ideas (e.g. cause prioritisation for career planning). This might lead them to overconfidently fund initiatives that shouldn't be funded.

Jargon association spray: unilateralist's curse, reputational risks, founder effects, platform fragmentation, Schelling points.

But that's just a guess and I don't really know. I do share the sentiment that the option to downvote is too easy for people who pattern-match abstract EA ideas like that, instead of putting in the somewhat strenuous and vulnerable work of sharing their impressions and asking further in the comment section about how the platform concretely works.

@Brendon, I thought you tried to address possible risks of making applications available online in a previous post.

How do you think right now about how to address funder blindspots in built-up knowledge and evaluation frameworks – for both established EA grantmakers and new venture capitalist-style funders (who might have valuable for-profit start-up experience to build on)?

Comment by remmelt on CEA on community building, representativeness, and the EA Summit · 2018-08-15T14:36:30.015Z · score: 7 (9 votes) · EA · GW

What are some open questions that you’d like to get input on here (preferably of course from people who have enough background knowledge)?

This post reads to me like an explanation of why your current approach makes sense (which I find mostly convincing). I’d be interested in what assumptions you think should be tested the most here.

Comment by remmelt on Request for input on multiverse-wide superrationality (MSR) · 2018-08-14T01:23:03.839Z · score: 3 (3 votes) · EA · GW

Hey, a rough point on a doubt I have. Not sure if it's useful/novel.

Going through the mental processes of a utilitarian (roughly defined) will correlate with others making more utilitarian decisions as well (especially when they're similar in relevant personality traits and their past exposure to philosophical ideas).

For example, if you act less scope-insensitive, omission-bias-y, or ingroup-y, others will tend to do so as well. This includes edge cases – e.g. people who otherwise would have made decisions that roughly fall in the deontologist or virtue ethics bucket.

Therefore, for every moment you end up shutting off utilitarian-ish mental processes in favour of ones where you think you're doing moral trade (including hidden motivations, like rationalising acting on social proof or on discomfort with diverging from your peers), your multi-universal compatriots will do likewise (especially in similar contexts).
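
A toy expected-value sketch of that correlation point (the simple 'mirroring' model and all numbers are my own illustrative assumptions, not MSR's actual formalism):

```python
# Toy model: you plus N similar agents across contexts/universes. Each one
# mirrors whichever decision procedure you run with probability q (higher
# for agents similar in personality traits and exposure to ideas); the
# non-mirroring agents act the same either way, so they drop out.
N = 1000   # hypothetical number of sufficiently similar agents
q = 0.3    # hypothetical mirroring probability
gain = 1.0 # value of one utilitarian-ish decision over the alternative

# Running the utilitarian process swings not just your own decision but
# also the expected N*q mirrored decisions:
swing = (1 + N * q) * gain
print(swing)  # 301.0 -- each defection gets multiplied across correlated agents
```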

(In case it looks like I'm justifying being a staunch utilitarian here, I have a more nuanced anti-realism view mixed in with lots of uncertainty on what makes sense.)

Comment by remmelt on Open Thread #40 · 2018-07-18T21:39:56.058Z · score: 0 (0 votes) · EA · GW

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups in your view?

The example posts I gave are on the extreme end of the kind of granularity I'd personally like to see more of (I deliberately made them extra specific to make a clear case). I agree those kinds of posts tend to show up more in the Facebook groups (though the writing tends to be short there). Then there seems to be stuff in the middle that might not fit well anywhere.

I feel now that the sub-forum approach should be explored much more carefully than I did when I wrote the comment at the top. In my opinion, we (or rather, Marek :-) should definitely still run contained experiments on this because on our current platform it's too hard to gather around topics narrower than being generally interested in EA work (maybe even test a hybrid model that allows for crossover between the forum and the Facebook groups).

So I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

Thanks for your points!

Comment by remmelt on Open Thread #40 · 2018-07-17T10:58:44.346Z · score: 2 (2 votes) · EA · GW

Another problem would be if creating extra sub-forums resulted in people splitting their conversations up further between those and the Facebook and Google groups. Reminds me of the XKCD comic on the problem of creating a new universal standard.

I think you made a great point in your comment that people need to do 'intensive networking and find compromises' before attempting to establish new Schelling points.

Comment by remmelt on Open Thread #40 · 2018-07-17T10:32:57.474Z · score: 0 (0 votes) · EA · GW

Hmm, would you think Schelling points would still be destroyed if it were just clearer where people could meet to discuss certain specific topics, besides a 'common space' where people could post on topics relevant to many people?

I find the comment you link to really insightful, but I doubt whether it neatly applies here. Personally, I see a problem in that we should have more well-defined Schelling points as the community grows, yet currently the EA Forum is a vague place to go 'to read and write posts on EA'. Other places for gathering to talk about more specific topics are widely dispersed over the internet – they're both hard to find and disconnected from each other (i.e. it's hard to zoom in and out of topics, as well as to explore parallel topics that one can work on and discuss).

I think you're right that you don't want to accidentally kill off a communication platform that actually kind of works. So perhaps a way of dealing with this is to maintain the current EA Forum structure but also test giving groups of people the ability to start sub-forums where they can coordinate around more specific Schelling points on ethical views, problem areas, interventions, projects, roles, etc. – conversations that would add noise for others if held on the main forum instead.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-11T05:20:37.353Z · score: 1 (1 votes) · EA · GW

Hi @Naryan,

I’m glad that this is a more powerful tool for you.

And kudos for working things through from the foundations up! Personally, I still need to take a few hours with pen and paper to systematically work through the decision chain myself. A friend has been nudging me to do that. :-)

Gregory Lewis makes the argument above that some EAs are moving in the direction of working on long term future work and few are moving back out. I’m inclined to agree with him that they probably have good reasons for that.

I'd also love to see the results of some far mode vs. near mode questions put in the EA Survey, or perhaps sent out by Spencer Greenberg (I'm not sure if there's an existing psychological scale to gauge how much people are in each mode while working throughout the day) – and, of course, how they correlate with cause area preferences.

At EA Global London last year, Max Dalton explained to me how 'corrigibility' is one of the most important traits to look for in selecting people you want to work with, so credit to him. :-) My contribution here is adding the distinction that people often seem more corrigible at some levels than others, especially when they're new to the community.

(also, I love that sentence – “if the exploratory folks at the bottom raised evidence up the chain...”)

Comment by remmelt on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T04:52:41.802Z · score: 1 (1 votes) · EA · GW

Great! Cool to hear how you’re already making traction on this.

Perhaps EAWork.club has potential as a launch platform?

I’d also suggest emailing Kerry Vaughan from EA Grants to get his perspective. He’s quite entrepreneurial so probably receptive to hearing new ideas (e.g. he originally started EA Ventures, though that also seemed to take the traditional granting approach).

Let me know if I can be of use!

Comment by remmelt on Open Thread #40 · 2018-07-10T16:59:51.235Z · score: 0 (0 votes) · EA · GW

Wow, nice! Would love to learn more.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-10T12:19:26.215Z · score: 2 (2 votes) · EA · GW

First off, I was ambiguous in that paragraph about the level at which I actually thought decisions should be revised or radically altered. That is, in, say, the next 20 years, did I think OpenPhil should revise most of the charities they fund, most of the specific problems they fund, or their broad focus areas? I think I ended up just expressing a vague sense of 'they should change their decisions a lot if they put much more of the community's brainpower into analysing data from a granular level upwards'.

So I appreciate that you actually gave specific reasons for why you'd be surprised to see a new focus area being taken up by people in the EA community in the next 10 years! Your arguments make sense to me and I’m just going to take up your opinion here.

Interestingly, your interpretation that this is evidence that there shouldn't be a radical alteration in what causes we focus on can be seen as both an outside view and an inside view. It's an outside view in the sense that it weights the views of people who've decided to move in the direction of working on the long-term future. It's also an inside view in that it doesn't consider roughly what percentage of past cosmopolitan movements, whose members converged on working on a particular set of problems, were seen as wrong by their successors decades later (and perhaps judged to have been blinded by some of the social dynamics you mentioned: groupthink, information cascades and selection effects).

A historical example where this went wrong is how, in the 1920s, Bertrand Russell and other contemporary intelligentsia had positive views on communism and eugenics, which later failed in practice under Stalin's authoritarian regime and in Nazi Germany, respectively. Although I haven't done a survey of other historical movements (has anyone compiled such a list?), I still feel slightly more confident than you that we'll radically alter what we work on after 20 years if we make a concerted effort now to structure the community around enabling a significant portion of our 'members' (say 30%) to work together to gather, analyse and integrate data at each level (whatever that means).

It does seem that we share some intuitions (e.g. the arguments for valuing future generations similarly to current generations seem solid to me). I've made a quick list of research that could lead to fundamental changes in what we prioritise at various levels. I'd be curious to hear if any of these points has caused you to update any of your other intuitions:

Worldviews

  • more neuroscience and qualia research, possibly causing fundamental shifts in our views on how we feel and register experiences

  • research into how different humans trade off suffering and eudaimonia differently

  • a much more nuanced understanding of what psychological needs and cognitive processes lead to moral judgements (e.g. the effect of psychological distance on deontologist vs. consequentialist judgements and scope sensitivity)

Focus areas:

Global poverty

  • use of better metrics for wellbeing – e.g. life satisfaction scores and future use of real-time tracking of experiential well-being – that would result in certain interventions (e.g. in mental health) being ranked higher than others (e.g. malaria)

  • use of better approaches to estimate environmental interactions and indirect effects, like complexity science tools, which could result in more work being done on changing larger systems through leverage points

Existential risk

  • more research on how to avoid evolutionary/game-theoretical “Moloch” dynamics, instead of the current “Maxipok” focus on ensuring that future generations will live, hoping that they have more information to assess and deal with problems from there

  • for AI safety specifically, I could see a shift in focus from a single agent – produced out of, say, a lab – that presumably gets powerful enough to outflank all other agents, towards analysing systems of more similarly capable agents owned by wealthy individuals and coalitions that interact with each other (e.g. like Robin Hanson's work on Ems), or perhaps more research on how a single agent could be made out of specialised sub-agents representing the interests of various beings. I could also see a shift in focus to assessing and ensuring the welfare of sentient algorithms themselves.

Animal welfare

  • more research on assessing sentience, including that of certain insects, plants and colonial ciliates that do more complex information processing, leading to changed views on what species to target

  • shift to working on wild animal welfare and ecosystem design, with more focus on marine ecosystems

Community building

  • Some concepts like high-fidelity spreading of ideas and strongly valuing honesty and considerateness seem robust

  • However, you could see changes like emphasising the integration of local data, the use of (shared) decision-making algorithms and a shift away from local events and coffee chats to interactions on online (virtual) platforms

Comment by remmelt on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T07:26:33.470Z · score: 9 (9 votes) · EA · GW

I’m grateful that someone wrote this post. :-)

Personally, I find your proposal of fusing three models promising. It does sound difficult to get right, in terms of both technical web development and setting up processes that actually lead users to use the grant website as intended. It would probably require a lot of iterative testing, as well as in-person meetings with stakeholders (i.e. this looks like a 3-year project).

I’d be happy to dedicate 5 hours per week for the next 3 months to contribute to working it out further with key decision makers in the community. Feel free to PM me on Facebook if you’d like to discuss it further.

Here are some further thoughts on why the EA Grants structure has severe limitations:

My impression is that CEA staff have thoughtfully tried to streamline a traditional grant making approach (by, for example, keeping the application form short, deferring to organisations that have expertise in certain areas, and promising to respond in X weeks) but that they’re running up against the limitations of such a centralised system:

1) not enough evaluators specialised in certain causes and strategies who have the time to assess track records and dig into documents

2) a lack of iterated feedback between possible donors and project leaders (you answer many questions and then only hear about how CEA has interpreted your answers and what they think of you 2 months later)

Last year, I was particularly critical that little useful feedback was shared with applicants after they were turned down with a standard email. It's valuable to know why your funding request is denied – whether it is because CEA staff lack domain expertise or because of some inherent flaw in your approach that you should be aware of.

But applicants ended up having to take the initiative themselves to email CEA questions because CEA staff never got around to emailing some brief reasoning for their decisions to the majority of the 700ish applicants that applied. On CEA’s side there was also the risk of legal liability – that someone upset by their decision could sue them if a CEA staff member shared rough notes they made that could easily be misinterpreted. So if you’re lucky you receive some general remarks and can then schedule a Skype call to discuss those further.

Further, you might discover then that a few CEA staff members have rather vague models of why a particular class of funding opportunities should not be accepted (e.g. one CEA staff member was particularly hesitant about funding EA groups last year because it would make coordinating things like outreach [edit] and having credible projects branded as EA more difficult).

Finally, this becomes particularly troublesome when outside donors lean too heavily on CEA’s accept/deny decision (which I think happened at least once with EA Netherlands, the charity I’m working at). You basically have to explain to all future EA donors that you come into contact with why your promising start-up wasn’t judged to be impactful enough to fund by one of the most respected EA organisations.

I’d be interested in someone from the EA Grants team sharing their perspective on all this.

Comment by remmelt on Open Thread #40 · 2018-07-09T07:51:21.719Z · score: 0 (0 votes) · EA · GW

Thanks, done!

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-09T06:27:31.224Z · score: 0 (0 votes) · EA · GW

I've added some interesting links to the post on near vs. far mode thinking, which I found on LessWrong and Overcoming Bias.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-08T21:36:32.847Z · score: 0 (0 votes) · EA · GW

Hmm, so here are my thoughts on this:

1) I think you're right that the idea of going meta from the object level is known to many EAs. I'd argue, though, that the categorisations in the diagram are valuable because I don't know of any previous article where they've all been put together. For veteran EAs, they'll probably be obvious, but I still think it's useful to make the implicit explicit.

2) The idea of construal levels is useful here because of how thinking in far vs. near mode affects psychology. E.g. when people think in far mode they

  • have to ignore details, and tend to be less aware that those nuances actually exist

  • tend to associate other far-mode things with whatever they think of. E.g. Robin Hanson’s point that many sci-fi/futurism books (except, of course, Age of Em) focus on values and broad populations of beings that all look similar, and have blue book covers (i.e. sky, far away)

So this is why I think referring to construal levels adds value. Come to think of it, I should have mentioned this in the post somewhere. Also, my understanding of construal level theory is shoddy, so I'd love to hear the opinions of someone who's read more into it.

BTW, my sister mentioned that I could have made the post a lot more understandable for her if I had just started with 'Some considerations like X are more concrete and other considerations like Y are more abstract. Here are some considerations in between those.' Judging by that, I could definitely have written it more clearly.

Comment by remmelt on Open Thread #40 · 2018-07-08T20:24:24.633Z · score: 18 (18 votes) · EA · GW

The EA Forum Needs More Sub-Forums

EDIT: please go to the recent announcement post on the new EA Forum to comment

The traditional discussion forum has sub-forums and sub-sub-forums where people in communities can discuss areas that they're particularly interested in. The EA Forum doesn't have these, and this makes it hard to filter for what you're looking for.

On Facebook, on the other hand, there are hundreds of groups based around different cause areas, local groups and organisations, and subpopulations. There, it's also hard to start rigorous discussions around certain topics because many groups are inactive and poorly moderated.

Then there are lots of other small communication platforms launched by organisations that range in their accessibility, quality standards, and moderation. It all kind of works but it’s messy and hard to sort through.

It’s hard to start productive conversations on specialised niche topics with international people because

  • 1) Relevant people won’t find you easily within the mass of posts

  • 2) You’ll contribute to that mass and thus distract everyone else.

Perhaps this is a reason why some posts on specific topics only get a few comments even though the quality of the insights and writing seems high.

Examples of posts that we’re missing out on now:

  • Local group organiser Kate tried X career workshop format X times and found that it underperformed other formats

  • Private donor Bob dug into the documents of start-up vaccination charity X and wants to share preliminary findings with other donors in the global poverty space

  • Machine learning student Jenna would like to ask some specific questions on how the deep reinforcement learning algorithm of AlphaGo functions

  • The leader of animal welfare advocacy org X would like to share some local engagement statistics on vegan flyering and 3D headset demos before sending them off in a more polished form to ACE.

Interested in any other examples you have. :-)

What to do about it?

I don't have any clear solutions in mind for this (perhaps this could be made a key focus in the transition to using the forum architecture of LessWrong 2.0). I just want to plant a flag here that, given how much the community has grown versus 3 years ago, people should start specialising more in the work they do, and that our current platforms are woefully behind in facilitating discussions around that.

It would be impossible for one forum to handle all this adequately and it seems useful for people to experiment with different interfaces, communication processes and guidelines. Nevertheless, our current state seems far from optimal. I think some people should consider tracking down and paying for additional thoughtful, capable web developers to adjust the forum to our changing needs.

UPDATE: After reading @John Maxwell IV's comments below, I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-05T06:47:10.805Z · score: 0 (0 votes) · EA · GW

Changed it in the third paragraph. :-)

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T20:33:05.592Z · score: 1 (1 votes) · EA · GW

Hmm, I personally value say five people deeply understanding the model to be able to explore and criticise it over say a hundred people skimming through a tl;dr. This is why I didn’t write one (besides it being hard to summarise anything more than ‘construal levels matter – you should consider them in the interactions you have with others’, which I basically do in the first two paragraphs). I might be wrong of course because you’re the second person who suggested this.

This post might seem deceptively obvious. However, I put a lot of thinking into both refining categories and the connections between them and explaining them in a way that hopefully enables someone to master them intuitively if they take the time to actively engage with the text and diagrams. I probably did make a mistake by outlining both the model and its implications in the same post because it makes it unclear what it’s about and causes discussions here in the comment section to be more diffuse (Owen Cotton-Barratt mentioned this to me).

If someone prefers to not read the entire post, that’s fine. :-)

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T15:56:43.826Z · score: 2 (2 votes) · EA · GW

Hmm, I can’t think of a clear alternative to ‘V2ADC’ yet. Perhaps ‘decision chain’?

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T15:54:25.540Z · score: 1 (1 votes) · EA · GW

Hi Denise, can you give some examples of superfluous language? I tried to explain it as simply as possible (though sometimes jargon and links are needed to avoid having to explain concepts in long paragraphs) but I’m sure I still made it too complicated in places.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T06:57:57.519Z · score: 0 (0 votes) · EA · GW

I appreciate you mentioning this! It’s probably not a minor point because if taken seriously, it should make me a lot less worried about people in the community getting stuck in ideologies.

I admit I haven’t thought this through systematically. Let me mull over your arguments and come back to you here.

BTW, could you perhaps explain what you meant with the “There are other causes of an area...” sentence? I’m having trouble understanding that bit.

And with ‘on-reflection moral commitments’ do you mean considerations like population ethics and trade-offs between eudaimonia and suffering?

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T06:24:17.564Z · score: 0 (0 votes) · EA · GW

@Peter, any idea how EA Grants could be used as an intermediary here? (I did apply myself to EA Grants but I’m not expecting to cover the financial runway of myself or EAN for any longer than 6 months with that)

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-02T04:03:41.990Z · score: 0 (0 votes) · EA · GW

Good question... I haven't really thought about it, but if it's a £20,000+ donation, perhaps EA Netherlands could register with HMRC? https://www.givingwhatwecan.org/post/2014/06/tax-efficient-giving-guide-uk-donors/

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-01T21:38:50.890Z · score: 0 (0 votes) · EA · GW

Thanks for the pointers!

Would you see OODA loops translated to V2ADC as cycling up and down (parts of) the chain as quickly as possible?

I found this article on Marr's levels of analysis: http://blog.shakirm.com/2013/04/marrs-levels-of-analysis/ It seems like a useful way of guiding the creation of algorithms (I'd never heard of it before – I don't know much about coding or AI frameworks).

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-01T21:29:43.841Z · score: 0 (0 votes) · EA · GW

Ah, in this model I see 'effectiveness in executing actions according to values' as the result of lots of directed iteration improving understanding at lower construal levels over time (it reminds me of the OODA loop that Romeo mentions above; I'll also look into the 'levels of analysis' now). In my view, that doesn't require an extra factor.

Which meta-ethical stance do you think wouldn't fit into the model? I'm curious to hear your thoughts on where it fails to work.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-01T12:53:47.258Z · score: 0 (0 votes) · EA · GW

I'm happy to hear that it's useful for you. :-)

Could you clarify what you mean by agentive? The way I see it, at any of the levels from 'Values' to 'Actions', a person's position on the corrigibility scale could be so low as to be negative. But it's not an elegant or satisfactory way of modelling it (i.e. different ways of adjusting poorly to evidence could still lead to divergent results, from an extremely negative Unilateralist's Curse scenario to just sheer mediocrity).

Comment by remmelt on Announcing the second AI Safety Camp · 2018-06-17T12:10:44.186Z · score: 1 (1 votes) · EA · GW

If it would cost the same or less time to get funding via public grants and institutions, I would definitely agree (i.e. in filling in an application form, in the average number of applications that need to be submitted before the budget is covered, and in loss of time because of distractions and 'meddling' by unaligned funders).

Personally, I don't think this applies to AI Safety Camp at all though (i.e. my guess is that it would cost significantly more time than getting money from 'EA donors', which we would be better off spending on improving the camps) except perhaps in isolated cases that I have not found out about yet.

I'm also not going to spend the time to write up my thoughts in detail but here's a summary:

  • AI alignment is complicated – there's a big inferential gap in explaining to public grantmakers why this is worth funding (as well as difficulty making the case for how this is going to make them look good)
  • The AI Safety Camp is not a project of an academic institution, which gives us little credibility to other academic institutions who would be most capable of understanding the research we are building on
  • Tens of millions of dollars are being earmarked for AI alignment research right now by people in the EA community who are looking to spend that on promising projects run by reliable people. There seems to be a consensus that we need to work on finding talent to spend the money on (not on finding more outside funders).

Comment by remmelt on Announcing the second AI Safety Camp · 2018-06-15T16:58:37.053Z · score: 0 (0 votes) · EA · GW

I’ll answer this point since I happen to know.

  • Left-over funds from the previous camp were passed on
  • Greg Colbourn is willing to make a donation
  • The funding team just submitted an application for EA Grants' second round

The team does have plenty of back-up options for funding, so I personally don't expect financial difficulties (though it would be less than ideal, I think, if the Czech Association for Effective Altruism had to cover a budget deficit itself).

Comment by remmelt on Effective Advertising and Animal Charity Evaluators · 2018-06-15T06:38:18.894Z · score: 4 (4 votes) · EA · GW

Really appreciate you putting out your honest thinking behind the way you market recommended charities to people not involved in EA.

My amateur sense is that ACE is now striking the right balance between factual correctness and appeal/accessibility. My worry in the past was that ACE staff members were allowing image considerations to seep into the actual analysis they were doing (sidenote: I'd be interested in the extent to which ACE now uses Bayesian reasoning in their estimates, e.g. by adjusting impact estimates by how likely small-sample studies are to be false positives).

When someone is already committed to EA, it tends to become difficult for them to imagine what got them originally excited about effectiveness in helping others and what might motivate new people who are not part of the ‘early adopter crowd’. There is a reason why EA pitches to newcomers also tend to be simple, snappy and focus on one ‘identifiable victim’ before expanding across populations, probabilities and time (my point being that these principles also apply to ACE’s outreach). You cannot expect people to relate to abstract analysis and take action if they have not bridged that gap yet.

However, I hope that ACE’s stance on matching donations will cause other organisations in the effective animal advocacy community to follow their lead. The newsletter by Good Food Institute in December 2017 also had a misleading header saying ‘Twice your impact’. This is an easy thing to slip into when you are focused on raising money.

This was ACE’s marketing material that originally mentioned ‘double your impact’: https://animalcharityevaluators.org/blog/updated-charity-recommendations-december-2017/

I heard this might have been a mistake by less experienced communication staff members, as ACE is usually more careful (though it was concerning that outsiders had to mention it to someone working for ACE before internal Slack discussions started). You can find Marianne's and my original conversation on that below, which we passed on to ACE:

Marianne van der Werf: Animal Charity Evaluators has released their new charity recommendations!

[Link preview: Updated Charity Recommendations: December 2017 | Animal Charity Evaluators – animalcharityevaluators.org]

Remmelt Ellen: This statement is intellectually dishonest.🙁 "A generous donor will match donations to ACE’s Recommended Charity Fund, starting today. DONATE TO THE RECOMMENDED CHARITY FUND This means that you can double the impact of your donation from now through the end of the year by donating to our Recommended Charity Fund. We will distribute all of the funds raised through the end of the year to our recommended charities in January. You can find more details about the Fund, including how donations will be divided among charities, here."

Remmelt Ellen: http://benjaminrosshoffman.com/matching-donation-fundraisers-can-be-harmfully-dishonest/

Remmelt Ellen: I'm not happy with the way they've stated that. It doesn't make me feel as confident that they've shifted their marketing orientation towards more rigour.

Remmelt Ellen: Mind you, I'd still recommend donating to one of their recommended charities if you want to donate to prevent factory farming.

Marianne van der Werf: In general that's a good point, but in the case of ACE they're aware of the dishonesty of donation drives and make a point of only doing them when the money is not going to be donated anyway. https://animalcharityevaluators.org/about/background/faq/

Marianne van der Werf: ACE should probably mention it in their posts sometimes, because last year people thought less of ACE because of this as well.

Remmelt Ellen: Hmm, but even in this case 'double your impact' is a disingenuous claim to make. That donor would have made a donation to a charity anyway, and probably one in the factory farming space.

Therefore counterfactually-speaking, you can say that the donor probably wouldn't have donated to the recommended charity fund otherwise, not that another donor has doubled their impact.

Remmelt Ellen: "Your donation is being matched –> you've just doubled your impact" is a bold claim to make that's almost impossible to live up to – especially when made by a charity evaluator that should know better.

Remmelt Ellen: More on coordination matching and influence matching: https://blog.givewell.org/2011/12/15/why-you-shouldnt-let-donation-matching-affect-your-giving/

Marianne van der Werf: Good points Remmelt, you should share this conversation with ACE or ask them about their messaging in their upcoming Reddit AMA. I agree that the doubling your impact claim is overly simplistic. It would have been more accurate to just talk about doubling the donations and have people draw their own conclusions about how it influences their impact, because that also depends on people's personal values.

Comment by remmelt on The first AI Safety Camp & onwards · 2018-06-13T14:24:56.166Z · score: 1 (1 votes) · EA · GW

Thanks, yeah, perhaps we should have included that in the summary.

Personally, I was impressed by the commitment with which the researchers worked on their problems (and they generally stepped in when dishwashing and other chores needed doing). My sense is that the camp filled a 'gap in the market': a small, young group that's serious about AI alignment research wanted to work with others to develop their skills and start producing output.

Comment by remmelt on Reflections on community building in the Netherlands · 2018-03-20T13:03:56.859Z · score: 1 (1 votes) · EA · GW

Hi David, only just saw your comment (I wonder how I can turn on notifications for posts).

At the moment, 80,000 Hours have even closed applications for coaching. We also haven't been able to get a referral link set up through CEA Groups, who strongly recommended that we use successful 80K referrals as the key metric.

Most of our efforts right now are going into building a committed and active core community. For our monthly community events, we ask people registering to fill in the hours they spend on EA, the percentage of income they donate, and which cause area they would currently see themselves working on. Aside from that, we keep track of people we think belong to our core community based on multiple criteria, track gender and student/non-student diversity, and note down anecdotes of impactful decisions we might have helped others make and (new) projects that we've supported. This system is definitely not perfect, since we miss important data points, but I can consistently incorporate it into my work routine.

Perhaps we should have made a more concerted effort to refer people to 80K coaching. Instead, the more natural thing to do in conversation seemed to be to either discuss promising cause areas and career opportunities with a good fit, or to point to the career guide.

You raise a good point on replacement effects that I hadn't given thought to. I haven't made an estimate of the value of 80,000 Hours referrals and would be interested in seeing one made by someone else.

(also turned on sharing for See Remmelt's considerations)

Comment by remmelt on The Values-to-Actions Decision Chain: a rough model · 2018-03-02T18:18:36.226Z · score: 0 (0 votes) · EA · GW

Yeah, I more or less agree with your interpretations.

The number (as well as the scope) of decision levels is arbitrary, because they can be split. For example:

  • Values: meta-ethics, normative ethics
  • Epistemology: defining knowledge, approaches to acquiring it (Bayes, Occam's razor...), applications (scientific method, crucial considerations...)
  • Causes: the domains can be made as narrow or wide as seems useful for prioritising
  • Strategies: career path, business plan, theory of change...
  • Systems: organisational structure, workflow, to-do list...
  • Actions: execute intention ("talk with Jane"), actuate ("twitch vocal cords")

(Also, there are weird interdependencies here. E.g. if you change the cause area you work on, the career skills you acquired before might not be as effective there, so the multiplier changes. I'm assuming the levels tend to be fungible enough for the model to still be useful – see the toy sketch below.)
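
As a toy sketch of those interdependencies (all multiplier values are made up): treat each level as a multiplier on impact, so a change at one level re-scales the value of investments made at other levels:

```python
# Hypothetical per-level multipliers; impact is modelled as their product.
levels = {
    "values/epistemology": 1.0,
    "cause": 10.0,     # leverage of the chosen cause area
    "strategy": 2.0,   # e.g. career skills built for this cause
    "systems": 1.5,
    "actions": 0.8,    # execution quality
}

def impact(multipliers):
    total = 1.0
    for m in multipliers.values():
        total *= m
    return total

print(impact(levels))  # 24.0

# Interdependency: switching to a higher-leverage cause (10 -> 30) may halve
# the value of career skills built for the old one (2.0 -> 1.0).
levels["cause"] = 30.0
levels["strategy"] = 1.0
print(impact(levels))  # 36.0 -- still a win, but smaller than the naive 3x
```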

Your two categories of Prioritisation and Execution seem fitting. Perhaps some people lean more towards wanting to see concrete results, and others more towards wanting to know what results they want to get?

Does anyone disagree with the hypothesis that individuals – especially newcomers – in the international EA community tend to lean one way or the other in terms of attention spent and the rigour with which they make decisions?

Comment by remmelt on The Values-to-Actions Decision Chain: a rough model · 2018-03-02T17:19:34.672Z · score: 0 (0 votes) · EA · GW

Will do!

Comment by remmelt on The Values-to-Actions Decision Chain: a rough model · 2018-03-02T15:29:17.974Z · score: 0 (0 votes) · EA · GW

To clarify: by implying that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and the professor can learn more about organisational processes and personal effectiveness), I don't mean to say that they should both become generalists.

Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.

In a similar vein, I think it makes sense for CEA's Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events and for Local Effective Altruism Network to help local groups get active and provide them with ICT support.

However, I can think of 6 past instances where it seems that either CEA or LEAN could have potentially avoided making a mistake by incorporating the thinking of the other party at decision levels where it was stronger.

Comment by remmelt on Announcing Effective Altruism Community Building Grants · 2018-02-23T12:02:16.084Z · score: 4 (4 votes) · EA · GW

On EA Netherlands: a major reason why we chose to switch to part-time is that we had to look for other income sources (i.e. two of us were working full-time and didn't manage to raise enough funding to cover our basic living costs).

Comment by remmelt on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-18T09:30:04.609Z · score: 3 (3 votes) · EA · GW

Just want to say I value that this topic is now openly discussed and considered. A few 'bad apples' (or, to put it in more nuanced terms, people who are trying to get their sexual desires/needs met without sufficiently considering the needs and feelings of the other person) in our community can kill off the open, supportive and trusting atmosphere I often experience myself.

An intuition I wanted to bring up: if we slam down too hard on the topic of rape, this might create a taboo in the other direction, where it's hard to discuss a possible incident with someone who instigated it because of the shame and social punishment associated with that.

I don't have much experience here, but here's a thought: many milder forms of harassment in the EA community could plausibly arise from males having poor social awareness and encountering difficulty and frustration trying to date one of the few girls they come into contact with (this seems the most common case to me, but there are others, as you mentioned).

Setting out 'bright line' rules would still help them gauge when they're going too far. However, this is only one tool, and a rather crude one at that (since it reacts to incidents on the extreme end of the spectrum as they happen, rather than preventing those on the lower end).

Personally, I want to work on empowering fellow men to be more emotionally involved and understanding and to seek out and build healthy relationships (such as by hosting circling sessions and practicing non-violent communication together).

Noting that I've scanned through your post and haven't gone through your arguments extensively enough.

Comment by remmelt on Reflections on community building in the Netherlands · 2017-11-06T15:41:32.178Z · score: 1 (1 votes) · EA · GW

Thanks for letting us know. I'm glad to hear that long write-ups like this one can give useful insights to other organisers.

Comment by remmelt on Rob Wiblin's top EconTalk episode recommendations · 2017-10-20T20:05:40.539Z · score: 1 (1 votes) · EA · GW

Love it. You've made an offer that's hard to refuse.

Comment by remmelt on Effective Altruism as a Market in Moral Goods – Introduction · 2017-09-18T03:38:47.477Z · score: 1 (1 votes) · EA · GW

Update: this series is going to take months – not weeks – to finish.

After further reflection, I'm doing another major overhaul of my draft. I've also had to commit to more other work this past month than expected.

(In other words: planning fallacy.)

Comment by remmelt on How We Banned Fur in Berkeley · 2017-08-12T09:40:09.086Z · score: 0 (0 votes) · EA · GW

Fair point. You seem to be opening up the way by showing larger organisations what's possible.

Having said that, can't you connect these two? Can't you, on one end, take practical steps to show that real legal progress is possible, while at the other end showing the big picture that you're working towards and why?

Thinking big around a shared goal could increase the cohesion and ambition of the idealistic people you're connected with and work with on each new project from now on (this reminds me of Elon Musk's leadership approach – though he unfortunately doesn't seem to care much about animal issues).