Posts

Some blindspots in rationality and effective altruism 2021-03-21T18:01:47.188Z
A parable of brightspots and blindspots 2021-03-21T17:31:54.549Z
Are we actually improving decision-making? 2021-02-04T23:57:49.382Z
Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research 2020-11-26T16:39:58.647Z
Consider paying me (or another entrepreneur) to create services for effective altruism 2020-11-03T20:50:57.689Z
The Values-to-Actions Decision Chain: a lens for improving coordination 2018-06-30T09:26:44.363Z
The first AI Safety Camp & onwards 2018-06-07T18:49:06.249Z
The Values-to-Actions Decision Chain: a rough model 2018-03-02T14:54:30.803Z
Proposal for the AI Safety Research Camp 2018-02-02T08:07:31.869Z
Reflections on community building in the Netherlands 2017-11-02T22:01:17.922Z
Effective Altruism as a Market in Moral Goods – Introduction 2017-08-06T02:29:28.683Z
Testing an EA network-building strategy in the Netherlands 2017-07-03T11:28:33.393Z

Comments

Comment by remmelt on 8+ productivity tools for movement building · 2021-04-20T06:43:44.013Z · EA · GW

re: Using Asana Business at EA Hub Teams.

You can sign up here (I see EA PH already did): https://is.gd/asanaforea

It’s also possible to ask for a fully functional team for free there, but you need at least one paid member account (€220/year) to set up new teams, custom fields, and app integrations like Slack.

Migration is arrangeable with Asana staff (note that some formatting and conversations get lost). You basically need to arrange with me to add my email to your old space, and include it in this form: https://asanaops.wufoo.com/forms/asana-migration-request/

Comment by remmelt on Some blindspots in rationality and effective altruism · 2021-03-26T12:30:03.440Z · EA · GW

I'm actually interested to hear your thoughts! 


Do throw them here, or grab a moment to call :)

Comment by remmelt on A parable of brightspots and blindspots · 2021-03-24T08:09:37.007Z · EA · GW

Ah, good to know that my fumbled attempts at narrating were helpful! :)

I’m personally up for the audio tag. Let me see if I can create one for this post.

Comment by remmelt on Some blindspots in rationality and effective altruism · 2021-03-21T18:43:20.262Z · EA · GW

See also the LessWrong Forum:

Comment 1 (on my portrayal of Eliezer's portrayal of AGI):

... saying 'later overturned' makes it sound like there is consensus, not that people still have the same disagreement they've had 13 years ago ...

Comment 2:

On 3, I'd like to see EA take sensitivity analysis more seriously.

Comment 3:

I found it immensely refreshing to see valid criticisms of EA.
...
I think I disagree on the degree to which EA folks expect results to be universal and generalizable ...

Comment 4:

The way I've tended to think about these sorts of questions is to see a difference between the global portfolio of approaches, and our personal portfolio of approaches ...

Comment by remmelt on Are we actually improving decision-making? · 2021-02-23T22:03:51.096Z · EA · GW

I'm interested in your two cents on any societal problems where a lot of work has been done by specialists who are not directly involved in the effective altruism community.

Comment by remmelt on Are we actually improving decision-making? · 2021-02-23T20:39:05.074Z · EA · GW

Thank you too for the input, Vicky. This gives me a more grounded sense of what EA initiators with experience in policy are up to and thinking. Previously, I corresponded with volunteers of Dutch EA policy initiatives as well as staff from various established EA orgs that coordinate and build up particular professional fields. Your comment and the post by your working group made me feel less pessimistic about a lack of open consultation and consensus-building in IIDM initiatives.

I like your framing of a two-way learning process. I think it's sometimes useful in conversations to let go of one's own theory of impact, and ask the other person why they're doing what they do and what they find relevant.

I had missed your excellent write-up, so I just read through it! It seems carefully written, makes nuanced distinctions, and considers the complexity in the many implicit interactions involved. I found it useful.

Comment by remmelt on How much time should EAs spend engaging with other EAs vs with people outside of EA? · 2021-02-19T14:10:39.192Z · EA · GW

Thank you for starting a thread on this open question! Just reading through it now.

I wrote some quick thoughts on the value of getting a diversity of views here.

Comment by remmelt on Are we actually improving decision-making? · 2021-02-16T15:39:57.927Z · EA · GW

Thank you too for your interesting counterarguments. Some scattered ideas on each: 

1. Your first point seems most applicable at the early stages of forming a community.
What do you think of the further argument that there are diminishing marginal returns to finding additional people who share your goals, and corresponding marginal increases in the risk of not being connected with people who will bring up important alternative approaches and views for doing good?

This is a rough intuition I have, but I don't know how to trade off the former against the latter right now. For example, someone I spoke with on a call mentioned that giving a lecture for a computer science department will lead to more of the audience members visiting your EA meetups than if you hold it for the anthropology department. There are trade-offs here and in other areas of outreach, but it's not clear to me how to weigh up the considerations.

My sense is that as our community continues to grow (an assumption), with fewer remaining STEM hubs still to reach out to, (re-)connecting with people who are more likely to take up similar goals will yield lower returns. In the early days of EA, Will MacAskill and Toby Ord prioritised gathering a core group of collaborators to motivate each other and divide up work, as well as reaching out further to amenable others in their Oxford circles. Currently, my impression is that in many English-speaking countries, and particularly within professional disciplines that are (or used to be) prerequisites for pursuing 80K priority career paths, it is now quite doable for someone to find such collaborators.

Given that we're surrounded more by like-minded others whom we can easily gather with, it seems more likely that we'll drift into forming a collective echo chamber that misses or filters out important outside perspectives. My guess is that EA initiators now get encouraged more to pursue actions that the EAs they meet or respect will re-affirm as 'high impact'. On the other hand, perhaps they are also surrounded by more comrades who are able to observe their concrete actions, comprehend their intentions more fully, and give faster and more nitty-gritty feedback.


2. On your second point, this made me change my mind somewhat! Although it may be harder to identify specific perspectives that we are missing if we're surrounded by fewer non-EAs, we can still identify the people who are missing from the community. You mentioned that we're missing conservatives, and this post on diversity also mentioned social conservatives. Spotting a gap in cognitively diverse people ('social conservatives') seems relatively easy to do in, say, the EA Survey, while spotting a gap in important perspectives may be much harder if you're not already in contact with the people who have them (my skimpy attempts for social conservatives: 'more respect for the hidden value of traditions, work more incrementally, build up more stable and lasting collaborations, more wary of centralised decision-making without skin in the game').

Anthropologists were also given as an example by 80K, since they understood the burial practices that were causing Ebola to spread. I think the framing here of anthropologists having specialised skills that could turn out to be useful, or the framing of whether you can have enough impact pursuing a career in anthropology (the latter mentioned by Buck Shlegeris), misses another important takeaway for EA though: if you seek advice from specialists who have spent a lot of time observing and thinking differently about an area similar to the one you're trying to influence through your work, they might be able to uncover what's missing in your current approach to doing good.

I'd also be curious to read other plausible examples of professionals whose views we're missing!


3. Your third point on EAs being pretty open-minded does resonate with me, and I agree that this should make us less worried about EAs insulating themselves from different outside opinions. My personal impression is that EAs tend to be most open-minded in conversations they have inside the community, but are still interested in and open to having conversations with strangers they're not used to talking with.

My guess is that EAs still come across as kinda rigid to outsiders in terms of the relevant dimensions they're willing to explore whole-heartedly in public conversations about making a positive difference. I like this post on discussing EA with people outside the community, for example, but its starting point seemed to be to look for opportunities to bring up, with unwitting outsiders, altruistic causes that EAs have already thought about for a long time (in other words, it starts from our own turf, where we can assume to have an informational advantage). As another example, a few responses by EA leaders that I've seen to outside criticisms of tenets of EA appeared somewhat defensive and stuck in views already held inside EA (though often the referred-to criticism seemed to mischaracterise EA views, making it hard to steelman that criticism and wring out any insights).

The EA community reminds me a lot of the international Humanist community I was involved in for three years: I hung out with people who were open-minded, kind, pondered a lot, and were willing to embrace wacky science or philosophy-based beliefs. But they were also kinda stuck on expounding on certain issues they advocated for in public (e.g. atheism, the right to free speech, euthanasia, living a well-reflected life, scepticism and Science, leaving money in your will for Humanist organisations). There was even a question of whether you were Humanist enough – one moment I remember feeling a little uncomfortable about was when the leader of the youth org I was part of decided to remove the transhumanists from the member list because they were 'obviously' not Humanist. From the inside, Humanism felt like a big influential thing, but really we were a big fish in a little pond.

–> Would be curious to hear where your impressions of EAs you've met differ here!

Over the last few years, messaging from EA does seem to have become less preachy, i.e. describing and allowing space for more nuanced and diverse opinions, and relying less on big simplified claims that lack grounding in how the world actually works (e.g. claims about an intervention's effectiveness based on a metric from one study, a 100x donation effectiveness multiplier for low-income countries, leafletting costing cents per chicken saved, or that once an AI is generally capable enough it will recursively improve its own design and go FOOM).

But I do worry about EAs now no longer needing to interact as much with outsiders who think about problems in fundamentally different ways. Aspiring EAs do seem to make more detailed, better-grounded, and less dogmatic arguments. But for the most part, we still appear to map and assess the landscape using similar styles of thinking as before. For example, posts recommended in the community that I've read often base their conclusions on explicit arguments that are elegant and ordered. These arguments tend to build on mutually exclusive categorisations, generalise across large physical spaces and timespans, and assume underlying structures of causation that are static. Authors figure out general scenarios and assess the relative likelihood of each, yet often don't disentangle the concrete meanings and implications of their statements, nor scope out the external validity of the models they use in their writing (granted, the latter are much harder to convey). Posts usually don't cover much of the variation across concrete contexts, the relations and overlap between various plausible perspectives, or the changes in underlying dynamics (my posts aren't exempt here!). Furthermore, the environments that people involved in EA were exposed to in the past (e.g. Western academia, coding, engineering), and from which they now generalise certain arguments, are usually very different from the contexts in which the beneficiaries whose lives they're trying to improve reside (e.g. villages in low-income countries, animals in factory farms, other cultural and ethnic groups that will be affected by technological developments).


4. That brings me to your fourth point. What you proposed resonates with my personal experience in trying to talk with people from other groups ('EAs in the past put in an effort to reach out to other groups of people and were generally disappointed because the combination of epistemic care and deliberative altruistic ambition seems really rare'). I haven't asked others about their attempts at kindling constructive dialogues, but I wouldn't be surprised if many of those who did also came away somewhat disappointed by a seeming lack of altruistic or epistemic care.

So I think this is definitely a valid point, but I still want to suggest some nuances:

  • We could be more explicit, deliberate, and targeted about seeking out and listening intently to specialists who genuinely work towards making a positive difference in their field, yet hold possibly insightful views and approaches to doing good that draw from different life experience. I think we can do more than open-mindedly explore unrelated groups in our own spare time. I also think it's not necessary for a specialist to take a cosmopolitan and/or consequentialist altruistic angle to their work for us to learn from them, as long as they are somehow incentivised to convey or track true aspects of the world in their work.
  • If we stick tightly to comparing outsiders' thinking against markers used in EA to gauge, say, good judgement, scientific literacy, or good cause prioritisation, then we're kinda missing the point IMO. Naturally, most outside professionals are not going to measure up against standards that EAs have promoted amongst themselves and worked hard to get better at for years. A more pertinent reason to reach out, IMO, is to listen to people who think differently, notice other relevant aspects of the fields they're working in, and can help us uncover our blindspots.

Comment by remmelt on Possible gaps in the EA community · 2021-02-02T15:38:30.236Z · EA · GW

An impression after skimming this post (not well thought through; do point out what I missed):
Some of the tentative project ideas listed are oriented around extending EA's reach via new like-minded groups who will share our values and strategies. 

Sentences that seemed to be supporting this line of thinking:

... making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion.

...So it’s important for us to find ways to make sure that wherever they work, people can still have a sense of being often around people with similar values and who help them figure out their path.

...One problem with area specific community building is that in order to be taken seriously and know enough to be helpful to people, you might yourself need to be doing object level work in the area.


I'm unsure how much I misinterpreted specific project ideas listed in this post. 

Leaving that aside, I generally worry about encouraging further outreach focused on creating like-minded groups of influential professionals (and even more about encouraging initiators to focus their efforts on making such groups look 'prestigious'). I expect that will discourage efforts in outreach to integrate importantly diverse backgrounds, approaches, and views. I would expect EA field builders to involve fewer of the specialists who developed their expertise inside a dissimilar context, take alternative approaches to understanding and navigating their field, or hold insightful but different views that complement views held in EA.

A field builder who simply aims to increase EA's influence over decisions made by professionals will, I think, tend by default to select for and socially reward members who line up with their values/cause prioritisation/strategy. Inversely, taking the tactic of connecting EAs who like to talk with other EAs climbing similar career ladders leads those gathered to agree with and approve of each other more for exerting influence in stereotypically EA ways. Such group dynamics can lead to a kind of impoverished homogenisation of common knowledge and values.

I imagine a corporate, academic, or bureaucratic decision maker getting involved in an EA-aligned group and consulting their collaborators on how to make an impact. Given that they're surrounded by like-minded EAs, they may not become aware of shared blindspots in EA. Correspondingly, they'd less often reach out and listen attentively to outside stakeholders who could enlighten them about those blindspots.

Decision makers who lose touch with other important perspectives will no longer spot certain mistakes they might make, and may therefore become (even more) overconfident about certain ways of making an impact on the world. This could lead to more 'superficially EA-good' large-scale decisions that actually negatively impact persons far removed from us.


In my opinion, it would be awesome if 

  1. along with existing field-building initiatives focused on expanding the influence of EA thought,
  2. we encourage corresponding efforts to really get in touch and build shared understandings with specialised stakeholders (particularly, those with skin in the game) who have taken up complementary approaches and views to doing good in their field.

Some reasons:

  • Dedicated EA field builders seem to naturally incline towards type 1 efforts. Therefore, it's extra important for strategic thinkers and leaders in the EA community to be deliberate and clear about encouraging type 2 efforts in the projects they advise.
  • Type 1 is challenging to implement, but EA field builders have been making steady progress in scaling up initiatives there (e.g. staff at Founder's Pledge, Global Priorities Institute, Center for Human-Compatible AI).
  • Type 2 seems much more challenging intellectually. It requires us to build bridges that allow EA and non-EA-identifying organisations to complement each other: complex, nuanced perspectives that let us traverse between general EA principles and arguments on one side, and the contextual awareness and domain-specific know-how (amongst others) of experienced specialists on the other. I have difficulty recalling EA initiatives that were explicitly intended to coordinate type 2 efforts.


At this stage, I would honestly prefer that field builders start paying much deeper attention to type 2 before they go out changing other people's minds and the world. I'm not sure how much credence to put in this being a better course of action, though. I have little experience reaching out to influential professionals myself, and it also feels like I'm speculating here on big implications in a way that may be unnecessary or exaggerated. I'd be curious to hear more nuanced arguments from an experienced field builder.

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-11T08:50:20.638Z · EA · GW

Got it!

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-11T08:45:38.373Z · EA · GW

This sounds reasonable to me, actually. The rest of the post was about making a specific case for funding my entrepreneurial work, rather than expounding on the widespread bottlenecks entrepreneurs seem to face in getting funded for doing good work and developing it further.

I started writing a 10-page draft to try to more detachedly analyse work by and interactions between entrepreneurs and funders.

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-11T08:36:18.072Z · EA · GW

This does resonate with me. There are quite a few projects that I worked on making happen behind the scenes that I wouldn't want to stamp my name on. I've talked with others who mentioned similar bottlenecks (e.g. GoodGrowth people in 2019).

Thank you for your good wishes, JJ!

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-10T17:13:10.975Z · EA · GW

Thank you for the clarification! This makes a lot of sense.

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-05T15:56:55.401Z · EA · GW

The Forum's moderators have had some discussion in the past on whether job listings should ever appear on Frontpage; it was a close call, but we think a few such posts once in a while is okay. However, I expect that there are many more potential job applicants than potential grantmakers on the Forum, so posts like this are less likely to be relevant to a random reader than a job listing. 


Could you disambiguate some terms here? I see I misread this paragraph before. I'm more confused now about what you're specifically saying. 

E.g. 
- were you trying to say that there are 'many more potential grantees than grantmakers' (clearly true, though this post was aimed more at smaller funders looking for an argued case)?

- or were you implying I was posting as a job applicant (that doesn't seem right, as explained two comments above)?

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-05T15:20:38.200Z · EA · GW

Hence, posts like this should be "Personal Blog" unless they involve discussion of other topics as well.

Most of the introductory paragraphs of this post were pointing to more general gaps in entrepreneurial support (i.e. other topics).

To be clear, I think the decision you made may have been reasonable. However, this post doesn't match the criteria you stated for setting posts as Personal Blog. I think for moderation to be credible here, the criteria and underlying reasons must be clear to readers.

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-05T15:00:22.272Z · EA · GW

Thank you for sharing your reasoning.

I sympathise with the concern that a post like mine could trigger a series of other people basically posting open requests for jobs. From a purely pragmatic standpoint, I get where the Forum's moderators are coming from – drawing the line before it becomes a slippery slope.

Note that this post does not seem to be a job listing (edit: I misread that – I'm confused about what you actually mean by posts of this type), unless you really stretch the meaning of that category.

  • I'm not soliciting a job (i.e. a paid position of regular employment).
  • The 'I'm an entrepreneur' framing could be changed into a 'proposal for a small incubator of new EA services' framing while changing very little of the content (I'd just have added the name of my sole proprietorship). I chose not to do that because I don't like hiding behind an official entity to get paid when it obscures what's actually going on, gives off an impression that I have less conflict of interest, and reduces my skin in the game.

I would appreciate it if Forum moderators worked out specifically how to deal with edge cases like this one. It would set a bad precedent if your decision convinces readers that for future write-ups they should come up with a snazzy new project name and sprinkle in opaque orgspeak.

Note: Rupert is a friend of mine, but I wasn't aware that he had read this post before he posted his.

Comment by remmelt on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-04T16:51:59.808Z · EA · GW

Interesting! Let me watch it.

Comment by remmelt on Donor Lottery Debrief · 2020-08-10T19:14:15.421Z · EA · GW
Looking for more projects like these

AI Safety Camp is seeking funding to professionalise management.

Feel free to email me at remmelt[at]effectiefaltruisme.nl. Happy to share an overview of past participant outcomes + sanity checks, and a new strategy draft.

Comment by remmelt on Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) · 2019-09-08T12:57:33.109Z · EA · GW

First off, I really appreciate the straight-shooting conclusion that 'QC is unlikely to be helpful to address current bottlenecks in AI alignment', even though you both spent many hours looking into it.


Second, I'm curious to hear any thoughts on the amateur speculation I threw at Pablo in a chat at the last AI Safety Camp:

Would quantum computing afford the mechanisms for improved prediction of the actions that correlated agents would decide on?

As a toy model, I'm imagining hundreds of almost-homogeneous reinforcement learning agents within a narrow distribution of slightly divergent maps of the state space, probability weightings/policies, and environmental inputs. Would current quantum computing techniques, assuming the hardware to run them on is available, be able to more quickly/precisely derive the percentages of those agents at, say, State1 that would take Action1, Action2, or Action3?

I have a broad, vague sense that if that set-up works out, you could leverage it to create a 'regulator agent' for monitoring some 'multi-agent system' composed of quasi-homogeneous autonomous 'selfish agents' (e.g. each negotiating on behalf of its respective human interest group) that has a meaningful influence on our physical environment. This regulator would interface directly with a few of the selfish agents. If agents in that subset are about to select Action1, the regulator predicts what % of the other, slightly divergent algorithms would also decide on Action1. If the regulator forecasts that an excessive number of Action1s will be taken – leading to reduced rewards to, or robustness of, the collective (e.g. a Tragedy of the Commons case of overutilisation of local resources) – it would override that decision by commanding a compensating number of the agents to instead select the collectively-conservative Action2.

That's a lot of jargon, half of which I feel I have little clue about... But curious to read any arguments you have on how this would (not) work.
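
To make the toy model concrete, here's a minimal classical sketch of that regulator set-up (a hedged illustration only: all names, numbers, and thresholds are hypothetical, and it sidesteps the quantum part entirely):

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 300   # quasi-homogeneous 'selfish agents'
N_ACTIONS = 3    # indices 0..2 stand for Action1..Action3
CAP = 0.4        # hypothetical tolerable fraction taking Action1

# Each agent's policy at State1: shared base preferences plus a small divergence
base_logits = np.array([1.0, 0.5, 0.0])
logits = base_logits + rng.normal(scale=0.3, size=(N_AGENTS, N_ACTIONS))
policies = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Each agent independently samples the action it intends to take
intended = np.array([rng.choice(N_ACTIONS, p=p) for p in policies])

# The regulator interfaces with a small subset and extrapolates the Action1 fraction
subset = rng.choice(N_AGENTS, size=30, replace=False)
predicted_frac = (intended[subset] == 0).mean()

if predicted_frac > CAP:
    # Override a compensating number of Action1-takers to the conservative Action2
    action1_takers = np.flatnonzero(intended == 0)
    n_flip = min(int((predicted_frac - CAP) * N_AGENTS), len(action1_takers))
    flip = rng.choice(action1_takers, size=n_flip, replace=False)
    intended[flip] = 1

print("final Action1 fraction:", (intended == 0).mean())
```

The open question above would then be whether quantum techniques could estimate that fraction across many slightly divergent policies faster or more precisely than this kind of classical sampling.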

Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-08-31T12:55:04.284Z · EA · GW

Thanks for clarifying the 'similar wins' point. You seem to imply that these coaching/software/ops support/etc. wins compound on each other.


On the shared Asana space, I'll keep checking in with the EA Netherlands/Rethink/CE coaches working with EA groups/charity start-ups on how time-(in)efficient/(in)convenient it is to keep track of team tasks with the leaders they are mentoring.

From my limited experience, a shared coaching GDoc already works reasonably well for that:

  • Upside: Everyone uses GDocs. It's easy to co-edit texts and comment-assign questions and tasks that pop up in your email inbox. On the other hand, the attentional burden of one party switching over to the other's task management system to track, say, biweekly check-ins over half a year doesn't seem worth it.
  • Downsides: GDocs easily suck away the first ten minutes of a call when you need to update each other on two weeks of progress in one swoop. The set-up also relies on the leader and coach actively reminding each other to check medium-term outcomes and key results. This 'update/remind factor' felt like a demotivating drag in my coach and accountability check-ins – all with people who I didn't see day to day and therefore lacked a shared context with.

The way you arrange the format together seems key here. Also, you'd want to be careful about sharing internal data – for Asana, I recommend that leaders invite coaches comment-only to specific projects, rather than to entire teams.


On other software or services, curious if any 'done deals' come to mind for you.


Regarding your forecasting platform, I'm curious whether anything comes to mind on how forecasts there might fit with EA project planning over the coming years.


Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-08-20T17:28:58.646Z · EA · GW

Good to hear your thoughts on this!

What do you mean here by a ‘portfolio of similar wins’? Any specific example of such a portfolio that comes to mind?

Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-08-15T11:07:56.697Z · EA · GW

Hey, I never finished my reply to you.

First of all, I thought those 4 items were a useful list of what you referred to as infrastructure for small projects.


On offering Asana Business:

  • We are now offering Asana Business teams at a 90% discount (€120/team/month) vs. the usual minimum cost. This is our cost price, since we're using a 50% Nonprofit discount and assigning one organisation member slot per team facilitator. The lower cost is a clear benefit to the organisations and groups that decide to move to Asana Business.
  • I'm working with ops staff from RethinkCharity and Charity Entrepreneurship (and possibly Charity Science Health) to move to a shared Asana space called 'Teams for Effective Altruism' (along with EA Netherlands and EA Cambridge). Not set in stone but all preparations are now in place.
  • This doesn't yet answer your question of why I particularly thought of Asana. Here are some reasons to work on building up a shared Asana Business space together:
    • Online task management is useful: I think at least half of the EA teams of >5 people running small projects would benefit from tracking their tasks online for remote check-ins – for instance, when it's hard to travel to, say, a meeting room once a week, or when you need to reliably carry out nitty-gritty ops tasks where it feels burdensome for a manager to ask 'Have you done this and this and this?'. At EA Netherlands, a lot of the project delays and wasted time seemed to emerge along the lines of someone feeling unclear about what was expected or endorsed of their role, not being aware of update X, waiting for person Y to confirm, or forgetting/having to remind someone about task Z. It seems common sense to avoid that by creating a 'single place of truth' where team members can place requests and update each other on progress asynchronously.
    • Facilitate onboarding of teams: Leaders of small projects seem to experience difficulty in getting volunteers to build the habit of updating online tasks in the first months, even if most would agree on reflection that it's worth the switching cost. In surveying EA regional groups in northern Europe, the one reason organisers kept mentioning to me for why they weren't using task software was that they had previously excitedly tried to track tasks online, but volunteers forgot to update their tasks a few weeks later. Both EA Netherlands and EA Oxford flopped twice at using Trello. My sense is they would more likely than not have succeeded if someone had taken up the role of facilitating team members to use the platform in ways that were useful to them, and reminding them to update their tasks weeks down the line. Part of the Asana team application process is assigning a facilitator, whom I can guide from within our shared space.
    • Asana Business is top-notch: I personally find Asana Business' interface intuitive and well-ordered, striking a balance between powerful features and simplicity. External reviews rate Asana around 4-4.5 out of 5. Having said that, some EA teams seem to have different work styles or preferences that fit other platforms better – I've heard of people using Trello, Nozbe, Notion, GSheets, or even just NextCloud's basic task board.
    • Asana is an unexploited Schelling point for collaboration: A surprising number of established EA organisations use Asana: the Centre for Effective Altruism, RethinkCharity, Founder's Pledge, the Centre for Human-compatible AI, Charity Entrepreneurship, Charity Science Health, 80,000 Hours(?), and probably a few I haven't discovered yet. That's an implicit endorsement of Asana's usefulness for 'EA work' (bias: Dustin Moskovitz co-founded it). Asana staff are now making their way into the Enterprise market, and intend to develop features that enable users to smoothly start collaborations across increasingly large organisational units (Teams...Divisions...Organisations).
    • Passing on institutional knowledge to start-ups: In a call I had with a key Asana manager, he offhandedly mentioned how it would be great to enable organisations to coordinate across spaces. I don't think we have to wait for that, though. EA Hub staff could offer Asana teams to local EA groups in our shared space, coach them by commenting on projects and scheduling check-in calls, and stay up to date on what's going on. Likewise, Charity Entrepreneurship could offer Asana teams to the charities they incubate and continue checking in with and supporting the start-up leaders coming out of the incubation program. People could also share project templates (e.g. conference/retreat organiser checklists), share standardised data from custom fields, etc.
    • So of your infrastructure suggestions, that seems to cover operations support and coaching/advice.
    • To make sharing the space work, we'd have to close off short-term human error/malice failure modes as well as tend to the long-term culture we create. A downside of connecting software up to discuss work smoothly is that it also becomes easier for damaging ideas and intentions to cross boundaries, for people to jostle for admin positions, and for a resulting homogeneous culture to be built on fragile assumptions about how the world works and what the systematic approaches to improving it are.


Comment by remmelt on What new EA project or org would you like to see created in the next 3 years? · 2019-06-26T12:35:05.294Z · EA · GW

@Ozzie, I'm curious what kinds of infrastructures you think would be worth offering.

(I'm exploring offering Asana Business + coaching to entrepreneurs starting on projects)


Comment by remmelt on What are some neglected practices that EA community builders can use to give feedback on each other's events, projects, and efforts? · 2019-06-07T06:37:44.867Z · EA · GW

I also find the idea of recording meetings interesting. I'd worry about this not working out because of bandwidth limitations – asking an overseas organiser to watch passively for an hour and then collect their thoughts on what happened seems to ask more of them than interacting with, querying, and coaching in the moment.

I wonder if there are any ways to circumvent that bottleneck. Perhaps calling in the person through Zoom and letting them respond at some scheduled moment helps somewhat? Any other ideas?

Another way for giving feedback might be to give people access to your task planning. I just emailed Asana about whether they’d be willing to offer a free Business/Enterprise team for people to run projects on.

Text: “We would like to pilot one Asana Business team for community start-ups to collaborate on tasks, link with coaches and advisors, collect feedback from the groups we service, and to be more transparent to charity seed funders.”

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-21T11:00:55.100Z · EA · GW

A better description of a grantmaker's scope: the 'problem-skills intersections' they focus on evaluating. Staff of funds should share these with other, larger funders, and publish summaries of them on their websites.

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-21T09:45:49.488Z · EA · GW

I've been messaging with Brendon and others. I thought I'd copy-paste the – hopefully – non-inflammatory/personal parts of the considerations I wrote about last, so we can continue having collaborative truth-seeking discussions on those here as well.

To Brendon

I would clearly keep stating that you’re focused on funding early start-ups in the pilot/testing stages who are working with clearly delineated minimum viable target groups.

That cuts out a bunch of funding categories like funding AI safety researchers, funding biotech work, or funding entire established national EA groups, and I think that's good! (actually [...], the [...] from EA Netherlands might not like me saying that...anyway)

Those are things people at EA Grants, EA Community Building Grants, EA Funds or OpenPhil (of course!) might be focused on right now.

The Community Building Grants have some definite problems: the limited time staff have to assess and give feedback to national and regional EA organisers, and the restrictive career-plan-change criteria. Harri from CEA and I had a productive conversation about [that] [...] But in my opinion, funding by the Angel Group there should focus on specific projects for specific target groups by the organisers. I think national and local group members should play a more active role in sharing feedback on how much the organisers’ work has helped them come to better-reflected decisions for doing good and stick to them – and in offering funding to extend the organisers’ runway. Which I hope makes it clear where I see the crowdfunding platform Heroes & Friends coming in.”

And in the WhatsApp group exploring that crowdfunding platform:

On specialisation between funders

@[...], I think it’s important for funding platforms and grantmakers to clearly communicate, in a few paragraphs, what they’re specialised in.

Especially:

  • scope in terms of cause/skill intersections
  • brightspots (funding areas where their batting average is high)
  • blindspots (where they miss promising funding opportunities, i.e. false negatives)
  • traps (failure modes of how they conduct their processes)

[added later: To “traps”, I should also add failure modes that grantmakers could see other, less experienced funders running into (so a newcomer funder can plan e.g. a Skype call with the grantmaker around that)]

This is something most grantmakers in the EA community are doing a piss-poor job at right now IMO (e.g. see our earlier Messenger exchange on the online communication of EA Funds).

There’s a lot of progress to be made there. I expect that building consensus around funding scopes and specialisation will significantly reduce the distraction and fracturing of groups that we might each add to by scaling up the Angel Group or [possibly] collaborating with Heroes & Friends.

I’ve tried to clearly delineate with you guys what EA RELEASE (for lack of a better name for now) would be about.

Regarding the Angel Group, here is the suggestion I just shared with Brendon: [...]

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-21T08:10:08.176Z · EA · GW

Thanks, that clarifies a bunch of things for me.

I realise now I was actually confused by your sentence myself.

I took

“Rather than hiding opportunities from other funders like venture capitalists in the for-profit world, I believe that EA funders such as EA Grants, BERI Grants...”

to mean

“EA Grants, BERI Grants, etc. should not hide opportunities from funders like VCs from the for-profit sector”.

The rest of your article can be coherently read with that interpretation. To prevent that misreading, I’d split it into shorter sentences:

“Venture capitalists in the for-profit sector hide investment opportunities from others for personal monetary gain. EA grantmakers have no such reason for hiding funding opportunities from other experienced funders. Therefore, ...

Or at the very least, make it “Rather than hiding opportunities from other funders like venture capitalists in the for-profit world DO, I believe that...”

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-20T23:18:40.068Z · EA · GW

John Maxwell wrote an analysis on your initial post about how most platform initiatives in the EA community seem to fail, and how the ones that did last seemed to result from a long stretch of consensus building (+ attentive refinement and execution, in my opinion). This was useful for me in considering that more deeply as an issue in coordinating funding in the EA community. It at least led me to take smaller, more tentative steps towards trying things out while incorporating the advice/goals/perspectives/needs of people with a deep understanding of aspects of the final product or a clear stake in using it.

https://forum.effectivealtruism.org/posts/io6yLz6GtF6kvXt99/ideas-for-improving-funding-for-individual-eas-ea-projects#48ReFmNG5Zf3yhwk9

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-20T22:51:11.857Z · EA · GW

Another question I’m curious about: has a grantmaker from an EA-affiliated organisation you’ve been in touch with been open to the idea of sourcing ideas or incorporating applications coming in through the Angel Group form? Or have they voiced any worries or reservations you can share?

I think, for example, that a ‘just-another-universal-protocol’ worry would be very reasonable to have here. This is something I’m probably not helping with, since I’m exploring an idea for a crowdfunding + feedback-gathering platform for early-stage community entrepreneurs in the EA community to extend their runways (I’ve recently been in touch with Brendon on that).

To avoid that, I think we need to do the hard work of reaching out to involved parties and having many conversations to incorporate their most important considerations and start mutually useful collaborations. I.e. consensus building.

Comment by remmelt on EA Angel Group: Applications Open for Personal/Project Funding · 2019-03-20T22:03:25.040Z · EA · GW

+1 Something I could imagine being the case is that people wanted to downvote after seeing this paragraph:

Rather than hiding opportunities from other funders like venture capitalists in the for-profit world, I believe that EA funders such as EA Grants, BERI Grants, and the EAF Fund should all use a shared central application so that each funder can discover and fund promising opportunities that they otherwise may not have encountered.

A possible concern of people who downvoted: if, say, a venture capital funder new to the EA community had free access to all applications within it, they might try to fund something complex like national effective altruism groups without understanding well how the organisers on the ground are communicating certain ideas (e.g. cause prioritisation for career planning). This might end up leading them to overconfidently fund initiatives that shouldn’t be funded.

Jargon association spray: unilateralist’s curse, reputational risks, founder effects, platform fragmentation, Schelling points.

But that’s just a guess and I don’t really know. I do share in the sentiment that the option to downvote something is too easy for people who pattern-match abstract EA ideas like that, instead of putting in the somewhat strenuous and vulnerable work of sharing their impressions and asking further in the comment section about how the platform concretely works.

@Brendon, I thought you tried to address the possible risks of making applications available online in a previous post.

What’s your current thinking on how to address funder blindspots in built-up knowledge and evaluation frameworks – for both established EA grantmakers and new venture capitalist-style funders (who might have valuable for-profit start-up experience to build on)?

Comment by remmelt on CEA on community building, representativeness, and the EA Summit · 2018-08-15T14:36:30.015Z · EA · GW

What are some open questions that you’d like to get input on here (preferably of course from people who have enough background knowledge)?

This post reads to me like an explanation of why your current approach makes sense (which I find mostly convincing). I’d be interested in what assumptions you think should be tested the most here.

Comment by remmelt on Request for input on multiverse-wide superrationality (MSR) · 2018-08-14T01:23:03.839Z · EA · GW

Hey, a rough point on a doubt I have. Not sure if it's useful/novel.

Going through the mental processes of a utilitarian (roughly defined) will correlate with others making more utilitarian decisions as well (especially when they're similar in relevant personality traits and their past exposure to philosophical ideas).

For example, if you act less scope-insensitive, omission-bias-y, or ingroup-y, others will tend to do so as well. This includes edge cases – e.g. people who otherwise would have made decisions that roughly fall in the deontologist or virtue ethics bucket.

Therefore, for every moment you end up shutting off utilitarian-ish mental processes in favour of ones where you think you're doing moral trade (including hidden motivations like rationalising acting from social proof or discomfort in diverging from your peers), your multi-universal compatriots will do likewise (especially in similar contexts).

(In case it looks like I'm justifying being a staunch utilitarian here, I have a more nuanced anti-realism view mixed in with lots of uncertainty on what makes sense.)

Comment by remmelt on Open Thread #40 · 2018-07-18T21:39:56.058Z · EA · GW

Could you give a few reasons why the EA Forum seems to work better than the Facebook groups, in your view?

The example posts I gave are on the extreme end of the kind of granularity I'd personally like to see more of (I deliberately made them extra specific to make a clear case). I agree those kinds of posts tend to show up more in the Facebook groups (though the writing tends to be short there). Then there seems to be stuff in the middle that might not fit well anywhere.

I feel now that the sub-forum approach should be explored much more carefully than I did when I wrote the comment at the top. In my opinion, we (or rather, Marek :-) should definitely still run contained experiments on this because on our current platform it's too hard to gather around topics narrower than being generally interested in EA work (maybe even test a hybrid model that allows for crossover between the forum and the Facebook groups).

So I've changed my mind from a naive 'we should overhaul the entire system' view to 'we should tinker with it in ways we expect would facilitate better interactions, and then see if they actually do' view.

Thanks for your points!

Comment by remmelt on Open Thread #40 · 2018-07-17T10:58:44.346Z · EA · GW

Another problem would be when creating extra sub-forums would result in people splitting their conversations up more between those and the Facebook and Google groups. Reminds me of the XKCD comic on the problem of creating a new universal standard.

I think you made a great point in your comment that people need to do ‘intensive networking and find compromises’ before attempting to establish new Schelling points.

Comment by remmelt on Open Thread #40 · 2018-07-17T10:32:57.474Z · EA · GW

Hmm, would you think Schelling points would still be destroyed if it were just clearer where people could meet to discuss certain specific topics, besides a ‘common space’ where people could post on topics that are relevant to many people?

I find the comment you link to really insightful, but I doubt whether it neatly applies here. Personally, the problem I see is that we should have more well-defined Schelling points as the community grows, yet currently the EA Forum is a vague place to go ‘to read and write posts on EA’. Other places for gathering to talk about more specific topics are widely dispersed over the internet – they’re both hard to find and disconnected from each other (i.e. it’s hard to zoom in and out of topics, as well as explore parallel topics that one can work on and discuss).

I think you’re right that you don’t want to accidentally kill off a communication platform that actually kind of works. So perhaps a way of dealing with this is to maintain the current EA Forum structure, but also test giving groups of people the ability to start sub-forums where they can coordinate around more specific Schelling points on ethical views, problem areas, interventions, projects, roles, etc. – conversations that would add noise for others if held on the main forum instead.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-11T05:20:37.353Z · EA · GW

Hi @Naryan,

I’m glad that this is a more powerful tool for you.

And kudos for working things out from the foundations up! Personally, I still need to take a few hours with pen and paper to systematically work through the decision chain myself. A friend has been nudging me to do that. :-)

Gregory Lewis makes the argument above that some EAs are moving in the direction of working on long-term future work and few are moving back out. I’m inclined to agree with him that they probably have good reasons for that.

I’d also love to see the results of some far mode vs. near mode questions put in the EA Survey, or perhaps sent out by Spencer Greenberg (not sure if there’s an existing psychological scale to gauge how much people are in each mode when working throughout the day). And of course, how they correlate with cause area preferences.

Max Dalton explained to me at EA Global London last year how ‘corrigibility’ is one of the most important traits to look for when selecting people you want to work with, so credit to him. :-) My contribution here is adding the distinction that people often seem more corrigible at some levels than others, especially when they’re new to the community.

(also, I love that sentence – “if the exploratory folks at the bottom raised evidence up the chain...”)

Comment by remmelt on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T04:52:41.802Z · EA · GW

Great! Cool to hear how you’re already gaining traction on this.

Perhaps EAWork.club has potential as a launch platform?

I’d also suggest emailing Kerry Vaughan from EA Grants to get his perspective. He’s quite entrepreneurial so probably receptive to hearing new ideas (e.g. he originally started EA Ventures, though that also seemed to take the traditional granting approach).

Let me know if I can be of use!

Comment by remmelt on Open Thread #40 · 2018-07-10T16:59:51.235Z · EA · GW

Wow, nice! Would love to learn more.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-10T12:19:26.215Z · EA · GW

First off, I was ambiguous in that paragraph about the level at which I actually thought decisions should be revised or radically altered. I.e. in, say, the next 20 years, did I think OpenPhil should revise most of the charities they fund, most of the specific problems they fund, or their broad focus areas? I think I ended up just expressing a vague sense of ‘they should change their decisions a lot if they put much more of the community’s brainpower into analysing data from a granular level upwards’.

So I appreciate that you actually gave specific reasons for why you'd be surprised to see a new focus area being taken up by people in the EA community in the next 10 years! Your arguments make sense to me and I'm just going to adopt your opinion here.

Interestingly, your interpretation that this is evidence that there shouldn't be a radical alteration in what causes we focus on can be seen as both an outside view and an inside view. It's an outside view in the sense that it weights the views of people who've decided to move in the direction of working on the long-term future. It's also an inside view in that it doesn't consider roughly what percentage of past cosmopolitan movements, where members converged on working on a particular set of problems, were seen as wrong by their successors decades later (and perhaps judged to have been blinded by some of the social dynamics you mentioned: groupthink, information cascades, and selection effects).

A historical example where this went wrong is how in the 1920s Bertrand Russell and other contemporary intelligentsia had positive views on communism and eugenics, which later failed in practice under Stalin's authoritarian regime and in Nazi Germany, respectively. Although I haven't done a survey of other historical movements (has anyone compiled such a list?), I still feel slightly more confident than you that we'll radically alter what we work on after 20 years if we make a concerted effort now to structure the community around enabling a significant portion of our 'members' (say 30%) to work together to gather, analyse, and integrate data at each level (whatever that means).

It does seem that we share some intuitions (e.g. the arguments for valuing future generations similarly to current generations seem solid to me). I've made a quick list of research that could lead to fundamental changes in what we prioritise at various levels. I'd be curious to hear if any of these points causes you to update any of your other intuitions:

Worldviews

  • more neuroscience and qualia research, possibly causing fundamental shifts in our views on how we feel and register experiences

  • research into how different humans trade off suffering and eudaimonia

  • a much more nuanced understanding of what psychological needs and cognitive processes lead to moral judgements (e.g. the effect of psychological distance on deontologist vs. consequentialist judgements, and scope sensitivity)

Focus areas:

Global poverty

  • use of better metrics for wellbeing – e.g. life satisfaction scores and future use of real-time tracking of experiential well-being – that would result in certain interventions (e.g. in mental health) being ranked higher than others (e.g. malaria)

  • use of better approaches to estimate environmental interactions and indirect effects, like complexity science tools, which could result in more work being done on changing larger systems through leverage points

Existential risk

  • more research on how to avoid evolutionary/game-theoretical "Moloch" dynamics, instead of the current "Maxipok" focus on ensuring that future generations will live, hoping that they have more information to assess and deal with problems from there

  • for AI safety specifically, I could see a shift in focus from a single agent (produced out of, say, a lab) that presumably becomes powerful enough to outflank all other agents, towards analysing systems of more similarly capable agents owned by wealthy individuals and coalitions that interact with each other (e.g. like Robin Hanson's work on Ems), or perhaps more research on how a single agent could be made out of specialised sub-agents representing the interests of various beings. I could also see a shift in focus to assessing and ensuring the welfare of sentient algorithms themselves.

Animal welfare

  • more research on assessing sentience, including that of certain insects, plants and colonial ciliates that do more complex information processing, leading to changed views on what species to target

  • shift to working on wild animal welfare and ecosystem design, with more focus on marine ecosystems

Community building

  • Some concepts like high-fidelity spreading of ideas and strongly valuing honesty and considerateness seem robust

  • However, you could see changes like emphasising the integration of local data, the use of (shared) decision-making algorithms, and a shift away from local events and coffee chats towards interactions on online (virtual) platforms

Comment by remmelt on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T07:26:33.470Z · EA · GW

I’m grateful that someone wrote this post. :-)

Personally, I find your proposal of fusing three models promising. It does sound difficult to get right, in terms of both the technical web development and setting up the processes that actually enable users to use the grant website as intended. It would probably require a lot of iterative testing as well as in-person meetings with stakeholders (i.e. this looks like a 3-year project).

I’d be happy to dedicate 5 hours per week for the next 3 months to contribute to working it out further with key decision makers in the community. Feel free to PM me on Facebook if you’d like to discuss it further.

Here are some further thoughts on why the EA Grants structure has severe limitations

My impression is that CEA staff have thoughtfully tried to streamline a traditional grant-making approach (by, for example, keeping the application form short, deferring to organisations with expertise in certain areas, and promising to respond within X weeks), but that they're running up against the limitations of such a centralised system:

1) not enough evaluators specialised in certain causes and strategies who have the time to assess track records and dig into documents

2) a lack of iterated feedback between potential donors and project leaders (you answer many questions and then, two months later, only hear how CEA has interpreted your answers and what they think of you)

Last year, I was particularly critical of how little useful feedback was shared with applicants who were turned down with a standard email. It's valuable to know why your funding request was denied – whether it's because CEA staff lack domain expertise or because of some inherent flaw in your approach that you should be aware of.

But applicants ended up having to take the initiative themselves and email CEA questions, because CEA staff never got around to emailing brief reasoning for their decisions to the majority of the roughly 700 applicants. On CEA's side there was also the risk of legal liability: someone upset by a decision could sue if a CEA staff member shared rough notes that could easily be misinterpreted. So if you're lucky, you receive some general remarks and can then schedule a Skype call to discuss those further.

Further, you might then discover that a few CEA staff members have rather vague models of why a particular class of funding opportunities should not be accepted (e.g. one CEA staff member was particularly hesitant about funding EA groups last year because it would make coordinating things like outreach [edit] and having credible projects branded as EA more difficult).

Finally, this becomes particularly troublesome when outside donors lean too heavily on CEA's accept/deny decisions (which I think happened at least once with EA Netherlands, the charity I work at). You basically have to explain to every future EA donor you come into contact with why your promising start-up was judged not impactful enough to fund by one of the most respected EA organisations.

I’d be interested in someone from the EA Grants team sharing their perspective on all this.

Comment by remmelt on Open Thread #40 · 2018-07-09T07:51:21.719Z · EA · GW

Thanks, done!

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-09T06:27:31.224Z · EA · GW

I've added some interesting links to the post on near vs. far mode thinking, which I found on LessWrong and Overcoming Bias.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-08T21:36:32.847Z · EA · GW

Hmm, so here are my thoughts on this:

1) I think you're right that the idea of going meta from the object level is known to many EAs. I'd argue, though, that the categorisations in the diagram are valuable because I don't know of any previous article where they've all been put together. For veteran EAs they'll probably be obvious, but I still think it's useful to make the implicit explicit.

2) The idea of construal levels is useful here because of how thinking in far vs. near mode affects psychology. E.g. when people think in far mode they

  • have to ignore details, and tend to be less aware that those nuances actually exist

  • tend to associate other far-mode things with whatever they think of. E.g. Robin Hanson’s point that many sci-fi/futurism books (except, of course, Age of Em) focus on values and broad populations of beings that all look similar, and have blue book covers (i.e. sky, far away)

So this is why I think referring to construal levels adds value. Come to think of it, I should have mentioned this in the post somewhere. Also, my understanding of construal level theory is shoddy, so I'd love to hear the opinions of someone who's read more into it.

BTW, my sister mentioned that I could have made the post a lot more understandable for her if I had just started with 'Some considerations like X are more concrete and other considerations like Y are more abstract. Here are some considerations in between those.' Judging by that, I could definitely have written it more clearly.

Comment by remmelt on Open Thread #40 · 2018-07-08T20:24:24.633Z · EA · GW

The EA Forum Needs More Sub-Forums

EDIT: please go to the recent announcement post on the new EA Forum to comment

The traditional discussion forum has sub-forums and sub-sub-forums where people in communities can discuss the areas they're particularly interested in. The EA Forum doesn't have these, and this makes it hard to filter for what you're looking for.

On Facebook, on the other hand, there are hundreds of groups based around different cause areas, local groups and organisations, and subpopulations. There it's also hard to start rigorous discussions around certain topics because many groups are inactive and poorly moderated.

Then there are lots of other small communication platforms, launched by various organisations, that range in their accessibility, quality standards, and moderation. It all kind of works, but it's messy and hard to sort through.

It's hard to start productive conversations on specialised niche topics with people internationally because

  • Relevant people won't find you easily within the mass of posts

  • You'll contribute to that mass and thus distract everyone else.

Perhaps this is a reason why some posts on specific topics get only a few comments even though the quality of the insights and writing seems high.

Examples of posts that we’re missing out on now:

  • Local group organiser Kate tried career workshop format X several times and found that it underperformed other formats

  • Private donor Bob dug into the documents of start-up vaccination charity X and wants to share preliminary findings with other donors in the global poverty space

  • Machine learning student Jenna would like to ask some specific questions on how the deep reinforcement learning algorithm of AlphaGo functions

  • The leader of animal welfare advocacy org X would like to share some local engagement statistics on vegan flyering and 3D headset demos before sending them off in a more polished form to ACE.

Interested in any other examples you have. :-)

What to do about it?

I don't have any clear solutions in mind for this (perhaps it could be made a key focus in the transition to the forum architecture of LessWrong 2.0). I just want to plant a flag here: given how much the community has grown compared to 3 years ago, people should start specialising more in the work they do, and our current platforms are woefully behind in facilitating discussions around that.

It would be impossible for one forum to handle all of this adequately, and it seems useful for people to experiment with different interfaces, communication processes, and guidelines. Nevertheless, our current state seems far from optimal. I think some people should consider tracking down, and paying for, additional thoughtful, capable web developers to adjust the forum to our changing needs.

UPDATE: After reading @John Maxwell IV's comments below, I've changed my mind from a naive 'we should overhaul the entire system' view to a 'we should tinker with it in ways we expect to facilitate better interactions, and then see if they actually do' view.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-05T06:47:10.805Z · EA · GW

Changed it in the third paragraph. :-)

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T20:33:05.592Z · EA · GW

Hmm, I personally value, say, five people understanding the model deeply enough to explore and criticise it over, say, a hundred people skimming through a tl;dr. This is why I didn't write one (besides it being hard to summarise anything beyond 'construal levels matter – you should consider them in your interactions with others', which I basically do in the first two paragraphs). I might be wrong, of course, because you're the second person who has suggested this.

This post might seem deceptively obvious. However, I put a lot of thought into refining the categories and the connections between them, and into explaining them in a way that hopefully enables someone to master them intuitively if they take the time to actively engage with the text and diagrams. I probably did make a mistake by outlining both the model and its implications in the same post, because that makes it unclear what the post is about and causes the discussion here in the comment section to be more diffuse (Owen Cotton-Barratt mentioned this to me).

If someone prefers to not read the entire post, that’s fine. :-)

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T15:56:43.826Z · EA · GW

Hmm, I can’t think of a clear alternative to ‘V2ADC’ yet. Perhaps ‘decision chain’?

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T15:54:25.540Z · EA · GW

Hi Denise, could you give some examples of the superfluous language? I tried to explain things as simply as possible (though sometimes jargon and links are needed to avoid explaining concepts in long paragraphs), but I'm sure I still made it too complicated in places.

Comment by remmelt on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T06:57:57.519Z · EA · GW

I appreciate you mentioning this! It's probably not a minor point, because if taken seriously it should make me a lot less worried about people in the community getting stuck in ideologies.

I admit I haven’t thought this through systematically. Let me mull over your arguments and come back to you here.

BTW, could you perhaps explain what you meant with the “There are other causes of an area...” sentence? I’m having trouble understanding that bit.

And with ‘on-reflection moral commitments’ do you mean considerations like population ethics and trade-offs between eudaimonia and suffering?