I gave feedback about another subforum (software engineering) via email but thought it might be useful to add as a comment here about this one.
The 'subforum' seems to be more of a chatroom than a traditional subforum with separate threads, where posts can be voted on and promoted to the main forum.
It will be harder to create an internal subculture and to grow as a community.
This makes it much less likely to succeed, and a failure could discourage future subforum experiments, even though this isn't a true subforum.
Another benefit of designing it as a traditional subforum is that there are already lots of related EA Forum posts that could be placed in it, letting people engage with content straight away.
I think with a more professionally focused model there wouldn't be asks for people to switch professions, but for there to be larger career networks that can support EA by helping people shift directly into impactful roles using the skills they already have.
There are also lots of experienced people who have heard of EA over the last decade but haven't got more involved, as there has been less outreach directed towards them and fewer ways for them to engage.
It would be easier, and have much quicker returns, to create more ways for people who are already on the edge of EA to connect with others interested in it.
There are maybe a few hundred people working in EA bubbles, but there are probably thousands of people who have been following along with EA for 3+ years who would be very happy to share their knowledge or get more involved with EA directly.
I also think that even if most people already agree, there are some people who haven't thought about the subject of this post and may change their behaviour after reading it. I have seen a few examples of this on Twitter and in person.
"Its main feature is a daily notification that encourages users to share a photo of themselves and their immediate surroundings given a randomly selected two-minute window every day. Critics noted its emphasis on authenticity, which some felt crossed the line into mundanity."
"As of July 2022, the app was estimated to have over 20 million global installs."
I think supporting friendships in a group can be useful, but this tends to be what most community organisers are already focusing on.
There are downsides, like being perceived as a friendship group, which make it harder for new people to get involved. Also, some of the most impactful people may not be looking for new friends, but for advice on where to donate, work, or volunteer their time.
I would suggest trying coaching first, as it will be much quicker to find out whether you enjoy it or find it impactful, compared to therapy, which could take years before you get a good sense of your personal fit.
80,000 Hours have a section in their career guide on exploration which might be useful here.
"Later in your career, if you’re genuinely unsure between two options, you might want to try the more ‘reversible’ one first. For instance, it’s easier to move from business to nonprofits than vice versa."
I think it would be similar to the opinions people have about which film you see at the cinema or which meal you order at a restaurant.
I don't see EA as trying to maximise all donations, just maximise the impact of donations set aside for effective giving. And the donation side is just part of the larger set of actions we can take when trying to do good.
"In the first post in this series, Climate and Lifestyle: policy matters, we discussed some of the most important lifestyle decisions for the climate and saw the effect that climate policy has on each person's potential impact. However, our analysis excluded one crucial lifestyle decision: donations to effective climate non-profits.
The potential impact of effective donations
Emissions per person vary considerably even across rich countries: the average American emits 18 tonnes of CO2 per year, whereas the average Swede emits only 7 tonnes. As a guiding rule, if you live in a rich country and live a typical lifestyle, then you probably emit between 5 and 20 tonnes of CO2 each year.
Our research suggests that Founders Pledge-recommended climate charities - the Clean Air Task Force and the Coalition for Rainforest Nations - have in the past averted a tonne of CO2 for less than $10 in expectation (i.e. after weighting the impact of the changes they worked for by the probability that the organisation actually made a difference), and potentially much less. Therefore, as Figure 1 shows, the expected impact of your personal donations is much larger than any of the lifestyle decisions discussed in the first post:
Figure 1: Climate impact of lifestyle decisions compared to effective donations (tonnes of CO2)
This being said, it is very important to choose carefully who you donate to. Many organisations offer surprisingly cheap carbon offsets, promising to abate a tonne of CO2 with high confidence for $1 or less. These figures are not realistic. The incentives are not set up well for organisations to provide reliable carbon emissions reductions. There is limited oversight of offsetting organisations' work, so they have an incentive to offer attractive price points without actually reducing emissions. Thus, choosing instead to donate to effective policy and research organisations is crucial. The impact is plausibly 100 times greater.
The limited ambition of offsetting
This raises the question: does donating offset the harm you do by emitting? We argue that looking at donating through the lens of offsetting is doubly flawed. Most importantly, it limits people's ambitions. People ask: how can I undo the effect of my own emissions? Instead, they should ask: how can I have the biggest possible impact on climate change?
If we only donate to offset our personal emissions and no further, then we hugely restrict our potential impact. A typical person emits 5-20 tonnes of CO2 each year. So if you assume that the most effective climate charities can abate a tonne of CO2 for less than $10, then you can offset your emissions for just $200 per year. Our recommended charities operate on budgets in the low millions but have led policy campaigns that have had a huge effect on global climate policy. Many people in wealthy countries could give more than $200 to support them, and thereby have enormous leverage.
Ethics and offsetting
Secondly, donating to effective climate charities almost never, in any meaningful sense, offsets the harm you do by emitting.
In sum, it is not usually feasible to truly offset the harm from your past emissions. So, on rights-based views, donations to climate charities do not offset any harm you have done by emitting. On consequentialist theories, offsetting is always irrelevant, and we should instead try to do the most good with our donations. If stopping climate change turns out to be the best way to do good, then donations should be a top priority for the climate-conscious individual. For a more detailed treatment of these and more considerations, refer to our full research report on Climate and Lifestyle."
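The offsetting arithmetic in the quoted excerpt can be sketched in a few lines. This is a rough illustration using only the figures the excerpt gives (5–20 tonnes of CO2 per person per year, and under $10 per tonne averted in expectation); the per-tonne figure is the source's estimate, not a guarantee:

```python
# Rough sketch of the offsetting arithmetic from the quoted excerpt.
# Assumed figures (all from the excerpt): typical rich-country emissions
# of 5-20 tonnes CO2/year, and an estimated <$10 per tonne averted
# (in expectation) for the recommended charities.

COST_PER_TONNE_USD = 10          # upper-bound estimate from the excerpt
EMISSIONS_RANGE_TONNES = (5, 20) # typical rich-country range

low, high = EMISSIONS_RANGE_TONNES
offset_cost_low = low * COST_PER_TONNE_USD    # 5 t  -> $50/year
offset_cost_high = high * COST_PER_TONNE_USD  # 20 t -> $200/year

print(f"Offsetting a typical footprint: ${offset_cost_low}-${offset_cost_high} per year")
```

This is the excerpt's point about limited ambition: fully offsetting a typical footprint costs on the order of $200 per year, so anyone able to give more than that can go well beyond offsetting.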
I'm not so sure about the religious tendencies, at least not in comparison to other communities or social movements, especially if the people who seem most interested in AI alignment are the ones least interested in tithing/special diets.
I'm not sure how clear it is that it's much better for people to hear about EA at university, especially given there is a lot more outreach and onboarding at the university level than for professionals.
When I started, I told myself that if each post got roughly 10 votes, then it was providing enough value to keep going. The last two months have been just below that, so interest seems to be declining, and the people who are interested can just sign up for the updates.
I don't know much about this topic, and funding in this area is most likely neglected, but I'm unsure how to think about the scale of this issue, or how to get a sense of whether it's getting better, worse, or staying roughly the same year to year.
I think for most people who hadn't heard of EA, it's very unlikely that they'll start searching for it online after hearing about it briefly on daytime TV. For those who have already heard about EA, it may just reinforce what they already think about it, some positive and some negative. Even the phrase "effective altruism" can be interpreted as arrogant if you don't spend some time explaining what you mean.
I prefer people to have high-value impressions when they come across EA, whether online or in person, rather than having more but less valuable touchpoints.
I think this would hit a few of your requirements: being meta, not earning to give, involving software knowledge, and being an early-stage project that is less stressful than a startup, with coaching/advising. It would involve remote work, but that could be solved by having an EA Israel coworking space. It could also grow to more than one person if you're seeing signs of impact.
Some things I think you could do as part of building the EA software ecosystem:
- Be the go-to person when someone involved in software wants to get more involved in EA
- Be the go-to person when someone at an EA org wants to find software people or learn more about the space
- Set up a mentorship network similar to WANBAM
I think if you can't find the space you are looking for, you should create something (at least a low-cost version); then, if someone tells you of an existing space that works, you can inform the people who have already joined.
Even if the space isn't particularly active it gives future organisers a starting space and potential people to contact who may be interested.
I think the main case where creating a space could be wrong is if the admin is bad at moderation and not open to improving the space. This is also an incentive to create spaces: if you don't, someone else could create and run that space badly, giving a bad impression to others who wanted to get more involved.
I don't think it would be bad for there to be a workspace for each major cause/career area. There should probably be something in between Facebook and the EA Forum for people to discuss the causes they care about; I've written more about this here. Ideally the Forum would be able to support subforums, but that seems unlikely to happen soon.
I set up a Slack for groups that are smaller but still want to use it for discussion. At the moment it is mainly used by the EA & metascience subcommunity and sometimes by FIRE & EA. I thought it would be a good space whilst subcommunities are small to see if there is enough demand for their own space. If you want you could use channels on there to start groups you wanted to see.
Choosing the right online space can make a difference, especially if the people you want to join don't already use the product you're suggesting. Different spaces also allow for different tools, culture, and vibe; there is a brief overview of some pros/cons here, but it will depend on your target audience.
Maybe there are also more general questions to ask if you're thinking about coaching or nuclear risk subcommunities, some of which may be here. If you think there should be an online discussion space, how does that fit into the wider ecosystem for the subcommunity?
I recently read Star Maker by Olaf Stapledon after a conversation in which someone said that it inspired them to think about longtermism related ideas way before they had heard of effective altruism.
It covers a fictionalised history of the universe from beginning to end, more like a documentary or textbook than a novel with characters and plot. It's told from the point of view of someone who is picked up from a hill in England, moved around the galaxy, and observes other civilisations at various points throughout the life of the universe.
It was written in 1937, is said to have inspired people like Arthur C. Clarke, Freeman Dyson and Jorge Luis Borges, and includes speculation on civilisational unity and collapse, space exploration, the metaverse, future technology, genetic engineering and existential risks.
I don't think I've ever called myself an effective altruist. Part of it is the small-identity idea mentioned in the original post, and another part is that it doesn't seem correct to call myself effective when there are large uncertainties about the prioritisation of causes and interventions; new evidence could come up showing I was actually very ineffective.
On a more practical level, it's easier to have conversations with people who are newer to EA or are sceptical of certain aspects of it when I'm not calling myself an EA and making it seem like something you are either in or out of.
It's also probably easier to find flaws in a topic when it isn't part of your identity, as it reduces the chance of defensiveness, and I think I should try to make it easy to always be open to potential problems in EA.