Posts

Optimizing Activities Fairs 2019-07-01T23:40:32.432Z · score: 23 (11 votes)

Comments

Comment by eli_nathan on Optimizing Activities Fairs · 2019-07-13T19:32:19.442Z · score: 2 (2 votes) · EA · GW

Thanks Max! I too am not certain that this is the correct approach, and think there is a good case for longer-form conversations for the reasons you give. The rough case I'd make for the "maximizing" approach is:

1. It's easy to scale: You can easily gather 5-10 members of your group, give them 10-15 minutes of guidance, and put them on the stall. I slightly worry about group members who are newer to EA having long-form onboarding conversations with new and interested people (in EA Oxford, we've previously taken some time to verify that people are knowledgeable enough to have formal 1-1 conversations with newcomers).

2. Activities fairs are often noisy, and as such don't represent the best environment for long-form conversations.

3. Even if you do have long-form conversations at the stall, they likely won't last longer than 5-10 minutes, which I think is generally not enough time for someone to properly understand what EA is. Often, when engaging in longer conversations at activities fairs, I've observed people come across as somewhat skeptical of EA, but in such a way that upon further reflection I could imagine them being reasonably excited about it. As such, it may be better to optimize for driving attendance at longer-form events, such as a 1-1 coffee chat or a 1-hour introductory talk.

I agree that this approach could come across as unfriendly, and that it's important to make sure stall-runners are aware of this. Overall, I see this as a downside, but one that is probably worth it in the long run.

Comment by eli_nathan on EA Funds - An update from CEA · 2018-08-07T19:09:53.930Z · score: 4 (4 votes) · EA · GW

Thanks Marek,

I remember some suggestions a while back to store the EA funds cash (not crypto) in an investment vehicle rather than in a low-interest bank account. One benefit to this would be donors feeling comfortable donating whenever they wish, rather than waiting for the last possible minute when funds are to be allocated (especially if the fund manager does not have a particular schedule). Just wondering whether there's been any thinking on this front?

Comment by eli_nathan on One for the World as a potential vehicle to expand the reach of Effective Altruism · 2018-08-02T11:28:05.576Z · score: 6 (6 votes) · EA · GW

Thanks Rossa,

I'm wondering how you see 1FTW's position changing due to the presence of OpenPhil and a shift towards a more money-rich, talent-poor community (across certain cause areas)?

In my eyes, the comparative advantage for student groups is more about driving engagement and plan changes and less about raising funds. Of course, money still goes a long way, but I'm skeptical that group leaders should be spending their time focusing on (relatively) small donations over building communities of talented, engaged individuals.

Is your view that 1FTW will be a better outreach vehicle (than standard community building techniques) for certain demographics? It seems that 1FTW attracts similar types of people to those the GWWC pledge would, but in higher quantities due to the lower barrier. However, I'm skeptical that this lower barrier is necessarily a positive thing, because it would seem that, on average, these individuals are less likely to further engage with the EA community at large.

Is this something you're concerned about, or do you think these concerns are relatively minor?

Comment by eli_nathan on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-03T15:40:26.649Z · score: 1 (1 votes) · EA · GW

Ah okay - I think I understand you, but this is entering areas where I become more confused and have little knowledge.

I'm also a bit lost as to what I meant by my latter point, so will think about it some more if possible.

Comment by eli_nathan on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-01T13:16:52.827Z · score: 0 (0 votes) · EA · GW

By agentive I sort of meant "how effectively an agent is able to execute actions in accordance with their goals and values" - which seems to be independent of their values/how aligned they are with doing the most good.

I think this is a different scenario from the agent causing harm due to negative corrigibility (though I agree with your point about how this could be taken into account within your model).

It seems possible however that you could incorporate their values/alignment into corrigibility depending on one's meta-ethical stance.

Comment by eli_nathan on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-01T12:33:09.675Z · score: 1 (1 votes) · EA · GW

I really liked this post and the model you've introduced!

Regarding your pseudomaths, a minor suggestion: the product notation could represent how agentive the actor is. This would allow us to take into account impact that is negative (i.e., harmful actions) by multiplying the product notation by another factor capturing the sign of the action. The change in impact would then be proportional to the product of these two terms.
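
As a rough sketch of what I mean (the notation here is my own, not taken from your post): if each factor in the product captures one component of how agentive the actor is, a separate sign term would carry whether the action helps or harms:

```latex
% Hypothetical notation, for illustration only:
%   f_i \in [0, 1]   -- factors capturing how agentive the actor is
%   s \in \{-1, +1\} -- sign of the action (harmful vs. beneficial)
%   \Delta I         -- resulting change in impact
\Delta I \;\propto\; s \cdot \prod_{i} f_i
```

Under this framing, a highly agentive actor with misaligned values would produce a large-magnitude but negative \(\Delta I\), which matches the intuition above.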