Posts

Comments

Comment by CharlotteSiegmann on [deleted post] 2021-05-29T17:55:24.261Z

Yes, I agree with you that they should be different but are related, so thanks for your edits. Beckstead uses at least the QWERTY keyboard as an example of a trajectory change in his PhD thesis, as far as I remember.

Comment by CharlotteSiegmann on [deleted post] 2021-05-29T15:51:02.713Z

As far as I understand, Beckstead and other EAs also refer to this as a "trajectory change". Hence, I would find it useful to mention this name on the tag page.

Comment by Charlotte (CharlotteSiegmann) on A new proposal for regulating AI in the EU · 2021-04-28T17:23:45.794Z · EA · GW

Also see the response from CSER here.

Comment by Charlotte (CharlotteSiegmann) on A new proposal for regulating AI in the EU · 2021-04-26T18:10:49.516Z · EA · GW

Hi Edo, instead of the leaked document, you might want to link to the official publication, which is here. The European Commission simultaneously published the Coordinated Plan on AI. Some readers unfamiliar with the EU legislative process might assume that the details of the regulation are almost fixed, which is not the case. Over the next months/years, the Council and the European Parliament will work on the proposal and hold trilogue meetings.

Comment by CharlotteSiegmann on [deleted post] 2021-04-21T17:27:08.980Z

I am confused as to how this relates to trajectory changes (https://forum.effectivealtruism.org/tag/trajectory-changes). When Beckstead (2013) talks about ripple effects, I understand him to be talking about trajectory changes, i.e., a certain class of interventions which might be very effective for longtermists, compared to x-risk mitigation. Independent of this, and of whether one agrees with longtermism, it might still be relevant to think about info hazards and replaceability (the bullet point). I would suggest that the first paragraph be moved to trajectory changes instead. Sorry if I have overlooked something.

Comment by Charlotte (CharlotteSiegmann) on EA capital allocation is an inner ring · 2021-03-20T16:11:54.390Z · EA · GW

I have read all except one of the posts you linked to. I don't understand how your post relates to the two posts about children and would appreciate a comment. I agree with your argument that "EA jobs provide scarce non-monetary goods" and that it is hard to get hired by EA organisations. However, it is unclear to me that any of these posts provide a damaging critique of EA. I would be surprised if anyone managed to create a movement without any of these dynamics. However, I would also be excited to see work tackling these putative problems, such as the non-monetary value of different jobs.

Comment by Charlotte (CharlotteSiegmann) on Name for the larger EA+adjacent ecosystem? · 2021-03-18T21:39:41.327Z · EA · GW

Clarification question: why do you understand longtermism to be outside of EA?

It seems to me that longtermism (I assume you mean the combination of believing in strong longtermism (Greaves and MacAskill, 2019) and believing in doing the most good) describes just one particular kind of effective altruist: an effective altruist with particular moral and empirical beliefs.

Comment by Charlotte (CharlotteSiegmann) on A full syllabus on longtermism · 2021-03-06T10:41:27.147Z · EA · GW

Thanks for this very interesting syllabus, and thank you for mentioning the issue of diversity and for taking first steps toward tackling it. I don't see this issue discussed very often on the EA Forum or in EA-adjacent academia.

Comment by Charlotte (CharlotteSiegmann) on What areas are the most promising to start new EA meta charities - A survey of 40 EAs · 2020-12-24T11:52:12.443Z · EA · GW

Great, thank you :)

Comment by Charlotte (CharlotteSiegmann) on How high impact are UK policy career paths? · 2020-12-23T23:23:41.031Z · EA · GW

Thanks for writing this. Here are two of my messy thoughts: if you believe that X is the biggest and most important problem (e.g. clean meat, poverty alleviation, or AI governance), I would think that being Head of the relevant department is a really, really good position from which to work on the problem.

I was also wondering why you are not considering the career capital that would let you later work on projects such as Alpenglow, or in applied research jobs, lobbying, policy think tanks, etc.

Comment by Charlotte (CharlotteSiegmann) on What areas are the most promising to start new EA meta charities - A survey of 40 EAs · 2020-12-23T23:19:40.153Z · EA · GW

Thanks for sharing. Would you be able to share more information on the top-ranked option, "exploration"? My thinking on this is limited (as it is in general regarding a cause X). Would you be able to share concrete ideas people talked about, or concrete proposed plans for such an organisation (a cause X organisation, or an organisation focused on one particular underexplored cause area)?


And on a related note, will the report about meta charities you describe here be published before the incubation programme application deadline (as it might be decision-relevant for some people)?

Comment by Charlotte (CharlotteSiegmann) on Careers Questions Open Thread · 2020-12-20T18:56:09.019Z · EA · GW

Heya, 


I am German, lead an EA group in the UK, and do EA career coaching there. I am personally interested in the policy side, but I am happy to talk you through your cause prioritisation and think about good jobs in Germany. If you are interested, PM me :)


https://www.linkedin.com/in/charlotte-siegmann/

Comment by Charlotte (CharlotteSiegmann) on Andreas Mogensen's "Maximal Cluelessness" · 2020-11-03T11:32:59.562Z · EA · GW

Sorry, I don't have the time to comment in depth. However, I think that if one agrees with cluelessness, then you don't offer an objection. You might even extend their worries by saying that "almost everything has 'asymmetric uncertainty'". I would be interested in an extension of your last sentence: "They are extremely unlikely and thus not worth bearing in mind." Why is this true?

Comment by Charlotte (CharlotteSiegmann) on Andreas Mogensen's "Maximal Cluelessness" · 2020-11-01T20:08:19.427Z · EA · GW

re: your lady example: as far as I know, the recent papers (e.g. here) provide the following example: (1) you either help the old lady on a Monday or on a Tuesday (you must, and can, do exactly one of the two options). In this case, your examples for CC1 and CC2 don't hold. One might argue that the previous example was maybe just a mistake, and I find it very hard to come up with CC1 and CC2 for (1) if (supposedly) you don't know anything about Mondays or Tuesdays.

Comment by Charlotte (CharlotteSiegmann) on Has anyone gone into the 'High-Impact PA' path? · 2020-10-24T21:48:00.156Z · EA · GW

Sorry about the late answer. I just wanted to say that I also upvoted your comment because I would be very interested in a longer piece on being an RA.

Comment by Charlotte (CharlotteSiegmann) on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-06T13:47:58.598Z · EA · GW

What is the most likely reason that s-risks are not worth working on?

Comment by Charlotte (CharlotteSiegmann) on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-06T13:46:41.859Z · EA · GW

How did you figure out that you prioritize the reduction of suffering?

I am interested in your personal life story, and in the most convincing arguments or intuition pumps.

Comment by Charlotte (CharlotteSiegmann) on The case of the missing cause prioritisation research · 2020-08-16T07:33:23.387Z · EA · GW

Thank you very much for writing this up. However, I am not sure I understand what you are referring to in:

"3. Policy and beyond – not happening – 2/10". Are you referring to your explanation within the subsection on The Parliament? Then this would make sense to me.

Comment by Charlotte (CharlotteSiegmann) on What questions would you like to see forecasts on from the Metaculus community? · 2020-07-28T14:14:07.712Z · EA · GW

Another operationalisation would be to ask to what extent the 80k top career recommendations have changed, e.g. what percentage of the current top recommendations will still be in the top recommendations in 10 years.

Comment by Charlotte (CharlotteSiegmann) on Call for feedback and input on longterm policy book proposal · 2020-07-07T17:15:18.836Z · EA · GW

Hi Maxime and Konrad, thank you for your work and the post.

I have a question with regard to the structure of the book. It seems, from your summary and the longer description, that chapters 2 and 3 (and maybe 4) are quite distinct from chapters 1, 4, and 5. While the former are focused on policymaking/lobbying etc. in general (taking short-termist situations and longtermist problems as examples), the other three are more specifically about longtermist policies. Please correct me if I am wrong. Why did you decide to include them in the same publication? It seems to me that a policymaker (especially compared to a policy researcher) would be less fascinated by chapters 2 and 3 (at least at first glance). Also, given that you mention influencing policy debates quite a lot, I was wondering why you don't want to specifically target advocacy groups or civil society.

Comment by Charlotte (CharlotteSiegmann) on [Open Thread] What virtual events are you hosting that you'd like to open to the EA Forum-reading public? · 2020-04-12T13:09:16.698Z · EA · GW

Copying Catherine's message from the Group Organizers Slack:

Comment by Charlotte (CharlotteSiegmann) on COVID-19 brief for friends and family · 2020-03-04T12:36:32.915Z · EA · GW

I don't know whether this is the right place to post this: but why do we care about the risk of the coronavirus for us as EAs? Why are people thinking about cancelling EAG or other local meetings?

(Do we care for selfish reasons, or because this indirectly reduces the extent to which the virus spreads?)

If we believe that a young healthy person has a 0.5 percent chance of dying from the virus, that 5 percent of the world will be infected in expectation, and that all these actions (cancellation of EA events) reduce my chance of being infected by 5 percent:

(This seems super optimistic, as most of the attendees won't change their other behaviour just because EA events are cancelled. They will just go to other events.)

then we are talking about roughly 10 micromorts. It seems that the EA events might be worth the cost. If we want to reduce micromorts, telling EAs to stop drinking alcohol seems like a better idea (1 micromort ≈ 0.5 litres of wine) than changing the way we spend our time because of the coronavirus.
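The arithmetic behind the "roughly 10 micromorts" figure can be sketched as follows. This is a minimal back-of-the-envelope check using only the assumed numbers from the comment above (0.5% fatality risk if infected, 5% infection probability, 5% relative risk reduction from cancellations); none of these inputs are established data.

```python
# Back-of-the-envelope check of the micromort estimate above.
# All inputs are the comment's own assumed figures, not established data.
p_death_if_infected = 0.005     # 0.5% chance a young healthy person dies if infected
p_infected = 0.05               # 5% of the world infected in expectation
relative_risk_reduction = 0.05  # cancellations cut one's infection chance by 5%

baseline_risk = p_death_if_infected * p_infected        # 2.5e-4, i.e. 250 micromorts
risk_averted = baseline_risk * relative_risk_reduction  # 1.25e-5
micromorts_averted = risk_averted * 1e6

print(micromorts_averted)  # 12.5, i.e. "roughly 10 micromorts" averted per person
```

So the figure in the comment checks out to the stated order of magnitude: the cancellations buy each attendee on the order of 10 micromorts under these assumptions.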

I would be interested to hear why this argument is wrong.

Comment by Charlotte (CharlotteSiegmann) on What posts you are planning on writing? · 2020-02-02T19:39:09.688Z · EA · GW

I am planning to write a post summarizing the existing discussion of information cascades in EA, the different forms they take, and the possibilities for doing something against them. Lastly, I discuss why the concept of an information cascade might be disadvantageous. I would be interested in comments on the draft.

Comment by Charlotte (CharlotteSiegmann) on Space governance is important, tractable and neglected · 2020-01-15T23:17:14.129Z · EA · GW

I think I updated towards "maybe it would be useful if this cause area were analysed in great depth". Is this planned at the moment? Perhaps interviewing experts, etc.?

Do you think it might be important to develop clear guidelines on what is meant by the first article of the Outer Space Treaty: "The exploration and use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind."?

The German professor of space law Stephan Hobe says on this German podcast that it is really important to define this right. Does this mean that countries have to give away a certain amount of their space surplus? Do we include future generations in "mankind"? Do we include people on other planets?

Comment by Charlotte (CharlotteSiegmann) on [Part 1] Amplifying generalist research via forecasting – models of impact and challenges · 2019-12-30T14:34:28.419Z · EA · GW

Interesting idea about the "driver's license" for rationality.

You suggest that EA student groups should run tournaments. I would be interested in your reasoning. Why do you think this is better than encouraging people to join foretold.io as individuals? Do you think that we are lacking an institution or platform that helps individuals get up to speed on, and interested in, forecasting (so that they are good enough that foretold.io provides a positive experience)? Or do you think that these tournaments would be good signalling for students applying for future EA jobs?

Perhaps national student forecasting tournaments would be more feasible, although I would intuitively say that the good forecasters might quickly leave.

Comment by Charlotte (CharlotteSiegmann) on Community vs Network · 2019-12-16T15:41:38.051Z · EA · GW

(Thank you for writing this; my comment is related to Denkenberger's.) A consideration against the creation of groups around cause areas, if they are open to younger people (not only senior professionals) who are also new to EA (the argument does not hold if those groups are only for people who are very familiar with EA thinking; of course, among other things, such groups could also make work and coordination more effective):

It might be that this leads to a misallocation of people and resources in EA, as the cost of switching focus or careers increases with this network.

If those cause groups had existed two years ago, I would have joined either the "Global Poverty group" or the "Climate Change group" (certainly not the "Animal Welfare group", for instance), or with some probability a general EA group. Most of my EA friends and acquaintances would have focused on the same cause area (maybe I would have been better skilled and more knowledgeable about it by now, which is important). But the likelihood that I would have changed my cause area because other causes are more important to work on would have been smaller. This could be because it is less likely that I would come across good arguments for other causes, as not that many people around me would have an incentive to point me towards those resources. Switching the focus of my work would also be costly in a selfish sense, as one would no longer see all the acquaintances and friends from the monthly meetups/Skype calls or career workshops of one's old cause area.

I think that many people in EA become convinced over time to focus on the long term. If we reasonably believe that these are rational decisions, then changing cause areas and ways of working on the most pressing problems (direct work, lobbying, community building, earning to give) several times during one's life is one of the most important things when trying to maximize impact, and it should be as cheap as possible for individuals and hence encouraged. That means that the cost of providing information about other cause areas, and the private costs of switching, should be reduced. I think this might be difficult with potential cause-area groups (especially in smaller cities with fewer EAs in general).

(Maybe this is similar to the fact that many uni groups try not to give concrete career advice before students have engaged in cause prioritisation discussions. Otherwise, people bind themselves too early to cause areas which seem intuitively attractive, fit their perceived identity, or match some underlying beliefs they hold and have never questioned ("AI seems important, I have watched sci-fi movies"; "I am altruistic, so I will help to reduce poverty"; "Capitalism causes poverty, hence I won't do earning to give").)