What are your favorite examples of moral heroism/altruism in movies and books? 2021-04-26T16:26:45.359Z
[Link] Stuart Russell will have an AMA on Reddit on 12/16 2019-12-15T08:26:33.910Z
Publication of Stuart Russell’s new book on AI safety - reviews needed 2019-10-08T05:29:04.506Z


Comment by CarolineJ on Open Thread: May 2021 · 2021-05-06T15:33:51.640Z · EA · GW

I felt the same thing when I discovered (and met) EAs :-). Welcome!

Comment by CarolineJ on Open Thread: May 2021 · 2021-05-06T15:30:44.307Z · EA · GW

I've found the EA Forum really lively and thriving these last few months. It's really a pleasure hanging out here! I also feel more at ease commenting and posting thanks to the active and welcoming community. Congrats to the CEA team for doing an awesome job of developing a great space for EA discussions!

Comment by CarolineJ on Thoughts on being overqualified for EA positions · 2021-05-06T15:26:34.289Z · EA · GW

Found this a really clear explanation (and I liked the scenario, made it more concrete).

Comment by CarolineJ on Thoughts on being overqualified for EA positions · 2021-05-04T00:54:43.734Z · EA · GW

Noting that the link you shared also shows that people who are hired externally seem to perform worse than those who are promoted internally. So if you care about performance more than pay, it may not be that good to switch jobs often?

Comment by CarolineJ on Why AI is Harder Than We Think - Melanie Mitchell · 2021-05-03T16:44:54.166Z · EA · GW

Stuart Russell debated Melanie Mitchell in February 2021 in an episode of The Munk Debates, a debate series on major policy issues.

The question was “Be it resolved, the quest for true AI is one of the great existential risks of our time.” Stuart Russell argued for and Melanie Mitchell argued against.

You can listen to the debate here or on any podcast service.

Comment by CarolineJ on Retention in EA - Part I: Survey Data · 2021-02-22T19:04:27.053Z · EA · GW

I randomly found this research and thought it could help inform yours:


"Lapses in ethical conduct by those in corporate and public authority worldwide have given business researchers and practitioners alike cause to re-examine the antecedents to personal ethical values. We explore the relationship between ethical values and an individual’s long-term orientation or LTO, defined as the degree to which one plans for and considers the future, as well as values traditions of the past. Our study also examines the role of work ethic and conservative attitudes in the formation of a person’s long-term orientation and consequent ethical beliefs. Empirically testing these hypothesized relationships using data from 292 subjects, we find that long-term perspectives on tradition and planning indeed engender higher levels of ethical values. The results also support work ethic’s role in fostering tradition and planning, as well as conservatism’s positive association with planning. Additionally, we report how tradition and planning mediate the influence of conservatism and work ethic on the formation of ethical values. Limitations of the study and future research directions, as well as implications for business managers and academics, are also discussed."

Comment by CarolineJ on LessWrong/EA New Year's Ultra Party · 2021-01-02T22:21:55.595Z · EA · GW

Thank you so much for organizing this party! I had a blast. Some of my favorite moments included discussions about "Existential Hope" - our hopes and aspirations, both for our personal lives and for humanity in 2021 - as well as catching up with multiple individuals in the "private conversation" pods.

Thanks to Ruby, Vaidehi, Oliver as well as the other co-hosts from LessWrong and EA Everywhere! 

Comment by CarolineJ on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-20T07:29:01.710Z · EA · GW

I agree. I have read only a few so far, but I am crying because they are very moving and inspiring - the combined effect of their beauty and strength with my attachment to this community that shares my values. I will keep reading them over the next few days...

Comment by CarolineJ on Progress Open Thread: October // Student Summit 2020 · 2020-10-22T02:56:53.045Z · EA · GW

Bravo! This is fantastic and it's also great that you used the opportunity to talk about EA! The future of REG

Comment by CarolineJ on Has anyone gone into the 'High-Impact PA' path? · 2020-09-27T21:30:30.742Z · EA · GW

Tanya at FHI first took the position of executive assistant to Nick Bostrom. She explained on the 80,000 Hours podcast how very, very valuable this has been for Nick Bostrom's research - and, later, for FHI's operations.

I have, in some ways, done some PA work over the past year. I do think that some tasks really help take mental load off a busy researcher, such as helping with scheduling, answering emails, and choosing between different opportunities. Over time, this PA function becomes more and more that of a "project manager", with some important projects for the researcher and the organization delegated to the PA. I believe that in some weeks I have saved about 10 hours of work. I also made possible some high-value projects that wouldn't have happened otherwise, which I'd estimate at above $50K of value.

I think being a PA to someone at the top of their field (or to someone just doing extremely high-impact work generally) is indeed a very high-impact path. It also builds amazing organizational, communication, and analytical skills. If you become a PA, you should probably aim to become a top-notch one ("The Chief of Staff"/"The Executive Officer"). It's important to note that this is a tough and high-impact job that is often undervalued compared to what the person brings.

Being a PA (not in the sense of "research assistant", but in the sense of personal assistant) isn't for everyone, as it requires specific skills and personality traits: organizational skills (being super organized with everything), communication skills (notably being excellent at email), analytical skills (deciding whether to say yes or no to opportunities), and being a generalist ready to roll up their sleeves on many different topics. It is also a role where you stay in the shadows and let the other person shine, though there are also plenty of opportunities to grow a skill you're specifically focused on.

It's not surprising that many organizations are looking for PAs (80K, CSER, etc.), as this role is truly an impact multiplier, and it's hard to find people who are really excellent PAs.

I would be very excited if more EAs took on this kind of role! If you're interested, I would strongly recommend doing a few short and longer tests to see if you like the kind of tasks the job entails. As I said, it's really high-impact but it's also a tough job and I expect not that many people would be a great fit. Also, anyone reading this: please contact me by PM if you want to talk more about it.

Comment by CarolineJ on EA Mental Health Care Navigator Pilot · 2020-09-26T23:04:23.582Z · EA · GW

Hi Danica! Thanks for putting this together. What's the best way to recommend a therapist for this list?

Comment by CarolineJ on More empirical data on 'value drift' · 2020-09-06T17:36:32.635Z · EA · GW

Thank you, this list is a useful complement to this post.

Comment by CarolineJ on More empirical data on 'value drift' · 2020-09-02T18:34:07.095Z · EA · GW

It would be super interesting to work on how to improve retention through social integration. I was thinking that a regular "mega meeting" of EAs could be pretty nice in times of lockdown to promote social interactions, project collaborations, etc.

Comment by CarolineJ on More empirical data on 'value drift' · 2020-09-02T18:31:45.725Z · EA · GW

Thanks, I find this very useful!

I guess I would refine the "weird cause area" reason by adding that some EAs may leave because of strong disagreement with mainstream EA views or those of public figures. For example, a few years ago climate change was not treated as an x-risk, and was somewhat regularly dismissed, which would have put off a few longtermists. I know someone who left EA because of strong disagreement with how AI safety is handled - e.g., encouraging people to work for an organization that works on AGI development. Basically, I think that sometimes there is a "tipping point" of strong disagreement where some people leave. Ideally, EA would be able to strongly focus on "EA is a question, not an ideology" so that people who hold informed but different opinions still stay in.

I suspect that burnout may also be another reason why people in EA orgs leave.

Comment by CarolineJ on What FHI’s Research Scholars Programme is like: views from scholars · 2020-08-26T22:09:07.477Z · EA · GW

Do you think that the RSP is a good fit for people working on policy engagement - e.g., writing "grey literature" reports, policy proposals, and feedback on legislation - or do you think it's a better fit for people doing work in the "peer-reviewed/academic" category?

Comment by CarolineJ on [deleted post] 2020-04-04T19:34:55.772Z

Hi Larks!

Thanks for asking!

We have been very careful since the beginning of the epidemic and were effectively in quarantine before the Bay Area shelter-in-place order.

Currently, everyone stays and works from home. We maintain our food stockpile via grocery deliveries or by having one person go to Trader Joe's every two weeks (with a mask and gloves). We take occasional walks/runs (while keeping a safe distance from other people).

If the Bay Area gets 50,000 reported cases or 500 such cases in Berkeley, we will stop walking outside.

We have copper-taped commonly used surfaces in the house and have plenty of resources to live comfortably in total isolation for over one month.

When people move in, we will probably have them quarantine for a few days/weeks.

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:07:51.445Z · EA · GW

Thanks Ben! I've edited the message to have only one question per post. :-)

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:07:21.720Z · EA · GW

Do you think that climate change has been neglected in the EA movement? What options currently seem great to you for having a very large impact in steering us in a better direction on climate change?

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:07:01.182Z · EA · GW

What are some directions you'd like the EA movement or some parts of the EA movement to take?

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:06:23.716Z · EA · GW

What do you like to do during your free time?

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:06:04.407Z · EA · GW

What are some of your current challenges? (maybe someone in the audience can help!)

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:05:48.959Z · EA · GW

If you've read the book 'So good they can't ignore you', what do you think are the most important skills to master to be a writer/philosopher like yourself?

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:05:28.262Z · EA · GW

What are you looking for in a research / operations colleague?

Comment by CarolineJ on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T18:50:10.199Z · EA · GW

Hi Tobby! Thanks for being such a great source of inspiration for philosophy and EA. You're a great model to me!

Some questions, feel free to pick:

1) What philosophers are your sources of inspiration and why?

(put my other questions in separate comments). Also, writing "Toby"!

Comment by CarolineJ on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-11T19:36:04.424Z · EA · GW

Happily, they are already available:

Comment by CarolineJ on Open Thread #39 · 2018-01-05T15:22:40.886Z · EA · GW

Hi guys, I wanted to make you aware of a global online debate on the governance of AI by a Harvard-incubated think-tank.

For background, I’m a French EA, and I recently decided to work on AI policy as it is a pressing and neglected issue. I’ve been working for The Future Society for a few weeks already and would like to share with you this opportunity to impact policy-making. The Future Society at Harvard Kennedy School is a think tank dedicated to the governance of emerging advanced technologies. It has partnerships with the Future of Life Institute and the Centre for the Study of Existential Risk.

The think tank provides a participatory debate platform to people all around the world. The objective is to craft actionable and ethical policies that will be delivered in a White Paper to the White House, the OECD, the European Union, and other policymaking institutions that the think tank is working with.

Because we know AI policy is hard, the idea is to use collective intelligence to produce innovative and reasonable policies. The debate is hosted on open-source collective intelligence software resulting from a research project funded by the European Commission and technologically supported by MIT. It's based on research on collective intelligence, going from open, exploratory questions to more in-depth discussions. Right now, we are in the "Ideation" phase, which is very open. You can post constructive answers and debate with other people who are also interested in crafting AI policies, with instant translation.

The platform is like an online forum organized around several issues, both short-term and long-term oriented. There are six themes, including "AI Safety and Security", "Reinvent Man & Machine Relationship", and "Governance Framework".

So far, most of the answers have been very constructive. But with you guys… it can be even better.

Because you are EAs, I really wanted to pick your brains!

It would be great if you guys could participate on the topic you're most interested in, knowing that a) it will be impactful, and b) you will be able to challenge your thoughts with other people passionate about AI's social impacts. Of course, you don't have to talk about AI safety if you'd rather focus on other topics.

Also, the more EAs, the merrier. Or rather, the more impactful!

So please connect on the debate, and participate!!

Debate is here