RISC at UChicago is Seeking Ideas for Improving Animal Welfare 2021-01-11T11:58:03.636Z
Should local EA groups support political causes? 2020-07-21T19:54:32.397Z
How should longtermists think about eating meat? 2020-05-16T20:21:41.268Z
When should we use our moral intuitions? 2019-07-26T15:50:48.227Z


Comment by lukasberglund on The Folly of "EAs Should" · 2021-01-06T11:51:21.225Z · EA · GW

[Comment pointing out a minor error]  Also, great post!

Comment by lukasberglund on The Center for Election Science Appeal for 2020 · 2020-12-20T17:25:18.611Z · EA · GW

I'm impressed with the success you guys had! I'm excited to see your organization develop.

Comment by lukasberglund on Mitigating x-risk through modularity · 2020-12-20T17:13:46.697Z · EA · GW

Great post! Thanks.

Comment by lukasberglund on Does Economic History Point Toward a Singularity? · 2020-09-03T21:35:05.633Z · EA · GW


Comment by lukasberglund on Should local EA groups support political causes? · 2020-07-23T18:55:21.375Z · EA · GW

Good point. I'll bring this up with other group leaders.

Comment by lukasberglund on Should local EA groups support political causes? · 2020-07-23T18:54:26.311Z · EA · GW

This approach is compelling and you make a good case for it, but I think what Lynch said about how not supporting a movement can feel like opposing it is significant here. On our university campus, supporting a movement like Black Lives Matter seems obvious, so when you refuse to, it makes it look like you have an ideological reason not to.

Comment by lukasberglund on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-12T20:09:27.749Z · EA · GW

What is the best leadership structure for (college) EA clubs?

A few people in the EA Group Organizers Slack (6 to be exact) expressed interest in discussing this.

Here are some ideas for topics to cover:

  • The best overall structure (what positions should there be, etc.)
  • Should there be regular meetings among all general members / club leaders?
  • What are some mistakes to avoid?
  • What are some things that generally work well?
  • How to select leaders

I envision this as an open discussion for people to share their experiences. At the end, we could compile the results of our discussion into a forum post.

Comment by lukasberglund on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-19T07:56:16.525Z · EA · GW

At the beginning of the Christiano part it says:

There can't be too many things that reduce the expected value of the future by 10%; if there were, there would be no expected value left.

Why is it unlikely that there is little to no expected value left? Wouldn't it be conceivable that there are many risks in the future, and that therefore little expected value remains? What am I missing?

Comment by lukasberglund on When should we use our moral intuitions? · 2019-07-27T10:30:12.272Z · EA · GW

Thanks for pointing that out!