Open and Welcome Thread: August 2020
post by Aaron Gertler (aarongertler)
If you have something to share that shouldn't be its own post, add it here!
(You can also create a Shortform post.)
If you're new to the EA Forum, you can use this thread to introduce yourself! You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
(You can also put this info into your Forum bio.)
If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on this page.
comment by EdoArad (edoarad) · 2020-08-04T05:32:46.587Z
For some reason it felt quite weird to write a bio, and I'd avoided doing so until now, even though I think it's very important for our community to get to know each other more personally. So I thought I'd use this chance to introduce myself and finally write one 😊
My name is Edo, and I'm one of the co-organisers of EA Israel. I also help out with moderation on the Forum; feel free to reach out if I can help with anything.
I studied mathematics, worked as a mathematical researcher in the IDF, and held training and leadership roles there. After that I started a PhD in CS, where I helped start a research center aimed at advancing biological research using general mathematical abstractions. After about six months, I decided to leave both the center and the PhD program.
Currently, I'm mostly thinking about improving the scientific ecosystem and particularly how one can prioritize better within basic science.
Generally, I'm very excited about improving prioritisation within EA and how we conduct research on it and on EA causes in general. I'm also very interested in better coordination and support for initiatives within the EA community. Well, I'm pretty excited about the EA community and basically everything else that has to do with doing the most good.
My virtue-ethics brain parts really appreciate honesty and openness, curiosity and self-improvement, caring and support, productivity and goal-orientedness, cooperation as the default option, and fixing broken systems.
comment by Litawn_Gan · 2020-08-03T21:18:09.055Z
Hi all! I'm Litawn. I found EA through my interest in rationality (LessWrong, SSC). I was involved with Stanford Effective Altruism, but I'm mostly a serial lurker. I want to start engaging more with the community and with EA in general, as I can see there's a lot to learn.
My interests run to fundamental questions as well as pragmatic ones. I like thinking about big questions like the far future, the fate of the universe, the nature of life, etc. I'm inherently drawn to these sorts of things.
I also care about improving human coordination and raising the sanity waterline. I want more people to feel like they have the ability to make a positive difference, and for their efforts to be fruitful.
↑ comment by MichaelA · 2020-08-04T02:37:20.387Z
Hi Litawn! Hope you've enjoyed the lurker life, and that you'll enjoy engaging more in future :)
And I look forward to maybe seeing your comments or posts on those big questions!
↑ comment by Aaron Gertler (aarongertler) · 2020-08-05T05:59:10.901Z
Hello! It's good to have you here, as a lurker or a poster. I wish people used the phrase "sanity waterline" more; that may be my favorite part of the old-school CFAR mission (whether or not it's still relevant to their work now).
comment by MichaelA · 2020-08-04T04:17:40.013Z
I just want to second the encouragement for people to consider making shortform posts - as well as full posts (see also) - and to make Forum bios.
I've found a lot of shortform posts quite interesting.
And I like that Forum bios let me get a rough sense of people's backgrounds, their interests, and/or what they're up to. It seems like that helps a little with making this feel like a community, with making it easier to connect with people who have relevant backgrounds/interests/projects, and with reducing how often I'm left wondering who the hell all these usernames from the void with fascinating thoughts are!
comment by cfrank · 2020-08-19T12:14:38.168Z
Hi all, I’m Celia. I first got involved with effective altruism a few years ago, mostly through EA London. After trying a few different ways to do the most good with my skill set, I settled on charity fundraising, working for WaterAid, then the NSPCC, and now Hand in Hand International, a small, efficient charity that lifts communities out of poverty with a proven four-step model for creating sustainable income. I have a particular interest in international development, and I’m also interested in how fundraising works and how to make it more effective.
comment by EgilElenius · 2020-08-12T11:21:44.436Z
Has anyone looked into the implications of deepfakes? To me, it seems like a highly impactful technology that will obstruct the use of the internet as an information-gathering tool, among other things.
comment by aman-patel · 2020-08-29T16:17:18.258Z
Hi everyone! I'm Aman, an undergrad at USC currently majoring in computational neuroscience (though that might change). I'm very new to EA, so I haven't yet had the chance to get involved with any EA groups, but I'd love to start participating more in the community. I found EA after spending a few months digging into artificial general intelligence, and it's been great to read everyone's thoughts about how to turn vague moral intuitions into concrete action plans.
I have a soft spot for the standard big-picture philosophy/physics topics, like the nature of intelligence and meta-ethics and epistemology and theories of everything, but also the wildly unpragmatic questions (like whether we might consider directing ourselves into a time loop once heat death comes around, if it's possible).
As a career, I tentatively want to focus on improving global governance capacity, since I'm inclined to think that it might ultimately determine how well EA-related research and prioritization can be implemented (and also how well we are able to handle x- and s-risks, and capitalize on safe AI). I realize that this is probably one of the least tractable goals to have, so I might end up working in another area, like international development, mental health, science policy, or something else entirely. Amusingly, all the EA career advice out there has only made me more confused about what I should be doing (but I'm probably approaching it wrong).
Anyway, I'm excited to be here and grateful for the opportunity to start interacting with the EA community!
comment by EdoArad (edoarad) · 2020-08-03T12:16:05.345Z
Curious to know what people here think about the "unusual causes" tag.
This comes across to me as a bit deprecating, so I was thinking that perhaps the name should be changed to something more neutral. Perhaps 'non-standard causes', or even something biased the other way, like 'under-discussed causes'.
Aaron Gertler answered that:
In my experience, "unusual" when applied to anything other than a person is a quite neutral term. I'd think of "non-standard" as worse, since "standard" implies quality in a way I don't think "usual" does.
So I'd take his view over mine, since I'm not a native English speaker. Still, I'm interested in what you think and what other alternatives there are.
Generally, I think this tag could be very important for the discovery of new causes, so choosing an appropriate name might be important.
↑ comment by Pablo (Pablo_Stafforini) · 2020-08-03T12:32:01.740Z
I'm also not a native English speaker; to my ears, "unusual causes" feels similar in connotation to "non-standard causes". What about simply "other causes"?
↑ comment by EdoArad (edoarad) · 2020-08-03T12:40:47.104Z
This feels good to me. One problem it may have (though I'm not sure about it) is that it might not capture new causes that are contained within another meta-cause. For instance, the post about M-Risk relates to policy or x-risk, but is clearly a new cause in itself, and yet it may feel inappropriate to vote on it as "other causes".
↑ comment by MichaelA · 2020-08-04T02:03:08.639Z
I'm a native English speaker, and to me "unusual causes" and "non-standard causes" feel similar in connotation, and neither strikes me as feeling deprecating. Though I can see how "unusual" could imply the cause is weird, whereas really we just want to say it's not usually discussed. "Non-standard" avoids that, but seems like an uncommon phrase.
I'm against "under-discussed", as that bakes in the judgement that this should be more discussed. I'd say the same about "Overlooked" or "Neglected". "Less discussed causes" or "Less commonly discussed causes" avoids that, but is perhaps a little long (though the former is only as long as "Under-discussed causes").
I'm a bit against "other causes", though I'm less sure why. Maybe "other" actually feels more deprecating to me, which is maybe in turn because I've been exposed to the term "The Other" in some social science courses.
My vote might be for "Less discussed causes".