Yeah, that’s where I asked first. No responses in the thread, but someone DM’d me to suggest looking at universities: https://www.lesswrong.com/posts/kH2uJeQZMnEBKG9GZ/can-independent-researchers-get-a-sponsored-visa-for-the-us.
The pay difference between working in industry and doing a PhD was a big factor in my decision to avoid getting a PhD a few years ago.
These days it still plays a role, though as an independent researcher I’d like to connect with more academics so that I can do research with more rigour and publish more papers. Avoiding the PhD has made this hard, and I’ve had to show a lot more initiative to develop the skills that PhD students typically pick up. That said, being able to selectively learn the skills that are actually useful for solving alignment is worth the tradeoff.
EDIT: Oh, and the lower level of prestige/credibility I have (from not doing a PhD) may get in the way of some of my plans, so I’m trying to be creative about how to gain that prestige without having to do a PhD.
When I say "true," I simply mean that it is inevitable that these things will be possible for some future AI system. People have so many different definitions of AGI that they could be calling GPT-3 some form of weak AGI and therefore consider it incapable of doing the things I described. I don't particularly care about "true" or "fake" AGI definitions; I just want to point out that the things I described are inevitable, and we are really not so far away (already) from the scenario I described above, whether you call that future system AGI or pre-AGI.
Situational awareness is simply a useful thing for a model to learn, so it will learn it. It is much better at modelling the world and carrying out tasks if it knows it is an AI and what it is able to do as an AI.
Current models can already write basic programs on their own and can in fact write entire AI architectures with minimal human input.
A "true" AGI will have situational awareness and knows its weights were created with the help of code, eventually knows its training setup (and how to improve it), and also knows how to rewrite its code. These models can already write code quite well; it's only a matter of time before you can ask a language model to create a variety of architectures and training runs based on what it thinks will lead to a better model (all before "true AGI" IMO). It just may take it a bit longer to understand what each of its individual weights do and will have to rely on coming up with ideas by only having access to every paper/post in existence to improve itself as well as a bunch of GPUs to run experiments on itself. Oh, and it has the ability to do interpretability to inspect itself much more precisely than any human can.
Since I expect some people to be a bit confused about what exactly the bad thing was after reading this post, I think it would be great if the community health team could write a post explaining exactly what was bad here and in other similar instances.
I think there is value in being crystal clear about what the bad things were, because I expect people will take away different things from this post.
I honestly didn’t know how to talk about it either, but wanted to point at general vibes I was getting. While I’m still confused about what exactly the issue is, contrary to my initial comment, I don’t really think polyamory within the community is a problem anymore. Not because of Arepo’s comment specifically, but because there are healthy ways to do polyamory, just like other forms of relationships. That’s something I thought was true before writing the comment; I was just a bit confused about the whole mixing of career and “free love” with everyone in the community.
Maybe only talking about “free love” mixed with power dynamics and whatever else would have been better. I don’t really know. Maybe I shouldn’t have said anything as someone confused about all this, but I still wanted to help. I felt it was the kind of thing that a lot of people were thinking but not saying out loud.
That said, I think Sonia’s video cleared up some things for me. It points to the prevalence of “hacker houses”, networking, sex, and money in the Bay Area. She also points out that polyamory is not the problem. However, she says that while those things shape the structure of the problem, power dynamics end up being the main root issue. It sounds to me like she is pointing out that people will sometimes try to become polyamorous with others by abusing power dynamics (even though this is not inherent to most polyamorous relationships at all). Are power dynamics the whole story? I don’t know.
Note that a lot of people seemed to agree with my initial comment. I’m not sure what to make of that.
People have some strong opinions about things like polyamory, but I figured I’d still voice my concern as someone who has been in EA since 2015, but has mostly only interacted with the community online (aside from 2 months in the Bay and 2 in London):
I have nothing against polyamory, but polyamory within the community gives me bad vibes. And the mixing of work and fun seems to go much further than I think it should. It feels like there’s an aspect of “free love” and I am a little concerned about doing cuddle puddles with career colleagues. I feel like all these dynamics lead to weird behaviour people do not want to acknowledge.
I repeat, I am not against polyamory, but I personally do not expect some of this bad behaviour would happen as much in a monogamous setting, since I expect there would be less sliding into sexual actions.
I’ve avoided saying this because I did not want to criticize people for being polyamorous, and I expected a lot of people would disagree with me and that it wouldn’t lead to anything. But I do think the “free love” nature of polyamory with career colleagues opens the door for things we might not want.
Whatever it is (poly within the community might not be part of the issue at all!), I feel like there needs to be a conversation about work and play (that people seem to be avoiding).
Consider using Conjecture’s new Verbalize (https://lemm.ai/home) STT tool for transcriptions! They’ll be adding some LLM features on top of it, and I expect it to have some cool features coming out this year.
I’ve also been pushing (for a while) for more people within EA to start thinking of ways to apply LLMs to our work. After ChatGPT, some people started saying similar stuff, so I’m glad people are starting to see the opportunity.
Are there any actual rigorous calculations on this? It's hard for me to believe someone making $2M/year and donating $1M/year (to AI Safety or top GW charities) would have less counterfactual impact than someone working at CEA.
Edit: Let's say you are donating $1M/year to AI Safety; that might be about enough to cover the salaries of about 9 independent alignment researchers. Those 9 researchers might not yet be comparable to top-level researchers who would get funding regardless, so it would probably end up as additional funding for getting more young people into the field (and giving them at least a year's worth of funding). And I guess there are some other potentially valuable things, like becoming a public figure. In this case, you'd have to estimate that the value you bring to CEA is worth more than that.
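To make that estimate concrete, here's a rough back-of-the-envelope sketch (the ~$110k per-researcher cost is my own assumption, not a quoted figure):

```python
# Back-of-the-envelope: how many independent alignment researchers could
# $1M/year cover? The per-researcher cost below (salary + compute + overhead)
# is an assumed placeholder, not an actual grant figure.
annual_donation = 1_000_000
cost_per_researcher = 110_000  # assumption

researchers_funded = annual_donation // cost_per_researcher
print(researchers_funded)  # -> 9
```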
Thanks for doing this!
In terms of feedback: the most annoying thing so far is that as soon as you click on any grant, going 'back' to the previous page puts you back at a completely fresh search. You can't click to open up new tabs either.
I want to say that I appreciate posts like this by parents in the community. I'm an alignment researcher and given how fast things are moving, I do worry that I'm under-weighting the amount of impact I could lose in the next 10 years if I have kids. I feel like 'short timelines' make my decision harder even though I'm convinced I want kids in 5 or so years from now.
Some considerations I've been having lately:
- Should I move far away from my parents, which would make it harder to depend on someone for childcare on the weekends and evenings? Will we be close to my future wife's parents?
- Should I be putting in some time to make additional income I can eventually use to make my life easier in 5 years? Maybe it's easier for me to do so now before AGI crunch time?
- The all-encompassing nature of AGI makes things like the share of household work a potential issue for a couple of years. I feel bad for thinking that I may have to ask my future wife if I can reduce housework in those couple of years of crunch time (let's say 2 years max). It feels selfish... Ultimately, this will just be a decision my future wife and I will have to make. I do want to do at least 50% of the housework outside of the crunch time.
- It feels particularly bizarre in the context of some wild AGI thing we aren't even confident about how it will go. But if someone is the CEO of a startup, it feels more reasonable for their partner to take up additional housework if things get intense for a while. Or maybe a better example: if a pandemic were starting and one of the parents were head of some bio-risk org, I would find it odd if they tried to keep the household dynamic the same throughout the crucial window for limiting the impact of the pandemic.
- Overall, I'm trying to be a good future husband; stuff like this weighs on me, and I don't want to make the decision in some terrible and naive way like "my career is more important than yours." :/
Yeah, I agree. Though I can imagine a lot of startups or businesses that require a lot of work but don't require as much brain power. I could have chosen a sector that doesn't move as fast as AI (which has an endless stream of papers and posts to keep track of) and just requires me to build a useful product/service for people. Being in a pre-paradigmatic field where things feel like they are moving insanely fast can be overwhelming.
I don't know; being in a field that feels much more high-risk and forces me to grapple with difficult concepts every day is exhausting. I could be wrong, but I doubt I'd feel this level of exhaustion if I built an ed-tech startup (though that's still difficult, and the market is shaky).
(Actually, one of my first cash-grabs would have been to automate digital document stuff in government since I worked in it and have contacts. I don't think I'd feel the same intensity and shaky-ness tackling something like that since that's partially what I did when I was there. Part of my strategy was going to be to build something that can make me a couple million when I sell it, and then go after something bigger once I have some level of stability.)
Part of what I meant by "shaky-ness" is that there's potential for higher monetary upside with startups, so if I end up successful, there's a money safety net I can rely on (though there's certainly a period of shaky-ness when you start). And building a business can be done basically anywhere, while for alignment I might be 'forced' to head to a hub I don't want to move to.
Then again, I could be making a bigger deal about alignment due to the bias of being in the field.
As an example: I specifically chose to start working on AI alignment rather than trying to build startups to fund EA because of SBF. I would probably be making a lot more money had I taken a different route, and I likely wouldn’t have to deal with being in such a shaky, intense field where I’ve had to put parts of my life on hold.
I mostly agree with Buck’s comment, and I think we should probably dedicate more time to this at EAGs than we have in the past (and probably at some other events). I’m not sure what the best format is, but I think having conversations about it would allow us to feel it much more in our bones, rather than just discussing it on the EA Forum for a few days and mostly forgetting about it.
I believe most of these in-person discussions have so far happened at the EA Leaders Forum, so we should probably change that.
That said, I’m concerned that a lot of people will be scared to give their opinion on some things (for a variety of reasons). There are some benefits to doing this in person, but there are probably some benefits to making it anonymous too.
Context: I work as an alignment researcher. Mostly with language models.
I consider myself very risk-averse, but I also personally struggled (and still do) with the instability of alignment. There are just so many things I feel like I’m sacrificing to work in the field, and being an independent researcher right now feels so shaky. That said, I’ve weighed the pros and cons and still feel like it’s worth it for me. This was only something I truly felt in my bones a few months after taking the leap. It was in the back of my mind for ~5 years (and two 80k coaching calls) before I decided to try it out.
With respect to your engineering skills: I’m going to start working on tools that are explicitly designed for alignment researchers (https://www.lesswrong.com/posts/a2io2mcxTWS4mxodF/results-from-a-survey-on-tool-use-and-workflows-in-alignment), and having designers and programmers (web devs) would probably be highly beneficial. Unfortunately, I only have funding for myself for the time being, but it would be great to have some people who want to contribute. I’d consider doing AI Safety mentorship as a work trade.
I honestly feel like software devs could probably still keep their high-paying jobs and just donate a bit of time and expertise to help independent researchers if they want to start contributing to AI Safety.
Here's a relevant tweet thread:

[embedded tweet]

One response from Ollie:

[embedded tweet]
I feel like we need stronger communication coming from event organizers regarding these things. Even though it doesn't affect me personally.
As I said in my comment, I already supplement 5g/day (I've been doing this for 15 years). My concern about supplementation is that it's not clear to me that it fully compensates. Even within supplementation, different forms of creatine seem to have different levels of effect. With respect to creatine's impact on cognitive performance:
Three papers suggest that creatine may improve cognition: Ling 2009 found that creatine supplementation may improve performance on some cognitive tasks, McMorris 2007 found that creatine supplementation aids cognition in the elderly, and Benton 2010 found that in vegetarians, creatine supplementation resulted in better memory. However, Avgerinos 2018 found that while creatine supplementation may improve short-term memory and intelligence/reasoning of healthy individuals, its effect on other cognitive domains remains unclear.
Anyways, I'm going through papers on Elicit using the prompt, "Does the vegan diet lead to shorter children?" I'm finding mixed results, some say no, some say yes. For example,
Vegan diets were associated with a healthier cardiovascular risk profile but also with increased risk of nutritional deficiencies and lower BMC and height. Vegetarians showed less pronounced nutritional deficiencies but, unexpectedly, a less favorable cardiometabolic risk profile.
Interesting that some people disagree. Would love to hear what you disagree with and if you can change my mind with research that goes against the above. Otherwise I have to consider that you either disagree with the above being important for you specifically or you think the claims I made are wrong.
I mostly agree with the spirit of this post because my experience with vegans is that they’ve been overconfident in the positive health impacts of veganism (essentially hearing what they want to hear from the research) because of motivated reasoning and the underlying belief that animal suffering trumps any potential negative health impacts. That very well may be true, but it has always been difficult for me to trust vegans claiming good health impacts (meat eaters are potentially worse, to be fair) because I’ve always had the impression that they aren’t being as rigorous and intellectually honest as possible.
If you want more people to become vegan, easing these fears is necessary. Otherwise you either get people like me who waited years to stop eating meat, or people who leave veganism as soon as they form a long-term relationship or have kids.
My main concerns regarding vegan diets are the lack of creatine (and its potential effect on IQ) as well as children being raised as vegans (based on my minimal research, it seems that vegan kids tend to be shorter).
As someone who doesn’t eat meat at the moment, I’ve been debating eating meat again because 1) I don’t want it to negatively impact my intelligence/memory and cause me to make less progress on AI alignment, and 2) I’d be concerned it negatively impacts the growth (in all aspects) of my future children.
In general, I’ve been quite underwhelmed by the level of research (and written-up analysis) on the above concerns. It seems that lack of creatine does lower IQ, and I’d like more understanding of whether the supplements actually work to resolve that issue (or is absorption a problem?). That said, I’ve read that meat eaters typically only get about 1g/day of creatine, and I supplement 5g/day (my guess is that beyond 3g you probably don’t get more of an IQ boost).
For children, I’m having a hard time imagining the quality of research will be sufficient by the time I have kids, so I will likely default to having them eat a Mediterranean diet.
Can we all just agree that if you’re gonna make some funding decision with horrendous optics, you should be expected to justify the decision with actual numbers and plans?
Would be nice if we actually knew how many conferences/retreats were going to be held at the EA castle.
It might be justifiable (I got a tremendous amount of value from being in the Berkeley and London offices for 2-month stints), but now we’re here talking about it, and it obviously looks bad to anyone skeptical of EA. Some will take it badly regardless, but come on. Even if other movements/institutions way overspend on bad stuff, let’s not use that as an excuse in EA.
The “EA will justify any purchase for the good of humanity” argument will just continue to pop up. I know many EAs who are aware of this and constantly concerned about overspending and rationalizing a purchase. As much as critics act like this is never a consideration and EAs are just naively self-rationalizing any purchase, it’s certainly not the case for most EAs I’ve met. It’s just that an EA castle with very little communication is easy ammo for critics when it comes to rationalizing purchases.
One failed/bad project is mostly bad for the people involved, but reputational risk is bad for the entire movement. We should not take this lightly.
I think I agree with this. I’ve thought of starting one myself. Not sure if I will yet.
Here, I wrote about how AI applied to non-profits could be neglected at the moment.
Near-term AI capabilities probably bring low-hanging fruit for global poverty/health
I'm an alignment researcher, but I still think we should be on the lookout for how models like GPT-N could be used to make the world a better place. I like the work that Ought is doing with respect to academia (and, hopefully, alignment soon as well). However, my guess is that there are low-hanging fruits popping up because of this new technology, and the non-profit sector has yet to catch up.
This shortform is a call to action for any EA entrepreneur: you could potentially boost the efficiency of the non-profit sector with these tools. Of course, be careful, since GPT-3 will sometimes hallucinate. But putting it in a larger system with checks and balances could 1) save non-profits time and money, and 2) make previously inefficient or non-viable non-profits become top charities.
I could be wrong about this, but my expectation is that there will be a lag between the time people can use GPT effectively for the non-profit sector and when they actually do.
Thank you for writing this post. I'm currently a technical alignment researcher who spent 4 years in government doing various roles, and my impression has been the same as yours regarding the current "strategy" for tackling x-risks. I talk about similar things (foresight) in my recent post. I'm hoping technical people and governance/strategy people can work together on this to identify risks and find golden opportunities for reducing risks.
"DM me if you're interested in dating me"
Before EAGSF this year, I mentioned (on Twitter) putting this on your SwapCard profile as a way to prevent the scenarios above, where people ask others for meetings because they are romantically interested in them. Instead, people could contact them off-site if interested, and EAGs would hopefully have more people focused on attending for career reasons. My thought was that if you don't do something like this, people are just going to continue hiding their intentions (though I'm sure some would still do this regardless).
I was criticized for saying this. Some people said the suggestion made them uncomfortable because they would now have it in their minds that you might be doing a 1-on-1 with them because you find them attractive. Fair enough! Even if you, let's say, link to a dating doc or contact info off-site that they can reach after the conference. I had hoped that we could make it more explicit that people in the community are obviously looking to date others in the community and are finding that very difficult. Instead, my guess is that we are left in a situation where people will set up 1-on-1s because they find someone attractive, even if they don't admit it. I do not condone this, and it's not something I've done (for all the reasons listed in this thread).
Personally, I do not plan to ask anyone out from the community at any point. Initially, I had hoped to find someone with similar values, but there just doesn't seem to be any place where it's appropriate. Not even parties. It's just not worth the effort to figure out how to ask out an EA lady in a way that's considered acceptable. This might sound extreme to some, but I just don't find it worth the mental energy to navigate my way through this, and I just want to be in career-mode (and, at most, friendship-mode) when engaging with other EAs. And, more importantly, there's too much mixing of work and fun, and it just leads to uncomfortable situations and posts like this.
I'm not making a judgement on what others should do, but hopefully whichever way the community goes, it becomes more welcoming for people who want to do good.
Here’s a comment I wrote on LessWrong in order to provide some clarification:
———
So, my difficulty is that my experience in government and my experience in EA-adjacent spaces have totally confused my understanding of the jargon. I'll try to clarify:
- In the context of my government experience, forecasting is explicitly trying to predict what will happen based on past data. It does not fully account for fundamental assumptions that might break due to advances in a field, changes in geopolitics, etc. Forecasts are typically used to inform a single decision; they do not focus on being robust across potential futures or try to identify opportunities we can take to change the future.
- In EA / AGI risk, it seems that people are using "forecasting" to mean something somewhat like foresight, but not really. If you go on Metaculus, they are making long-term forecasts in a superforecaster mindset, but are perhaps expecting their long-term forecasts to be as good as their short-term ones. I don't mean to sound harsh; what they are doing is useful and can still feed into a robust plan for different scenarios. However, I'd say what is mentioned in reports typically leans a bit more into (what I'd consider) foresight territory.
- My hope: instead of only using "forecasts/foresight" to figure out when AGI will happen, we use it to identify risks for the community, potential yellow/red light signals, and golden opportunities where we can effectively implement policies/regulations. In my opinion, using a "strategic foresight" approach enables us to be a lot more prepared for different scenarios (and might even have identified a risk like SBF much sooner).
My understanding of forecasting is that you would optimally want to predict a distribution of outcomes, i.e. the cone but weighted with probabilities. This seems strictly better than predicting the cone without probabilities since probabilities allow you to prioritize between scenarios.
Yes, in the end, we still need to prioritize based on the plausibility of a scenario.
I understand some of the problems you describe, e.g. that people might be missing parts of the distribution when they make predictions and they should spread them wider but I think you can describe these problems entirely within the forecasting language and there is no need to introduce a new term.
Yeah, I care much less about the term/jargon than the approach. In other words, what I'm hoping to see more of is to come up with a set of scenarios and forecasting across the cone of plausibility (weighted by probability, impact, etc) so that we can create a robust plan and identify opportunities that improve our odds of success.
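As a toy illustration of what I mean by weighting a cone of scenarios (every scenario and number below is made up purely for illustration), the prioritization step could look something like this:

```python
# Toy sketch: weight a "cone" of plausible scenarios by probability and impact
# so we can prioritize where robust plans and golden opportunities matter most.
# All scenarios and numbers here are invented for illustration only.
scenarios = [
    {"name": "slow takeoff, heavy regulation", "probability": 0.30, "impact": 4},
    {"name": "fast takeoff, little warning", "probability": 0.10, "impact": 10},
    {"name": "business as usual", "probability": 0.45, "impact": 2},
    {"name": "major governance window opens", "probability": 0.15, "impact": 7},
]

# Rank scenarios by expected importance (probability * impact).
for s in sorted(scenarios, key=lambda s: s["probability"] * s["impact"], reverse=True):
    print(f'{s["name"]}: priority = {s["probability"] * s["impact"]:.2f}')
```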
I think the information you are sharing is useful (some parts less so, I agree with pseudonym), just don't deadname/misgender them. EA is better than that.
I feel like anyone reaching out to Elon could say "making it better for the world" because that's exactly what would resonate with Elon. It's probably what I'd say to get someone on my side and communicate I want to help them change the direction of Twitter and "make it better."
Honestly, I’m happy with this compromise. I want to hear more about what ‘leadership’ is thinking, but I also understand the constraints you all have.
This obviously doesn’t answer the questions people have, but at least communicating this instead of radio silence is very much appreciated. For me at least, it feels like it helps reduce feelings of disconnectedness and makes the situation a little less frustrating.
Personally, I’ve mostly seen people confused and trying to demonstrate willingness to re-evaluate what might have led to these bad outcomes. They may sway too far in one direction, but this only just happened and they are re-assessing their worldview in real time. Some are just asking questions about how decisions were made in the past, so we have more information and can improve things going forward (which might mean doing nothing differently in some instances). My impression is that a lot of the criticism of EA leadership is overblown and most (if not all) were blindsided.
That said, I haven’t really had the impression it’s as bad and widespread as this post makes it seem. Maybe I just haven’t read the same posts/comments and tweets.
I do think that working together so we can land on our feet and continue to help those in need sounds nice, and I hope you’ll still be around, since critical posts like this are obviously needed.
One worry one might have is the following reaction: “I don’t need mental health help, I need my money back! You con artists have ruined my life and now want to give me a pat on the back and tell me it’s going to be ok?”
Then again, I do want us to do something if it makes sense. :(
The following tweet is being shared now: https://twitter.com/autismcapital/status/1590551673721991168?s=46&t=q60fxwumlq0Mq8CpGV3bxQ
This is obviously just a random unverified source, but I think it will be worth reflecting on this deeply once this is all said and done. It feeds directly into how EA’s maximizing behaviour can lead to these outcomes. Whether the above is true or not, it will certainly be painted as such by those who have been critical of EA.
I had 4 coaching calls with her for free after 80,000 Hours directed me to her.
It’s a good project because, you know, doing good is important and we should want to do good better rather than worse. It’s utterly absurd because everyone who has ever wanted to do good has wanted to do good well, and acting as though you and your friends alone are the first to hit upon the idea of trying to do it is the kind of galactic hubris that only subcultures that have metastasized on the internet can really achieve.
This seems wrong to me. Just this week, I went on a date with someone who told me the only reason she volunteers is that it makes her feel good about herself, and she doesn't particularly care much about the impact. And you know what, props to her for admitting something that I expect a lot of other people do as well. I don't think there's something wrong with it, I'm just saying that "everyone who has ever wanted to do good has wanted to do good well" seems wrong to me.
I just scraped the EA Forum for you. Contains metadata too: authors, score, votes, date_published, text (post contents), comments.
Here’s a link: https://drive.google.com/file/d/1XA71s2K4j89_N2x4EbTdVYANJ7X3P4ow/view?usp=drivesdk
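In case it saves you a step, here's a minimal sketch of how you might load it (I'm assuming a CSV export with the columns listed above; adjust if the actual file format differs):

```python
import pandas as pd

# Assumed: the scrape is a CSV with the columns mentioned above
# (authors, score, votes, date_published, text, comments).
# Adjust the filename/format if the Drive export differs.
df = pd.read_csv("ea_forum_scrape.csv")

print(df.columns.tolist())

# Example: the ten highest-scoring posts, with author and date.
df["date_published"] = pd.to_datetime(df["date_published"], errors="coerce")
print(df.nlargest(10, "score")[["authors", "score", "date_published"]])
```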
Good luck.
Note: We just released a big dataset of AI alignment texts. If you’d like to learn more about it, check out our post here: https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai
Great points, here’s my impression:
Meta-point: I am not suggesting we do anything about this or that we start insulting people and losing our temper (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres’ stuff was the worst we can expect. I am still reading Torres’ stuff with an open mind to take away the good criticism (while keeping the entire context in consideration).
Regarding the articles: their way of writing is to tell the general story in a way that makes it obvious they know a lot about EA and had been involved in the past, but then they bend the truth as much as possible so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in their writing, it’s hard not to believe they might be doing this because it gives them plausible deniability: what they're saying is often not “wrong”, but it is bent to the point that the reader ends up inferring things that are false.
To me, in the case of their latest article, you could leave with the impression that Bostrom and MacAskill (as well as the entirety of EA) think that the whole world should stop spending any money on philanthropy that helps anyone in the present (and if you do, only on those who are privileged). The uninformed reader can leave with the impression that EA doesn’t actually care about human lives. The way they write gives them credibility with the uninformed because it’s not just an all-out attack where it is obvious to the reader what their intentions are.
Whatever you want to call it, this does not seem good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.
*This is a response to both of your comments.
Aside from those already mentioned:
The Inside View has a couple of alignment-relevant episodes so far.
One thing that may backfire with the slow rollout of talking to journalists is that people who mean to write about EA in bad faith will be the ones at the top of the search results. If you search something like “ea longtermism”, you might find the bad-faith articles many of us are familiar with. I’m concerned we are setting ourselves up to give people unaware of EA a very bad-faith introduction.
Note: when I say “bad faith” here, it may just be a matter of semantics with how some people see it. I think I might not have the vocabulary to articulate what I mean by “bad faith.” I actually agree with pretty much everything David has said in response to this comment.
Saving for potential future use. Thanks!
Fantastic work. And thank you for transcribing!
If anything, this is a claim that people have been bringing up on Twitter recently: the parallels between EA and religion. It’s certainly something we should be aware of, since even if having “blind faith” can be good in religion, we don’t seem to actually want that within EA. I could explain why I think AI risk is different from the messiah thing, but Rob Miles explains it well here:
Given limited information (but information nonetheless), I think AI risk could potentially lead to serious harm or to none at all, and it’s worth hedging our bets on this cause area (among others). This feels different from choosing to have blind faith in a religion, but I can see why outsiders think this. Though we can be victims of post-rationalization, I think religious folks have reasons to believe in a religion. I think some people might gravitate towards AI risk as a way to feel more meaning in their lives (or something like that), but my impression is that this is not the norm.
At least in my case, it’s like, “damn we have so many serious problems in the world and I want to help with them all, but I can’t. So, I’ll focus on areas of personal fit and hedge my bets even though I’m not so sure about this AI thing and donate what I can to these other serious issues.”
Avast is telling me that the following link is malicious:
Ding's "China's Growing Influence over the Rules of the Digital Road" describes China's approach to influencing technology standards, and suggests some policies the US might adopt. #Policy
Who am I? Until recently, I worked as a data scientist in the NLP space. I'm currently preparing for a new role, but unsure if I want to:
- Work as a machine learning engineer for a few years, then transition to alignment, found a startup/org, or continue working as an ML engineer.
- Or, try to get a role as close to alignment as possible.
When I first approached Yonatan, I told him that my goal was to become "world-class in ML within 3 years" in order to make option 1 work. My plan involved improving my software engineering skills, since that was something I felt I was lacking. I told him my plan for how to improve, and he basically told me I was going about it all wrong. In the end, he said I should seek mentorship ASAP with someone who has an incentive to help me improve my programming skills (via weekly code reviews). I had subconsciously avoided this approach because my experiences with mentorship were less than stellar: I once took a role with the promise that I would be mentored and, in the end, I was the one doing all the mentoring...
Anyway, after a few conversations with Yonatan, it became clear that seeking mentorship would be at least 10X more effective than my initial plan.
Besides helping me change my approach to becoming a better programmer (and everything else in general), our chats have allowed me to change my career approach in a better direction. Yonatan is good at helping you avoid spouting vague, bad arguments for why you want to do x.
I'm still in the middle of the job search process so I will update this comment in a few months once the dust has settled. For now, I need to go, things have changed recently and I need to get in touch with Yonatan for feedback. :)
I highly recommend this service. It is lightyears ahead of a lot of other "advice" I've found online.
I'd be interested in this if I moved to NYC. I'm currently at the very early beginnings of preparing for interviews and I'm not sure where I'll land yet so I won't answer the survey. Definitely a great idea, though. The decently-sized EA community in NYC is one of the reasons it's my top choice for a place to move to.
I just want to say that this course curriculum is amazing, and I really appreciate that you've made it public. I've already gone through about a dozen articles. I'm an ML engineer who wants to learn more about AGI safety, but it's unfortunately not a priority for me at the moment. I will still likely go through the curriculum on my own time, but since I'm focusing on the more technical aspects of building ML models right now, I won't be applying, as I can't strongly commit to the course. Again, I appreciate you making the curriculum public. As I slowly go through it, I might send some questions for clarification along the way. I hope that's ok. Thanks!
Reposting what I wrote on Facebook:
Young low-risk adults doing groceries and other errands for high-risk old adults.
I wonder if it is effective and if there is a way to scale it. I talked to one EA from Italy, and they said a student union is also doing this there. I am looking into how to accomplish this in Canada.
We could potentially fund an app or something so that anyone who wants to volunteer can quickly take part and accept a request.
The request could be taken via telephone for example and then placed in the app.
Or we create a simple process without any apps. Google sheet?
Dealing with superspreaders: it’s crucial to give guidelines and make sure young people are much less likely to catch the virus than old people. I think this is doable.