Posts

I scraped all public "Effective Altruists" Goodreads reading lists 2021-03-23T20:28:30.476Z
Funding essay-prizes as part of pledged donations? 2021-02-03T18:43:03.329Z
What do you think about bets of the form "I bet you will switch to using this product after trying it"? 2020-06-15T09:38:42.068Z
Increasing personal security of at-risk high-impact actors 2020-05-28T14:03:29.725Z

Comments

Comment by meerpirat on Small and Vulnerable · 2021-05-04T14:42:42.241Z · EA · GW

Thanks, I found this moving. I associate stories like yours with the urge to become stronger/smarter/richer so I can make the suffering stop. Oh, and it reminds me of the story in Harry Potter and the Methods of Rationality where Harry almost decides to sacrifice his life to destroy Azkaban.

Comment by meerpirat on Update on Board meeting transparency · 2021-04-26T08:51:37.939Z · EA · GW

Thanks for the update, this sounds pretty reasonable to me. I can well imagine that this will even increase your legibility, as written materials are easier to skim for the information one is looking for.

Comment by meerpirat on If Bill Gates believes all lives are equal, why is he impeding vaccine distribution? · 2021-04-21T07:06:01.132Z · EA · GW

I stumbled on a related IGM poll (a survey of many of the top economists from the US) the other day, and they seem to believe that economic incentives through IP rights are important:

Comment by meerpirat on Concerns with ACE's Recent Behavior · 2021-04-16T13:13:03.357Z · EA · GW

That makes a lot of sense to me, especially the points about how little time this might take and that there is no conflict with preferring the discussion to be public. Thanks!

I might be a little bit less worried about the time delay of the response. I'd be surprised if fewer than, say, 80% of the people who find this very concerning end up also reading the response from ACE. I'd be more worried if this were a case where most people form a quick negative association and don't follow up later when it all turns out to be more or less benign.

Comment by meerpirat on Concerns with ACE's Recent Behavior · 2021-04-16T11:55:05.200Z · EA · GW

Because I'm worried that this post could hurt my future ability to get a job in EAA, I'm choosing to remain anonymous.

I personally would also find it emotionally draining to criticize possible employers and would understand if one decides against contacting them privately. Not saying this happened here, but it's another seemingly valid reason I’d want to keep in mind.

Comment by meerpirat on Concerns with ACE's Recent Behavior · 2021-04-16T11:49:38.456Z · EA · GW

Your question reads a bit like you disapprove of the author posting it without doing this. I agree that people criticizing an org should strongly consider contacting the org before going public with their criticism. But I think there are reasons not to contact an org beforehand besides urgency, e.g. lacking time, or predicting that private communication will not be productive enough to spend the little time we have at our disposal. So I currently think we should approve when people muster the energy to voice honest concerns, even if they don’t completely follow the ideal playbook. What do you, or others, think?

Comment by meerpirat on Status update: Getting money out of politics and into charity · 2021-04-09T22:50:32.316Z · EA · GW

Good point. I suppose I could end up being more optimistic because

  • some politicians might think supporting it will, all in all, still make it more likely for them to win office
  • they might not believe that too many people would take part in this, so they could win relatively cheap virtue points
  • they might just be convinced that this is a great idea and are open to testing it out with voters
  • no idea if true, but I imagine many politicians also don’t have too close relationships with a significant proportion of their (seasonal?) campaign staff, and have enough slack to cut other things if necessary? Or to rely more on volunteers?

Probably it would help if you could find ways for the politicians to reap as much positive public recognition from this as possible, e.g. trying to place things like „Voters of both Richard Roe and Jane Doe donated $30,000 as part of the One America Charity Campaign“ in the local news. Maybe also by letting them recommend a charity they’d like to be associated with.

Another thought: I guess you might face less opposition in areas where campaigning is less professionalized and less connected to the respective party‘s campaign apparatus, which I guess will not like this idea (assuming it exists).

Comment by meerpirat on Status update: Getting money out of politics and into charity · 2021-04-09T14:04:56.981Z · EA · GW

Cool! Maybe you could reach out to politicians who have depolarization as part of their political program, who I expect to more likely want to support/be associated with projects like this.

Comment by meerpirat on Quadratic Payments: A Primer (Vitalik Buterin, 2019) · 2021-04-08T14:48:15.919Z · EA · GW

EA Hannover uses qv for choosing books for our reading club!

An online poll generator for quadratic voting is qv.geek.sg, which wasn’t too easy to find a couple months ago and might be interesting to play around with to get an impression.
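For anyone curious how the mechanism behind such a poll works, here is a minimal sketch (my own toy illustration, with made-up books and credit budgets, not how qv.geek.sg is actually implemented): each voter spends "voice credits" on books, and a book's effective votes are the square root of the credits spent on it, so piling all your credits on one favorite has diminishing returns.

```python
import math

# Each ballot maps a book to the voice credits that voter spends on it.
ballots = [
    {"Scout Mindset": 9, "Eragon": 1},      # sqrt(9)=3 votes, sqrt(1)=1 vote
    {"Scout Mindset": 4, "Precipice": 4},   # 2 votes + 2 votes
    {"Precipice": 16},                      # 4 votes
]

# Effective votes per book are the sum of square roots of credits spent.
tally: dict[str, float] = {}
for ballot in ballots:
    for book, credits in ballot.items():
        tally[book] = tally.get(book, 0.0) + math.sqrt(credits)

winner = max(tally, key=tally.get)
print(winner, tally)
```

Note how the third voter's 16 credits only buy 4 votes, so the broadly liked book can still win over an intensely backed one.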

Comment by meerpirat on The innocent gene · 2021-04-06T08:44:37.949Z · EA · GW

Thanks, I enjoyed reading this! I read The Selfish Gene some years ago and your post made me realize that my mind hasn‘t yet settled on how to think about all this.

One thought that came up was that we might want to distinguish between evolutionary processes and genes? This is related to the saying „Don’t hate the player, hate the game“, only that the players/the genes are not even real agents with intentions, like you argued. And furthermore we maybe shouldn’t even lay blame on evolution, as it’s just a non-agentic dynamic that probably sprang to life randomly at some point.

Comment by meerpirat on How much does performance differ between people? · 2021-04-02T15:37:53.716Z · EA · GW

Thanks, yes, that seems much more relevant. The cases in that paper feel slightly different in that I expect AI and ML to currently be much more "open" fields where I expect orders of magnitude more paths of ideas that can lead towards transformative AI than

  • paths of ideas leading to higher transistor counts on a CPU (hmm, because it's a relatively narrow technology confronting physical extremes?)
  • paths of ideas leading to higher crop yields (because evolution already invested a lot of work in optimizing energy conversion?)
  • paths of ideas leading to decreased mortality of specific diseases (because this is about interventions in extremely complex biochemical pathways that are still not well understood?)

Maybe I could empirically ground my impression of "openness" by looking at the breadth of cited papers at top ML conferences, indicating how highly branched the paths of ideas currently are compared to other fields? And maybe I could look at the diversity of PIs/institutions of the papers that report new state-of-the-art results in prominent benchmarks, which indicates how easy it is to come into the field and have very good new ideas?
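To make that breadth idea a bit more concrete, here is a toy sketch (entirely made-up venue data, just my own illustration of the metric): compute the Shannon entropy over which venues a field's papers cite, where higher entropy would suggest a more "open", highly branched field.

```python
import math
from collections import Counter

# Hypothetical samples of cited venues for two fields (invented for illustration).
cited_venues = {
    "ML":    ["NeurIPS", "ICML", "Nature", "CogSci", "ACL", "CVPR", "arXiv", "ICLR"],
    "chips": ["IEDM", "IEDM", "ISSCC", "IEDM", "ISSCC", "IEDM", "ISSCC", "IEDM"],
}

def venue_entropy(venues: list[str]) -> float:
    """Shannon entropy (bits) of the venue distribution: higher = broader citations."""
    counts = Counter(venues)
    n = len(venues)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

for field, venues in cited_venues.items():
    print(field, round(venue_entropy(venues), 2))
```

With real data one would of course need thousands of citations per field, but the ordering of entropies is the kind of signal the "openness" comparison would rest on.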

Comment by meerpirat on How much does performance differ between people? · 2021-04-02T12:20:26.409Z · EA · GW

Nice, I think developing a deeper understanding here seems pretty useful, especially as I don't think the EA community can just copy the best hiring practices of existing institutions, due to a lack of shared goals (e.g. most big tech firms) or suboptimal hiring practices (e.g. non-profits and most? places in academia).

I'm really interested in the relation between the increasing number of AI researchers and the associated rate of new ideas in AI. I'm not really sure how to think about this yet and would be interested in your (or anybody's) thoughts. Some initial thoughts:

If the distribution of rates of ideas over all people that could do AI research is really heavy-tailed, and the people with the highest rates of ideas would've worked on AI even before the funding started to increase, maybe one would expect less of an increase in the rate of ideas (ignoring that more funding will make those researchers also more productive).

  • my vague intuition here is that the distribution is not extremely heavy-tailed (e.g. the top 1% researchers with the most ideas contribute maybe 10% of all ideas?) and that more funding will capture many AI researchers that will end up landing in the top 10% quantile (e.g. every doubling of AI researchers will replace 2% of the top 10%?)
  • I'm not sure which, if any, distribution in your report the distribution of rates of ideas over all people who can do AI research could relate to. Number of papers written over the whole career might fit best, right? (see table extracted from your report)


 

Share of the total held by the top ...

Quantity                                      20%    10%    1%     0.1%    0.01%
Papers written by scientist (whole career)
[Sinatra et al. 2016]                         39%    24%    4.0%   0.59%   0.083%
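The top-quantile shares quoted above can be played with in simulation. As a toy illustration (my own, not from the report), draw "papers per scientist" from a lognormal distribution and check what share of all papers the most productive scientists account for:

```python
import random

random.seed(0)

# 100k simulated scientists; lognormal is a common "mildly heavy-tailed" choice.
papers = sorted((random.lognormvariate(0, 1.0) for _ in range(100_000)), reverse=True)
total = sum(papers)

# Share of all papers held by the top 20% / 10% / 1% of scientists.
shares = {}
for frac in (0.20, 0.10, 0.01):
    k = int(len(papers) * frac)
    shares[frac] = sum(papers[:k]) / total
    print(f"top {frac:.0%} hold {shares[frac]:.1%} of all papers")
```

Tuning the lognormal's sigma until the simulated shares match the table's row would give a rough feel for how heavy-tailed the idea distribution needs to be.
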

Comment by meerpirat on Announcing "Naming What We Can"! · 2021-04-02T05:51:39.599Z · EA · GW

The Absolutely True Diary of a Part-Time Utilitarian

Or how about...

PS, I love utility.

Comment by meerpirat on Announcing "Naming What We Can"! · 2021-04-01T15:54:07.782Z · EA · GW

Now that the first few learned how to pronounce it:

Schistosomiasis Control Initiative ➔ Schistosomiasis Control Initiative (×3)

Comment by meerpirat on Announcing "Naming What We Can"! · 2021-04-01T15:43:33.685Z · EA · GW

Parents in EA ➔ Raising for Effective Giving

Comment by meerpirat on Announcing "Naming What We Can"! · 2021-04-01T15:35:15.292Z · EA · GW

Center for Long-Term Risk ➔ Stiftung für Effektiven Altruismus

Maybe the real suffering risks we reduced were the good old times EAs could hang out with you in Berlin along the way (sniff)

Comment by meerpirat on Announcing "Naming What We Can"! · 2021-04-01T14:06:31.855Z · EA · GW

High-Impact Athletes ➔ EA Sports for obvious reasons

Comment by meerpirat on [New org] Canning What We Give · 2021-04-01T13:52:11.341Z · EA · GW

Looking for co-founders for a corporate canpaigning org:

Assuming an average person can can a can of leftover food within a minute, if every company would allow each employee to can excess canteen food for only 15 minutes after lunch for a 30 year career, each person can easily can 80,000Cans within their lifetime.

Comment by meerpirat on What are your main reservations about identifying as an effective altruist? · 2021-03-30T12:13:06.652Z · EA · GW

My reservation is around the idea of keeping my identity small, as Jonas suggested in his post. I feel 5/5 a member of the EA community; I’m just worried that prominently giving myself tags like „I am an Effective Altruist“, „Feminist“, „German“, „Man“, „Vegan“ etc. comes with baggage that will constrain my thinking and behavior without many benefits, compared to saying „I am part of the EA community, I come from Germany, my diet is vegan, I care about XYZ“.

Comment by meerpirat on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-26T08:10:32.929Z · EA · GW

I aligned it to the left, good point! :) Putting the n-column on the left would be even better, but even left-aligning the text was not trivial with the Pandas library.
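For reference, here is a minimal sketch of one workaround (a toy title/count table of my own; `to_string` right-aligns values by default, so one approach is to pad the strings yourself before printing):

```python
import pandas as pd

df = pd.DataFrame({"title": ["The Scout Mindset", "Eragon"], "n": [42, 7]})

# Pad every title to the column's max width so the text reads as left-aligned;
# justify="left" on its own only left-justifies the column headers.
width = int(max(df["title"].str.len().max(), len("title")))
out = df.copy()
out["title"] = out["title"].str.ljust(width)

text = out.to_string(index=False, justify="left")
print(text)
```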

Comment by meerpirat on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-26T08:10:02.601Z · EA · GW

Small update: 

I added the reading lists of 18 people to the database, some of whom joined in the last week, and some of whom for some reason didn't yield reading lists in the first run-through. I think this didn't change much, except that one of those 18 people already read The Scout Mindset, and now there's another Eragon book in the Lowest Rated list... 

I also uploaded the code and csv file if anybody else wants to play around with it: https://github.com/MaxRae/EAGoodreads

Glad y'all found this interesting! :) 

ETA: If you want to look at the first version for some reason, it's archived here.

Comment by meerpirat on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-25T07:18:11.920Z · EA · GW

Cool idea! Sent you a message.

Comment by meerpirat on What Makes Outreach to Progressives Hard · 2021-03-18T12:25:07.920Z · EA · GW

I agree with the last point, and I think EA is doing fairly well on the being sympathetic to Matt Yglesias front:

Comment by meerpirat on What Makes Outreach to Progressives Hard · 2021-03-14T09:02:00.774Z · EA · GW

Thanks for writing this, I think this topic is worthy of more discussion.

Of course, this does not consider important tradeoffs, such as the potential for alienating other audiences. This will therefore be most useful to people whose primary audience is progressives.

I wonder how much we should even recommend leaning into the progressive/social justice framing when the audience primarily comes from this ideological bent.

  • I often find talk about privilege unproductive and used in a hostile/shaming kind of way, and I feel mixed about suggesting that this is part of the motivation of EA (which I prefer seeing as something like „we share the desire to help others and improve the world as much as possible“) and about bringing more people with that mindset into EA
  • people that are not from the social justice bent might be especially worth attracting in situations where progressives are the main audience, in order to gain intellectual diversity

If I’d read this testimonial on the local EA website, there’d be a solid chance I‘d have been significantly less interested because it doesn’t connect to my altruistic motivations and (in my head) strongly signals a political ideology.

For me, taking the Giving What We Can pledge was an expression of my commitment to using my class privilege to contributed to a movement towards a more equitable world for current and future generations

I think some points you mention, like highlighting more that aid recipients’ feedback is strongly taken into account, don’t risk turning off non-social justice people while still connecting to their motivation and worries, so maybe I’d wish to see more of that kind.

Comment by meerpirat on Solving the moral cluelessness problem with Bayesian joint probability distributions · 2021-03-13T17:16:10.371Z · EA · GW

While browsing types of uncertainties, I stumbled upon the idea of state space uncertainty and conscious unawareness, which sounds similar to your explanation of cluelessness and which might be another helpful angle for people with a more Bayesian perspective.

There are, in the real world, unforeseen contingencies: eventualities that even the educated decision maker will fail to foresee. For instance, the recent tsunami and subsequent nuclear meltdown in Japan are events that most agents would have omitted from their decision models. If a decision maker is aware of the possibility that they may not be aware of all relevant contingencies—a state that Walker and Dietz (2011) call ‘conscious unawareness’ —then they face state space uncertainty.

https://link.springer.com/article/10.1007/s10670-013-9518-4 

Comment by meerpirat on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-07T14:29:12.349Z · EA · GW

I think most utilitarians typically don't care about the extinction of some species per se, but more about something like how it affects the total amount of good and bad experiences that are experienced. From that perspective, some billion years of continued existence of animals on Earth is probably way less exciting, given that there's one species, humanity, that is probably headed to become or give rise to a spacefaring civilization with the potential to vastly exceed any Utopian imaginations. Additionally, given that animals in nature probably live way less enjoyable lives than most people imagine, I personally don't feel so good about the idea of dragging out the current state of nature for longer than necessary. 

Comment by meerpirat on Making a collection of freely available mental health resources · 2021-03-03T16:03:36.626Z · EA · GW

  • Name: Where Should We Begin Podcast
  • What is it? 
    • relationship therapist Esther Perel anonymously interviews couples and talks them through their problems
  • Why do you like it? 
    • her advice is great and the intimate stories and problems that the couples bring with them are so wide ranging that I feel like I learned a lot
    • plus it's really touching and heart-warming when the couples have small breakthroughs and seem better able to express and enjoy their love for each other
  • Where to start? 
    • the podcast has a listening guide with a couple of suggestions for first episodes

Comment by meerpirat on Fun with +12 OOMs of Compute · 2021-03-01T23:09:58.456Z · EA · GW

Cool angle and thought experiment, makes this all a bit more concrete. My timelines for transformative AI were already close to what you're handwaving at, but I'm really happy to see more thought in this direction and hope that this inspires more people to take soon-ish transformative AI even more seriously. 

From my inside view it feels pretty unsettling to think about the changes I expect to happen in the coming decade or two. I wonder if you think the EA community is too slow to update their strategies here. It feels like what is coming is easily among the most difficult things humanity ever has to get right and we could be doing much more if we all took current TAI forecasts more into account.

Comment by meerpirat on Actually possible: thoughts on Utopia · 2021-02-27T13:40:33.874Z · EA · GW

Thanks! I relate to the last paragraph, it felt motivating and important to think about a positive vision that we’re working for. I think your essay is a really nice introduction for that.

As a side note, I really like Scott Alexander‘s Archipelago of Civilized Communities, a Utopian-ish vision where communities can freely form on their own islands but people are always free to leave. It’s probably only one level above the „my house is made of banana pizza“ brain expansion level of monkey Utopias, but it could be a great first step! https://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/

Comment by meerpirat on Report on Running a Forecasting Tournament at an EA Retreat · 2021-02-21T11:02:01.841Z · EA · GW

Really enjoyed reading your report, I feel very motivated to organize such a tournament at our next local group unconference in Germany and think that your report and resources will make it more likely that I’ll go through with it. What the heck, let’s say 60% that I’ll organize it in the next unconference, conditional on me participating and nobody else bringing it up before me!

Comment by meerpirat on A ranked list of all EA-relevant (audio)books I've read · 2021-02-20T12:25:35.544Z · EA · GW

The EA forum is Serious Business!! Yeah, your thinking here seems pretty reasonable, I also can relate to the felt asymmetry between positive and negative karma. I think I previously noticed current karma points somehow feeding into my upvote decisions and it kinda felt like I don’t approve of it because I thought the ideal would be an independent vote of usefulness or something like that. But I also think that this is not a big factor and it doesn’t have a large impact here.

Comment by meerpirat on A ranked list of all EA-relevant (audio)books I've read · 2021-02-20T10:34:57.432Z · EA · GW

On a meta note, I wonder if it's a bad idea to think in terms of „How much total karma should this comment have?“, instead of treating it like a vote where each person only reacts in terms of how valuable he or she personally found the comment. With the former approach other people might be inclined to use their strong up- or downvotes to counteract this strategy again because they think the vote should represent what „the people“ individually think versus what a single high karma user thinks should be the correct number of points.

Comment by meerpirat on A ranked list of all EA-relevant (audio)books I've read · 2021-02-18T16:57:07.566Z · EA · GW

Another thought that came to mind: as the canonical echo chamber reading list of EA books currently seems to consist of maybe on the order of 50 books, I might be less worried about this, because 50 popsci books are not that many books? This should especially hold for people who read a lot, and who relatively quickly will have to explore outside of the canon. E.g. this seems to be true for Michael already, and after roughly 6 years in EA I have also covered a considerable fraction of the canon and read a bunch outside of it. This is also my impression from following roughly twenty EAs on Goodreads. And for people that don‘t read so much, it could be fine to just read what the busy readers recommend?

Comment by meerpirat on A ranked list of all EA-relevant (audio)books I've read · 2021-02-17T15:48:31.593Z · EA · GW

The worry about EAs reading too much of the same ideas is a good point. I wonder if there are strategies that could help us as a community to explore more literature. For example somebody could scrape the reading lists from members of the EA goodreads group and create an exploration reading list with the books that many people have on their reading list but haven't actually read. Or maybe a reading list with non-fiction books that are suspiciously lacking from EA reading lists.
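A rough sketch of how such an exploration list could be computed from scraped shelf data (hypothetical column names and toy rows of my own, one row per user/book/shelf): find books that sit on many "to-read" shelves but few "read" shelves.

```python
import pandas as pd

# Toy scraped data: (user, book title, shelf).
rows = [
    ("alice", "Seeing Like a State", "to-read"),
    ("bob",   "Seeing Like a State", "to-read"),
    ("carol", "Seeing Like a State", "to-read"),
    ("alice", "Doing Good Better",   "read"),
    ("bob",   "Doing Good Better",   "read"),
    ("carol", "Doing Good Better",   "to-read"),
]
df = pd.DataFrame(rows, columns=["user", "title", "shelf"])

# Count distinct users per book and shelf, then rank by to-read minus read:
# books many want to read but few have read float to the top.
counts = df.pivot_table(index="title", columns="shelf", values="user",
                        aggfunc="nunique", fill_value=0)
counts["gap"] = counts["to-read"] - counts.get("read", 0)
print(counts.sort_values("gap", ascending=False))
```

The same pivot could also produce the "suspiciously lacking" list by starting from an external bestseller list and keeping titles absent from both shelves.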

Comment by meerpirat on A ranked list of all EA-relevant (audio)books I've read · 2021-02-17T11:40:10.512Z · EA · GW

I think I was among the first three votes and upvoted, so there seems to have been one big downvote, or maybe a bug, because when I upvoted it didn't have negative karma and now again it also doesn't (with 4 votes). 

Comment by meerpirat on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-15T11:16:41.368Z · EA · GW

Great idea. What did you think about the idea to somehow streamline a process to share that Google Doc with others who might have something to say? A process that might require relatively little effort would be asking people in those forms "Would you be interested in receiving career plans from other people that are looking for feedback?". That might make it relatively effortless for people from a particular field, e.g. Cognitive Science in my case, to be matched to other people who might have valuable feedback. 

It might be a bit effortful to match people, though I suppose you have information about their general field and that might already suffice? Or you might worry that people will receive unhelpful feedback and that this might reflect badly on you? Though I suppose you could emphasize that the people you'd share the Google Doc with are not vetted at all and are only fellow 80,000 Hours fans who clicked on "I'd be down to look over other people's career plans". 

Comment by meerpirat on How to discuss topics that are emotionally loaded? · 2021-02-12T12:11:25.526Z · EA · GW

Thanks for writing about this, I sometimes find myself in similar situations to your examples and feel unsure how to deal with it best.

I just read Cullen‘s post about psychological harm and thought I‘d mention it here because I think it explains part of why I sometimes experience less patience than usual when it comes to psychological harm that seems partially induced by ideological origins. https://forum.effectivealtruism.org/posts/FpMjQWaNvcPKPuhXQ/blameworthiness-for-avoidable-psychological-harms

Comment by meerpirat on Blameworthiness for Avoidable Psychological Harms · 2021-02-12T11:58:21.965Z · EA · GW

Thanks a lot for this, this feels like an important puzzle piece in a discussion I recently had and part of an intuition that is now more understandable to me.

Comment by meerpirat on Are we actually improving decision-making? · 2021-02-11T20:57:33.007Z · EA · GW

Interesting thought, it seems plausible to me that something like that could in principle become a problem. Some more thoughts that come up:

  • it seems like a rather low-hanging fruit to first connect to as many people as possible who share your goal
  • shouldn't we be able to tell if there are specific groups of people whose perspective might be lacking in EA? I feel like I saw this discussed before about conservatives and people from specific countries like China.
    • you seem to be thinking mostly about certain groups of professionals - I suppose this should be relatively easy to spot, and also I wonder if someone knows of plausible examples of professions whose thinking about the world might be lacking in EA
  • I'm maybe also less worried because EAs generally seem pretty open-minded, willing to explore unrelated communities, and intrigued by people with different opinions
  • I could also imagine that many EAs in the past put in an effort to reach out to other groups of people and were generally disappointed, because the combination of epistemic care and deliberate altruistic ambition seems really rare, and there are many more ways for a community to fool itself if it is not populated by scientifically literate people

Comment by meerpirat on 80,000 Hours one-on-one team plans, plus projects we’d like to see · 2021-02-11T07:17:29.823Z · EA · GW

I can second feeling pretty heavy-hearted after my rejection, and really like the idea of vetting a crowd of volunteers. A similar idea would be to offer rejected people to share the info from their form, plus maybe their most important questions, with people who agreed to maybe take a look, e.g. via the EA Hub, where you could also filter relevant background. Or alternatively into a private group like „AI Safety Career Discussion“. I’m one of the shy people who would probably never do something like that themselves, but if it were an „official“ and recommended thing from 80,000Hours it would feel somehow much less scary.

Comment by meerpirat on Forecasting Newsletter: January 2021 · 2021-02-10T18:25:33.286Z · EA · GW

I like it too, great idea and execution also looks pretty solid! I searched „Germany“ and got two forecasts related to Germany on spot #1 and #2 and then almost 20 seemingly unrelated forecasts only from Hypermind, which might suggest something fishy going on.

Comment by meerpirat on Rob Mather: Against Malaria Foundation — what we do, how we do it, and the challenges · 2021-02-10T15:07:49.412Z · EA · GW

Speaking as the partner of a translator, there appear to be considerably higher rates of underemployment in that business compared to others.

Comment by meerpirat on Exploratory Careers Landscape Survey 2020: Group Organisers · 2021-02-04T23:58:07.417Z · EA · GW

Cool, thanks for shedding light on this! My local chapter is currently also trying to get started with some career support, and we also struggled with figuring out how best to do this. I think we all relate to not feeling knowledgeable or competent enough to give significant career advice ourselves. Instead, we will now host a career co-working session every six weeks. The idea is to give people space and an occasion to work on whatever they think is most useful, with people around who can help out with ideas and feedback.

Comment by meerpirat on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-03T15:06:53.489Z · EA · GW

And yes, thanks, the point about thinking with trendlines in mind is really good.

Maybe those two developments could be relevant:

  • a bigger number of recent ML/CogSci/Computational Neuroscience graduates who academically grew up in times of noticeable AI progress and with much more widespread aspirations to build AGI than the previous generation
  • related to my question about non-academic open-source projects: If there is a certain level of computation necessary to solve interesting general reasoning gridworld problems with new algorithms, then we might unlock a lot of work in the coming years

Comment by meerpirat on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-03T15:04:17.300Z · EA · GW

Thanks! :) I find Grace's paper a little bit unsatisfying. From the outside, fields like SAT, factoring, scheduling, and linear optimization seem only weakly analogous to the fields around developing general thinking capabilities. It seems to me that the former are about hundreds of researchers going very deep into very specific problems and optimizing a ton to produce slightly more elegant and optimal solutions, whereas the latter is more about smart and creative "pioneers" having new insights into how to frame the problem correctly and finding relatively simple new architectures that make a lot of progress.

What would be more informative for me?

  • by the above logic, maybe I would focus more on the progress of younger fields within computer science
  • also maybe there is a way to measure how "random" practitioners perceive the field to be - maybe just asking them how surprised they are by recent breakthroughs is a solid measure of how many other potential breakthroughs are still out there
    • also I'd be interested in solidifying my very rough impression that breakthroughs like transformers or GANs are relatively simple algorithms in comparison with breakthroughs in other areas of computer science
  • evolution's algorithmic progress would maybe also be informative to me, i.e. how much trial and error was roughly invested to make specific jumps
    • e.g. I'm reading Pearl's Book of Why and he makes a tentative claim that counterfactual reasoning is something that appeared at some point, and the first sign of it we can report is the lion-man from roughly 40,000 years ago
    • though of course evolution did not aim at general intelligence, e.g. saying "evolution took hundreds of millions of years to develop an AGI" in this context seems disanalogous
  • how big a fraction of human cognition do we actually need for TAI? E.g. we might save about an order of magnitude by ditching vision and focusing on language?

Comment by meerpirat on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-02T00:14:05.060Z · EA · GW

the observation that AI isn't generating a lot of annual revenue right now

Not sure how relevant, but I saw that Gwern seems to think this comes from a bottleneck of people who can apply AI, not from current AI being insufficient:

But how absurd—to a first approximation, ML/DL has been applied to 𝘯𝘰𝘵𝘩𝘪𝘯𝘨 thus far. We're 𝘵𝘩𝘢𝘵 bottlenecked on coders!

And the lack of coders may rapidly disappear soon-ish, right? At least in Germany, studying ML has been very popular for a couple of years now.

Comment by meerpirat on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T23:58:48.235Z · EA · GW

Thanks, super interesting! In my very premature thinking, the question of algorithmic progress is most load-bearing. My background is in cognitive science and my broad impression is that 

  • human cognition is not *that* crazy complex, 
  • that I wouldn’t be surprised at all if one of the broad architectural ideas I've seen floating around on human cognition could afford "significant" steps towards proper AGI 
    • e.g. how Bayesian inference and Reinforcement Learning may be realized in the predictive coding framework was impressive to me, for example as fleshed out by Steve Byrnes on LessWrong
    • or e.g. rough sketches of different systems that fulfill specific functions, like the further breakdown of System 2 in Stanovich's Rationality and the Reflective Mind 
  • when thinking about how many „significant“ steps or insights we still need until AGI, I think more on the order of less than ten
    • (I've heard the idea of "insight-based forecasting" from a Joscha Bach interview)
  • those insights might not be extremely expensive and, once had, cheap-ish to implement
    • e.g. the GANs story maybe fits this, they're not crazy complicated, not crazy hard to implement, but very powerful

This all feels pretty freewheeling so far. Would be really interested in further thoughts or reading recommendation on algorithmic progress. 

Comment by meerpirat on Possible gaps in the EA community · 2021-02-01T16:02:39.866Z · EA · GW

Could you expand on what you mean by "Maybe we could have „Pledge“ badges"? E.g., where are you envisioning those badges being displayed?

I thought about people's forum accounts. There are also the EA Hub accounts, but I basically never open mine, not sure about others. I'd probably do it similarly to Wikipedia (e.g. here), just having a small icon for the pledge and, when you hover over it, "GivingWhatWeCan member since April 2nd, 2020". I didn't think about other uses, e.g. being helpful for a person deciding on a donation - I like the idea! One worry that comes up is that it could get a bit cluttered. Also, something in me feels a bit awkward about proudly displaying something, like I could become the target of the bullies of my high school for feeling "too cool". The GWWC pledge is already so socially accepted as something cool that I don't feel this in that case.

Comment by meerpirat on Possible gaps in the EA community · 2021-01-31T14:36:06.356Z · EA · GW

Re: Make status easier accessible

One idea that just occurred to me was making it easier to reap status benefits from the GWWC giving pledge, e.g. I feel kind of proud of seeing my name on this huge numbered list and being among the first ten thousand people to sign. Relatedly, subreddits and Wikipedia projects seem to actively use badges of honor to acknowledge things like being a donor, having helped with some task, etc. Maybe we could have „Pledge“ badges.

Another idea: getting access to people one holds in high regard could also be something to think about. One could promote speakers coming to local groups, or generally promote networking within the community more.

Another thought that came up: not being chosen for 80,000 Hours‘ career coaching felt like a symptom of my relatively low value to the community (not saying there isn't room for improvement in how they communicated that - this was years ago). I imagine it feels similar for some others. Maybe having motivated volunteers take up the rejected applicants would be a cheap way to signal „there are people in the community who value you being here and trying to work out an EA career path“?

Comment by meerpirat on Some thoughts on risks from narrow, non-agentic AI · 2021-01-30T16:12:39.625Z · EA · GW

I think [the risk of letting single AI systems control essential products like the internet or electrical grids] is a fairly predictable problem that normal mechanisms will handle, though, especially given widespread mistrust of AI, and skepticism about its robustness.

I was wondering if this neglects the risk of some agents unilaterally using AI systems to control those services, e.g. we might worry about narrow AI finding ways to manipulate stock markets, which (speaking as someone with 0 knowledge) naively doesn‘t seem easily fixed with existing mechanisms. E.g. the flash crash from 2010 seems like evidence for this fragility:

New regulations put in place following the 2010 flash crash[10] proved to be inadequate to protect investors in the August 24, 2015, flash crash — "when the price of many ETFs appeared to come unhinged from their underlying value"[10] — and ETFs were subsequently put under greater scrutiny by regulators and investors.[10] https://en.wikipedia.org/wiki/2010_flash_crash#Overview