Posts

Geopolitical Hazards of Sea-Steading 2021-12-06T02:56:44.334Z
Seeking a Collaboration to Stop Hurricanes? 2021-12-04T12:39:02.615Z
Internalizing Externalities 2021-12-04T07:30:10.898Z
Strategic Risks and Unlikely Benefits 2021-12-04T06:01:12.128Z
Voting Theory has a HOLE 2021-12-04T04:20:41.012Z

Comments

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-14T06:49:26.240Z · EA · GW

Thank you for recognizing that my concern was not addressed. I should mention that I am also not operating from an assumption of 'intrinsically against me' - it's an unusually specific reaction that I've received on this forum, in particular. So, I'm glad that you have spoken up in favor of due consideration. My stomach knots thank you :)

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T22:46:41.955Z · EA · GW

Yes, I understand that funding can let me hire people to do that work - and I don't need funding to free my time. I understand that, if I delay for the sake of doing it alone, then I am responsible for that additional harm. It doesn't make sense for me to run a simulation or lobby by myself; and I've been in the position of hiring people, as well as working with people who are internally motivated. I hoped to find the internally motivated people first - that's why I asked EA for connections, instead of just posting something on a job site.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T22:41:34.642Z · EA · GW

When did you edit your response? You were saying something else, originally...

Yes, I can imagine the world where I respond to the misrepresentations with politeness - I did that for twenty years, and the misrepresentations continued, along with so many other forms of bullying. I have seen the world from that lens, and I learned that it's better for me to stand up to misrepresentations, even if that means the bully doesn't like me.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T22:19:39.763Z · EA · GW

I apologize for lumping your funding-suggestion alongside others' funding-misrepresentation. I see that you are looking for ways to make it possible, and funding is what came to mind. Thank you.

(I am still surprised that funding is continually the first topic, after I specify that the government is the best institution to finance such a project. EA would go bankrupt, if they tried to stop hurricanes...)

And, I understand if people don't consider my proposal promising - I am not demanding that they divert resources, especially funds which are best spent on highest guaranteed impact! Yet, there is a cliquishness in excluding diverse dialogue based upon "social capital/reputation" - I hope you can see that the social graph's connectivity falls apart when we cut those ties.

It's also odd that the only data-point used to evaluate me would be the slice of time immediately after I'd been prodded repeatedly. I wish I could hand you the video-tapes of my life, and let you evaluate me rightly. When you only see me in the midst of defending myself against repeated misrepresentation, you don't see a representative slice of who I am.

Worst of all, no measure of my persona or character is a measure of the worth of a thought. If I am not a good fit for making it happen, then the best I can do is find someone who fits that role well. The idea itself stands or falls on its own merits, and measuring me ignores that. I won't know if it's worth doing until I have a simulation, at least. I don't know how anyone else has certainty on the matter, especially from such a noisy proxy as "perceived tone via text message".

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T21:59:30.893Z · EA · GW

I agree! So, consider the scenario: I stand up and ask "does anyone know someone I might talk to?" and the response I get is "but we don't want to give you money". I correct that misrepresentation, repeatedly, until I suspect that I am being trolled - and my self-defense is used as a reason to ignore me. If I hadn't been poked in the eye repeatedly, those introductions would begin on a pleasant footing.

Core to this problem: each of you is focusing on how I can "get better results by playing nice". I am focusing on "I was misrepresented, and that should be considered first, in the moral calculus." If I roll over every time someone bullies me, then I'll be liked by a whole lot of bullies. That doesn't sound like a win, to me.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T21:55:33.248Z · EA · GW

Thank you for letting me know.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T21:50:05.825Z · EA · GW

I should also add this note: there is a double-standard in communication, here. I was asked repeatedly to 'calm down and speak nicely, because only then will we listen' - meanwhile, the ones who misrepresented were given a pass to lead the listener by the nose along imputations such as "because you posted a lot, no one is going to listen to you." They got that pass, easily, with the header "being brutally honest"/"honestly". Should I just begin all my posts with "just being brutally honest", so that no one uses my tone as a reason to ignore the content of what I say?

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T21:33:29.932Z · EA · GW

[Jeff deleted his response, yet it was still helpful!]

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T21:32:50.470Z · EA · GW

I was responding to Jeff - and thank you, Jeff, for clarifying that downvotes can hide me.

In my response to him, I was expressing my concern that a subset of the Forum has the power to hide my self-defense, so that my correction of their misrepresentation goes unnoticed, while their misrepresentations stand in full view.

Another EA Forum post, just recently ("Bad Omens in Current Community Building"), was trying to bring to the community's attention that, among other things, EA is sometimes perceived as cultish or cliquish. I hope you can all see that, when my correction of others' misrepresentations is downvoted to obscurity, that concern of cliquishness is real.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T21:29:21.707Z · EA · GW

Thank you for the clarification. It's still worrisome that a subset, by downvoting, can ensure that my correction of their misrepresentation goes unnoticed, while their misrepresentation of me stands in full view. There was another post on the Forum, recently, talking about how outsiders worry that EA is a cult or a clique - I hope you can see where that concern is coming from, when my self-defense is downvoted to obscurity, while the misrepresentations stand.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T20:11:42.372Z · EA · GW

My posts where I expressed what had been misrepresented and requested apologies have been deleted. And now, you apologize, after those deletions. I am suspicious. Why is your crew hiding the times I clarified and defended myself?

In truth, you DID talk about "Anthony getting EA funding" when you said "My hot take here is that if you spend $1B of EA money..." So, don't lie to me. You did, in fact, say that I would take funding. I hope your apology is real, and not just saving face by pretending you did nothing to misrepresent me. You did misrepresent me. Can you admit that?

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T20:07:14.583Z · EA · GW

It seems someone is deleting my posts, when I have not said anything in those posts except my own self-defense and what has been done to me. Here it is, again:

I am waiting for an apology from them - I don't know why I should be pleasant after repeatedly being disrespected. That sounds like you're asking me to "be a good little girl, and let them be mean to you, because if you're good enough, then they'll start to be nice." It's not a fault upon me that I should 'be nice until they like me' - they misrepresented me, which is the issue, NOT "lack of engagement".

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T20:05:20.829Z · EA · GW

For some reason, my original response is not showing up. I definitely did NOT make any attack on anyone, during my comment. I don't see why it would be deleted - I request a review of whoever deleted my response. Here it is, again:

"We rich white people would give you so much more respect if you poor black people spoke nicely when you complained." <--- this argument has been used a thousand times, around the world, to get people to cower while you continue to disrespect them. I won't cower; I am right to be upset, and I expect an apology for being misrepresented by them. I am not wrong for requesting this.

Further, again, I am not talking about "lack of engagement" - ONLY your people have made that claim, and I dismiss it each time you've made it. I continue to point out: I have been repeatedly misrepresented. I deserve an apology.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T07:37:11.660Z · EA · GW

"We rich white people would give you so much more respect if you poor black people spoke nicely when you complained." <--- this argument has been used a thousand times, around the world, to get people to cower while you continue to disrespect them. I won't cower; I am right to be upset, and I expect an apology for being misrepresented by them. I am not wrong for requesting this.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T07:32:11.032Z · EA · GW

More to the point: why is my tone the more pressing issue, compared to the fact that I've been repeatedly misrepresented? Your Us/Them priorities are showing.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T07:19:20.139Z · EA · GW

I am waiting for an apology from them - I don't know why I should be pleasant after repeatedly being disrespected. That sounds like you're asking me to "be a good little girl, and let them be mean to you, because if you're good enough, then they'll start to be nice." It's not a fault upon me that I should 'be nice until they like me' - they misrepresented me, which is the issue, NOT "lack of engagement".

Comment by Anthony Repetto on Bad Omens in Current Community Building · 2022-05-13T07:01:06.070Z · EA · GW

"Remember that the marginal value of another HEA is way lower than the marginal value of an actual legitimate criticism of EA nobody else has considered yet."

Thank you for saying it!

Comment by Anthony Repetto on Bad Omens in Current Community Building · 2022-05-13T06:52:08.018Z · EA · GW

"If it’s also sufficiently likely that some people could figure this out and put us on a better path, then it seems really bad that we might be putting off those very people."

Here! Starting when I was twelve, I spent four years finding the best way to benefit others, then developed my skill-set to pursue a career in it... 26 years ago. So, I might qualify as one of those motivated altruists who is turned off by the response they've gotten from EA. I think I'm one of the people you want to listen to carefully:

I don't need funding - I already devote 100% of my time as I choose, and I'm glad to give it all to each cause. I am looking to have the 1-to-2-hour, 2-to-5-person thoughtful conversation, on literally dozens of existing and EA-adjacent topics. I am not looking for a 30-minute networking/elevator-pitch at a conference, because I'm not trying to get hired as a PA. I am not looking for the meandering, noisy, distracted banter at a brief social event. This forum, unfortunately, has presented me with consistent misrepresentations and fallacies, which the commenters refuse to address when I point them out. Slack is similarly incapable of the deeper, thoughtful conversations, with members and outsiders, that foster insight and understanding.

There are numerous ideas, opportunities, and methods going unnoticed because of the barriers placed in front of thoughtful dialogue. That burden should rest upon those EAs who are dismissive of deeper conversation, instead of being the "price I have to pay, to prove myself, before anyone will listen", as I was most recently told on this Forum.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T06:20:16.495Z · EA · GW

It's telling that, among four people responding to my earlier comment, all four repeatedly misrepresented me. None of them have apologized. Instead of addressing the issue I presented, my credibility as a speaker was questioned, as well. Then, one of the people responding came to the defense of the others, none of whom had apologized. This isn't rational discussion; this is a troll cave. I expect better, if you hope to be a healthy organization over the long term.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T06:07:00.625Z · EA · GW

It's also telling that, though I pointed out how you sought to use "repeated posting" as a proxy for my "powerlessness and vulnerability...lack of effectiveness", you made no mention of it afterwards. Judging someone on such shallow evidence is the opposite of skeptical inquiry; it doesn't bode well for your own effectiveness. Am I being hostile when I say that to you, while you are NOT hostile when you say it to me first?

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T05:41:30.894Z · EA · GW

When I am repeatedly misrepresented, and no one who does so responds with an apology, I am supposed to adhere to your standards of dialogue? Why are my standards not respected, first?

If specialist access is gold, then what do I need to pay them? I'll figure out funding separately - who, and how much?

Exploratory work is great - yet, as Jeff was saying in this exact thread's original post, EA needs to be willing to take the leap on risky new ideas. That was also the part of his post that I quoted in my original response. Do you see how those relate to what we are talking about? Perhaps EA should take a risk and connect me to a specialist; if EA thinks that specialist should be paid, I'll work that out next.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T05:13:18.938Z · EA · GW

Just to note for the next person: I am now being called "powerless and vulnerable" because I stand against being misrepresented. I have been misrepresented repeatedly, and so I have responded to each instance - yet the fact that I "repeatedly post (complaints about being misrepresented)... makes sure that not many people take (me) seriously." If your clique repeatedly misrepresents me, and then uses my own self-defense as a reason to justify exclusion, you've earned the title of clique!

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T04:56:54.350Z · EA · GW

I am frustrated that I am repeatedly misrepresented, which is what I said in my responses. I am not frustrated by a lack of "people doing legwork for me". I am specifically asking if anyone has connections to the relevant specialists, so that I can talk to those specialists. I'm not sure why that would be "something I should do on my own" - I'm literally reaching out to gather specialists, which is the first piece of legwork, obviously. Re-inventing the wheel to impress an audience by "going it alone" is actually counter-productive.

I don't need a "fully general engine" - you are misrepresenting my request, as others have. I am asking if anyone knows someone with the relevant background. I am NOT asking for funding, nor a general protocol that addresses every post. Those are strawmen. No one has apologized for these strawmen; they just ghost the conversation.

And, if you are using the fact that I stood up to repeated misrepresentations to "telegraph a sense of powerlessness and sometimes vulnerability", and as a result, I should not be taken seriously, then you are squashing the only means of recourse available to me. When my request is repeatedly misrepresented, and I respond to each instance, I am necessarily "repeatedly posting" - I'm not sure why that noisy proxy for "lack of effectiveness" is a better signal for you than actually reading what I wrote.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T04:17:21.394Z · EA · GW

Quote from above: "I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

Hopefully, you read this comment BEFORE saying something like "But EA shouldn't spend $1B on your idea" or "So you want us to fund this?"

I've received numerous misrepresentations, each insisting that I am somehow asking for EA money. You demonstrate how poorly you pay attention; I'll copy the quote again, in case anyone forgot by now:

"I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

Why am I repeatedly addressing such an obvious and shallow misrepresentation? What is going on with these people?

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T04:06:37.642Z · EA · GW

I quote from above: "I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

I'm NOT asking to use EA money - I repeatedly clarify that, at every opportunity, and yet it is insisted upon multiple times on this forum. No, EA only has $30B, so you can't afford to stop hurricanes, even if you spent your entire budget. I pointed to the potential value of looking for solutions at $1B, which is the actual expected value, NOT the 'value to EA'. I'm not trying to take ANY dollars from your charities. Do you hear that, yet? I don't appreciate being repeatedly misrepresented and strawmanned.

Are you suggesting that I cold-call Sella Nevo? Do you have a way to put me in touch, so that I am not ignored, as I have been here?

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T04:05:24.514Z · EA · GW

I am not looking for funding - I asked if anyone is interested in running those simulations, or knows someone they could put me in touch with.

I quote from my post directly above: "I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it."

I'm appalled that, multiple times - both here and when I posted originally - after stating that I am NOT seeking funding, I am misrepresented as "seeking funding". It's a matter of basic respect to read what I actually wrote.

Included in my hope for connections are the relevant academics - I began my search at the EA Berkeley campus chapter. I know that the government would not listen to me without at least a first-pass simulation of my own; and I know that it is ludicrous for me to invest time into developing a skill that others possess, or re-inventing the wheel by making my own simulation. Those are all significantly more wasteful and ineffectual than asking people if they know anyone in related fields - this is because social networks are dense graphs, so only two or three steps away, I am likely to find an appropriate specialist. Your advice is not appropriate.

At no point did I ask the readers to do the work of simulation, or proposing to the government, on my behalf; you used a strawman against me. I am specifically asking if people can look in their social network for anyone with relevant skill-sets, who I might talk to - those skilled folks are the place where I'm likely to find someone who would actually do work, not here on the forums with a string of dismissive 'help' and fallacies.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T03:58:37.093Z · EA · GW

You espouse a bizarre cliquishness with that claim. Look at the outcome it generates: you fail to hear new information from outside your bubble. Your claim does not become virtuous or correct, nor does it facilitate progress - you're just claiming it to feel right.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T03:57:15.707Z · EA · GW

I'm not asking to use EA money - I repeatedly clarify that, at every opportunity, and yet it is insisted upon multiple times. No, EA only has $30B, so you can't afford to stop hurricanes, even if you spent your entire budget. I pointed to the potential value of looking for solutions at $1B, which is the actual expected value, NOT the 'value to EA'. I don't appreciate being repeatedly misrepresented and strawmanned.

Are you suggesting that I cold-call Sella Nevo? Do you have a way to put me in touch, so that I am not ignored, as I have been here?

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-13T03:54:20.115Z · EA · GW

I am not looking for funding - I asked if anyone is interested in running those simulations, or knows someone they could put me in touch with.

Included in that are the relevant academics - I began my search at the EA Berkeley campus chapter. I know that the government would not listen to me without at least a first-pass simulation of my own; and I know that it is ludicrous for me to invest time into developing a skill that others possess, or re-inventing the wheel by making my own simulation. Those are all significantly more wasteful and ineffectual than asking people if they know anyone in related fields - this is because social networks are dense graphs, so only two or three steps away, I am likely to find an appropriate specialist. Your advice is not appropriate.

At no point did I ask the readers to do the work of simulation, or proposing to the government, on my behalf; you used a strawman against me. I am specifically asking if people can look in their social network for anyone with relevant skill-sets, who I might talk to - those skilled folks are the place where I'm likely to find someone who would actually do work, not here on the forums with a string of dismissive 'help' and fallacies.

Comment by Anthony Repetto on EA and the current funding situation · 2022-05-11T21:37:57.821Z · EA · GW

"it can cause us to go awry if it means we don’t take chances of upside seriously, or when we focus our concern on false positives rather than false negatives"

I've encountered this problem repeatedly in my attempts to speak with EAs here in the East Bay. With one topic, for example, I can napkin the numbers for them: $5 Trillion in real estate is impacted by hurricanes in the US alone - so there's an on-the-order-of-$1-Trillion wealth effect if we can stop hurricanes. A proposal with a 1-in-1,000 chance of doing so would still be worth roughly $1 Billion to check for feasibility - just running a simulation as a sanity check. And yet?
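
A minimal sketch of that napkin math, using only the figures quoted above (they are rough back-of-the-envelope estimates from the comment, not measured data):

```python
# Rough expected-value check for the napkin math above.
# All figures are the comment's own back-of-the-envelope estimates, not data.
wealth_effect = 1e12          # ~$1 Trillion wealth effect if hurricanes stop
chance_of_success = 1 / 1000  # a proposal with only a 1-in-1,000 chance of working

expected_value = wealth_effect * chance_of_success
print(f"Worth spending up to ~${expected_value:,.0f} to check feasibility")
# -> Worth spending up to ~$1,000,000,000 to check feasibility (the "$1 Billion-ish")
```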

EAs are "busy coordinating retreats, so I don't have time to help connect you to someone for a new project." I'm NOT looking for any funding, either - there's a decent chance that the cost of the solution is lower than the Federal Government's increased tax-revenue from hurricane-prevention, so I say that the government should pay for it. They're also the ones who can negotiate with all those countries; a charity would fail, there. As I said in my EA Forum post on the topic, six months ago: I am looking for people to have a conversation. I expect that any particular individual would not be able to help, so I hope that each of the people listening would instead ask their own circle - in just two or three steps, that social network can reach almost anyone. The EAs are consistently reticent, wondering instead if I want funding, or if I am trying to get hired by them.

This is indicative of a pattern:

EAs have formats for dialogue which each serve certain needs well - the EA Forum, Slack, conferences, retreats, and local gatherings. Unfortunately, a vital format is missing from that list: the 1-to-2-hour, 2-to-5-person in-depth discussion, which includes people from outside that particular academic clique of sub-causes. I keep requesting this format, and I am told to "go to the Forum, or Slack", where I have now been waiting months for any response.

Comment by Anthony Repetto on Space governance - problem profile · 2022-05-10T01:47:20.908Z · EA · GW

A stray thought; I'll stumble to the Google Doc with it in a moment - regarding minimal standards of operation for space colonies' constitutions: "If a government does not allow its people to leave, that is what makes it a prison."

Comment by Anthony Repetto on Impactful Forecasting Prize Results and Reflections · 2022-03-31T12:44:34.632Z · EA · GW

There are a few blind spots in this competition, and I am sitting where they intersect:

  1. The Ethical Demand to Stop Bad Predictions - If I have a prediction that "this plane will crash", then it is wrong of me to make money off of that catastrophe, by allowing it to happen. I would be an accomplice. So, if I predict a negative outcome, I will attempt to prevent that outcome. Necessarily, my success would make my prediction false, yet it would be false for the wrong reason.
  2. Who Creates and Chooses the Topics - Metaculus is generating the topics, and then your team is curating them. This leaves out everyone who has a unique prediction. [For example: "Brick-laying robots will cut house construction costs, which will ripple out quickly due to 'comps', the way real estate is valued. That's when real estate investors stop buying; it's a falling knife. As a result, most Americans will be underwater on their mortgage in a decade." Where do I tell people that prediction? How does it pass your curation?]

It's a troublesome sign when the boss of the company says "Here is THE problem - how do you solve it?" Better when the boss asks "What's the problem?" I don't see forecasting succeeding at its aim if it fails to be receptive to unique outsider predictions, or if those are lost in a pile due to curation by tiny groups. And, waiting for bad things to happen just to make a few bucks sounds like becoming a henchman to me. Who wants to avert calamity, instead?

Comment by Anthony Repetto on Will the next global conflict be more like World War I? · 2022-03-27T01:19:21.516Z · EA · GW

A major factor in how future wars play out is the use of robotics and artificial intelligence. I argued that these factors do tilt toward attrition, as you mentioned, in my essays "Robot War Number One" and "Robot Combat: the Logistics of Ballistics", both googleable. The main point is that this attrition will not happen as a static battlefront with masses of people. Instead, it will have depth into the foe's heartland, and it will be dispersed. Yet, attrition all the same.

Comment by Anthony Repetto on 13 Very Different Stances on AGI · 2021-12-29T11:58:52.379Z · EA · GW

[[Addendum: narrow AI now only needs ten examples from a limited training set, in order to generalize outside that distribution... so, designing numerous narrow AI will likely be easy & automated, too, and they will proliferate and diversify the same way arthropods have. Even the language-model Codex can write functioning code for an AI system, so AutoML in general makes narrow AI feasible. I expect most AI should be as dumb as possible without failing often. And never let paperclip-machines learn about missiles!]]

Comment by Anthony Repetto on 13 Very Different Stances on AGI · 2021-12-29T11:10:05.424Z · EA · GW

Oh, I am well aware that most folks are skeptical of "narrow AI is good enough" and "5-10yrs to AGI"! :) I am not bothered by sounding wrong to the majority. [For example, when I wrote in Feb. 2020 (back when the stock market had only begun to dip and Italy hadn't yet locked down) that the coronavirus would cause a supply-chain disruption at the moment we sought to recover & re-open, which would add a few percent to prices and a prolonged lag to the system, everyone else thought we would see a "sharp V recovery in a couple months". I usually sound crazy at first.]

 ...and I meant "possible" in the sense that doing so would be "within the budget of a large institution". Whether they take that gamble or not is where I focus the "playing it safe might be safe" strategy. If narrow AI is good enough, then we aren't flat-footed for avoiding AGI. Promoting narrow AI applications, as a result, diminishes the allure of implementing AGI.

Additionally, I should clarify that I think narrow AI is already starting to "FOOM" a little, in the sense that it is feeding itself gains with less and less of our own creative input. A self-accelerating self-improvement, though narrow AI still has humans-in-the-loop.

These self-discovered improvements will accelerate AGI as well. Numerous processes, from chip layout to the material science of fabrication, and even the discovery of superior algorithms to run on quantum computers, all will see multiples that feed back into the whole program of AGI development, a sort of "distributed FOOM".

Algorithms for intelligence themselves, however, probably have only a 100x or so improvement left, and those gains are likely to be lumpy. Additionally, narrow AI is likely to make enough of those discoveries soon that the work left-over for AGI is much more difficult, preventing the pot from FOOMing-over completely.

[[And, a side-note: we are only now approaching the six-year anniversary of AlphaGo defeating Lee Sedol, demonstrating with move 37 that it could be creative and insightful about a highly intuitive strategy game. This last year, AlphaFold has found the form of every human protein, which we assumed would take a generation. Cerebras wafer-scale chips will be able to handle 120 trillion parameters this next year, which is "brain-scale". I see this progress as a sign that narrow AI will likely do well enough, meaning we are safe if we stick to narrow-only, AND that AGI will be achievable before 2032, so we should try to stop it with urgency.]]

Comment by Anthony Repetto on 13 Very Different Stances on AGI · 2021-12-28T20:55:59.874Z · EA · GW

Another, VERY different stance on AGI:

  • AGI will be possible very soon (5-10yrs): neural-to-symbolic work; out-of-distribution generalization, learning from fewer and fewer examples and with equivariance; Mixtures of Experts; and Hinton's GLOM are all fragments of a general intelligence.
  • AGI alignment is likely impossible. We would be pulling a Boltzmann brain out of a hat, instead of relying upon hundreds of millions of years of common sense and socialization. I'd be more likely to trust an alien, because they had to evolve and maintain a civilization, first.
  • Yet, I posit that narrow AI will likely provide comparable or superior performance, given the same quantity of compute, for almost all tasks. (While the human brain has 100 Trillion synapses, and our neurons are quite complex, we must contrast that with the most recent language AI, which can perform nearly as well as us with only 7 Billion parameters - a "brain" 14,000 times smaller! It does seem that narrow AI is a better use of resources.)
  • Because narrow AI will be generally sufficient, then we would NOT be at a disadvantage if we "play it safe" by NOT pursuing AGI. If "playing it safe is safe", that's a game humanity might win. :)
  • I understand that full compliance for an AGI-ban is unlikely, yet I see our best chances, strategically, from pursuing narrow AI until that domain is tapped-out, while vehemently fighting attempts at AGI. Only after narrow AI sees diminished returns will we have a clear understanding of "what else is there left to do, that makes AGI so important?" My bet is that AGI would not supply a large enough margin compared to narrow AI to be worth the risks, ever.
  • Side-bar Prediction: we are already in a lumpy, decades-long "FOOMishness" where narrow AI are finding improvements for us, accelerating those same narrow AI. Algorithms cannot become infinitely more powerful, however, so we are likely to see diminishing returns in coming years. That will make the job of the first AGI very difficult - because each next leap of intelligence will take vastly more resources and time than the last (especially considering the decades of work by legions of brains to get us this far...).

Comment by Anthony Repetto on My Overview of the AI Alignment Landscape: A Bird’s Eye View · 2021-12-23T11:56:06.839Z · EA · GW

Oh, thank you for showing me his work! As far as I can tell, yes, Comprehensive AI Services seems to be what we are entering already - with GPT-3's Codex writing functioning code a decent percentage of the time, for example! And I agree that limiting AGI would be difficult; I only suppose that it wouldn't hurt us to restrict AGI, assuming that narrow AI does most tasks well. If narrow AI is comparable in performance (given equal compute), then we wouldn't be missing out on much, and a competitor who pursues AGI wouldn't see an overwhelming advantage. Playing it safe might be safe. :)

And, that would be my argument nudging others to avoid AGI, more than a plea founded on the risks by themselves: "Look how good narrow AI is, already - we probably wouldn't see significant increases in performance from AGI, while AGI would put everyone at risk." If AGI seems 'delicious', then it is more likely to be sought. Yet, if narrow AI is darn-good, AGI becomes less tantalizing.

And, for the FOOMing you mentioned in the other thread of replies, one source of algorithmic efficiency is a conversion to a symbolic formalism that accurately models the system. Once the overarching laws are found, modeling can become orders of magnitude faster. [e.g. the distribution of tree sizes in undisturbed forests always follows a power law; testing a pair of points on that curve lets you accurately predict all of them!]
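
A small illustration of that bracketed power-law example, assuming the usual form y = c·x^(-a); the exponent and sample values below are invented for the sketch, not forest data:

```python
import math

# If tree-size abundance follows a power law  y = c * x**(-a),
# two observed points are enough to recover both parameters,
# and hence to predict the whole curve.
def fit_power_law(x1, y1, x2, y2):
    a = -math.log(y1 / y2) / math.log(x1 / x2)
    c = y1 * x1 ** a
    return a, c

# Invented data for illustration: a "true" curve with a = 2, c = 1000.
true_a, true_c = 2.0, 1000.0
curve = lambda x: true_c * x ** (-true_a)

a, c = fit_power_law(2.0, curve(2.0), 20.0, curve(20.0))
print(a, c)              # recovers a ~= 2 and c ~= 1000 from just two points
print(c * 50.0 ** (-a))  # predicts curve(50.0) == 0.4
```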

Yet, such a reduction to symbolic form seems to make the AI's operations much more interpretable, as well as verifiable, and the symbols we observe within its neurons would not be spoofed. So, I also see developments toward that DNN-to-symbolic bridge as key to BOTH a narrow-AI-powered FOOM AND the symbolic rigor and verification to protect us. Narrow AI might be used to uncover the equations we would rather rely upon?

Comment by Anthony Repetto on My Overview of the AI Alignment Landscape: A Bird’s Eye View · 2021-12-18T22:52:47.698Z · EA · GW

Oh, and my apologies for letting questions dangle - I think human intelligence is very limited, in the sense that it is built hyper-redundantly against injuries, and so its architecture must be much larger in order to achieve the same task. The latest upgrade to language models, DeepMind's RETRO architecture, achieves the same performance as GPT-3 (which is to say, it can write convincing poetry) while using only 1/25th the network. GPT-3 was only 1% of a human brain's connectivity, so RETRO is literally 1/2,500th of a human brain, with human-level performance. I think narrow super-intelligences will dominate, being more efficient than AGI or us.

In regards to overall algorithmic efficiency - in only five years we've seen multiple improvements to training and architecture, where what once took a million examples now needs ten, or even generalizes to unseen data. Meanwhile, Lottery Ticket pruning can make a network 10x smaller while boosting performance. There was even a supercomputer simulation that neural networks sped up 2 BILLION-fold... which is insane. I expect more jumps in the math ahead, but I don't think we have many of those leaps left before our intelligence-algorithms are just "as good as it gets". Do you see a FOOM-event capable of 10x, 100x, or larger gains left to be found? I would bet a 100x is waiting, but it might become tricky and take successively more resources, asymptotic...

Comment by Anthony Repetto on My Overview of the AI Alignment Landscape: A Bird’s Eye View · 2021-12-18T22:38:53.085Z · EA · GW

Thank you for diving into this with me :) We might be closer on the meat of the issues than it seems - I sit in the "alignment is exceptionally hard and worthy of consideration" camp, AND I see a nascent FOOM occurring already... yet, I point to narrow superintelligence as the likely mode for profit and success. It seems that narrow AI is already enough to improve itself. (And, the idea that this progress will be lumpy, with diminishing returns sometime soon, is merely my vague forecast informed by general trends of development.) AGI may be attainable at any point X, yet narrow superintelligences may be a better use of those same total resources.

More importantly, if narrow AI could do most of the things we want, that tilts my emphasis toward "try our best to stop AGI until we have a long, sober conversation, having seen what tasks are left undone by narrow AI." This is all predicated on my assumption that "narrow AI can self-iterate and fulfill most tasks competently, at lower risk than AGI, and with fewer resources." You could call me a "narrow-minded FOOMist"?  :)

Comment by Anthony Repetto on My Overview of the AI Alignment Landscape: A Bird’s Eye View · 2021-12-17T02:31:14.650Z · EA · GW

Oh, my apologies for not linking to GLOM and such! Hinton's work toward equivariance is particularly interesting because it allows an object to be recognized under myriad permutations and configurations; the recent use of his style of NN in "Neural Descriptor Fields" is promising - their robot learns to grasp from only ten examples, AND it can grasp even when pose is well-outside the training data - it generalizes!

I strongly suspect that we are already seeing the "FOOM," entirely powered by narrow AI. AGI isn't really a pre-requisite to self-improvement: Google used a narrow AI to lay out the architecture of their AI-specialized chips. My hunch is that these narrow AI will be plenty, yet progress will still lurch. Each new improvement is a harder-fought victory, for a diminishing return. Algorithms can't become infinitely better, yet AI has already made 1,000x leaps in various problem-sets... so I don't expect many more such leaps ahead.

And, in regards to '100x faster brain'... Suppose that an AGI we'd find useful starts at 100 trillion synapses, and for simplicity, we'll call that the 'processing speed' if we run a brain in real-time: "100 trillion synapse-seconds per second". So, if we wanted a brain which was equally competent, yet also running 100x faster, then we would need 100x the computing power, running in parallel to speed operations. That would be 100x more expensive, and if you posit that you had such power on-hand today, then there must have been an earlier date when the amount of compute was only "100 trillion synapse-seconds per second" - enough for a real-time brain, only. You can't jump past that earlier date, when only a real-time brain was feasible. You wouldn't wait until you had 100x compute; your first AGI will be real-time, if not slower. GPT-3 and DALL-E are not 'instantaneous', with inference requiring many seconds. So, I expect the same from the first AGI.
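
A toy version of that timing argument; the compute-doubling period below is an assumption added purely for illustration, not a figure from the comment:

```python
import math

# Toy version of the argument above: running the same brain 100x faster needs
# 100x the compute. Under steady exponential growth in available compute, that
# 100x gap corresponds to a fixed span of years during which a real-time
# (or slower) AGI was already affordable.
doubling_period_years = 1.5   # assumed compute-doubling time, purely illustrative
speedup = 100

years_of_lead = math.log2(speedup) * doubling_period_years
print(f"A real-time AGI is affordable ~{years_of_lead:.1f} years before a {speedup}x-speed one.")
# -> ~10.0 years under these assumptions, which is why the first AGI would
#    plausibly run at real time or slower.
```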

More importantly, to that concept of "faster AGI is worth it" - an AGI that requires 100x more brain than a narrow AI (running at the same speed regardless of what that is) would need to be more than 100x as valuable. I doubt that is what we will find; the AGI won't have magical super-insight compared to narrow AI given the same total compute. And, you could have an AGI that is 1/10th the size, in order to run it 10x faster, but that's unlikely to be useful anywhere except a smartphone. For any given quantity of compute, you'd prefer the half-second-response super-sized brain over the micro-second-response chimp brain. At each of those quantities of compute, you'll be able to run multiple narrow AIs at similar levels of performance to the singular AGI, so those narrow AIs are probably worth more.

As for banning AGI - I have no clue! Hardware isn't really the problem; we're still far from tech which could cheaply supply human-brain-scale AI to the nefarious individual. It'd really be nations doing AGI. I only see some stiff sanctions and inspections-type stuff, a la nuclear, as ever really happening. Deployment would be difficult to verify, especially if narrow AI is really good at most things such that we can't tell them apart. If nations formed a kind of "NATO-for-AGI", declaring publicly to attack any AGI? Only the existing winners would want to play on the side of reducing options for advancement like that, it seems. What do you think?

Comment by Anthony Repetto on Ngo's view on alignment difficulty · 2021-12-17T01:26:40.335Z · EA · GW

An odd window to an unmentioned scenario:

If narrow super-intelligence is competent for almost all the things we would trust AI to do, such that a switch to AGI is expensive, risky, and low-margin, then we wouldn't need to worry about 'missing out' if we ban AGI. From a glance at the decision-tree, it seems better to explore narrow AI fully, so that we can see how much value is left on the table for AGI to yield us.

Additionally, I expect AGI to be possible within the next 5 years. (You can hold me to that prediction!) Looking back five years, and at the recent capabilities toward generalization from few examples, equivariance, as well as formulating & testing symbolic expressions - we might already be close to the necessary algorithms. And, with companies like Cerebras offering orders of magnitude more energy-efficient compute in this next year, then human-brain-scale networks seem to be on the doorstep already.

[[Tangent of Details: GPT-3 and the like are ~1% the network scale of a human brain, and Cerebras' chip will support AI up to 20% larger than such a 'human' connectome. You might be tempted to claim "neurons are more complex", yet the proficiencies demonstrated with GPT-3, using only 1% of our stuff, betray the argument for biological superiority. AI is satisfied with 16-bit precision, for example. Our brains are heavily redundant and jumbly, so outperforming us might take much less effort. Heck, GPT-3-level performance is now possible with a network 25x smaller... "0.04% of a human brain", yet it works as well as us.]]
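
The chained arithmetic in that bracketed tangent, written out (the fractions are the comment's own rough approximations, not precise measurements):

```python
# The bracketed arithmetic above, spelled out. All figures are the comment's
# own rough approximations.
gpt3_fraction_of_brain = 0.01   # GPT-3 ~1% of a human brain's network scale
retro_shrink_factor = 25        # RETRO-class models ~25x smaller than GPT-3
cerebras_headroom = 1.20        # Cerebras supports nets ~20% larger than "human" scale

print(gpt3_fraction_of_brain / retro_shrink_factor)  # 0.0004 -> the "0.04% of a human brain"
print(cerebras_headroom > 1.0)                       # True -> hardware past "human connectome" scale
```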

So, narrow AI that uses 1/100th the compute can usually do the task fine. GPT-3 was writing convincing poetry. If someone can choose between a single AGI or a hundred narrow AIs, they'll probably choose the latter. It would let you do 100x more stuff per second, and swapping between the networks loaded in memory would still allow you to utilize myriad task-specific AI. Those narrow AI will be easier to train AND verify, as well.

Let's ban AGI, because I don't think it'd help much, anyway!

Comment by Anthony Repetto on My Overview of the AI Alignment Landscape: A Bird’s Eye View · 2021-12-16T06:07:58.873Z · EA · GW

My odd angle on your Key Considerations:

 - Prosaic AGI: Considering Geoff Hinton's GLOM, and implementations of equivariant capsules (which recently generalized to out-of-distribution grasps after only ten demonstrations!), as well as Numenta's Sparse Representations and the Mixture of Experts models which Jeff Dean seems to support in Google's Pathways speech... it DOES seem like all the important components for a sort of general intelligence are in place. We even have networks extracting symbolic logic and constraints from a few examples. The barriers to composability, analogy, and equivariance don't seem to be that high, and once those are managed I don't see many other hindrances to AGI.

 - Sharpness: Improvements in neural networks took years, from the effort of thousands of the best brains; we're likely to have a SLOW take-off, unless the first AGI is magically thousands of times faster than us, too. (If so, why didn't we build a real-time AGI when we had 1/1,000th the processing power?) And, each new improvement is likely to be more difficult to achieve than the last, to such an extent that AGI will hit a maximum - some "best algorithm". That limit to algorithms necessitates a slowing rate of improvement, and we're likely to already be close to that peak. (Narrow AI has already seen multiple 100x and 1,000x gains in performance characteristics, and that can't go on forever.)

 - Timeline: With the next round of AI-specialized chips due in 2022 (Cerebras has a single chip the size of thousands of normal chips, with memory embedded throughout to avoid the von Neumann Bottleneck), we'll see a 100x boost to energy-efficiency, which was the real barrier to human-scale AI. Given that the latest AIs are ~1% of a human brain, a 100x boost puts AI within striking distance of humans this next year! I expect AGI to be achievable within 5 years... just look at where neural networks were five years ago.

 - Hardness: I suspect that AGI alignment will be forever hard. Like an alien intelligence, I don't see how we can ever really trust it. Yet, I also suspect that narrow super-intelligences will provide us with MOST of the utility that could have been gained from AGI, and those narrow AIs will give us those gains earlier, cheaper, and with greater explainability and safety. I would be happy banning AGI until narrow AI is tapped out and we've had a sober conversation about the remaining benefits of AGI. If narrow AI turns out to do almost everything we needed, then we can ban AGI without risk or loss. We won't know if we really even need AGI until we see the limits of narrow AI - and we are nowhere near those limits, yet!

Comment by Anthony Repetto on Open Thread: Spring 2022 · 2021-12-14T23:52:51.389Z · EA · GW

Hello! I am slowly seeping into the Forum floorboards, dripping down the comments section, leaving meandering mumblings along an electronic thread. Most of my thoughts are obscure and dubiously specific. Expect errors; I do. And, I value dialogue not for compromise, but to send feelers out in all directions of the design-space. Those lateral extremes bind the constraints of good ideas, found only after pondering a few dozen flops! I'm glad to turn them around, to find any lucky inspirations. Most domains are a straight path up my alley; I follow specific problems into each arena, in turn.

Comment by Anthony Repetto on Creating Individual Connections via the Forum · 2021-12-14T21:56:07.121Z · EA · GW

Oh, I like that level of specificity - a "contact via email" as opposed to "zoom-compatible", for example? I suppose the biggest determinant of which method to use would be the number of participants; video conference is the default for a diverse web of dialogue, while email gleans slow thinking well in one-on-one exchanges. And, most ideas seem to progress from those pairwise dialogues early in development toward larger congregations later in their cycle. So, each new idea might find value at different points along both sides of the "email/zoom" distinction? I'm sure there are other distinctions which might help, too!

Comment by Anthony Repetto on Internalizing Externalities · 2021-12-14T21:48:26.188Z · EA · GW

Ah, now that I've started looking further - an assessment of those Social Impact Bonds, here. They note “Using a single outcome to define success may miss a range of other benefits that might result from the program—benefits that also have real value but will go unmeasured” (Berlin, 2016), which is in line with what I'd said elsewhere on the idea: whatever we don't measure tends to bite us in the butt. (That includes which groups we listen to!)

The report also mentions: “By design, nearly all of the early SIBs were premised on government-budget savings. Indeed, in those deals, payments to investors depended on those savings.” This would be a huge hindrance, with the only evaluated benefits being 'costs to gov't avoided'. Only by including as much of the real externality as is feasibly measurable could we hope to incentivize the right solutions.

So, though SIBs are essentially the same mechanism I mention, they do seem to be falling short for reasons I'd expected and planned around. I propose a singular legislative document, setting taxes to match whatever the measured benefits happen to be, with a devoted branch of the executive determining externalities using transparent metrics and statistical safeguards. Investors might also feel that "this is like a government bond" if we give them a stronger institutional commitment. One-off policy goals find their funding pulled regularly, in contrast, which would spoil the potential of the investment. I'd guess I'm SIB-adjacent?

Comment by Anthony Repetto on Internalizing Externalities · 2021-12-14T21:17:26.651Z · EA · GW

Thank you VERY much for bringing this to my attention!

And, I would say this is almost exactly what I had in mind! (The "investor" I referred to is merely the role; any normal person could throw a dollar in the pot, becoming an investor; and any community could propose benefit-plans.) If those "government priorities & funding" were enshrined as a mandate tied to regularly updated public concerns, funded with a singular tax rate that is raised or lowered to match the total quantity of benefits, then I couldn't tell the difference between our plans.

I see vast change becoming possible, when you can earn a competitive return from public good, funneled through taxation for efficiency and fairness' sake. It'd bring a few orders of magnitude more resources to bear on our troubles. (And, if we manage to internalize most externalities, then pricing is a decent representation of real cost, which would provide systemic gains to efficiency.)
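
A heavily simplified, hypothetical sketch of the mechanism as described in this comment - a single tax rate raised or lowered to match the total measured benefits, with investors in a benefit-plan paid in proportion to its measured externality. All names, rates, and amounts below are invented for illustration:

```python
# Hypothetical sketch of the payout mechanism described above.
# All names and numbers are invented for illustration only.
measured_benefits = {"flood_prevention": 40e6, "literacy_program": 10e6}  # $ of measured externality
payout_share = 0.5   # fraction of measured benefit returned to each plan's investors
taxable_base = 1e9   # whatever base the single tax is levied on

total_payout = payout_share * sum(measured_benefits.values())
tax_rate = total_payout / taxable_base   # the rate rises or falls to match total benefits
print(f"tax rate this period: {tax_rate:.2%}")   # 2.50% under these made-up numbers

investors = {"flood_prevention": {"alice": 0.8, "bob": 0.2}}  # each investor's stake in a plan
for plan, stakes in investors.items():
    for name, stake in stakes.items():
        print(name, stake * payout_share * measured_benefits[plan])
# prints alice 16000000.0 and bob 4000000.0
```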

Comment by Anthony Repetto on Seeking a Collaboration to Stop Hurricanes? · 2021-12-14T21:06:13.783Z · EA · GW

Oh, beautiful! Thank you :) There's so much depth of water to work with, that might easily diffuse a few degrees, which is all we need.

Comment by Anthony Repetto on Seeking a Collaboration to Stop Hurricanes? · 2021-12-07T23:10:52.495Z · EA · GW

Oh, I only expect that the water spouts could be activated once the sun had accumulated enough over-heated high-humidity air within the tarp-layers... sometime late in the afternoon. Yet, the water spout removes more of the surface humidity than would convect otherwise - this allows further evaporation and cooling of surface waters. If that effect is strong enough, over a large area per water spout, then it might weaken hurricanes when they pass.

I don't expect the water spouts to carry most of their moisture high into the air, as adiabatic cooling will condense the majority of it quickly. Yet, that plume would still leave moisture high enough for mixing, and it would be hot and humid, pushing higher. If that can increase cloud cover without a thousand airplanes dumping chemicals, that sort of geo-engineering might be an easier pitch to the public & government.

[Note: The key difference between the water spout and natural convection is that a vortex will sustain itself at a higher rate of flow, fueled faster by the thermal gradient. My hope is that this would increase surface evaporation enough to cool waters, weakening the storm. Clouds would be nice, however much we can get; I just expect evaporation to play a larger role.]

Comment by Anthony Repetto on Seeking a Collaboration to Stop Hurricanes? · 2021-12-07T22:59:26.017Z · EA · GW

Thank you for the critique! I'll tone down my emphases - my own impulse would have been to color-code with highlighters and side-bars, but I see that's not what most people want, here :)

And thank you also for calling more attention to the problem - even if water spouts don't have the muscle for it, I'm one to keep looking. A terrifying option I left behind, which still might inspire something by way of contrast: inducing the hurricane itself, further east in the Atlantic, and just trying to steer it over water.

Vortices hold immense energy, yet they are relatively low-energy to nudge (research on plasmonic vortices relies upon that high ratio of held-energy to nudge-energy). Though, I doubt solutions in that vein would get as much buy-in as water spout tarps might, especially because water spouts would be a redundant system of many identical parts, instead of relying upon a single rudder for a storm.

If you have inspirations, however unusual, I am glad to hear it!

Comment by Anthony Repetto on Voting Theory has a HOLE · 2021-12-05T23:51:48.510Z · EA · GW

I can point you to where I did those things...

1] "State the exact problem setting you are addressing,"

 - "There is an unexplored domain, for which we have not made any definitive conclusions." I then hypothesize how we might gain some certainty regarding the enlarged class of voting algorithms, though I am likely wrong!     [at the top, in epistemic status]

2] "State Gibbard's theorem, and"

 - Gibbard's Theorem proved that no voting method can avoid strategic voting (unless it is dictatorial or two-choices only)     [In the TL;DR at the top]

 - He restricted his analysis to all "strategies which can be expressed as a preference n-tuple"... a ranked list.     [Second sentence of the first paragraph under "Gibbard's Theorem" header, at the beginning of the body of the post]

3] "Show how exactly machine learning has solutions for that problem."

 - This is proof by existence that "An algorithm CAN function properly when fed latent-space vectors, DESPITE that algorithm failing when given ONLY a ranked-list." So, the claim of Voting Theorists that "if ranked-lists fail, then everything must fail" is categorically false.     [The third paragraph of the "Gibbard's Theorem" section, first sentence]

4] "Rather, it applies for any situation where each participant has to choose from some set of personal actions, and there's some mechanism that translates every possible combination of actions to a social choice (of one result from a set)."

No, specifically, Gibbard frames all those choices as a particular data-type: "the domain of the function g consisting of all n-tuples...", (p.589) and he presumed that such a data-type would be expressive enough to find any voting algorithm that would be non-strategic, if any such an algorithm could exist. By restricting himself to that data-type, he missed the proof by existence I mentioned above.
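
A hypothetical illustration of the data-type distinction being drawn here: a ranked-list ballot (the n-tuple Gibbard's framing assumes) versus a richer latent-space-vector ballot. The type names and numbers are invented for illustration; this sketches the representational difference only, not any particular voting rule:

```python
from typing import List, Tuple

# The data type Gibbard's framing assumes: each voter submits an ordering (an n-tuple).
RankedBallot = Tuple[str, ...]      # e.g. a preference order over candidates A, B, C

# The richer data type the post argues was excluded: each voter submits a point in a
# latent space describing their preferences, which an aggregation rule consumes directly.
LatentBallot = List[float]          # e.g. an embedding of the voter's preferences

ranked_ballot: RankedBallot = ("A", "C", "B")
latent_ballot: LatentBallot = [0.7, -0.2, 1.3]   # invented numbers, illustration only

# A ranked list can always be derived from a latent ballot (e.g. by scoring candidates),
# but the reverse direction loses information -- the gap the comment says Gibbard's
# n-tuple framing never examined.
```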

5] "that doesn't mean it's the end of the world"

At no point did I claim this was an existential risk - neither is shrimp welfare. I'm not sure what point you're trying to make with this comment. At the bottom of my post, in the section titled "Purpose!", I outline the value statement: "considering that governments' budgets consume a large fraction of the global economy, there are likely trillions of dollars which could be better-allocated. That's the value-statement, justifying an earnest effort to find such a voting algorithm OR prove that none can exist in this case, as well."

I'm not sure why I was able to answer all your questions with only quotes from my post. Did I clump the thoughts into paragraphs in a way that caused all of them to be missed?