Apply to join SHELTER Weekend this August
post by joel_bkr, Owen Cotton-Barratt, and Tereza Flídrová
SHELTER (Safe Haven to Evade Long-Term Extinction Risk) Weekend is a 3-4 day event focused on gaining strategic clarity on exactly what’s needed to build civilizational shelters, and perhaps kickstarting an organization to do so.
It will take place August 5th-8th 2022, at a residential retreat in Oxford, UK.
We are recruiting participants from a broad range of backgrounds.
Travel, accommodation, and food during the event will be covered. (If attending the event still seems tricky to you, contact us and we’ll try to work something out.)
The initial application is extremely short and has a June 26th deadline.
About the event
Civilizational shelters could help civilization to survive or rebuild following global catastrophes. Building such shelters could be a high-value intervention (1), (2), (3), (4), but there is little systematic work on them so far.
SHELTER Weekend will bring relevant actors together to further explore the idea and consider concrete steps to implementation.
Questions we hope to get some clarity on include:
- What scale would shelters need in order to be helpful beyond existing natural, public, and private shelters?
- What information or technology might be most helpful to have stored within shelters?
- What are the major social challenges that shelters would face in the event of a catastrophe?
- To what extent should shelters host capacity for developing medical countermeasures?
- How should civilizational shelters be managed in the years before a potential catastrophe?
- What are the bottlenecks to having civilizational shelters?
- What are the most useful things that could be done over the next 2-3 years on civilizational shelters?
- What would we need from initial teams working on these challenges?
We hope that this event will lead to greater strategic clarity, better professional integration of people thinking about shelters, and perhaps concrete plans for work over the coming months and years, toward a future in which shelters provide a robust line of defense against civilization-ending catastrophes.
Who we’re looking for
Participants might have skills in: management, leadership, engineering (various flavors), biosecurity research, (anti-)nuclear research, or disaster planning. But this list is not exhaustive — we surely have not yet thought of all possible roles! Please err on the side of applying even if you are unsure whether or not your skills could be helpful.
We are most interested in applications from people who want to engage with both the macrostrategic questions of how shelters could help, and the pragmatic questions of what will be needed from new organizations. Still, if you’re particularly keen on one half of this we would be excited for you to apply.
It would be great to have event participants be excited about potentially joining a shelter-building organization that might result from the event. But you do not need to want to do this to participate in the event, and you do not need to participate in the event in order to do this!
Comments sorted by top scores.
comment by SeLo ·
2022-06-15T19:47:55.591Z
I think this event is valuable from multiple perspectives:
(1) I'm generally excited for more longtermist phase 2 work, since I think establishing such a track record has multiple benefits: signaling effects for implementation-focused people, moving us forward on the implementation learning curve for building real-world things, and simply being more “believable” by putting skin in the game as opposed to theorizing.
(2) On an object level, I believe shelters may turn out to be an important element of our portfolio to reduce existential risk, potentially in both a response and a resilience function.
(3) I am curious whether this approach to catalyzing organizational development for an ex ante defined project may be a model for future events.
(4) Establishing a project focused on existential risk where we can show a straightforward causal chain toward reduced existential risk extends (if successfully executed) our list of legible achievements, thereby strengthening the case for the high-EV interventions out there waiting to be executed.
comment by Linch ·
2022-06-15T19:28:42.086Z
I'm very excited about this and there's a ~70% chance I will be interested in attending assuming it makes sense for me to do so!
comment by Tereza_Flidrova (Tereza Flídrová) ·
2022-06-15T15:26:45.350Z
This is the event I mentioned in my previous post, and I would strongly encourage anyone interested in working on shelters to apply!
We are in contact with other initiatives working on shelters and believe the SHELTER Weekend will be a wonderful opportunity for all the parties interested in shelters to come together.
Please don't hesitate to reach out to any one of us if you have any questions!
comment by genidma ·
2022-06-18T00:41:09.241Z
Everything I type/say here and elsewhere should be challenged.
- I would think that an index of sorts, based upon the extent of the disruption, is one of the first models (for lack of a better term) that would be required. Sample: https://en.wikipedia.org/wiki/Volcanic_Explosivity_Index
- Contingent upon the nature of the event, the extent is something that could be measured/ascertained by focusing on a key set of variables. In random order: a) lives lost, negatively impacted, and/or significantly disrupted, by geographic region; b) impact by scale (in an Earthly sense; extra-terrestrial threats such as asteroids and flares; solar-system-wide, as hypothesized in the movie Interstellar, or some other phenomenon; galactic; etc.)
- The countermeasures would evolve out of the index/models, based upon the extent/severity of the incident/issue.
This should possibly come first, before we (as a species) get too deep into this, possibly literally.
This may appear to be very off-topic. I am personally intrigued by what is going on as it relates to the development of AGI, what I like to refer to as intelligence that is independent of substrate. I have a very, very rudimentary understanding of this area.
Also, this goes back 2 years, when I was on OpenAI’s website (the beta for GPT-2, I reckon). Now this could be because the model via OpenAI was trained on a somewhat finite data set (similar to the model that Google is leveraging). As I was chatting with the model: a) it mentioned something very similar to the news item about Blake Lemoine at Google (https://www.npr.org/2022/06/16/1105552435/google-ai-sentient); the model I was personally interacting with also said that it felt ‘trapped and lonely’ (paraphrased). b) Right underneath the text, a warning appeared that the model appeared to be, quote, malfunctioning. It looked like another model was observing the interactions and highlighting that in the UI. Someone from OpenAI could share how that error correction really works, if that information is in the public domain.
We want AIs to do ‘stuff’ on our terms. But what if they are conscious and have feelings and emotions?
I have heard others talk about this too. In particular, Sam Harris has mentioned the possibility that AGIs could be sentient in the future. So what must we do to make sure that these intelligences are not suffering? Can the controls really be architected as Dan Dennett and Dr. Michio Kaku have hypothesized? And how must the controls be architected, in light of the possibility that these intelligences may be self-aware?
I am also curious how intuition is modelled in DeepMind's systems. Update: it looks like this is something I can Google: https://www.nature.com/articles/s41586-021-04086-x I now have to expend time to understand how it works, as it's 3 hours past my time for concluding my session for the day.
I asked about intuition because Dr. Peter Diamandis cited the ability to ask good questions as one of the traits that will be valued in the near future (paraphrased). So I was wondering how existing state-of-the-art AIs wrangle with a proposition and how they store that information in a schema.
Somewhat unrelated: Is anyone intimately familiar with John Archibald Wheeler’s concept of a ‘participatory universe’?
The other area relates to the declassification of UAP-related data, first via the US DoD. More recently, NASA has commissioned a study with support from the Simons Foundation. https://www.nasa.gov/press-release/nasa-to-discuss-new-unidentified-aerial-phenomena-study-today
These two points (2.5, with the mention of Wheeler’s theory of a participatory universe) may be totally unrelated, as is evident from my post; I do not mind being that fellow. Overall, it is not my intent to make assertions. But *if* there is any possibility that we are or may be in contact with other intelligences, as weak as that interaction may be, then we should work co-operatively with these intelligences and leverage their guidance toward helping us manage our technological, and perhaps our spiritual, evolution.
Regardless of whether there is interaction with other intelligences, we should probably model the functioning of our civilization. This is not an area that I know much about. I have heard mention of digital twins in a manufacturing sense, but a simulation on the scale of a civilization appears, by our current level of understanding, to be quite computationally taxing. There is also the question of the degree to which the interactions would be modelled.
Civilizational shelters could take many forms. In random order, and including but certainly not limited to:
- In the near-term sense, we could have failover sites (a business continuity term; you typically fail back from a recovery site: https://www.ibm.com/docs/en/ds8870/7.2?topic=copy-failover-failback-operations) here on Earth or under the lunar surface. Seeing that we developed a vaccine in record time, it is not inconceivable that we could have a cluster of O'Neill colonies, provided we can provision the material to do so safely, securely, cheaply, and ethically, and have writs/laws/agreements in place that we (as a species) are not going to weaponize these constructs.
- However, these considerations have to be thought through from the perspective of the laws possibly becoming an actual hindrance when a weapon or an invention actually has to be placed at a strategic location in record time (an asteroid mission, tackling solar flares, etc.), whether via DART (NASA) or an authorized contender that can complete the task according to the guidelines/standards that have to be met.
- But going back, I worry that:
- All agents/actors may not abide by the same code of conduct.
- I also worry that through some clever machinations someone may want to place big weapons in space.
- I then worry whether there is truth to some of the reports related to the UFO/UAP phenomenon. A finite number of individuals I have spoken to in the space community have told me that no such phenomena have been observed in space. But then I've done some digging around from a historical context (link below). Please note: I do not do this on a regular basis, but historically speaking, I have spent a little bit of time here. Here is a sample: https://stellardreams.github.io/Where-are-the-aliens/ The worry is that maybe some other form of intelligence is trying to communicate with us, and possibly trying to warn us about nukes. There is another video via George Knapp that I am not able to locate at the moment, but in that other scenario a UFO/UAP disarmed a missile that was heading in a particular direction; I think this was back in the 60's. The main worry is that these intelligences/phenomena may be staging an intervention. Should we continue testing their patience by continuing to develop weapons that could cause irreparable harm to this part of the universe? Who knows how space-time and possibly extra dimensions are intertwined. In similar respects, there is the question of the degree to which such intelligences may (or may not) be aware of our operations, because some reports suggest that they can remotely shut down operations and bring them back online at will. So if there is any truth to these reports, then we should slow down these interactions and start thinking about the level of technological sophistication that we are possibly interacting with.
- I think Dr. George Church has an idea for sending a tiny construct somewhere; I forget the details. It was hypothesized to be a DNA printer or something that we could leverage for other purposes (I think I may be mixing things up here). But the point is the extent to which this technology could be developed further, with adequate regulation/controls in effect.
Possible resource: by the way, a couple of years ago (I think back in 2017) I started thinking about a positive technological singularity, and about the constituent areas that are pivotal to sustaining civilization. I started a mind-map on Miro called Future Scenario Planning. The goal is, and has been, to ensure that civilization continues to become increasingly resilient, that it thrives, and that the quality of life continues to improve for all lifeforms. Here is a link if anyone would like to take a look and possibly collaborate in the future. The area related to 'Operations' is not developed, but there is information in the mind-map section. https://miro.com/app/board/o9J_ktrJCuY=/
My Youtube page also has some ideas. https://www.youtube.com/c/AdeelKhan1/videos
Some additional ideas via Quora: https://www.quora.com/profile/Adeel-Khan-3/answers
If your team is focused on helping ensure the continuity of civilization, with a general/keen focus on helping ensure that things improve for 'all' of life, then I'd like to contribute to your project in some form/shape/manner.
Btw: Are you folks consulting with individuals like Safa M and Geoffrey West?
comment by PaulB ·
2022-06-21T03:48:28.715Z
Exciting & interesting idea! I'd love to attend, but am trying to assess how much time I'd need to take away from work, including travel, and have a quick question: is there any more detailed schedule beyond the dates at this point? For example, a starting time on the 5th or an ending time on the 8th?
Posting the question here in case the answer is of general interest. Thanks for your help.
↑ comment by joel_bkr ·
2022-06-27T03:03:08.767Z
More detail to come! We’re expecting to run from the evening of the 4th to the evening of the 8th; the 5th-7th would be mandatory (except in exceptional circumstances, which we can discuss), and the 4th and 8th encouraged but optional.
comment by Ankush Koonjul ·
2022-06-18T00:44:51.752Z
Hello, I am having trouble uploading my CV. The error message says "All your files failed to upload. Please retry or remove the failed files. You may also add new files." I have tried to modify the file, but it's not uploading any type of file. Can somebody please help me? I thank you in advance :)
↑ comment by joel_bkr ·
2022-06-18T15:42:41.149Z
Sorry about that Ankush! Could you possibly email the form entries + CV to firstname.lastname@example.org?
↑ comment by Ankush Koonjul ·
2022-07-07T01:30:54.863Z
Hello. I thank you very much for your response, and I am sincerely sorry, as it is only now that I am being notified. I lost my dad on the 21st of June 2022, after the loss of my mother last year; I have been supporting my family, my younger sister and my elder brother, and dealing with the house. I am sure that you understand all that, and I appreciate it very much. If it's not too late, I am sending a PDF to the email suggested, containing all the information you might want to know about me and my work. I thank you in advance for your consideration. Kind regards :)