Posts

Germans' Opinions on Translations of "longtermism": Survey Results 2022-06-27T15:46:04.313Z
Contest: 250€ for translation of "longtermism" to German 2022-06-01T19:59:38.888Z
Which texts do we need in non-English languages? 2022-04-11T16:59:54.318Z

Comments

Comment by Konstantin Pilz on The first AGI will be a buggy mess · 2022-08-02T19:40:46.922Z · EA · GW

Agree that it depends a lot on the training procedure. However, I think that given high situational awareness, we should expect the AI to know its shortcomings very well. 

So I agree that it won't be able to do a backflip on the first try. But it will know that it would likely fail, and thus won't rely on plans that require backflips; or, if it needs backflips, it will find a way of learning them without arousing suspicion (e.g., by manipulating a human into training it to do backflips).

I think overthrowing humanity is certainly hard. But it still seems possible for a patient AGI that slowly accumulates wealth and power by exploiting human conflicts, getting involved in crucial economic processes, and potentially gaining control of military communication systems using deepfakes and the wealth and power it has accumulated. (And all of this can be done just by interacting with a computer interface, as in Cotra's example.) It's also fairly likely that there are exploits in the way humans work that we are not aware of, which the AGI would learn from being trained on vast amounts of data, making its task even easier.

So overall, I agree the AGI will have bugs, but it will also know it likely has bugs and thus will be very careful with any attempts at overthrowing humanity.

Comment by Konstantin Pilz on The first AGI will be a buggy mess · 2022-08-01T07:06:09.736Z · EA · GW

Interesting perspective. Though, leaning on Cotra's recent post: if the first AGI is developed through iterations of reinforcement learning in different domains, it seems likely that it will develop a rather accurate view of the world, as that will give the highest rewards. This means the AGI will have high situational awareness, i.e., it will know that it's an AGI, and it will very likely know about human biases. I thus think it will also be aware that it contains mental bugs itself and may start actively trying to fix them (since that will be reinforced, as it gives higher rewards in the longer run).
I thus expect it to contain a surprisingly low number of very general bugs, such as weird ways of thinking or false assumptions in its worldview.
That's why I believe the first AGI will already be very capable, and smart enough to hide for a long time until it strikes and overthrows its owners.

Comment by Konstantin Pilz on EAs should use Signal instead of Facebook Messenger · 2022-07-21T10:31:50.972Z · EA · GW

If you seriously think switching to Notion would improve the productivity of some orgs by 10%, you should write this up as fast as possible and convince them to do so!

Comment by Konstantin Pilz on EA for dumb people? · 2022-07-18T17:14:38.041Z · EA · GW

I honestly don't see why.
I think I'm well below 130, and still, 80k advised me. The texts they write about why AI might literally kill all of us, and what I could do to prevent that, are not only relevant for Oxford graduates but also for me, someone who just attended an average German university. I think everyone can contribute to the world's most pressing problems. What's needed is not intelligence but ambition and open-mindedness. EA is not just math geniuses pondering abstract problems; it's hundreds of people running the everyday work of organizations, coming up with new approaches to community building, becoming politically active to promote animal welfare, or earning money to donate to the most important causes. None of these require an above-average IQ.

Comment by Konstantin Pilz on Confused about "making people happy" vs. "making happy people" · 2022-07-16T20:00:09.618Z · EA · GW

I think what I ultimately care about is experiences had by a person. To me it seems unintuitive that it matters whether the person having that experience existed at the time of my decision or not.

So I want a world with as few negative experiences and as many positive experiences as possible. Having more people around is as legitimate a way of achieving that as making existing people happier.

(I think personal identity over time is a pretty confused concept; see, e.g., the teleportation paradox. That's why I think the distinction between "existing" and "not-yet-existing" people is also pretty confused.)

Comment by Konstantin Pilz on Do EA folks want AGI at all? · 2022-07-16T09:07:47.861Z · EA · GW

An unaligned AGI is very likely to disempower humanity irreversibly or kill all humans.
An aligned AGI can be positive, barring accidents, misuse, and coordination problems if several actors develop it.
I think most EAs would like to see an aligned AGI that solves almost all of our problems; it just seems incredibly hard to get there.

Comment by Konstantin Pilz on My local EA group has an unfriendly and impersonal vibe (via r/EffectiveAltruism) · 2022-07-16T08:55:46.096Z · EA · GW

It's of course hard to say much without further information, e.g. demographics, cause areas, or the types of events. However, I think there is a big risk of the local EA group being identical to a friend group, especially in emerging locations. This can be very alienating to new people, and I think it might in some cases really be net-negative, because first contact with EA is super important.
Some steps to prevent this:


- If your group = you and some close friends: consider not calling yourself an EA group but presenting yourselves as "just some friends all interested in EA"; look for other EA-interested people you know less well and start a group together with them. Be aware of founder effects. (If you start out as only computer scientists, it's likely computer scientists will be most attracted to your group.)
- Try to actively increase diversity in the group by e.g. supporting people with no friends in the group or doing outreach outside of your normal social network
- If your group only consists of men: Keep in mind that it can be very difficult for women to fit into such a group. Pay attention to how much the conversation is dominated by men. 
- Get (anonymous) feedback from irregular attendees. Ask them if they felt welcomed and what you could improve. Ask them if they would like to run a meetup and organize it their way.
- Don't treat official EA meetups as hangouts with friends. Focus on EA topics rather than conversations about personal stuff.

Comment by Konstantin Pilz on Germans' Opinions on Translations of "longtermism": Survey Results · 2022-07-07T20:11:50.919Z · EA · GW

Interesting rec. Intuitively, "Zukunftsschutz" sounds more actionable/virtuous, while "Zukunftssicherung" sounds somewhat passive.

Comment by Konstantin Pilz on There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe · 2022-07-05T20:31:31.602Z · EA · GW

I think you present some really important ideas here but it's hard to engage with them properly. You could change that by increasing the reasoning transparency of your claims.


If you have the energy, try stating the risks you are describing more clearly, and also say how likely you think they are, e.g. "human infertility may be an x-risk under some assumptions, even though it seems unlikely on others" or "I think there is a ~5% chance biodiversity loss will be an existential risk".
This way we may be able to have a more fruitful discussion.

Comment by Konstantin Pilz on There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe · 2022-07-05T20:21:13.078Z · EA · GW

Sorry for cherry-picking there. Seems like the insecticides really do have unintended consequences.

I am still skeptical of the size of the effect though, since Holden Karnofsky says here that fishing with the nets seems pretty rare.

I think my point still stands though. While there will be some environmental damage from the nets, it's very unlikely to outweigh the human lives saved.

I think the strongest point here is that it's not at all clear whether the organisms in the streams that may be killed by the insecticides have net positive lives. See "How good is the life of an insect"

In any case, I encourage you to conduct a detailed investigation here to prove your point. I think there is a ~5% probability I would update towards malaria nets being a lot less good than I thought.

Comment by Konstantin Pilz on What is the top concept that all EAs should understand? · 2022-07-05T14:17:52.994Z · EA · GW

Scope insensitivity

Comment by Konstantin Pilz on Germans' Opinions on Translations of "longtermism": Survey Results · 2022-06-28T14:00:57.686Z · EA · GW

Re (1): I agree that it would be good to have more data. However, I think the evidence is strong enough to conclude that framing it as "Longtermism" is likely to be accepted by most people. Furthermore, the fact that "Longtermism" is already being used by the media means there should at least be some content with this framing, as an answer to those critical reviews.

I don't have the time to conduct a bigger survey, especially not for more difficult groups that have a higher bar for being surveyed. Still, I encourage others to do so, and I can offer funding for a project like that as well.

I may change my mind if you a) have some clear downsides in mind regarding the use of "Longtermism", or b) think your framing would yield very different results.

Re (3): I think the contest we had and this post are the best opportunities for German EAs to give their opinions on the words, and I encourage more people to do so.

Comment by Konstantin Pilz on Germans' Opinions on Translations of "longtermism": Survey Results · 2022-06-27T21:13:35.797Z · EA · GW

I think both EA and longtermism demand a lot of critical thinking, and we should thus make sure the movement continues to attract analytical people in the future.

Though I also agree that EA probably lacks many less analytical disciplines such as marketing.

What kinds of people exactly do you have in mind that the community needs more of?

Comment by Konstantin Pilz on Germans' Opinions on Translations of "longtermism": Survey Results · 2022-06-27T21:09:15.626Z · EA · GW

Agree with these points.

I think we might do well with the framing in outreach projects if we were, e.g., aiming for a certain policy change such as pandemic preparedness, but less so for outreach with the goal of getting like-minded people on board.

Comment by Konstantin Pilz on Germans' Opinions on Translations of "longtermism": Survey Results · 2022-06-27T21:03:39.442Z · EA · GW

Very good point! Thanks for looking at it so thoroughly. I agree that I should put less weight on it based on this. To be honest, I realize I might have been biased in favor of "Zukunftsschutz" and might have had that in the back of my mind while creating the survey.

Comment by Konstantin Pilz on "Two-factor" voting ("two dimensional": karma, agreement) for EA forum? · 2022-06-25T12:43:25.095Z · EA · GW

Seems useful, especially for critical posts. I may want to upvote them to show my appreciation and have more people read them, while still disagreeing with, e.g., the conclusion they draw.

Comment by Konstantin Pilz on US Policy Careers Speaker Series - Summer 2022 · 2022-06-24T15:57:06.186Z · EA · GW

Sounds great! However, these times are very difficult to attend from Europe. Are the talks recorded?

Comment by Konstantin Pilz on Contest: 250€ for translation of "longtermism" to German · 2022-06-14T20:41:24.622Z · EA · GW

Thanks, everyone!

I've selected the following words as most promising and am currently running a survey among the general public to evaluate which ones sound best to people unfamiliar with longtermism.

  • Zukunftsschutz
  • Zukunftismus
  • Langzeitismus
  • Langfristdenken
  • Ganzzeitdenken
  • Longtermismus
  • Longtermism

Comment by Konstantin Pilz on Contest: 250€ for translation of "longtermism" to German · 2022-06-09T13:56:38.437Z · EA · GW

I agree. I think it's interesting that the field of "Zukunftsethik" exists, but I wouldn't use the term as a name for a movement.

Comment by Konstantin Pilz on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T21:10:18.079Z · EA · GW

I think these are all valuable, but not much more valuable in a world with short timelines. I wanted to express that I am not sure how we should change our approach in a world with short timelines. So I think these ideas are net positive, but I'm uncertain whether they warrant much of an update.

Comment by Konstantin Pilz on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T13:02:56.062Z · EA · GW

I agree that AGI timelines may be very short; even Holden Karnofsky assigns a 10% probability to AGI in the next 15 years. I think at this time everyone should at least think about what they would do if they knew for certain that AGI was coming in the next 15 years, and then do at least 10% of that (if not more, since in a world where AGI comes soon you have a lot more impact, as there are fewer EAs around). However, I don't really see what to do about it yet. Focusing outreach on groups that are more likely to start working on AI safety makes sense. Focusing outreach on circles of ML researchers makes sense. Encouraging EAs currently working in other areas to go work in alignment or AI governance makes sense. Curious what others think.

Comment by Konstantin Pilz on My list of effective altruism ideas that seem to be underexplored · 2022-06-02T15:51:11.726Z · EA · GW

Thanks for the explanation!

I agree that it is great to do something for people that they will be thankful for later. But newly created people seem just as good for this, and if you care a lot about preferences, you could create them in such a way that they will be very thankful and their mere creation is fulfilling for them. So I still don't see the value of resurrection vs. new people. I think my main problem with preference utilitarianism is that you can't say whether it's good or bad to create preferences, since both answers have unintuitive consequences.

Comment by Konstantin Pilz on Contest: 250€ for translation of "longtermism" to German · 2022-06-01T20:27:56.933Z · EA · GW

The German texts on the topic so far are very negative, and you can always tell that it is a foreign idea. I want to make sure I'm not overlooking anything.

But it may well be that we end up going with "Longtermism".

Comment by Konstantin Pilz on Contest: 250€ for translation of "longtermism" to German · 2022-06-01T20:04:00.218Z · EA · GW

Ideas so far from the EA Germany Slack (anonymous)

  • Zukunftsschutz (modeled on "Naturschutz", nature conservation; this would allow phrases like "ich bin Zukunftsschützer:in")
  • Ethik auf lange Sicht/Frist
  • Langzeitethik
  • Eintreten für zukünftige Generationen
  • Langfristige Nachhaltigkeit
  • Langfristiges Denken
  • Langfristigkeit
  • Perhaps something can be done with the word "Posterität"? A somewhat dated synonym for posterity, legacy, descendants, the succeeding generation. I think ideally we want a translation that can denote both the philosophical position ("longtermism is the idea...") and its proponents ("longtermists take seriously...").
    For the philosophical position: Posterianismus; for the proponents: Posterianten/innen or Posterianer/innen. It's perhaps a bit clunky and pretentious, but it is still unclaimed, and one quickly senses that it denotes a philosophical position/movement.
  • Ferndenken
  • Weitblick
  • Giga-Generationen-Vertrag

Comment by Konstantin Pilz on My list of effective altruism ideas that seem to be underexplored · 2022-06-01T05:22:22.954Z · EA · GW

Interesting, thanks! Though I don't see why you'd only resurrect humans, since animals seem to have a preference to survive as well. Anyway, I think preferences are often misleading and not a good proxy for what would really be fulfilling. To me it also seems odd to say that a preference remains even when the person no longer exists. Do you believe in souls, or how do you make that work? (Sorry for the naivety; happy about any recs on the topic.)

Comment by Konstantin Pilz on Mastermind Groups: A new Peer Support Format to help EAs aim higher · 2022-05-31T20:48:20.432Z · EA · GW

Consider posting this idea in the 80k career planning course group: https://m.facebook.com/groups/928373221340185/

Comment by Konstantin Pilz on Mastermind Groups: A new Peer Support Format to help EAs aim higher · 2022-05-31T20:44:09.992Z · EA · GW

I started something like this earlier this year to do 80k's 8-week career planning course with two friends, and I found it incredibly valuable, to the extent that I'd say it would have been almost impossible to get so much clarity on career issues without my friends' support. I strongly encourage others to do the same. (Note that it makes sense to do this with people with a similar focus.)

We are now planning on check-ins every three months.

Comment by Konstantin Pilz on EA Hub Berlin / German EA Content: Two meta funding opportunities · 2022-05-31T20:40:35.222Z · EA · GW

Just want to mention that I'm going to work on this part-time over the next few months, and my first project is going to be translating some content on longtermism. Still, I think further work, ideally original German texts, would be highly desirable. We especially need to think about how to promote the ideas in Germany. Please get in touch with me if you have any ideas!

Comment by Konstantin Pilz on Who wants to be hired? (May-September 2022) · 2022-05-31T16:56:05.124Z · EA · GW

Location: Leipzig, Germany
Remote: yes
Willing to relocate: yes
Skills: generalist research, R, Java, ready to learn, native German, excellent English, biology background, AI interest
Résumé/CV/LinkedIn: linkedin.com/in/konstantin-pilz-3a6422223/
Email: mail[at]konstantinpilz.com
Notes: looking for early career longtermist research opportunities, PA, research assistant jobs

Comment by Konstantin Pilz on My list of effective altruism ideas that seem to be underexplored · 2022-05-31T16:47:25.926Z · EA · GW

Just curious: could you make the case for resurrecting people instead of just creating new ones? (I agree that having more persons with positive welfare is desirable, but I don't see why resurrection would be the most cost-effective way to achieve it.)

Comment by Konstantin Pilz on There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe · 2022-05-30T13:26:44.620Z · EA · GW

I'm certainly no expert, but a quick look at AMF says the bednets are treated with pyrethroids, which are "usually broken apart by sunlight and the atmosphere in one or two days" (Wikipedia). This makes me skeptical of whether incorporating the environmental and health effects really makes a big difference. Furthermore, though again this is just my intuition, agriculture involves much larger doses of insecticides contaminating streams, and so far we have not observed any large-scale environmental collapse or infertility that would justify questioning the cost-effectiveness of AMF. (I'm not questioning that our use of insecticides very likely contributes to the decline of insects and diminishes fertility; I just think that AMF is a tiny factor in this.)

Comment by Konstantin Pilz on Contact us · 2022-05-19T10:50:06.774Z · EA · GW

Hey! How hard would it be to implement a translation plugin like on this random site? It might make the forum more inclusive for people with English as a second language. (I can also make a better case if you're not convinced.)

Comment by Konstantin Pilz on [$20K In Prizes] AI Safety Arguments Competition · 2022-05-12T17:14:03.645Z · EA · GW

Before long, we will have AI the size of our brain.

Comment by Konstantin Pilz on [$20K In Prizes] AI Safety Arguments Competition · 2022-05-12T17:11:39.902Z · EA · GW

Investment in AI has been steadily going up. It even seems to be growing exponentially. AI might bring about changes as big as the industrial revolution.

Comment by Konstantin Pilz on [$20K In Prizes] AI Safety Arguments Competition · 2022-05-12T13:28:35.130Z · EA · GW

Sundar Pichai, CEO of Google: Artificial Intelligence "is probably the most important thing humanity has ever worked on... more profound than electricity or fire".
Seen here

Comment by Konstantin Pilz on New forum feature: Map of Community Members · 2022-05-04T14:30:13.787Z · EA · GW

This is great! Connecting to people close to my city was crucial for my (still short) EA journey. Don't be afraid to reach out to anyone, almost all EAs I encountered were happy to chat!

Comment by Konstantin Pilz on Nuclear Fusion Energy coming within 5 years · 2022-04-29T12:14:38.232Z · EA · GW

What do you think this implies for global priorities? When do you expect nuclear fusion to be competitive in developing countries? Why is this Metaculus forecast so pessimistic?

I think fusion is great, but it will probably take some time to have a real impact.

Comment by Konstantin Pilz on Effektiver Altruismus: Eine Einführung · 2022-04-12T12:56:02.568Z · EA · GW

Location: Seminargebäude, S 210

Comment by Konstantin Pilz on Which texts do we need in non-English languages? · 2022-04-11T17:07:16.505Z · EA · GW

What we need in German:

  • The Precipice (Ord)
  • What We Owe The Future (MacAskill)
  • 80k's most-read texts

Feel free to add

Comment by Konstantin Pilz on Which texts do we need in non-English languages? · 2022-04-11T17:02:35.266Z · EA · GW

What already exists in German:

  • Doing Good Better (though sold out atm)
  • An introduction and some career advice
  • Practical Ethics (Singer)

Comment by Konstantin Pilz on The Berlin Hub: Longtermist co-living space (plan) · 2022-04-11T16:51:23.908Z · EA · GW

Wow! So hyped to see more infrastructure in mainland Europe! Hope I can live there one day

Comment by Konstantin Pilz on Why should we care about existential risk? · 2022-04-09T07:31:28.166Z · EA · GW

"the world is not designed  for humans"

I think our descendants will unlikely be flesh-and-blood humans but rather digital forms of sentience: https://www.cold-takes.com/how-digital-people-could-change-the-world/

I think the main question here is: what can we do today to make the world better in the future? If you believe AI could make the world a lot worse, or even just lock in the already existing state, it seems really valuable to work on preventing that. If you additionally believe AI could solve problems such as wild animal suffering or human unhappiness, then it seems like an even more promising area to spend your time on.

(I think this might be less clear for biorisk where the main concern really is extinction.)

Comment by Konstantin Pilz on Sophie’s Choice as a time traveler: a critique of strong longtermism and why we should fund more development. · 2022-04-09T07:11:50.061Z · EA · GW

I think infinite humans are physically impossible. See this summary: https://www.fhi.ox.ac.uk/the-edges-of-our-universe (since the expansion of the universe is speeding up, it's impossible to reach parts that are receding faster than light; thus we can access only a huge chunk of the universe, not all of it).

Comment by Konstantin Pilz on [April fool's post] Proposal to assign careers by birthdate · 2022-04-01T14:50:18.957Z · EA · GW

Thanks! Helped me a lot. AI safety it is! Can we do the same thing so I know what org to work at?

Comment by Konstantin Pilz on Simulation arguments · 2022-03-31T07:34:13.561Z · EA · GW

Something that I am still confused about: if you assumed we were in a simulation, would that not destroy your evidence completely? After all, you should not be able to derive any feature of the underlying world from the features of the simulation, including any evidence on whether there are a lot of simulations. Am I missing something, or does that mean it is not possible to argue about this usefully, because the nature of the evidence changes every time I switch between assuming sim and non-sim?

Comment by Konstantin Pilz on How does the simulation hypothesis deal with the 'problem of the dust'? · 2022-03-31T07:25:09.391Z · EA · GW

I guess I'd just say: yes, a cloud can be conscious if it reaches some level of complexity (which I don't think clouds do). The mapping and substrate are not irrelevant; there has to be some kind of complex network that produces mental states (though I haven't looked into theories of consciousness much). I don't see how this proves the impossibility of simulating consciousness, though.

Comment by Konstantin Pilz on The Future Fund’s Project Ideas Competition · 2022-03-04T13:03:31.291Z · EA · GW

Prevent community drainage due to value drift

Effective Altruism, Movement building

Most Effective Altruists are still young and will have the greatest impact with their careers (and spend the greatest amounts of money) decades from now. However, people also change a lot, and for some this leads to decreased engagement or even full drop-out. Since there is evidence that drop-out rates might be up to 30% over the career of highly engaged EAs, this is a serious loss of high-impact work and well-directed money.

Ways of tackling this problem might include: 

  • Introducing more formal commitment steps when getting into EA
  • Encouraging people to write down and reflect on their reasons for being part of EA
  • Creating events especially aimed at strengthening the core community and encouraging friendships

Comment by Konstantin Pilz on The Future Fund’s Project Ideas Competition · 2022-03-04T12:34:53.137Z · EA · GW

EA content translation service

Effective Altruism, Movement Building

(Maybe add to #30 - diversity in EA)

EA-related texts often use academic language needed to convey complex concepts. For non-native speakers, reading and understanding those texts takes a lot more time than reading about the same topic in their native language would. Furthermore, many educated people in important positions today, especially in non-Western countries, speak English poorly or not at all. (This is likely part of the reason that EA currently exists mainly in English-speaking countries and consists almost exclusively of people who speak English well.)

To make EA widely known and easy to understand, there needs to be a translation service enabling e.g. 80k's content, important Forum posts, or The Precipice to be read in different languages. This would not only make EA easier to understand, and thus spread its ideas further, but also likely increase the epistemic diversity of the community by making EA more international.

Comment by Konstantin Pilz on The Future Fund’s Project Ideas Competition · 2022-03-04T10:02:47.556Z · EA · GW

Possible downside: this could contribute to a further speed-up of AI development, possibly leaving less time for alignment research.

(However, if done correctly, this project only harnesses pre-existing dynamics and directs funds to beneficial projects.)

Comment by Konstantin Pilz on The Future Fund’s Project Ideas Competition · 2022-03-04T09:56:51.801Z · EA · GW

Formulate AI-super-projects that would be both prestigious and socially beneficial

Artificial Intelligence, Great Power Relations

There are already some signs of race dynamics between the US and China in developing TAI. Arguably, these are at least partly motivated by concerns of national prestige. If race dynamics speed up, it might be beneficial to present a set of prestigious AI projects that the US and other countries can adopt. These projects should have the following features:

  • Be highly visible and impressive for a wide audience
  • Contribute to safer AI (e.g. through interpretability or alignment playing a great role in the project)
  • Be socially beneficial (i.e. the benefits should be distributed widely, ideally the technology would be open access after development)

(Idea adopted from Joslyn Barnhart)