Posts

What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? 2020-10-21T04:44:57.757Z · score: 8 (2 votes)
Criteria for scientific choice I, II 2020-07-29T10:21:30.000Z · score: 26 (10 votes)
Small Research Grants Program in EA Israel - Request for feedback 2020-07-21T08:35:16.729Z · score: 25 (9 votes)
A bill to massively expand NSF to tech domains. What's the relevance for x-risk? 2020-07-12T15:20:21.553Z · score: 22 (14 votes)
EA is risk-constrained 2020-06-24T07:54:09.771Z · score: 60 (29 votes)
Workshop on Mechanism Design requesting Problem Pitches 2020-06-02T06:28:04.538Z · score: 10 (3 votes)
What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z · score: 14 (8 votes)
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z · score: 16 (8 votes)
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z · score: 28 (20 votes)
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z · score: 39 (23 votes)
What is the size of the EA community? 2019-11-19T07:48:31.078Z · score: 24 (8 votes)
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z · score: 52 (31 votes)
Off-Earth Governance 2019-09-06T19:26:26.106Z · score: 15 (8 votes)
edoarad's Shortform 2019-08-16T13:35:05.296Z · score: 3 (2 votes)
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z · score: 21 (9 votes)
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z · score: 11 (7 votes)
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z · score: 9 (6 votes)
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z · score: 1 (2 votes)
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z · score: 12 (5 votes)
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z · score: 8 (4 votes)
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z · score: 12 (6 votes)

Comments

Comment by edoarad on Things I Learned at the EA Student Summit · 2020-10-28T06:10:01.355Z · score: 8 (5 votes) · EA · GW

I think that it is very well written, neatly organized, and clear. You clearly state your actual degree of belief in each claim, and you give good references. Also, the tone is friendly and fun to read :)

Definitely encourage you to write more!

Comment by edoarad on When you shouldn't use EA jargon and how to avoid it · 2020-10-26T16:24:15.056Z · score: 8 (5 votes) · EA · GW

One bad rationalization I sometimes notice in myself for writing and speaking with "high" jargon is telling myself that it's a piece of jargon worth knowing, so I'm actually helping people learn better ways of communicating. I don't think this is a valid conclusion; instead, I could briefly explain any relevant and important terminology, or avoid it if it's not relevant.

I think that my actual motivations are mainly that I feel a need to be very accurate, and that writing is generally slow and tedious for me, so it is difficult to find better ways of articulating myself once I have found something that fits what I have in mind. Or, in short: anxiety and laziness.

Just some stuff that I notice in myself which might be worth sharing :)

Comment by edoarad on EARadio - more EA podcasts! · 2020-10-26T15:10:16.017Z · score: 5 (4 votes) · EA · GW

Thanks for sharing it again! There is a lot of great content there :)

Comment by edoarad on JP's Shortform · 2020-10-24T18:45:39.889Z · score: 2 (1 votes) · EA · GW

😍

Comment by edoarad on What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? · 2020-10-22T06:02:44.413Z · score: 4 (2 votes) · EA · GW

Thanks!

Comment by edoarad on Evaluating a crowdfunding campaign to test an oral anthrax vaccine for wildlife · 2020-10-12T05:21:53.664Z · score: 6 (4 votes) · EA · GW

I don't have answers to your questions; I just want to say that I really appreciate this post. These are very interesting and important questions in an original setting, and I like your self-improvement attitude. It's clear that you have put a lot of effort into both thinking about this and writing the post.

Comment by edoarad on Open Communication in the Days of Malicious Online Actors · 2020-10-07T08:50:44.708Z · score: 9 (6 votes) · EA · GW

Can you expand a bit about the relevance of this to EA? Do you think that better open communication is a worthy cause by itself, or that this has relevance to infohazard policies, or perhaps something else?

Comment by edoarad on Propose and vote on potential tags · 2020-10-03T08:49:19.415Z · score: 4 (2 votes) · EA · GW

I've added a Meta-Science tag. I'd love some help clarifying the distinction between it and Scientific Progress.

Generally, I imagine Meta-Science as being more focused on specific aspects of the academic ecosystem, and Scientific Progress as relating more to the general properties of scientific advances. There is clearly overlap, but I'm not sure where exactly to set the boundaries.

Comment by edoarad on [deleted post] 2020-09-24T10:59:50.252Z

How is this relevant to EA or doing the most good?

Comment by edoarad on Effective strategy and an overlooked area of research? · 2020-09-23T20:14:42.597Z · score: 8 (2 votes) · EA · GW

I actually haven't decided yet whether to downvote or not. It would really help if you could summarize better what you mean by TK and what it is that you are arguing for - this would let people decide whether or not to read the whole post, and to get the gist of it immediately. The abstract of the linked post is itself a bit vague, and I feel it suffers from similar problems.

Comment by edoarad on Expansive translations: considerations and possibilities · 2020-09-19T11:20:43.242Z · score: 4 (2 votes) · EA · GW

Cool, thanks! Looking forward to reading future posts on the matter 😊

Comment by edoarad on Expansive translations: considerations and possibilities · 2020-09-19T06:28:06.128Z · score: 2 (1 votes) · EA · GW

This is a very interesting idea! I am reminded of Distill's concept of Research Debt. This sounds potentially promising, but I'm not sure I understand exactly why you think of this as having perhaps similar epistemic importance to forecasting.

First, just to clarify: by "futuristic translation" did you mean any form of expansive translation as described in your post (which would be developed using future tech or innovations), or something like a specific type of translation oriented toward understanding the future? (I assume the former.)

The case I see for its importance is basically that it increases our capacity for sharing ideas more efficiently, which can improve general reasoning about complex issues and hasten progress. Is this mostly how you think of it?

One interesting point regarding how promising this is: either there will be an economic incentive for someone to create such an innovation, or there won't be enough public interest. I think that, perhaps as with forecasting, most of the added value we can bring would come in the second world, where it would take effort to show how this tech can be used well and to generate public interest.

Comment by edoarad on Modelling Individual Differences - Introducing the Objective Personality System · 2020-09-07T12:44:14.144Z · score: 6 (4 votes) · EA · GW

I've only skimmed the post, but I couldn't really see how it relates to EA work. You have listed lots of possible uses, but would you mind expanding on the one example you find most promising?

Comment by edoarad on DonyChristie's Shortform · 2020-08-23T06:03:04.701Z · score: 5 (3 votes) · EA · GW

This sounds very worrying, can you expand a bit more?  

Comment by edoarad on CEA Mid-year update (2020) · 2020-08-21T10:31:28.748Z · score: 2 (1 votes) · EA · GW

Thanks! 

I find it interesting that first-time attendees make fewer connections than returning attendees. Some reasons this might be the case: less focus on scheduling 1-1s, others being more interested in experienced attendees, fewer referrals, more interest in the content, and less time spent at the conference.

Generally, it is amazing that there are so many plan changes and so much increased motivation among first-timers and returning attendees alike. Again, amazing work!

Thanks for the information on the CRM :) We in EA Israel are trying out a custom Airtable database for our needs (which are mostly managing connections and mapping people and organizations of interest).

Comment by edoarad on Effective Altruism Quest · 2020-08-17T05:41:11.520Z · score: 5 (3 votes) · EA · GW

The downvotes are not here to punish you. They are meant to signal to others that reading this post might not be the best use of their time.

Some triggers for me personally - 

Let's play a game...

It's a nice start, but at no point is it explicit that you are talking about a game you designed. You go straight to explaining how it works, which is not interesting for its own sake. People want to be able to quickly understand what this is about.

Whether you are coming from the blockchain or the EA community, ...

This makes the post feel like it was not edited to fit the forum.

Quests

Explore. Learn. Battle Baddies. Win Rewards. Gitcoin Quests is a fun, gamified way to learn about the web3 ecosystem, compete with your friends, earn rewards, and level up your decentralization-fu!

This is clearly meant to sell the game as something fun. People on the forum are probably not looking for something fun to do, but for ways to make an impact. Also, people on the forum are not obviously motivated to learn about blockchain. The main interest here for me is the outreach potential and the experience of designing games around EA.

I think that if you had written a short post about why you made this game and what, broadly, it is, it would have been received better.

Comment by edoarad on Are some SDGs more important than others? Revealed country priorities from four years of VNRs · 2020-08-16T08:57:59.882Z · score: 3 (2 votes) · EA · GW

CDP = Committee for Development Policy :)

Comment by edoarad on Effective Altruism Quest · 2020-08-16T05:52:13.920Z · score: 2 (1 votes) · EA · GW

Thanks for the elaboration! Don't take it too badly that this got downvoted - it is not necessarily an indication that people don't appreciate the project itself. I think it is more likely that the problem was the format of the post, which is unusual for the forum (and can be a good opportunity to learn from).

Comment by edoarad on Effective Altruism Quest · 2020-08-15T17:39:49.146Z · score: 3 (2 votes) · EA · GW

Not sure - I upvoted it because I think it's very cool that you made this! The post itself perhaps reads too much like an advertisement for the forum.

Also, I'm curious about several things. Do you mind sharing a bit about the game itself? Is it a simple trivia game? What types of questions are there? How is it going so far? What were your goals here?

Comment by edoarad on The Case for Education · 2020-08-15T17:27:03.805Z · score: 4 (4 votes) · EA · GW

Sorry, I downvoted this. I skimmed it, and it was hard to find the main points in the text. The claims that I did see (such as that EA already focuses implicitly on education) didn't feel convincing enough to make me delve into the details.

I appreciate you writing this, though, and I'm interested in understanding the case for education and what the counter-arguments to the arguments against education might be. I think it would help a ton if you made this much shorter and to the point, and made it very clear what claims you are making - both at the start and at each point in the post.

Comment by edoarad on CEA Mid-year update (2020) · 2020-08-14T03:12:32.886Z · score: 2 (1 votes) · EA · GW

I'm impressed with the success of the virtual EAGx. Do you have a measure of how successful it was for a population comparable to that of EAG London 2019? Or, say, comparing its success for people who have been to 2 previous EAGs?

Also, I'm curious, what CRM are you using and for what purpose? 

Comment by edoarad on Effective Altruism movement in LMIC and Africa · 2020-08-13T15:42:07.486Z · score: 5 (3 votes) · EA · GW

LMIC = low- and middle-income countries.

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-05T15:23:15.897Z · score: 3 (2 votes) · EA · GW

Specifically, 'Metta Meditation' is precisely targeted at increasing empathy.

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-05T06:04:26.576Z · score: 6 (3 votes) · EA · GW

Ah thanks! I think I didn't click 'Save' 🤦‍♂️

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-04T05:32:46.587Z · score: 26 (13 votes) · EA · GW

For some reason it felt quite weird for me to write a bio, and I had avoided doing so before, even though I think it's very important for our community to get to know each other more personally. So I thought I might use this chance to introduce myself and finally write a bio 😊

My name is Edo, and I'm one of the co-organisers of EA Israel. I'm also helping out with moderation for the forum; feel free to reach out if I can help with anything.

I studied mathematics, worked as a mathematical researcher in the IDF, and held training and leadership roles there. After that I started a PhD in CS, where I helped start a research center with the goal of advancing biological research using general mathematical abstractions. After about 6 months I decided to leave the center and the PhD program.

Currently, I'm mostly thinking about improving the scientific ecosystem and particularly how one can prioritize better within basic science. 

Generally, I'm very excited about improving prioritisation within EA, and about how we conduct research around it and around EA causes in general. I'm also very interested in better coordination and initiative support within the EA community. Well, I'm pretty excited about the EA community, and basically everything else that has to do with doing the most good.

My virtue-ethics brain parts really appreciate honesty and openness, curiosity and self-improvement, caring and supportiveness, productivity and goal-orientedness, cooperation as the default option, and fixing broken systems.

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-03T12:40:47.104Z · score: 2 (1 votes) · EA · GW

This feels good to me. One problem it may have (though I'm not sure about it) is that it might not capture new causes that are contained within another meta-cause. For instance, the post about M-Risk is related to policy or x-risk, but is clearly a new cause in itself, and yet it may feel inappropriate to tag it as "other causes".

Comment by edoarad on EA Forum update: New editor! (And more) · 2020-08-03T12:27:40.055Z · score: 19 (5 votes) · EA · GW

Regarding norms, some interesting notes from the Tagging FAQ at LW:

  1. The goal of tags is to serve as a curated collection over time.
  2. Better not to tag events, as they won't be relevant after a short while.
  3. Better to tag with more specific tags than broad ones.
  4. Upvote a tag if you think the post is very relevant to it (so it will show up earlier when people browse posts under that tag), and downvote when it's less relevant.
  5. Since tags are moderated, it is better to just add many tags; they will be checked for relevance and accuracy. So if you think that some tag would be useful, just go ahead. (As a moderator here, I think that tags are very important, and I'll gladly spend time going over more tags if that causes more tags to be created.)
  6. Good tags should strike a balance: not so small as to be irrelevant, and not so big that the list no longer helps readers going through it and tagging new posts takes a lot of overhead.
    1. I don't think that we should be that concerned with having a tag that's too big on the EA Forum.
    2. Generally, one can filter through several tags, so I'm not sure what to make of this. @Habryka, I'd be interested in your opinion.
  7. Tags should avoid being too near other tags. The distinction can be clarified in the description.
  8. Tag evolution:
    • The tagging system is collectively applied, which limits its ability to maintain tags with high degrees of subjective nuance.
    • Tags overall experience pressure to be as inclusive as possible. If a concept is at all loosely connected to a topic, someone will apply it.
    • The general result of the above is that closely related, though theoretically distinct, concepts will end up blurred, with heavily redundant post overlap.
  9. Tag names should be as clear as possible, even for people who don't understand them in full nuance.
  10. It's perfectly fine to use multiple names when appropriate.
  11. Keep tag names brief. Use … instead of …. (I should rename ….)
  12. Tag descriptions should have the tag name in bold and link to related tags. (Again, I should change things 😊)

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-03T12:16:05.345Z · score: 2 (1 votes) · EA · GW

Curious to know what people here think about the "unusual causes" tag. 

This comes across to me as a bit deprecating, so I was thinking that perhaps the name should be changed to something a bit more neutral. Perhaps 'non-standard-causes'. Or even something that might be biased the other way, like 'underdiscussed-causes'.

Aaron Gertler gave the answer that

In my experience, "unusual" when applied to anything other than a person is a quite neutral term. I'd think of "non-standard" as worse, since "standard" implies quality in a way I don't think "usual" does.

So I'd take his view over mine, since I'm not a native English speaker. Still, I'm interested in what you think and what other alternatives there are.

Generally, I think that this tag could be very important for the discovery of new causes, so I think that an appropriate name is important.

Comment by edoarad on EA Forum update: New editor! (And more) · 2020-08-03T11:58:31.605Z · score: 4 (2 votes) · EA · GW

I tested 1 (added a Collections and Resources tag, which I think you'd like 😉), and it works fine and is visible to anyone. Tags are moderated, so there is an approval process.

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-02T13:24:15.941Z · score: 2 (1 votes) · EA · GW

Great, good luck!

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-02T08:18:28.204Z · score: 3 (2 votes) · EA · GW

By the way, I saw that you didn't upvote your own post - that's understandable, but the norm is to just leave the automatic self-upvote as it is :)

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-02T08:16:21.588Z · score: 14 (6 votes) · EA · GW

Welcome to the forum! This is a very important question, and one that I think many of us are struggling with or have struggled with at some point. I really appreciate you taking action on this and asking about resources publicly.

Probably my favorite write-up on this topic is Replacing Guilt by Nate Soares. The first part addresses "listless guilt" - that feeling that there is something you should do, but you don't know what, and there is no specific motivation to guide you.

Here is a relevant question that has been asked on the forum previously, and may have further sources. 

I also want to remind you that it's perfectly okay not to feel motivated right now. This can come in waves, and you might just randomly come across something that makes you motivated again. It's tough to feel motivated without a supportive community and structure (if that's the situation you're in). You might also consider joining the EA Virtual Group, or volunteering for an ongoing EA project.

Comment by edoarad on Investing to Save Lives · 2020-07-31T15:27:14.668Z · score: 4 (2 votes) · EA · GW

This is a nice idea which, if developed in a particular way, leads to microcredit: the concept of giving small loans to individuals in developing countries at comparatively low interest rates (though still higher than those in developed countries). The EA community has analysed microcredit, for example in this report by SoGive, which you might find interesting.

Another thing that comes to mind is trying to sell products such as bednets directly to locals. They cost about $4-5 per net, which includes all the costs of manufacturing, distribution, etc. That means that making a profit and handling sales would (probably) cost some more. That seems like a good bargain, but when the absolute poverty line is at about $2 a day, it is tough to save up that much. I imagine problems might also show up because bednets of AMF's quality would cost much more than poorer-quality bednets, and I'm not sure that consumers could really tell the difference. Note that bednets don't save lives by themselves; rather, they help mitigate some risks. People in extreme poverty face a wide range of risks to manage and possibilities for investment (say, a steel roof instead of straw, or an ox for farming), and it is actually not clear that buying a good bednet is the best use of their money, or that they would perceive it as such.

From the little I know of life in extreme poverty, the situation around lending is complicated. Generally, interest rates are sky-high - something enormous like 200% over a few weeks, if I remember correctly - and there is a big risk that people can't pay back. There are also real difficulties with the infrastructure involved, and possible corruption.

I'm interested in hearing from people more knowledgeable than myself.

Comment by edoarad on Lukas_Gloor's Shortform · 2020-07-29T05:33:54.947Z · score: 4 (2 votes) · EA · GW

Another argument pointing to "pleasure is good" is that people and many animals are drawn to things that give them pleasure, and that people generally describe their own pleasurable states as good. Take a random person off the street: I'm willing to bet that, after introspection, they will say that they value pleasure in the strong sense. So while this may not be universally accepted, I still think it carries weight.

Also, a symmetric statement can be made about suffering, which I don't think you'd accept: people who say "suffering is bad" claim that we can establish this by introspection on the nature of suffering.

From reading Tranquilism, I think you'd respond by saying that people confuse "pleasure is good" with an internal preference or craving for pleasure, whereas suffering is actually intrinsically bad. But an epistemically modest approach would require quite a bit of evidence for that, especially since part of the argument is that introspection may be flawed.

I'm curious how strongly you hold this position. (Personally, I'm totally confused here, but I lean toward the strong sense of "pleasure is good" while thinking that overall pleasure holds little moral weight.)

Comment by edoarad on Delegate a forecast · 2020-07-28T06:04:47.363Z · score: 3 (2 votes) · EA · GW

Fantastic, thanks! 

I've requested access to the doc :) 

(Regarding the platform, I think it would help a bit to clarify things if I could do something like selecting a range with the mouse and having the probability mass of that interval displayed.)
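For illustration, here's a minimal sketch of the computation I have in mind, assuming the forecast is available as samples (the function name, the toy distribution, and the printed text are my own illustrative assumptions, not any platform's actual API):

```python
import numpy as np

def interval_mass(samples: np.ndarray, low: float, high: float) -> float:
    """Fraction of the sampled probability mass falling inside [low, high]."""
    return float(np.mean((samples >= low) & (samples <= high)))

# Toy forecast: samples over possible termination years.
rng = np.random.default_rng(0)
samples = rng.normal(loc=2030, scale=3, size=100_000)

# What the UI could display after selecting [2028, 2032] with the mouse.
print(f"P(2028 <= year <= 2032) ~= {interval_mass(samples, 2028, 2032):.2f}")
```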

Comment by edoarad on Delegate a forecast · 2020-07-26T10:21:12.198Z · score: 4 (3 votes) · EA · GW

The purpose is to see how likely it is to remain valuable over time (and I hope - and think it likely - that we would terminate it if it stopped being cost-effective).

I think the distribution is only interesting for this purpose up to 2030; after that, the probability of it lasting to >= 2030 can collapse to a single point.

Comment by edoarad on Delegate a forecast · 2020-07-26T10:18:37.615Z · score: 7 (5 votes) · EA · GW

Conditional on us starting this small grant project in EA Israel, in what year would we terminate the program?

Comment by edoarad on Mustreader's Shortform · 2020-07-22T19:12:17.014Z · score: 9 (5 votes) · EA · GW

This sounds great! I think you should make this a top-level post :)

Comment by edoarad on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T12:46:13.762Z · score: 4 (2 votes) · EA · GW

I think that's a good example of a way in which B, I, and P overlap. Also, intelligence and power clearly change benevolence, by changing incentives, one's view of life, or the capability to make an impact. (Say, economic growth has made people less violent.)

Comment by edoarad on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T06:43:51.700Z · score: 6 (4 votes) · EA · GW

This is an interesting framework. I think it might make sense to think of an actor's incentives as part of its benevolence. An academic scientist (or academia as a whole) has incentives aimed at increasing some specific knowledge that is in itself broadly societally useful (because that's what funding is supposed to incentivise). Outside incentives might be more powerful than morality, especially in large organisations.

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T04:39:48.016Z · score: 2 (1 votes) · EA · GW

Thanks! This does clarify things for me, and I think that the definition of a "goal" is very helpful here. I do still have some uncertainty about the claim of process orthogonality, which I can now pin down better:

Let's define an "instrumental goal" as a goal X for which there is a goal Y such that, whenever it is useful to think of the agent as "trying to do X", it is in fact also useful to think of it as "trying to do Y"; in this case we say that X is instrumental to Y. Instrumental goals can be generated at the development phase or by the agent itself (implicitly or explicitly).
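As a rough formalization (my own notation, not from the episode or the paper): write $U(c, G)$ for "in context $c$, it is useful to model the agent as trying to do $G$", with $\mathcal{C}$ the set of contexts. Then

```latex
\mathrm{Instr}(X, Y) \iff \forall c \in \mathcal{C} :\; U(c, X) \implies U(c, Y)
```

i.e., X is instrumental to Y whenever modelling the agent as pursuing X is only ever useful where modelling it as pursuing Y is useful too.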

I think that the (non-process) orthogonality thesis does not hold with respect to instrumental goals. A better selection of instrumental goals will enable better capabilities, and with greater capabilities comes greater planning capacity. 

Therefore, the process orthogonality thesis does not hold for instrumental goals either. This means that instrumental goals are usually not the goals of interest when trying to distinguish between the process and non-process orthogonality theses, and we should focus on terminal goals (those which aren't instrumental).

In the case of an RL agent or Deep Blue, I can only see one terminal goal: maximize the defined score, or win at chess. These won't really change together with capabilities.

I thought a bit about humans, but I feel that this is much more complicated and needs more nuanced definitions of goals. (Is avoiding suffering a terminal goal? It seems that way, but who is doing the thinking in which it is useful to model one thing or another as a goal? Perhaps the goal is to reduce specific neuronal activity, for which avoiding suffering is merely instrumental?)

Comment by edoarad on edoarad's Shortform · 2020-07-20T07:41:45.464Z · score: 13 (5 votes) · EA · GW

There is a new initiative by Yuval Noah Harari, https://www.sapienship.co/. It is focused on global catastrophic risks from emerging tech. 

Comment by edoarad on Quotes about the long reflection · 2020-07-20T05:40:49.225Z · score: 2 (1 votes) · EA · GW

Why Rot13? This seems like an interesting discussion to be had.

Comment by edoarad on What’s the Use In Physics? · 2020-07-19T13:13:16.678Z · score: 7 (3 votes) · EA · GW

Some other options and sources from the future:

  • A 2019 report on Quantum Computing by Jaime Sevilla: link
  • Switching to electrical engineering: there has been a call for more people in EA to become experts in computational hardware, to better understand questions relevant to AI risk.
Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-15T06:02:27.764Z · score: 4 (2 votes) · EA · GW

I was thinking it over, and I think I was implicitly assuming that process orthogonality follows from orthogonality in some form, or something like that.

The Deep Blue question still holds, I think.

The human brain should be thought of as designed by evolution. What I wrote concerns strictly (non-process) orthogonality. An example: the cognitive breakthrough might have been the enlargement of the neocortex, while civilization was responsible for the values.

I guess the point is that there are examples of non-orthogonality? (Say, the evaluation function of Deep Blue being critical to its success.)

Comment by edoarad on Could next-gen combines use a version of facial recognition to reduce excessive chemical usage? · 2020-07-14T11:35:28.168Z · score: 2 (1 votes) · EA · GW

Thanks, very interesting. 

Further automation of agriculture could reduce the points of failure compared to an offline system.

Did you mean that this would increase the points of failure?

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T06:01:44.759Z · score: 7 (4 votes) · EA · GW

In "Unpacking Classic Arguments for AI Risk", you defined The Process Orthogonality Thesis as: The process of imbuing a system with capabilities and the process of imbuing a system with goals are orthogonal.

Then you gave several examples of cases where this does not hold: a thermostat, Deep Blue, OpenAI Five, and the human brain. Could you elaborate a bit on these examples?

I am a bit confused about them. In Deep Blue, I think that most of the progress was general computational advances, with the evaluation system applied later. The human brain's value system can be changed quite a lot without apparent changes in the capacity to achieve one's goals (consider psychopaths as an extreme example).

Also, general RL systems have had success in applying themselves to many different circumstances - say, DeepMind's work on Atari. Doesn't that point in favor of the Process Orthogonality Thesis?

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T05:40:56.958Z · score: 12 (7 votes) · EA · GW

How entrenched do you think old ideas about AI risk are in the AI safety community? Do you think it's possible to shift to a new paradigm quickly, given relevant arguments?

I'd guess that, as in most scientific endeavours, there are many social factors that bias people toward their own old ways of thinking. Research agendas and institutions are built on some basic assumptions which, if changed, could be disruptive to the people or organisations involved. However, there seems to be a lot of engagement with the underlying questions about the paths to superintelligence and their consequences, and the research community today is heavily involved with the rationality community - both of which make me hopeful that more minds can be changed given appropriate argumentation.

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T05:32:05.192Z · score: 11 (7 votes) · EA · GW

What is your theory of change for work on clarifying arguments for AI risk? 

Is the focus more on immediate impact on funding/research, or on the next generation? Do you feel this work is important more for directing work to the most important paths, or for understanding how sure we are about all this AI stuff, and growing or deprioritizing the field accordingly?

Comment by edoarad on Could next-gen combines use a version of facial recognition to reduce excessive chemical usage? · 2020-07-13T18:06:34.385Z · score: 2 (1 votes) · EA · GW

Would you mind expanding on why you think this is related to doing the most good? 

Also, what inherent risks are you referring to? (Are you saying that there are risks in any innovation?)