Posts

Criteria for scientific choice I, II 2020-07-29T10:21:30.000Z · score: 23 (7 votes)
Small Research Grants Program in EA Israel - Request for feedback 2020-07-21T08:35:16.729Z · score: 24 (8 votes)
A bill to massively expand NSF to tech domains. What's the relevance for x-risk? 2020-07-12T15:20:21.553Z · score: 21 (13 votes)
EA is risk-constrained 2020-06-24T07:54:09.771Z · score: 55 (25 votes)
Workshop on Mechanism Design requesting Problem Pitches 2020-06-02T06:28:04.538Z · score: 10 (3 votes)
What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z · score: 14 (8 votes)
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z · score: 16 (8 votes)
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z · score: 28 (20 votes)
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z · score: 39 (23 votes)
What is the size of the EA community? 2019-11-19T07:48:31.078Z · score: 24 (8 votes)
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z · score: 52 (31 votes)
Off-Earth Governance 2019-09-06T19:26:26.106Z · score: 13 (6 votes)
edoarad's Shortform 2019-08-16T13:35:05.296Z · score: 3 (2 votes)
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z · score: 21 (9 votes)
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z · score: 11 (7 votes)
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z · score: 9 (6 votes)
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z · score: 1 (2 votes)
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z · score: 12 (5 votes)
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z · score: 8 (4 votes)
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z · score: 12 (6 votes)

Comments

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-05T15:23:15.897Z · score: 3 (2 votes) · EA · GW

Specifically, 'Metta Meditation' is precisely targeted at increasing empathy.

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-05T06:04:26.576Z · score: 6 (3 votes) · EA · GW

Ah thanks! I think I didn't click 'Save' 🤦‍♂️

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-04T05:32:46.587Z · score: 24 (12 votes) · EA · GW

For some reason it felt quite weird to me to write a bio, and I had avoided doing so before, even though I think it's very important for our community to get to know each other more personally. So I thought I might use this chance to introduce myself and finally write one 😊

My name is Edo, and I'm one of the co-organisers of EA Israel. I'm also helping out with moderation on the forum; feel free to reach out if I can help with anything.

I studied mathematics, worked as a mathematical researcher in the IDF, and served in training and leadership roles. After that I started a PhD in CS, where I helped to start a research center with the goal of advancing biological research using general mathematical abstractions. After about 6 months I decided to leave the center and the PhD program.

Currently, I'm mostly thinking about improving the scientific ecosystem and particularly how one can prioritize better within basic science. 

Generally, I'm very excited about improving prioritisation within EA, and about how we conduct research on it and on EA causes in general. I'm also very interested in better coordination and initiative support within the EA community. Well, I'm pretty excited about the EA community and basically everything else that has to do with doing the most good.

My virtue-ethics brain parts really appreciate honesty and openness, curiosity and self-improvement, caring and supportiveness, productivity and goal-orientedness, cooperation as the default option, and fixing broken systems.

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-03T12:40:47.104Z · score: 2 (1 votes) · EA · GW

This feels good to me. One problem it may have (though I'm not sure about this) is that it might not capture new causes that are contained in another meta-cause. For instance, the post about M-Risk is related to policy or x-risk, but is clearly a new cause by itself, and yet it may feel inappropriate to vote on it as "other causes".

Comment by edoarad on EA Forum update: New editor! (And more) · 2020-08-03T12:27:40.055Z · score: 19 (5 votes) · EA · GW

Regarding norms, some interesting notes from the Tagging FAQ at LW:

  1. Tags are meant to serve as a curated collection over time.
  2. It's better not to tag events, as they won't be relevant after a short while.
  3. It's better to use more specific tags rather than broad ones.
  4. Upvote a tag if you think the post is very relevant to it (so it will show up earlier when people browse posts under that tag), and downvote it when it's less relevant.
  5. Since tags are moderated, it's fine to just add many tags; they will be checked for relevance and accuracy. So if you think that some tag would be useful, just go ahead and add it. (As a moderator here, I think that tags are very important, and I'll gladly spend time going over more tags if that causes more tags to be created.)
  6. Good tags strike a balance: not so small as to be irrelevant, and not so big that the list no longer helps readers going through it and tagging new posts takes a lot of overhead.
    1. I don't think we should be that concerned about having a tag that's too big on the EA Forum.
    2. Generally, one can filter through several tags, so I'm not sure what to make of it. @Habryka, I'd be interested in your opinion on this.
  7. Tags should avoid being too near other tags; where tags are close, the distinction can be clarified in the description.
  8. Tag evolution:
    • The tagging system is collectively applied, which limits its ability to maintain tags with high degrees of subjective nuance.
    • Tags overall experience pressure to be as inclusive as possible. If a concept is at all loosely connected to a topic, someone will apply it.
    • The general result of the above is that closely related, though theoretically distinct, concepts will end up blurred, with heavy redundant post overlap.
  9. Tag names should be as clear as possible, even for people who don't understand the concept in full nuance.
  10. It's perfectly fine to use multiple names when appropriate.
  11. Keep tag names brief. Use or instead of . (I should rename ). 
  12. The tag description should have the tag name in bold and link to related tags. (Again, I should change things 😊)

Comment by edoarad on Open and Welcome Thread: August 2020 · 2020-08-03T12:16:05.345Z · score: 2 (1 votes) · EA · GW

Curious to know what people here think about the "unusual causes" tag. 

This comes across to me as a bit deprecating, so I was thinking that perhaps the name should be changed to something more neutral, like 'non-standard-causes'. Or even something biased the other way, like 'underdiscussed-causes'.

Aaron Gertler gave the answer that

In my experience, "unusual" when applied to anything other than a person is a quite neutral term. I'd think of "non-standard" as worse, since "standard" implies quality in a way I don't think "usual" does.

So I'd take his view over mine, since I'm not a native English speaker. Still, I'm interested in what you think and what other alternatives there are.

Generally, I think this tag could be very important for the discovery of new causes, so an appropriate name might be important.

Comment by edoarad on EA Forum update: New editor! (And more) · 2020-08-03T11:58:31.605Z · score: 4 (2 votes) · EA · GW

I tested 1 (added a Collections and Resources tag which I think you'd like 😉), and it works fine and is visible to anyone. Tags are moderated, so there is an approval process.

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-02T13:24:15.941Z · score: 2 (1 votes) · EA · GW

Great! Good luck!

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-02T08:18:28.204Z · score: 3 (2 votes) · EA · GW

By the way, I saw that you didn't upvote your own post - that's understandable, but the norm is to just leave it as it is and upvote yourself :) 

Comment by edoarad on Recommendations for increasing empathy? · 2020-08-02T08:16:21.588Z · score: 11 (5 votes) · EA · GW

Welcome to the forum! This is a very important question, and one that I think many of us are struggling with or have struggled with at some point. I really appreciate you taking action on this and asking about resources publicly.

Probably my favorite write-up on this topic is Replacing Guilt by Nate Soares. The first part addresses "listless guilt" - that feeling that there is something you should do, but you don't know what, and there is no specific motivation to guide you.

Here is a relevant question that has been asked on the forum previously, and may have further sources. 

I also want to remind you that it's perfectly okay not to feel motivated right now. Motivation can come in waves, and you might just randomly come across something that makes you motivated again. It's tough to feel motivated without a supportive community and structure (if that's the situation you're in). You might also consider joining the EA Virtual Group, or volunteering for an ongoing EA project.

Comment by edoarad on Investing to Save Lives · 2020-07-31T15:27:14.668Z · score: 2 (1 votes) · EA · GW

This is a nice idea which, if developed in a particular way, can lead to microcredit: giving small loans to individuals in developing countries at comparatively low interest rates (though still higher than what we have in developed countries). The EA community has analysed microcredit - for example, this report by SoGive - which you might be interested in.

Another thing that comes to mind is trying to sell products such as bednets directly to locals. Nets cost about $4-5 each, which includes all the costs of manufacturing, distribution, etc.; turning a profit and handling sales would (probably) cost some more. That seems like a good bargain, but when the absolute poverty line is at about $2 a day, it's tough to save up that much.

I imagine problems might also show up because bednets of the quality AMF distributes would cost much more than poorer-quality nets, and I'm not sure consumers could really tell the difference. Note that bednets don't save lives by themselves; rather, they help mitigate some risks. People in extreme poverty face a wide range of risks to manage and possibilities for investment (say, a steel roof instead of straw, or an ox for farming), and it's actually not clear that buying a good bednet is the best use of their money, or that they would understand it to be.
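To make the affordability point concrete, here is the back-of-the-envelope arithmetic behind it (my own illustration, using the rough figures above):

```latex
\frac{\$5 \ \text{per net}}{\$2 \ \text{of income per day}} = 2.5 \ \text{days of total income for one net}
```

So a single good-quality net costs on the order of a few days of total income for someone at the absolute poverty line, before food or any other spending.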

From the little I know of life in extreme poverty, the situation around loans is complicated. Generally, interest rates are sky-high - something enormous like 200% over a few weeks, if I remember correctly - and there is a big risk that people can't pay the loans back. There are also real difficulties with the infrastructure involved, and possible corruption.

I'm interested in hearing from people more knowledgeable than myself.

Comment by edoarad on Lukas_Gloor's Shortform · 2020-07-29T05:33:54.947Z · score: 4 (2 votes) · EA · GW

Another argument that points to "pleasure is good" is that people and many animals are drawn to things that give them pleasure, and that people generally describe their own pleasurable states as good. Take a random person off the street: I'm willing to bet that after introspection they will say that they value pleasure in the strong sense. So while this may not be universally accepted, I still think it carries weight.

Also, a symmetric statement can be made about suffering, which I don't think you'd accept. People who say "suffering is bad" claim that we can establish this by introspecting on the nature of suffering.

From reading Tranquilism, I think you'd respond that people confuse "pleasure is good" with an internal preference or craving for pleasure, while suffering is actually intrinsically bad. But an epistemically modest approach would require quite a bit of evidence for that, especially as part of the argument is that introspection may be flawed.

I'm curious how strongly you hold this position. (Personally, I'm totally confused here, but I lean toward the strong sense of "pleasure is good" while thinking that overall pleasure holds little moral weight.)

Comment by edoarad on Delegate a forecast · 2020-07-28T06:04:47.363Z · score: 3 (2 votes) · EA · GW

Fantastic, thanks! 

I've requested access to the doc :) 

(Regarding the platform, I think it would help clarify things a bit if I could select a range with the mouse and have the probability mass of that interval displayed.)
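For what it's worth, the computation behind such a feature is just a difference of the distribution's CDF at the interval endpoints - a minimal sketch, with a hypothetical forecast distribution and function name, not any platform's actual code:

```python
from scipy import stats

# Hypothetical forecast: program terminates around 2025, +/- 2 years.
forecast = stats.norm(loc=2025, scale=2)

def mass_in_interval(dist, low, high):
    """Probability mass the distribution assigns to [low, high]."""
    return dist.cdf(high) - dist.cdf(low)

# Mass assigned to termination between 2024 and 2027:
print(f"{mass_in_interval(forecast, 2024, 2027):.1%}")  # ~53%
```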

Comment by edoarad on Delegate a forecast · 2020-07-26T10:21:12.198Z · score: 4 (3 votes) · EA · GW

The purpose is to see how likely the program is to remain valuable over time (and I hope, and think it's likely, that we would terminate it if it stops being cost-effective).

I think the distribution is only interesting for this purpose until 2030; after that, the probability of it lasting to >= 2030 can be collapsed into a single point.

Comment by edoarad on Delegate a forecast · 2020-07-26T10:18:37.615Z · score: 5 (4 votes) · EA · GW

Conditional on us starting this small grant project in EA Israel, in what year would we terminate the program?

Comment by edoarad on Mustreader's Shortform · 2020-07-22T19:12:17.014Z · score: 9 (5 votes) · EA · GW

This sounds great! I think you should make this a top-level post :)

Comment by edoarad on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T12:46:13.762Z · score: 4 (2 votes) · EA · GW

I think that's a good example of a way in which BIP overlap. Also, intelligence and power clearly change benevolence by changing incentives, worldviews, or the capability to make an impact. (Say, economic growth has made people less violent.)

Comment by edoarad on Improving the future by influencing actors' benevolence, intelligence, and power · 2020-07-21T06:43:51.700Z · score: 5 (3 votes) · EA · GW

This is an interesting framework. I think it might make sense to treat an actor's incentives as part of its benevolence: an academic scientist (or academia as a whole) has incentives aimed at increasing some specific knowledge which is in itself broadly useful to society (because that's what funding is supposed to incentivise). Outside incentives might be more powerful than morality, especially in large organisations.

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T04:39:48.016Z · score: 2 (1 votes) · EA · GW

Thanks! This does clarify things for me, and I think the definition of a "goal" is very helpful here. I still have some uncertainty about the process-orthogonality claim, which I can now state better:

Let's define an "instrumental goal" as a goal X for which there is a goal Y such that whenever it is useful to think of the agent as "trying to do X", it is also useful to think of it as "trying to do Y"; in this case we say X is instrumental to Y. Instrumental goals can be generated in the development phase or by the agent itself (implicitly or explicitly).

I think that the (non-process) orthogonality thesis does not hold with respect to instrumental goals. A better selection of instrumental goals will enable better capabilities, and with greater capabilities comes greater planning capacity. 

Therefore, the process orthogonality thesis also fails for instrumental goals. This means that instrumental goals are usually not the goals of interest when trying to discriminate between the process and non-process orthogonality theses, and we should focus on terminal goals (those which aren't instrumental).

In the case of an RL agent or Deep Blue, I can only see one terminal goal: maximise the defined score, or win at chess. These won't really change together with capabilities.

I thought a bit about humans, but I feel that this is much more complicated and needs more nuanced definitions of goals. (Is avoiding suffering a terminal goal? It seems that way, but who is doing the thinking in which it is useful to regard one thing or another as a goal? Perhaps the goal is to reduce specific neuronal activity, for which avoiding suffering is merely instrumental?)

Comment by edoarad on edoarad's Shortform · 2020-07-20T07:41:45.464Z · score: 13 (5 votes) · EA · GW

There is a new initiative by Yuval Noah Harari, https://www.sapienship.co/. It is focused on global catastrophic risks from emerging tech. 

Comment by edoarad on Quotes about the long reflection · 2020-07-20T05:40:49.225Z · score: 2 (1 votes) · EA · GW

Why Rot13? This seems like an interesting discussion to have.

Comment by edoarad on What’s the Use In Physics? · 2020-07-19T13:13:16.678Z · score: 7 (3 votes) · EA · GW

Some other options and sources from the future:

  • A 2019 report on Quantum Computing by Jaime Sevilla: link
  • Switching to electrical engineering, there has been a call for more people in EA to become experts in computational hardware, to better understand questions relevant to AI risk.

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-15T06:02:27.764Z · score: 4 (2 votes) · EA · GW

I was thinking it over, and I think that I was implicitly assuming that process orthogonality follows from orthogonality in some form or something like that. 

The Deep Blue question still holds, I think.

The human brain should be thought of as designed by evolution, so what I wrote is strictly about (non-process) orthogonality. An example could be that the cognitive breakthrough was the enlargement of the neocortex, while civilization was responsible for the values.

I guess the point is that there are examples of non-orthogonality? (Say, the evaluation function of Deep Blue being critical to its success.)

Comment by edoarad on Could next-gen combines use a version of facial recognition to reduce excessive chemical usage? · 2020-07-14T11:35:28.168Z · score: 2 (1 votes) · EA · GW

Thanks, very interesting. 

Further automation of agriculture could reduce the points of failure compared to an offline system.

Did you mean that this would increase the points of failure?

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T06:01:44.759Z · score: 7 (4 votes) · EA · GW

In "Unpacking Classic Arguments for AI Risk", you defined The Process Orthogonality Thesis as: The process of imbuing a system with capabilities and the process of imbuing a system with goals are orthogonal.

Then, gave several examples of cases where this does not hold: thermostat, Deep Blue, OpenAI Five, the Human brain. Could you elaborate a bit on these examples? 

I am a bit confused about this. For Deep Blue, I think most of the progress came from general computational advances, with the evaluation function applied later. And the human brain's value system can be changed quite a lot without apparent changes in the capacity to achieve one's goals (consider psychopaths as an extreme example).

Also, general RL systems have successfully applied themselves to many different circumstances - say, DeepMind's work on Atari. Doesn't that point in favor of the Process Orthogonality Thesis?

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T05:40:56.958Z · score: 12 (7 votes) · EA · GW

How entrenched do you think old ideas about AI risk are in the AI safety community? Do you think it's possible to shift to a new paradigm quickly, given relevant arguments?

I'd guess that, as in most scientific endeavours, there are many social factors that bias people toward their own old ways of thinking. Research agendas and institutions are built on some basic assumptions which, if changed, could be disruptive to the people or organisations involved. However, there seems to be a lot of engagement with the underlying questions about the paths to superintelligence and their consequences, and the research community today is heavily involved with the rationality community - both of these make me hopeful that more minds can be changed given appropriate argumentation.

Comment by edoarad on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T05:32:05.192Z · score: 11 (7 votes) · EA · GW

What is your theory of change for work on clarifying arguments for AI risk? 

Is the focus more on immediate impact on funding/research, or on the next generation? Do you feel this work is important more for directing work to the most important paths, or for understanding how sure we are about all this AI stuff, so as to grow the field or deprioritise it accordingly?

Comment by edoarad on Could next-gen combines use a version of facial recognition to reduce excessive chemical usage? · 2020-07-13T18:06:34.385Z · score: 2 (1 votes) · EA · GW

Would you mind expanding on why you think this is related to doing the most good? 

Also, what inherent risks are you referring to? (Are you saying that there are risks in any innovation?)

Comment by edoarad on Why altruism at all? · 2020-07-13T06:02:36.668Z · score: 4 (2 votes) · EA · GW

I'm not sure I understand; what is it you are criticising?

Comment by edoarad on A bill to massively expand NSF to tech domains. What's the relevance for x-risk? · 2020-07-13T05:59:23.049Z · score: 2 (1 votes) · EA · GW

Nope 😊 Fixed, thanks!

Comment by edoarad on A bill to massively expand NSF to tech domains. What's the relevance for x-risk? · 2020-07-12T15:20:59.308Z · score: 5 (3 votes) · EA · GW

The bill also aims at building a DARPA-like funding institution within NSF. 

I'm quite excited by this. Does anyone have more information about it?

Comment by edoarad on Mati_Roy's Shortform · 2020-07-11T07:55:01.082Z · score: 2 (1 votes) · EA · GW

When talking about causes, I'd like to see comments like "there hasn't been enough analysis of effectiveness of meta-science interventions". 

Comment by edoarad on EA for the masses · 2020-07-09T14:36:38.673Z · score: 4 (2 votes) · EA · GW

Great, this is much better :)

I think this might interest you - https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/

Comment by edoarad on EA for the masses · 2020-07-09T12:36:22.197Z · score: 5 (5 votes) · EA · GW

I appreciate you writing this post. However, I could not tell what main arguments and claims you are making from the first sentences or by skimming the post. This matters because I didn't end up reading more than that, and probably many others won't either - reading posts on the forum is very time-demanding.

It would probably be better to write a brief summary at the start and use headers throughout the text. Also, sometimes more concise is better: more clarity at the expense of perhaps less rigour/persuasion/content.

I think these are generally good norms on the forum. Sorry for not engaging with the content itself.

Comment by edoarad on What values would EA want to promote? · 2020-07-09T10:04:47.158Z · score: 6 (5 votes) · EA · GW

This is an interesting question. 

One possible value is something like intrinsically valuing Truth or Better Reasoning. Perhaps also something like Productivity/Maximisation. The rationality community is perhaps a good example of promoting such values (explicitly here). 

It feels somewhat double-edged to promote instrumental values; this can cause all sorts of trouble if they're misinterpreted or too successful.

What do you think are the important values? 

Comment by edoarad on edoarad's Shortform · 2020-07-06T11:23:38.095Z · score: 11 (5 votes) · EA · GW

Convergence (in economics) is the idea that poorer countries will grow faster than rich countries, and as a result they would eventually converge.
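For reference, the standard "beta-convergence" test from the growth literature (my addition here, not part of the original note) regresses a country's average growth rate on its initial income:

```latex
g_i = \alpha + \beta \log(y_{i,0}) + \varepsilon_i
```

where g_i is country i's average per-capita growth rate and y_{i,0} its initial income per capita; a negative beta means poorer countries grow faster, i.e. convergence.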

In my naive intuition I always imagined richer countries (or sub-communities within them) developing faster than lower-income countries through some form of accelerating Endogenous Growth.

I would be very interested in reading someone's take on the relevance of these considerations to EA, as I notice my world-view depends heavily on my beliefs about convergence. It feels important both for global poverty and for longtermism (I'd expect a multi-power world if we have convergence and a singleton if we have strong divergence), and I think there can be convincing arguments here.

Comment by edoarad on Resources to learn how to do research · 2020-07-04T13:45:15.041Z · score: 2 (1 votes) · EA · GW

Also, there is this collaborative doc on advice for new EA researchers

Comment by edoarad on Resources to learn how to do research · 2020-07-04T11:16:40.492Z · score: 20 (6 votes) · EA · GW

Charity Entrepreneurship has recently released its ongoing course and its Handbook - which has large sections on decision making and describes its great research process. 

Effective Thesis has a bunch of resources on improving research skills, but the focus is more academic.

LessWrong has several posts about improving research in its Scholarship & Learning tag, some might be relevant.

Comment by edoarad on I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA · 2020-07-01T10:01:36.926Z · score: 3 (2 votes) · EA · GW

Didn't you just violate that?

Comment by edoarad on EA Forum feature suggestion thread · 2020-06-29T11:40:58.000Z · score: 2 (1 votes) · EA · GW

Can we have a nice "Community Events" section like in LW? Can it integrate automatically with the International EA Events Calendar?

Comment by edoarad on EA is risk-constrained · 2020-06-29T09:56:35.621Z · score: 5 (3 votes) · EA · GW

Agree. The interesting question for me is where we expect the cutoff to be - what (personal, not project-dependent) conditions make it highly effective to give income to an individual.

This framing makes me notice that such a cutoff would probably be far from realistic right now, as small initiatives in EA are funding-constrained. But this still might be misleading.

Comment by edoarad on EA Forum feature suggestion thread · 2020-06-28T08:34:00.622Z · score: 2 (1 votes) · EA · GW

🔥

Comment by edoarad on EA Forum feature suggestion thread · 2020-06-28T07:46:29.434Z · score: 9 (3 votes) · EA · GW

No, sorry, though that might be a good idea. I meant an option to easily move a shortform post you have written to a top-level post. I've seen many cases where people write amazing shortform posts which might get a lot more visibility if they were promoted to top level, perhaps after getting feedback and comments from the people engaged enough with the forum to even look at the shortform.

That should transfer all comments and karma with it, and simply offer the option of adding a title.

I guess this should apply to all comments, not just in the shortform.

Comment by edoarad on The Nuclear Threat Initiative is not only nuclear -- notes from a call with NTI · 2020-06-27T06:33:24.081Z · score: 2 (1 votes) · EA · GW

What are radiological risks?

Comment by edoarad on How should we run the EA Forum Prize? · 2020-06-26T18:47:28.402Z · score: 4 (2 votes) · EA · GW

Ah, thanks! I think I was perceived as saying "content creators publish stuff which is highly valuable to the forum as well, and therefore we should give them a prize!" That is definitely not what I intended, and it was very sloppy writing on my part 🤦‍♂️

What I do think is that the forum is a good platform, and that it makes total sense to optimise for building better incentives there. Not as a matter of job descriptions; generally, when building organisations and platforms, I think it makes sense to focus some effort and resources on the org/platform itself, even if there are alternatives that might be better for its stated goals but don't work well with its other parts.

The questions were there because I specifically didn't understand why you think that "Almost all content useful to EAs is not written on the forum, and almost all authors who could write such content will not write it on the forum", regardless of what we think about the previous point. I'm very interested in your take on that!

Comment by edoarad on What is the size of the EA community? · 2020-06-26T11:36:50.247Z · score: 4 (2 votes) · EA · GW

Answer from Rethink Priorities (2020): they estimate there are around 2,315 highly engaged EAs and 6,500 (90% CI: 4,700-10,000) active EAs in the community overall.

Comment by edoarad on Is it possible to change user name? · 2020-06-26T11:34:54.579Z · score: 3 (2 votes) · EA · GW

Yeah, users don't have permission to change their own usernames. If you want to change yours, you need to contact a forum moderator, such as Aaron Gertler, JP Addison, Julia Wise, or myself (I'm not sure exactly who else has the relevant permissions).

Feel free to add a comment or send me a message with your desired username.

Comment by edoarad on Enlightenment How? · 2020-06-25T18:24:36.297Z · score: 3 (2 votes) · EA · GW

I find this idea very interesting! Several random points:

Relevant here are Kaj Sotala's sequence on meditation and an SSC post on enlightenment (which I'd find and link if the blog hadn't been deleted).

It seems that reports on enlightened people don't show much behavioural difference between them and unenlightened people, which points to a possible delusion. However, subjective reports seem quite consistent (if I recall correctly), and it is possible that their subjective experience really is much better (similar to how depressed people can look, from the outside, like ordinary people).

I'd be surprised if we don't find neurofeedback techniques that (subjectively, as measured against some relevant placebo) improve meditative practice aimed at enlightenment, at least for some of the initial steps of the practice.

It intuitively feels to me that it would take a lot of work to help someone achieve something like enlightenment (and even then, it's not clear when it is enough) - much more than alleviating other forms of suffering, which are likely to be much worse. So that's a good reason to postpone research on enlightenment. ("Enlightenment Later"?)

I also don't think it's neglected. Apart from the traditional approaches, I think there is scientific research in this direction, though I'm not really sure. I recall hearing the Dalai Lama say something about supporting related scientific research.

Comment by edoarad on Dignity as alternative EA priority - request for feedback · 2020-06-25T17:42:02.003Z · score: 16 (10 votes) · EA · GW

It is very interesting! Glad to see this post.

Do you mind expanding a bit on what you mean by dignity and why you think it's an important measure? Should dignity be valued even at the cost of well-being, or should it be used as an indirect measure (like QALYs)?

Comment by edoarad on EA is risk-constrained · 2020-06-25T17:18:27.139Z · score: 5 (4 votes) · EA · GW

I think what you are both saying makes total sense, and is probably correct. With that said, it might be the case that:

  1. it is much easier to vet people than projects,
  2. vetting is expensive,
  3. we expect some outliers to do a lot of good,
  4. financial security is critical for success, and
  5. it is technically very hard to set up many institutions or to cover many EAs as employees.