Posts

Best donation venues for helping refugees? 2020-02-09T11:54:27.378Z · score: 6 (4 votes)
How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? 2018-02-27T23:33:54.632Z · score: 8 (24 votes)

Comments

Comment by dunja on Debate and Effective Altruism: Friends or Foes? · 2018-11-12T10:15:14.111Z · score: 3 (3 votes) · EA · GW

Thanks for writing this. The suggested criticism of debate is as old as debate itself, and in addition to the reasons you list here, I'd add the *epistemic* benefits of debating.

Competitive debating allows for the exploration of the argumentative landscape of the given topic in all its breadth (from the preparation to the debating itself). That means that it allows for the formulation of the best arguments for either side, which (given all the cognitive biases we may have) may be hard to come by in a non-competitive context. As a result, debate is a learning experience, not only because one has to prepare for it, but because the consequences of what we have learned can be examined with the highest rigor possible. The latter is due to the fact that debate allows for critical interaction with 'experts' whose views conflict with one's own, which has been considered essential for the justification of our beliefs from Mill all the way to contemporary social epistemology.

Comment by dunja on Latest Research and Updates for October · 2018-11-04T16:05:19.889Z · score: 1 (1 votes) · EA · GW

Thanks a lot for this, very useful indeed. I think this list hasn't been mentioned yet: Awful AI - a curated list tracking current scary usages of AI, hoping to raise awareness of its misuses in society.

Comment by dunja on Announcing new EA Funds management teams · 2018-11-03T18:02:40.176Z · score: 1 (5 votes) · EA · GW

Update: this is all the more important in view of common ways one may accidentally cause harm by trying to do good, which I've just learned about through DavidNash's post. As the article points out, having the informed opinion of experts, and a dense network with them, can decrease the chances of harmful impacts, such as reputational harm or locking in on suboptimal choices.

Comment by dunja on Announcing new EA Funds management teams · 2018-11-01T09:12:59.335Z · score: 5 (5 votes) · EA · GW

Thanks for the explanation, Lewis. To make the team as robust to criticism and as reliable as possible, wouldn't it be better to have a diverse team, one that also includes critics of ACE? That would send the right message to donors as well as to anyone taking a closer look at EA organizations. I think it would also benefit ACE, since their researchers would have an opportunity to work directly with their critics.

Comment by dunja on Announcing new EA Funds management teams · 2018-10-31T18:05:06.978Z · score: 4 (6 votes) · EA · GW

That should always depend on the project at hand: if the project is primarily in a specific domain of AI research, then you need reviewers working precisely in that particular domain of AI; if it's in ethics, then you need experts working in ethics; if it's interdisciplinary, then you try to get reviewers from the respective fields. This also shows that it will be rather difficult (if not impossible) to have an expert team competent to evaluate each candidate project. Instead, the team should be competent in selecting the adequate expert reviewers (similarly to journal editors who invite expert reviewers for individual papers submitted to the journal). Of course, the team can do the pre-selection of projects, determining which are worthy of sending for expert review, but for that, it's usually useful to have at least some experience with research in one of the relevant domains, as well as with research proposals.

Comment by dunja on Announcing new EA Funds management teams · 2018-10-31T09:29:22.421Z · score: 12 (16 votes) · EA · GW

Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is very well aware of (I have been a strong critic of EAF's policy on and implementation of research grants, including those directed at MIRI and FRI).

My main worry is that evaluating grants aimed at research cannot be done without having them assessed by expert researchers in the given domain, that is, people with a proven track record in the given field of research. I think the best way to see why this matters is to take any other scientific domain: medicine, physics, etc. If we wanted to evaluate whether a certain research grant in medicine should be funded (e.g. for the discovery of an important vaccine), it wouldn't be enough to just like the objective of the grant. We would have to assess:

  • Methodological feasibility of the grant: are the announced methods conducive to the given goals? How will the project react to possible obstacles, and which alternative methods would be employed in such cases?

  • Fitness of the project within the state of the art: how well is the project informed by the relevant research in the given domain (e.g. are some important methods and insights overlooked; is another research team already working on a related topic, such that combining insights would increase the efficiency of the current project; etc.)?

  • etc.

Clearly, answering these questions cannot be done by anyone who is not an expert in medicine. My point is that the same goes for research in any other scientific domain, from philosophy to AI. Hence, if your team consists of people who are enthusiastic about the topic, who have experience in reading about it, or who have experience in managing EA grants and non-profit organizations, that's not adequate expertise for evaluating research grants. The same goes for your advisers: Nick has a PhD in philosophy, but that's not enough to be an expert in, e.g., AI (it's not enough to be an expert in many domains of philosophy either, unless he has a track record of continuous research in the given domain). Jonas has a background in medicine, economics, and charity evaluation, but none of that amounts to active engagement in research.

Inviting expert researchers to evaluate each of the submitted projects is the only way to award research grants responsibly. That's precisely what both academic and non-academic funding institutions do. Otherwise, how can we possibly argue that the funded research is promising and that we have done the best we can to estimate its effectiveness? This is important not only to ensure the quality of the given research, but also to handle the donors' contributions responsibly, in line with the values of EA in general.

My impression is that so far the main criteria employed when assessing the feasibility of grants are how trustworthy the team proposing the grant is, how enthusiastic they are about the topic, and how much effort they are willing to put into it. But we wouldn't take those criteria to be enough when it comes to the development of vaccines. We'd also want to see the track record of the given researchers in the field of vaccination, we'd want to hear what their peers think of the methods they wish to employ, etc. And the very same holds for research on the far future. While some may reply that the academic world is insufficiently engaged in some of these topics, or biased against them, that still doesn't mean there are no expert researchers competent to evaluate the given grants (moreover, requests for expert evaluations can be formulated in such a way as to target specific methodological questions and minimize the effect of bias). At the end of the day, if the research is to have an impact, it will have to gain the attention of that same academic world, in which case it is important to engage with its opinions and inform projects of possible objections early on. I could say more about the dangers of bias in the case of reviews and how to mitigate the given risks, so we can come back to this topic if anyone's interested.

Finally, I hope we can continue this conversation without prematurely closing it. I have tried to do the same with EAF and their research-related policy, but unfortunately, they have never provided any explanation for why expert reviewers are not asked to evaluate the research projects which they fund (I plan to do a separate longer post on that as soon as I catch some free time, but I'd be happy to provide further background in the meantime if anyone is interested).

Comment by dunja on Announcing new EA Funds management teams · 2018-10-30T09:47:33.334Z · score: 12 (8 votes) · EA · GW

I'd be curious to hear an explanation of how the team for the Long-Term Future Fund was selected. If they are expected to evaluate grants, including research grants, how do they plan to do that, what qualifies them for this job, and in case they are not qualified, which experts do they plan to invite on such occasions?

From their bio page I don't see which of them should count as an expert in the field of research (and on the basis of which track record), which is why I am asking. Thanks!

Comment by dunja on EA needs a cause prioritization journal · 2018-09-13T20:31:44.468Z · score: 6 (6 votes) · EA · GW

These are good points, and unless the area is well established, so that initial publications come from bigger names (who would thereby help establish the journal), it'll be hard to realize the idea.

What could be done at this point, though, is to have an online page that collects and reports on all the publications relevant to cause prioritization, which may help with the growth of the field.

Comment by dunja on EA needs a cause prioritization journal · 2018-09-13T08:48:00.458Z · score: 2 (2 votes) · EA · GW

I agree that journal publications certainly allow for an increase in quality due to the peer-review system. In principle, there could even be a mixed platform with an (online) journal plus a blog which (re)posts material relevant to the topic (e.g. posts made on this forum that are relevant to cause prioritization).

My main question is: is there anyone on here who's actually actively doing research on this topic and who could comment on the absence of an adequate journal, as argued by kbog? I don't have any experience with this domain, but if more people could support this thesis, then it makes sense to actually go for it.

If others agree, I suppose that for further steps, you'd need an academic with expertise in the area, who'd get in touch with one of the publishing houses with a concrete proposal (including the editorial board, the condition that articles be open access, etc.), which would host the journal.

Comment by dunja on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:39:47.462Z · score: 1 (1 votes) · EA · GW

Thanks, Benito, that sums it up nicely!

It's really about the transparency of the criteria, and that's all I'm arguing for. I am also open to changing my views on the standard criteria etc. - I just care that we start the discussion with some rigor concerning how best to assess effective research.

As for my papers - crap, it's embarrassing that I've linked paywalled versions. I have them on my Academia.edu page too, but I guess those can also be accessed only within that website... I'll have to think of a proper open-access solution here. In any case: please don't feel obliged to read my papers, there's for sure lots of other more interesting stuff out there! If you are interested in the topic, it's enough to scan them to check the criteria I use in these assessments :) I'll email them in any case.

Comment by dunja on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:16:31.761Z · score: 1 (3 votes) · EA · GW

Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate.

Sure! Which is why I've been exchanging arguments with you.

Oh, there have been numerous articles, in your field, claimed by you.

Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/

That's all well and good, but it should be clear why people will have reasons for doubts on the topic.

Sure, and so far you haven't given me a single good reason. The only thing you've done is reiterate the lack of transparency on the side of OpenPhil.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T22:11:52.064Z · score: 0 (0 votes) · EA · GW

While I largely agree with your idea, I just don't understand why you think that a new space would divide people who aren't on this forum to begin with. Like I said, 70% on here are men. So how are you gonna attract more non-male participants? This topic may be unrelated, but let's say we find out that the majority of non-males have preferences that would better align with a different type of venue. Isn't that a good enough reason to initiate it? Why would that conflict with this forum rather than complement it?

Comment by dunja on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:08:04.352Z · score: 1 (3 votes) · EA · GW

Oh no, this is not just a matter of opinion. There are numerous articles in the field of philosophy of science aimed precisely at determining which criteria help us to evaluate promising scientific research. So there is actually quite some scholarly work on this (and it is a topic of my research, as a matter of fact).

So yes, I'd argue that the situation is disturbing, since an immense amount of money is going into research for which there is no good reason to suppose that it is effective or efficient.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T21:43:51.003Z · score: 0 (0 votes) · EA · GW

Right, and I agree! But here's the thing (which I haven't mentioned so far, so maybe it helps): I think some people just don't participate in this forum much. For instance, there is a striking gender imbalance (I think more than 70% on here are men), and while I have absolutely no evidence to correlate this with near/far-future issues, I wouldn't be surprised if it's somewhat related (e.g. there are not so many tech-interested non-males in EA). Again, this is just a speculation. And perhaps it's worth a shot to try an environment that will feel safe for those who are put off by AI-related topics/interests/angles.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T21:37:40.586Z · score: 0 (0 votes) · EA · GW

OK, you aren't anonymous, so that's even more surprising. I gave you examples of your rude responses earlier, but it doesn't matter; I'm fine going on.

My impression of bias is based on my experience on this forum and my observations of how posts critical of far-future causes are received. I don't have any systematic study on this topic, so I can't provide you with evidence. It is just my impression, based on my personal experience. But unfortunately, no empirical study of this topic, concerning this forum, exists, so the best we currently have are personal experiences. Mine is based on observing larger-than-average downvoting without commenting when criticism on these issues is voiced. Of course, I may be biased and this may be my blind spot.

You started questioning my comments on this topic by stating that I haven't engaged in any near-future discussions so far. And I am replying that I don't need to have done so in order to have an argument concerning the type of venue that would profit from discussions on this topic. I don't even see how I could change my mind on this topic (good practice when disagreeing though that would be), because I don't see why one would need to engage in a discussion in order to have an opinion about it. Hope that's clear by now :)

Comment by dunja on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T21:30:05.017Z · score: 2 (4 votes) · EA · GW

Again: you are missing my point :) I don't care if it's their money or not; that's beside the point.

What I care about is: are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?

Otherwise, it makes no sense to label them as an organization that conforms to the standards of EA, at least in the case of such practices.

"Subjective", "unverifiable", etc. have nothing to do with such standards (i.e. standards conducive to effective and efficient scientific research).

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T21:27:14.729Z · score: 0 (0 votes) · EA · GW

But in many contexts this may not be the case: as I've explained, I may profit from reading some discussions, which is a kind of engagement. You've omitted that part of my response. Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain (in which, as philosophers, they've never participated). Knowledge of a field doesn't necessarily have to be obtained by object-level engagement in it.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T20:51:01.229Z · score: 0 (0 votes) · EA · GW

Right, we are able to - but that doesn't mean we cannot form arguments. Since when do arguments exist only where we can be absolutely certain about something?

As for my suggestion: unfortunately, and as I've said above, there is a bubble in the EA community concerning far-future prioritization, which may be overshadowing and off-putting for some who are interested in other topics. In the ideal context of rational discussion, your points would hold completely. But we are talking here about a very specific context where a number of biases are already entrenched and people tend to be put off by that. Your approach alone in this discussion with me is super off-putting, and my best guess is that you are behaving like this because you are hiding behind your anonymous identity. I wonder whether you'd be so rude if we talked in person (for examples, see my previous replies to you). I doubt it.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T20:46:41.470Z · score: 0 (0 votes) · EA · GW

Like I mentioned above, I may be interested in reading focused discussions on this topic and chipping in when I feel I can add something of value. Reading alone brings a lot on forums/discussion channels.

Moreover, I may assess how newcomers with a special interest in these topics might benefit from such a venue. Your reduction of a meta-topic to one's personal experience of it is a non sequitur.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T20:38:48.511Z · score: 0 (0 votes) · EA · GW

I'm recommending that you personally engage before judging it with confidence.

But why would I? I might be fond of reading about certain causes from those who are more knowledgeable about them than I am. My donation strategies may profit from reading such discussions. And yet I may engage only where my expertise lies. This is why I really can't make sense of your recommendation (which was originally an imperative, in fact).

This kind of burden-of-proof-shifting is not a good way to approach conversation. I've already made my argument.

I haven't seen any such argument :-/

What part of it doesn't make sense? I honestly don't see how it's not clear, so I don't know how to make it clearer.

See above.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T20:31:04.589Z · score: 0 (0 votes) · EA · GW

Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc, etc. There's still a difference between speculation and argument.

Could you please explain what you are talking about here, since I don't see how this is related to what you quote me saying above? Of course, there is a difference between a speculation and an argument, and arguments may still include claims expressed in a modal way. So I don't really understand how this challenges what I have said :-/

different venues are fine, they must simply be split among legitimate lines (like light chat vs serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for reasons that I stated earlier in this thread.

Having a discussion focused on certain projects rather than others (per my suggestion directly to the OP) allows for exactly such a legitimate focus - why not?

Comment by dunja on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T20:26:50.260Z · score: 2 (4 votes) · EA · GW

Again, you are missing the point: my argument concerns the criteria in view of which projects are assessed as worthy of funding. Such criteria exist and are employed by various funding institutions across academia. I haven't seen any such criteria (or a justification thereof, showing that they are conducive to effective and efficient research) in this case, which is why I've raised the issue.

we're willing to give a lot of money to wherever it will do the most good in expectation.

And my focus is on which criteria are, or should be, used to decide which research projects will do the most good in expectation. Currently such criteria are lacking, as is their justification in terms of effectiveness and efficiency.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T20:21:59.235Z · score: 0 (0 votes) · EA · GW

Civil can still be unfriendly, but hey, if you aren't getting it, it's fine.

It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.

If it were clear, why would I ask? There's your lack of friendliness in action. And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally engaged in discussing them in that context. The burden of proof is on you: if you want to make an argument, you have to provide more than just a claim. So far, you are just stating something which I can't currently make any sense of.

"talking about near-future related topics and strategies". I don't know how else I can say this.

Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic? As I said above, this is a non sequitur, or at least you haven't provided any arguments to support the thesis. I can be in a position to suggest that scientists may profit from exchanging their ideas in a venue A even if I myself haven't exchanged any ideas in A.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T13:38:29.437Z · score: -1 (1 votes) · EA · GW

I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:

But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Can you please explain what you are suggesting here? How does this conflict with my interest in near-future related topics? I have a hard time understanding why you are so confrontational. Your last sentence:

Try things out before judging.

is the highest peak of unfriendliness. What should I try exactly before judging?!

Comment by dunja on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T13:21:44.938Z · score: 1 (3 votes) · EA · GW

(1) I think it is standard practice for peer review to be kept anonymous,

The problem wasn't that the reviewer was anonymous, but that there was no access to the report.

(2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context,

Sure, but that doesn't mean no criteria should be available.

(3) you're just looking at one grant out of all that Open Phil has done,

Indeed, I am concerned with one extremely huge grant. I find the sum large enough to warrant concerns, especially since the same can happen with future funding strategies.

(4) while you are looking at computer science, their first FDT paper was accepted at Formal Epistemology Workshop, and a professional philosopher of decision theory who went there spoke positively about it.

I was raising an issue concerning journal articles, which are important even in computer science for solidifying research results. Proceedings are important for novel results, but the actual rigor of review comes through in journal publications (otherwise, journals would be pointless in this domain).

As for the rest of your post, I advise comparing the output of groups of smaller or similar size that have been funded via prestigious grants; you'll notice a difference.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T13:14:24.044Z · score: 0 (0 votes) · EA · GW

First, I disagree with your imperatives concerning what one should do before engaging in criticism. That's a non sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones, while at the same time having a genuine interest in reading about them. I am genuinely interested in reading about near-future improvement topics, while also being genuinely interested in voicing my opinion on all kinds of meta issues, especially those that are closely related to my own research topics.

Second, the fact that measuring bias is difficult doesn't mean bias doesn't exist.

Third, to use your phrase, I am not sure what you are really worried about: having different types of venues for discussion doesn't seem harmful especially if they concern different focus groups.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T11:55:17.427Z · score: 0 (0 votes) · EA · GW

No worries! Thanks for that, and yes, I agree pretty much with everything you say here. As for the discussion on far-future funding, it did start in the comments on my post, but it led nowhere near practical changes in the transparency of the criteria used to assess funded projects. I'll try to write a separate, more general post on that.

My only point was that due to the high presence of "far-future bias" on this forum (I might be wrong, but much of the downvoting-without-commenting seems to indicate at least a tendency towards biased outlooks), it's nice to have some chats on more near-future related topics and strategies for promoting those goals. I see a chat channel as a complementary venue to this forum rather than an alternative.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T10:14:26.089Z · score: 1 (3 votes) · EA · GW

Wow, you really seem annoyed... I didn't expect such an angry post, but I suppose you got really annoyed by this thread or something. I provided detailed arguments concerning OpenPhil's practices in a post from a few months ago: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.

I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc. I don't have the time. I plan on writing a post concerning EAF's funding policy as well, where I'll sum it up in a similar way as I did for OpenPhil.

That said, I don't think we shouldn't criticize the research done by near-future organizations - on the contrary. And I completely agree: it'd be great to have a forum devoted only to research practices and the funding thereof. But concerning far-future causes, research is the only thing that can be funded, which makes the issue particularly troublesome.

Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.

Err, no. Funding by academic institutions follows a whole set of criteria (take the ERC scheme, for instance), which can of course be discussed on their own, but which aim at efficient and effective research. The funding of AI-risk related projects follows... well, nobody could ever specify to me any criteria to begin with, except "an anonymous reviewer whom we trust likes the project" or "they seem to have many great publications" - which, once one takes a look, don't really exist. That's as far from academic procedures as it gets.

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T10:02:04.282Z · score: 23 (18 votes) · EA · GW

This is a nice idea, though I'd like to suggest some adjustments to the welcome message (also in view of kbog's worries discussed above). Currently the message begins with:

"(...) we ask that EAs who currently focus on improving the far future not participate. In particular, if you currently prioritize AI risks or s-risks, we ask you not participate."

I don't think it's a good idea to select participants in a discussion according to what they think or do (it pretty much comes down to an argumentum ad hominem fallacy). It would be better to specify what the focus of the discussion is and to welcome those interested in that topic. So I suggest replacing the above with:

"we ask that the discussion be focused on improving the near future, and that the far-future topics (such as AI risks or s-risks) be left for other venues, unless they are of direct relevance for an ongoing discussion on the topic of near future improvements." (or something along those lines).

Comment by dunja on Near-Term Effective Altruism Discord · 2018-09-10T09:04:30.196Z · score: 0 (2 votes) · EA · GW

Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren't following your lines of reasoning (unfortunately, of course). For instance, the community practices related to the far-future focus (in particular AI risks) include ways of assessing scientific research and funding it which I find lacking in scientific rigor, transparency, and overall validity (to the point that it makes no sense to speak of "effective" charity). Moreover, there is a large consensus about such evaluative practices: they are assumed to be valid by OpenPhil and the EAF, and even when I tried to exchange arguments with both of these institutions, nothing ever changed (I never even managed to push them into a public dialogue on this topic). I see this problem as a potential danger for the EA community as a whole (just think of the press getting their hands on it and pointing out that EAs finance scientific research which is assumed to be effective while it is unclear according to which criteria it would count as such; similarly for newcomers). In view of this, I think dividing these practices would be a great idea. The fact that they are connected to "far-future EA" is secondary to me, and it is unfortunate that far-future ideas have turned into a bubble of their own, closed to criticism questioning the core of their EA methodology.

That said, I agree with some of your worries (see my other comment here).

Comment by dunja on Are men more likely to attend EA London events? Attendance data, 2016-2018. · 2018-08-15T19:17:16.270Z · score: 0 (0 votes) · EA · GW

That would be great!

Comment by dunja on Are men more likely to attend EA London events? Attendance data, 2016-2018. · 2018-08-15T08:30:45.701Z · score: 0 (0 votes) · EA · GW

Oh damn :-/ I was just gonna ask for the info (been traveling and could only reply now). That's really interesting - is this info published somewhere online? If not, it might be worthwhile to make a post on it here and discuss both the reasons for the predominantly male community and ideas for how to make it more gender-balanced.

I'd be very interested in possible relations between the lack of gender balance and the topic of representation discussed in another recent thread. For instance, it'd be interesting to see whether non-male EAs find the forum insufficiently focused on causes which they find more important, or largely focused on issues that they do not find as important.

Comment by dunja on The Ethics of Giving Part Three: Jeff McMahan on Whether One May Donate to an Ineffective Charity · 2018-08-11T15:03:16.770Z · score: 0 (0 votes) · EA · GW

Thanks a lot for writing this up - it's nice to get some info on this literature. I didn't quite get, though, the relationship between the selfish option and "doing good ineffectively" - why do you think that rejecting the selfish option would be a response to the ineffective charity?

Comment by dunja on Are men more likely to attend EA London events? Attendance data, 2016-2018. · 2018-08-10T08:24:21.635Z · score: 1 (1 votes) · EA · GW

Thanks a lot for this post, that's really interesting and highly relevant. I'd be curious to see also the proportion of women in online forums such as this one. And of course, I'm super interested in possible reasons behind the tendencies you describe.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-07T08:25:57.405Z · score: 1 (1 votes) · EA · GW

Hey Evan, thanks for the detailed reply and the encouragement! :) I'd love to write a longer post on this and I'll try to do so as soon as I catch some more time! Let me just briefly reply to some of your worries concerning academia, which may be shared by others across the board.

  1. Efficiency in terms of time - the idea that academics can't do as much research as non-academics due to teaching duties is not necessarily true. I am speaking here for the EU, where in many cases both pre-docs and post-docs don't have many (or any) teaching duties (e.g. I did my PhD in Belgium, where the agreement was that PhDs focus only on research). Moreover, even if you do have teaching duties, they may often inform your research, and as such the time is usually not "wasted" (when it comes to research results). As for professors, this largely depends on the country, but there are many examples of academics with a professor's title whose productivity is super high in spite of teaching duties.

  2. Focusing on sexy topics - there is this misconception that sexy topics won't pass through academia, while actually the opposite is the case: the sexier your topic is, the more likely it is that your project gets funded. The primary issue with any topic whatsoever is whether the project proposal shows how the topic will be investigated, i.e. the basic methodology. I don't know where exactly this myth comes from, to be honest. I work in philosophy of science, and the more relevant your topic is to real-world problems, the more attractive your project proposal will be (at least in the current funding atmosphere). One reason why this myth is so entrenched among EAs could be the experience of EAs within research projects which already had pre-determined goals, so that each researcher had to focus on whatever their boss asked them to. However, there are numerous possibilities across the EU to apply with one's own project proposal, in which case you will do precisely what you propose. Another reason could be that EAs don't have much experience with applications for funding and have submitted project proposals that don't seem convincing in terms of methodology (writing projects is a skill which needs to be learned like any other), leading them to conclude that academics don't care about the given topics.

  3. Using public funding for EA purposes - this point relates to what you mention above and I think it would be really great if this direction could be improved. For instance, if academics within EA formed a sort of counseling body, helping EAs with their project proposals, choice of a PhD supervisor, etc. This would be a win-win situation for all kinds of reasons: from integrating EA relevant research goals into academia, to using public funding sources (rather than EA donations) for research. This could proceed e.g. in terms of real-life workshops, online discussions, etc. I'd be happy to participate in such a body so maybe we should seriously consider this option.

Comment by dunja on EA Forum 2.0 Initial Announcement · 2018-08-07T08:01:43.007Z · score: 0 (0 votes) · EA · GW

Yeah, in the case of obvious crap posts (like spam) they'll be massively downvoted. Otherwise, I've never seen a serious post here that was only massively downvoted. Rather, you'd have some downvotes and some upvotes, and the case you describe doesn't capture this situation. In fact, an initial row of downvotes may misleadingly give such an impression, leading some people to ignore the issue, while a later row of upvotes may actually show the issue is controversial and as such indeed deserves further discussion.

Comment by dunja on Problems with EA representativeness and how to solve it · 2018-08-04T15:56:45.187Z · score: 1 (3 votes) · EA · GW

Hi John, I don't have any concrete links, but I'd start by distinguishing different kinds of far-future causes: on the one hand, those that are supported by a scientific consensus, and on the other, those that are a matter of scientific controversy. An example of the former would be global warming (which isn't even that far in the future for some parts of the world), while an example of the latter would be the risks related to the development of AI.

Now in contrast to that, we have existing problems in the world: from poverty and hunger, to animal suffering across the board, to existing problems related to climate change, etc. While I wouldn't necessarily prioritize these causes over future-oriented charities (say, climate-related research), it is worth keeping in mind that investing in the reduction of existing suffering may have an impact on the reduction of future suffering as well (e.g. by increasing the number of vegans we may influence the ethics of the human diet in the future). The impact of such changes is much easier to assess than the impact of research in an area concerning risks which are extremely hard to predict. Hence, I don't think the research on AI risks is futile --not at all-- I just find it important to have clear assessment criteria, just as in any other domain of science: what counts as an effective and efficient research strategy, how will future assessments of the currently funded projects proceed (in order to determine how much has been achieved within these projects and whether a different approach would be better), whether the given cause is already sufficiently funded in comparison to other causes, etc.

Comment by dunja on Leverage Research: reviewing the basic facts · 2018-08-04T14:23:52.065Z · score: 3 (7 votes) · EA · GW

Part of what we do is help people to understand themselves better via introspection and psychological frameworks.

Could you please specify which methods of introspection and which psychological frameworks you employ to this end, and how you ensure that these frameworks rest on adequate scientific evidence, obtained by reliable methods?

Comment by dunja on Problems with EA representativeness and how to solve it · 2018-08-03T23:04:50.359Z · score: 2 (4 votes) · EA · GW

Thanks for the link, Michael - I've missed that post and it's indeed related to the current one.

Thanks, Joey, for writing this up. My worry is that making any hard rules for what counts as representative may do more harm than good, if only due to deep (rational) disagreements that may arise on any particular issue. The example Michael mentions is a case in point: for instance, while I may not necessarily disagree that research on AI safety is worthy of pursuit (though see the disagreements of Yann LeCun, the head of AI research at Facebook, with Bostrom's arguments), I find the transparency of the criteria used by EA organizations to decide which projects to fund unsatisfactory, to the point of endangering the EA movement and its reputation when it comes to the claim that EA is about effective paths to reducing suffering. The primary problem, as I argued in this post, is that it remains unclear why the currently funded projects should count as effective and efficient scientific research.

In view of this, I find it increasingly frustrating to associate myself with the EA movement and its recent development, especially since the issue of efficiency of scientific research is the very topic of my own research. The best I can do is to treat this as an issue of a peer disagreement, where I keep it open that I might be wrong after all. However, this also means we have to keep an open dialogue since either of the sides in the disagreement may turn out to be wrong, but this doesn't seem easy. For instance, as soon as I mention any of these issues on this forum, a few downvotes tend to pop up, with no counterargument provided (edit: this current post ironically turned out to be another case in point ;)

So altogether, I'm not sure I feel comfy associating myself with the EA community, though I indeed deeply care about the idea of effective charity and the effective reduction of suffering. And introducing a rule-book which would claim, for instance, that EAs support the funding of research on AI safety would make me feel just as uncomfy - not because of the idea in principle, but because of its current execution.

EDIT: Just wanted to add that the proposal for community-building organizations to strive for cause indifference sounds like a nice solution.

Comment by dunja on EA Forum 2.0 Initial Announcement · 2018-08-02T10:33:53.325Z · score: 1 (1 votes) · EA · GW

Hi Max! I agree, it indeed provides information, but the problem is that the information is too vague, and it may easily reflect sheer bias (as in: "I don't like any posts that question the work of OpenPhil"). I think this is a strong sentiment in this community, and as an academic who is not affiliated with OpenPhil or any other EA organization, I've noticed numerous cases of silent rejection of a certain problem. I don't think these issues arise with "mainstream" EA topics (points on which the majority here agrees). But as soon as it comes to polarized issues (say, the funding of non-academic institutions to conduct academic research), the majority that downvotes doesn't say a word. I found it quite entertaining (but also disappointing) when I made a longer post on this topic, only to find a bunch of downvotes without concrete engagement with the topic. My interpretation of what happened there: people dislike someone making waves in their little pond.

I understand you may wish to proceed as you've suggested, but eventually this community will push away dissenters, who are very fond of EA, but who just don't see any point in presenting critical arguments on this platform.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-02T10:17:10.505Z · score: 2 (4 votes) · EA · GW

Hi Evan, here's my response to your comments (including another post of yours from above). By the way, that's a nice example of industry-compatible research; I agree that such and similar cases can indeed fall under what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.

Your arguments seem to be based on the assumption that EAs can work on EA-related topics more effectively and efficiently than academics not explicitly affiliated with EA (but please correct me if I've misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks and safety). While I agree that being an EA can contribute to one's motivation for the given research topic, I don't see any rationale for the claim that EAs are more qualified to do scientific research relevant to EA than non-EAs. That would mean that, say, Christians are a priori more qualified to do research that furthers Christian values. I think this is a non sequitur.

Whether a certain group of people can conduct a given project in an effective and efficient way shouldn't primarily depend on their ethical and political mindset (though this may play a motivating role, as I've mentioned above), but on the methodological prospects of the given project, on its programmatic character, and on the capacity of the given scientific group to make an impact. I don't see why EAs --as such-- would satisfy these criteria any more than an expert in the given domain would, when placed within the framework of the given project. It is important to keep in mind that we are not talking here about the political activity of spreading EA ideas, but about scientific research, which has to be conducted with the necessary rigor in order to make an impact in the scientific community and beyond (otherwise nobody will care about the output of the given researchers). These are the kinds of criteria that I wish were present in the assessment of the given grants, rather than who is an EA and who isn't.

Second, by prioritizing a certain type of group in the given domain of research, the danger of confirmation bias increases. This is why feminist epistemologists have been arguing for diversity across the scientific community (rather than for the claim that only feminists should do feminist-compatible scientific research).

Finally, if there is a worry that academic projects focus too much on other issues, the call for funding can always be formulated in such a way that it specifies the desired topics. In this way, academic project proposals can be formulated having EA goals in mind.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-02T09:56:05.809Z · score: -1 (3 votes) · EA · GW

But what about paying for teaching duties (i.e. using the funding to cover the teaching load of a given researcher)? Teaching is one of the main constraints on time spent on research, and this would mean that OU can't accept the funding framework of quite common ERC grants, which have this issue covered. This was my point all along.

Second, what about paying for better equipment? That was another issue mentioned in Nick's post.

Finally, the underlying assumption of Nick's explanation is that, within the given projects, the output of non-academic workers will be better than the output of academic workers, which is a bold claim and insufficiently explicated in the text he provided. Again: I don't know which projects we are assessing here, and without that knowledge we cannot make an adequate assessment. Anything else would be mere speculation. I am just making a plea for greater transparency, given the complexity of these issues.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-02T09:52:18.951Z · score: 1 (1 votes) · EA · GW

But that's just not necessarily true: as I said, academics can accept money to cover e.g. teaching duties and hence do more research. If you look at ERC grants, that's part of their format in the case of Consolidator and Advanced grants. So it really depends on who applied for which funds, which is why Nick's explanation isn't satisfactory.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T17:34:29.657Z · score: -2 (4 votes) · EA · GW

Thanks for the input! But I didn't claim that Nick is biased against academia - I just find the lack of clarity on this point, and his explanation of why university grants were disqualified, unsatisfactory.

As for your point that it is unlikely for people with PhDs to be biased, I think ex-academics can easily hold negative attitudes towards academia, especially after exiting the system.

Nevertheless, I am not concluding from this that Nick is biased (nor that he isn't) - we just don't have evidence for either of these claims, and at the end of the day, this shouldn't matter. The procedure for awarding grants should be robust enough to prevent such biases from kicking in. I am not sure any such measures have been undertaken in this case, though, which is why I am raising this point.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T12:41:09.857Z · score: 0 (10 votes) · EA · GW

Couldn't agree more. What is worse (as I mention in another comment), university grants were disqualified for no clear reason. I don't know which university projects were considered at all, but the underlying assumption seems to be that, irrespective of how good they would be, the other projects will perform more effectively and more efficiently, even though they are already funded, i.e. they would simply be given some more cash.

I think this is a symptom of the anti-academic tendencies that I've noticed on this forum and in this particular domain of research, which I think it would be healthy to discuss. The importance of the issue is easy to understand if we think of any other domain of research: just imagine that we started arguing that non-academic climate research centers should be financed instead of academic ones, or that research in medicine should be redirected from academic institutions towards non-academic ones. I'd be surprised if anyone here would defend such a policy. There are good reasons why academic institutions --with all their tedious procedures, peer-review processes, etc.-- are important sources of reliable scientific knowledge production. Perhaps we are dealing here with an in-group bias, which needs an open and detailed discussion.

Comment by dunja on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-31T10:20:21.758Z · score: 1 (3 votes) · EA · GW

I'd be curious to hear some explanation of

"University-based grantees were not considered for these grants because I believe they are not well-positioned to use funds for time-saving and productivity-enhancement due to university regulations."

since I have no clue what that means. In the text preceding this claim, it is only stated that "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)" - but university staff can indeed use funding to cover teaching duties, as well as to buy better equipment.

Moreover, if it were any other domain of research (say, medicine or physics), I'd be rather worried if university-based grants were disqualified for this kind of reason.

Comment by dunja on EA Forum 2.0 Initial Announcement · 2018-07-25T18:25:02.388Z · score: 0 (0 votes) · EA · GW

Ahh, now I get you! Yeah, that sounds like a good idea! Like I've mentioned in another reply, I wouldn't require the same for upvotes, because they may simply signal the absence of counterarguments, while a downvote implies recognizing that there is a problem, in which case it'd only be fair to state which one it is.

Comment by dunja on EA Forum 2.0 Initial Announcement · 2018-07-22T21:25:45.940Z · score: 0 (0 votes) · EA · GW

Oh thanks for sharing this!

Comment by dunja on EA Forum 2.0 Initial Announcement · 2018-07-22T21:24:19.309Z · score: 0 (0 votes) · EA · GW

Yes, that's a good point, I've been wondering about this as well. According to one (pretty common) approach to argumentation, an argument is acceptable unless challenged by a counterargument. From that perspective:

upvoting = an acknowledgement of the absence of a counterargument.

downvoting = an observation that there is a counterargument, in which case it should be stated.

This is just an idea off the top of my head; I'd be curious to discuss it in more detail since I find it genuinely interesting :)

Comment by dunja on EA Forum 2.0 Initial Announcement · 2018-07-22T10:46:44.326Z · score: 0 (0 votes) · EA · GW

That'd probably already be better than nothing ;) Then again, I'm afraid most people would still just (anonymously) downvote without giving reasons. It's much easier to hide behind an anonymous veil than to take a stance and open yourself up to debate.

In fact, I'd be curious to see some empirical data on how correlated the act of downvoting and the absence of commenting are. My guess is that those who provide comments (including critical ones) mostly don't downvote except in extreme cases (e.g. discrimination, posts obviously off-topic for the forum, obvious misinformation, etc.).