Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-01-18T11:01:25.535Z · score: 3 (2 votes) · EA · GW

By that logic, you are turning the idea of an x-risk into anything that really matters in the long run. Poverty would then be an x-risk too under this definition. That makes it an unhelpful definition, and it is also very different from how most people understand the term.

Extinction (or something just as bad): x-risk. I go by that.

Comment by kbog on Is Superintelligence Here Already? · 2019-01-15T11:41:10.320Z · score: 4 (3 votes) · EA · GW

To the title question: no, it isn't, for we have not observed any superintelligent behavior.

"Intelligence" does not necessarily need to have anything to do with "our" type of intelligence, where we steadily build on historic knowledge; indeed this approach naturally falls prey to preferring "hedgehogs" (as compared to "foxes" in the hedgehogs v. foxes comparison in Tetlock's "superintelligence")

Foxes absolutely build on historic knowledge. "Our" (i.e. human) intelligence can be either foxlike or hedgehoglike; after all, both were featured in Tetlock's research. And in any case, this is not what FHI means by the idea of a unitary superintelligent agent.

- who are worse than random at predicting the future;

Foxes are not worse than random at predicting the future; in Tetlock's research they were modestly more accurate than hedgehogs.

AI has already far surpassed our own level of intelligence

Only in some domains, and computers have been better at some domains for decades anyway (e.g. arithmetic).

this represents a real deep and potentially existential threat that the EA community should take extremely seriously.

The fact that corporations maximize profit is an existential threat? Sure, in a very broad sense it might lead to catastrophes. Just like happiness-maximizing people might lead to a catastrophe, career-maximizing politicians might lead to a catastrophe, security-maximizing states might lead to a catastrophe, and so on. That doesn't mean that replacing these things with something better is feasible, all-things-considered. And we can't even talk meaningfully about replacing them until a replacement is proposed. AFAIK the only way to avoid profit maximization is to put business under public control, but that just replaces profit maximization with vote maximization, and possibly creates economic problems too.

It is also at the core of the increasingly systemic failure of politics

Is there any good evidence that politics is suffering an increasing amount of failure, let alone a systemic one?

Before you answer, think carefully about all the other 7 billion people in the world besides Americans/Western Europeans. And what things were like 10 or 20 years ago.

this is particularly difficult for the EA community to accept given the high status they place on their intellectual capabilities

I don't know of any evidence that EAs are irrational or biased judges of the capabilities of other people or software.

potentially the prime purpose of politics should thus be to ensure that corporations act in a way that is value aligned with the communities they serve, including international coordination as necessary.

The prime purpose of politics is to ensure security, law, and order. This cannot be disputed, as every other policy goal is impossible until governance is achieved, and anarchy is the worst state of affairs for people to be in. Maybe you mean that the most important political activity for EAs, at the margin right now, is to improve corporate behavior. Potentially? Sure. In reality? Probably not, simply because there are so many other possibilities that must be evaluated as well: foreign aid, defense, climate change, welfare, technology policy, etc.

Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T10:07:27.223Z · score: 2 (1 votes) · EA · GW

It's really not clear how any geoengineering plan would cause extinction (such plans only aim to make modest changes to temperatures and precipitation, e.g. to counteract climate change), and there is such popular antipathy toward geoengineering that we can expect polities to err on the side of too little geoengineering rather than too much.

Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T10:03:59.829Z · score: 2 (1 votes) · EA · GW
natural catastrophe-->famine-->war/refugees

AFAIK this is not how the current refugee crisis occurred. The wars in the Middle East / Afghanistan were not caused by climate change.

are already warping politics in the developed world in ways that will make it more difficult to fight climate change (e.g. strengthening politicians who believe climate change is a myth

If climate change worsens, that will push people to stop voting for politicians who call it a myth.

You're also relying on the assumption that leaders who oppose immigration will also be leaders who doubt climate change. That may be true in the US right now but as a sweeping argument across decades and continents it is unsubstantiated. It's also unclear if such politicians will increase or decrease x-risks.

Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T09:57:49.042Z · score: 2 (1 votes) · EA · GW
X-risk is not just extinction

But it is limited to outcomes that are morally close to extinction: the loss of most of humanity's capacity and potential. A nuclear winter of a few degrees would not impact agriculture so adversely as to cause that. At this point you are multiplying so many small probabilities in series that you cannot call climate change a real x-risk without doing the same for many other things that are equally likely to set a chain of bad events in motion.

Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-01-15T09:51:10.832Z · score: 3 (2 votes) · EA · GW
However, there are some ideas about how to use existing nuclear stockpiles to cause more damage and trigger a larger global catastrophe; the most discussed is nuking a supervolcano,

Absurd. Why would anyone do that?

a retaliation attack on the US may include an attack on Yellowstone, but I don't know if it is part of the official doctrine.

I'm sure it isn't. Also, scientifically speaking, it doesn't even seem possible to ignite a supervolcano with nukes.

Future nuclear war could be using even more destructive weapons

Even the most destructive weapons ever built (e.g. the Tsar Bomba) have never been used in war. Warhead yields have shrunk over recent decades, and there is no particular reason for that trend to reverse.

Comment by kbog on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T08:00:52.752Z · score: 1 (4 votes) · EA · GW

Having looked at warhead stocks, nuclear winter research, etc, I think nuclear war isn't an x-risk either.

I'm also rather doubtful that climate change significantly increases the probability of nuclear war. Regional conflicts and insurgencies in certain places, sure. But the pathway from there to nuclear war is very unclear. You can point to the Indo-Pakistani dyad as a possible flashpoint, but both of them have few nuclear weapons. And their historical conventional conflicts did not escalate to involve other countries.

A spreadsheet for comparing donations in different careers

2019-01-12T07:32:51.218Z · score: 6 (1 votes)
Comment by kbog on An integrated model to evaluate the impact of animal products · 2019-01-11T02:15:29.898Z · score: 5 (2 votes) · EA · GW

Well we have to count countervailing biases among animal activists and utilitarians too.

Comment by kbog on An integrated model to evaluate the impact of animal products · 2019-01-11T01:58:37.497Z · score: 2 (1 votes) · EA · GW

Ah, yeah, thanks for underlining that. I say "elasticity effect" as opposed to "elasticity"; maybe that's not clear enough.

My final comment is that I am puzzled by your conclusion regarding milk given that the welfare metrics you use are just scales from better to worse and do not have an interpretation for their absolute value.

The quality of life evaluations both have 0 for a life with neutral value, if that's what you mean.

Each point in the scoring corresponds to 1/100 of the difference between a neutral life and a life with all interests satisfied, for a human, for one day.

So the estimated net harm of a cup of milk is like feeling 1% worse for a day, on my assumptions.
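The point-scale convention above can be sketched in a few lines. This is a minimal illustration under assumed names; the one-point milk harm is used only as an example, not a value taken from the actual spreadsheet:

```python
# Minimal sketch of the welfare-point convention described above.
# Names and the -1-point milk figure are illustrative assumptions.

NEUTRAL_LIFE = 0.0         # a life of neutral value scores 0
ALL_INTERESTS_MET = 100.0  # all interests satisfied, for one human, for one day

def points_as_human_day_fraction(points: float) -> float:
    """Convert welfare points to a fraction of the neutral-to-fully-satisfied
    gap for one human-day (1 point = 1/100 of that gap)."""
    return points / (ALL_INTERESTS_MET - NEUTRAL_LIFE)

# An estimated net harm of -1 point for a cup of milk then reads as
# "feeling 1% worse for a day":
print(points_as_human_day_fraction(-1.0))  # -0.01
```

This only fixes the interpretation of the unit; the harm estimate itself still comes from the model's inputs.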

Comment by kbog on An integrated model to evaluate the impact of animal products · 2019-01-11T01:31:07.034Z · score: 2 (1 votes) · EA · GW

Thanks, that looks like a mistake; it should be listed as a dummy estimate. It's fixed now with a more accurate estimate; pork comes out slightly worse than in the original version.

It does seem unusually pessimistic; it is the result of combining Norwood's relative pessimism about factory-farmed pork with Charity Entrepreneurship's general pessimism about farm animals.

Comment by kbog on An integrated model to evaluate the impact of animal products · 2019-01-09T20:04:32.175Z · score: 2 (1 votes) · EA · GW

I did estimate high cross-price elasticity effects between free range and cage eggs. If you add other free range/humane products to the spreadsheet then you will have to estimate CPE effects with this sort of thing in mind. The model has capacity for it, just add more rows and columns in the matrices and interpolate the formulas. You can also include different plant products if you like, but I think you are really just adding needless complexity and noise at that point, those differences seem small and nearly impossible to estimate. I lumped all plant products together with average emissions and the assumption of equal-or-nonexistent elasticity effects.
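As a rough sketch of what adding rows and columns to such a CPE matrix involves (product names and elasticity values here are invented for illustration, not taken from the actual spreadsheet):

```python
import numpy as np

# Hypothetical CPE matrix: entry (i, j) is the change in consumption of
# product i (in units) caused by a consumer forgoing one unit of product j.
products = ["cage eggs", "free-range eggs", "plant foods"]
cpe = np.array([
    [-1.0,  0.3,  0.0],   # cage eggs
    [ 0.4, -1.0,  0.0],   # free-range eggs
    [ 0.2,  0.2, -1.0],   # plant foods (lumped together, as in the model)
])

# Adding a new humane product means appending a row and a column and
# filling in the new cross-effects (zeros here except a guessed
# substitution with free-range eggs):
cpe = np.pad(cpe, ((0, 1), (0, 1)))  # pads with zeros
products.append("humane pork")
cpe[3, 3] = -1.0
cpe[1, 3] = cpe[3, 1] = 0.1  # guessed free-range/humane substitution

# Net consumption change across all products when a consumer forgoes
# one unit of free-range eggs:
forgone = np.array([0.0, 1.0, 0.0, 0.0])
net_change = cpe @ forgone
```

The design point is just that each new product multiplies the number of cross-effects to estimate, which is why lumping plant products together keeps the noise down.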

Comment by kbog on An integrated model to evaluate the impact of animal products · 2019-01-09T19:57:25.159Z · score: 2 (1 votes) · EA · GW

Those welfare points are on a roughly -100 to +100 point scale. It's not a real QALY.

An integrated model to evaluate the impact of animal products

2019-01-09T11:04:57.048Z · score: 32 (18 votes)
Comment by kbog on What movements does EA have the strongest synergies with? · 2018-12-22T03:29:34.942Z · score: 7 (6 votes) · EA · GW

Soccer, because there has been a recent trend of professional soccer players giving up portions of their salary to charity. Usually they give it to things like soccer opportunities in the developing world.

Comment by kbog on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T18:16:31.912Z · score: 10 (7 votes) · EA · GW

I understand disagreement about how harsh or gentle of a tone is appropriate here, but we must at least accept the clear expression of extreme rankings lest we lose the ability to share meaningful credences. We should not make it impossible or overly difficult to say that something is the worst, or that we are certain about that fact, because sometimes a thing really is the worst (SOMETHING must be! It's trivially true, assuming ordering!) and losing that information will bias us. If you ever find yourself writing from a similar position to mine, I'd urge you to find a graceful way to express your belief that something has negative expected value and is inferior to all the other things, and make sure that you aren't moderating yourself to expressing positivity or uncertainty which you do not really believe.

I want to make it clear that I would express these beliefs with more grace and cushioning to an author who I could give the benefit of the doubt (e.g., Kelsey Piper, or a source I had never seen before). My approach here is partially informed by Matthews and Vox's track record, they have been criticized before.

I hope my early, explicit statement that most of their articles are good makes it clear that I don't wish for Matthews or Vox to be run out of this whole thing. And hopefully it's implicit that I don't really worry about similar articles that they write in non-EA contexts; I just file those into the bin of permissible propaganda. So all I have to do is lead them to the very easy compromise of keeping this sort of thing on the other parts of their site, which means that (a) a stiff response won't back them into a corner, and (b) I don't have to complete the Sisyphean task of truly changing their minds about conservatives in America.

Given in particular that Future Perfect is not funded by donors who explicitly identify with EA ideas, and that it is run by Vox, my quick guess is that careful constructive criticism is far more valuable / lower risk than more assertive / slightly aggressive criticism (apologies if I'm already preaching to the choir here).

Good point, but Vox feels this risk as well, which is why a response like this will encourage them to take the easy compromise rather than face the risk.

(Also, since Matthews's own opinion is that across-the-aisle friendship is overrated, and that "airing grievances" can be a good learning experience, surely I get a bit of extra leeway here.)

Comment by kbog on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-21T02:01:16.400Z · score: 1 (2 votes) · EA · GW

I remember Founders Pledge saying something about this before, they work with a lot of startup founders so they often take the existing priorities of people peripheral to EA as given. They have other cause reports like this.

Comment by kbog on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-21T01:58:06.582Z · score: 2 (1 votes) · EA · GW

Good point, though what about the $60-per-sexual-assault-averted one? That even seems better than AMF on combined impact.

Response to a Dylan Matthews article on Vox about bipartisanship

2018-12-20T15:53:33.177Z · score: 62 (33 votes)
Comment by kbog on [Link] "Would Human Extinction Be a Tragedy?" · 2018-12-19T04:41:07.882Z · score: 3 (2 votes) · EA · GW

It's <current year> and people still think climate change might cause human extinction!

The VHEMT sort of thing has been around for a while; it's not really new. The newer development seems to be that more people are taking moral antinatalism seriously.

Comment by kbog on Quality of life of farm animals · 2018-12-19T02:57:57.937Z · score: 3 (2 votes) · EA · GW

Well, thanks, that's neat; I would sooner use that than the other estimates. But it seems that you are likewise assuming that the scales run between equal positive and negative extremes, e.g., no disease is +17 but severe disease is only -17.

Quality of life of farm animals

2018-12-14T19:21:37.724Z · score: 2 (4 votes)
Comment by kbog on EA needs a cause prioritization journal · 2018-10-09T19:33:44.675Z · score: 0 (0 votes) · EA · GW

OK, I've sent you a connection request.

Comment by kbog on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-30T10:39:45.999Z · score: 1 (1 votes) · EA · GW

There is no reason to prefer that over simply creating people with happy lives. You can always simulate someone who believes that they have just been saved from suffering if that constitutes the best life. The relation to some historical person who suffered is irrelevant and inefficient.

Deterring unfriendly AI is another matter. There are so many possible goal functions that can be used to describe possible unfriendly AIs that a general strategy for deterring them doesn't make sense. At least not without a lot of theoretical groundwork that is presently lacking.

Comment by kbog on A model of the Machine Intelligence Research Institute - Oxford Prioritisation Project · 2018-09-29T15:42:06.570Z · score: 0 (0 votes) · EA · GW

The model doesn't directly use the 1% per year figure; rather, it says the mean of the probability distribution for solving the agenda is 18% overall. That seems pretty reasonable to me. One in a million, on the other hand, would be very wrong (even per year), because it is many orders of magnitude below the base rate at which researchers solve the problems they set out to solve. And clearly Paul does not feel as though they are meeting only one-millionth of their agenda per year, nor do I from what I have seen of MIRI's research so far.

Comment by kbog on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-28T09:07:56.640Z · score: 2 (2 votes) · EA · GW

Identity is irrelevant if you evaluate total or average welfare through a standard utilitarian model.

Comment by kbog on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-28T09:04:46.164Z · score: 0 (0 votes) · EA · GW

The point, presumably, is that people would feel better because of the expectation that things would improve.

1/1000 people supposedly feels better, but then 999/1000 people will feel slightly worse, because they are given a scenario where they think that things may get worse, when we have the power to give them a guaranteed good scenario instead. It's just shifting expectations around, trying to create a free lunch.

It also requires that people in bad situations actually believe that someone is going to build an AI that does this. As far as ways of making people feel more optimistic about life go, this is perhaps the most convoluted one that I have seen. Really there are easier ways of doing that: for instance, make them believe that someone is going to build an AI which actually solves their problem.

Comment by kbog on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T18:50:19.083Z · score: 3 (3 votes) · EA · GW

This is an algorithmic trick without ethical value. The person who experienced suffering still experienced suffering. You can outweigh it by creating lots of good scenarios, but making those scenarios similar to the original one is irrelevant.

Comment by kbog on Current Estimates for Likelihood of X-Risk? · 2018-09-26T00:13:08.967Z · score: 1 (1 votes) · EA · GW

There is more to the issue, you need to look at the chance that extinction can be avoided by a given donation. Otherwise it would be just like saying that we should donate against poverty because there are a billion poor people.

Comment by kbog on A model of the Machine Intelligence Research Institute - Oxford Prioritisation Project · 2018-09-22T19:33:46.770Z · score: 1 (1 votes) · EA · GW

One in a billion? Of course. Have you thought about how tiny one in a billion is? I'm confident in saying that the average user of this forum has a greater than one-in-a-billion chance of personally designing and implementing safe AI.

I don't see that number in the model, though. You might want to make a copy of the model with your own estimates - or find something specific to dispute. You seem to be working backwards from a conclusion; it's better to consider the input probabilities one step at a time.

Comment by kbog on EA needs a cause prioritization journal · 2018-09-14T11:17:40.243Z · score: 1 (1 votes) · EA · GW

For work people are happy to do in sufficient detail and depth to publish, there are significant downsides to publishing in a new and unknown journal. It will get much less readership and engagement, as well as generally less prestige. That means if this journal is pulling in pieces which could have been published elsewhere, it will be decreasing the engagement the ideas get from other academics who might have had lots of useful comments, and will be decreasing the extent to which people in general know about and take the ideas seriously.

Yes. On the other hand, there has been relatively little publication on cause priorities anyway. I think most of the content here would be counterfactually unpublished.

For early stage work, getting an article to the point of being publishable in a journal is a large amount of work. Simply from how people understand journal publishing to work, there's a much higher bar for publishing than there is on a blog. So the benefits of having things looking more professional are actually quite expensive.

Well some of this (though not all) adds value in terms of making the article more robust or more communicable. Also, a uniform format maybe helps keep people from being biased by the article's appearance (probably a very small effect, however).

Comment by kbog on EA needs a cause prioritization journal · 2018-09-14T09:41:18.798Z · score: 1 (1 votes) · EA · GW

I didn't say it would be run by non-academics.

No one would be able to put this journal on an academic CV.

That will depend on who runs it!

So there's really no benefit to "publishing" relative to posting publicly and letting people vote and comment.

Well there are many ways to run a review process besides public votes and comments. You can always have a more closed/formal/blind process even if you don't publish it.

Comment by kbog on EA needs a cause prioritization journal · 2018-09-13T08:06:57.124Z · score: 2 (2 votes) · EA · GW

Blind review is only possible with a specialized system/website. The EA forum doesn't support math typesetting like LaTeX, and the arguments would be mixed up with all kinds of other posts. I think the best alternative to a true journal would be a community blog that hosted articles with a review system.

But not all peer review is the same, you want to have some review from people who know the relevant subjects well. E.g., a paper that relates to economic policy should be seen by economists. But if we have unpublished works on a website, I imagine it's going to be hard to get subject matter experts outside of EA to participate in the review process.

Comment by kbog on EA needs a cause prioritization journal · 2018-09-13T01:45:46.910Z · score: 2 (2 votes) · EA · GW

(relevant blogs, to be precise)

Some of those objections would not apply to a journal like this. Namely, the journal itself would be about questions which matter and have a high impact, and cause prioritization is no longer so ignored that you can make great progress by writing casually. Also, by Brian's own admission, some of his reasons are "more reflective of my emotional whims".

In any case, Brian's only trying to answer the question of whether a given author should submit to a journal. Whether or not a community should have a journal is a subtly different story.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-12T10:38:59.798Z · score: 3 (3 votes) · EA · GW

It seems fine that FHI gathers people who are sincerely interested about the future of humanity. Is that a filter bubble that ought to be broken up?

If so, then every academic center would be a filter bubble. But filter bubbles are about communities, not work departments. There are relevant differences between these two concepts that affect how they should work. Researchers have to have their own work departments to be productive. It's more like having different channels within an EA server. Just making enough space for people to do their thing together.

Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?

These institutions don't have premises, they have teloses, and if someone will be the best contributor to the telos then sure they should be hired, even though it's very unlikely that you will find a critic who will be willing and able to do that. But Near Term EA has a premise, that the best cause is something that helps in the near term.

To be frank, I think this problem already exists. I've literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say "oh, you're the Michael Plant with the weird views" which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.

That sounds like stuff that wouldn't fly under the moderation here or the Facebook group. The first comment at least. Second one maybe gets a warning and downvotes.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-12T04:13:46.709Z · score: -2 (2 votes) · EA · GW

As I stated already, "harsh" is a question of tone, and you clearly weren't talking about my tone. So I have no clue what your position is or what you were trying to accomplish by providing your examples. There's nothing I can do in the absence of clarification.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-11T21:39:13.583Z · score: -1 (1 votes) · EA · GW

EA Chicago posts their events on the Facebook page. I don't live in Chicago...(simple as that)

OK, but that has nothing to do with whether or not we should have this Discord server, so why bring it up? In the context of your statements, can't you see how much it looks like someone complaining that there are too many events that only appeal to EAs who support long-term causes, and too few for EAs who support near-term causes?

~ completely missed the point. Additionally, the analogy is fine. There is seldom such a thing as an absolute analogy

It's not that the analogy was not absolute, it's that it was relevantly wrong for the topic of discussion. But given that your argument doesn't seem to be what I thought it was, that's fine, it could very well be relevant for your point.

I was answering your question related to why your first reply was "harsher than necessary".

I figured that "harsh" refers to tone. If I insult you, or try to make you feel bad, or inject vicious sarcasm, then I'm being harsh. You didn't talk about anything along those lines, but you did seem to be disputing my claims about the viability of the OP, so I took it to be a defense of having this new discord server. If you're not talking on either of those issues then I don't know what your point is.

Comment by kbog on Additional plans for the new EA Forum · 2018-09-11T03:52:35.104Z · score: 1 (1 votes) · EA · GW

Yes I second this - tag system please, if possible

Comment by kbog on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:42:28.100Z · score: -1 (1 votes) · EA · GW

Yeah that's a worthy point, but people are not really making decisions on this basis. It's not like Givewell, which recommends where other people should give. Open Phil has always ultimately been Holden doing what he wants and not caring about what other people think. It's like those "where I donated this year" blogs from the Givewell staff. Yeah, people might well be giving too much credence to their views, but that's a rather secondary thing to worry about.

Comment by kbog on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:26:03.416Z · score: 0 (2 votes) · EA · GW

Sure! Which is why I've been exchanging arguments with you.

And, therefore, you would be wise to treat Open Phil in the same manner, i.e. something to disagree with, not something to attack as not being Good Enough for EA.

Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/

It means that you haven't argued your point with the rigor and comprehensiveness required to convince every reasonable person. (No, stating "experts in my field agree with me" does not count here, even though it's a big part of it.)

Sure, and so far you haven't given me a single good reason.

Other people have discussed and linked Open Phil's philosophy, I see no point in rehashing it.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T22:21:49.923Z · score: 1 (1 votes) · EA · GW

I just don't understand why you think that a new space would divide people who anyway aren't on this forum to begin with

I stated the problems in my original comment.

So how are you gonna attract more non-male participants

The same ways that we attract male participants, but perhaps tailored more towards women.

let's say we find out that the majority of non-males have preferences that would be better align with a different type of venue. Isn't that a good enough reason to initiate it?

It depends on the "different type of venue."

Why would it that be conflicting, rather than complementary with this forum?

Because it may entail the problems that I gave in my original comment.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T22:16:54.953Z · score: 1 (1 votes) · EA · GW

My experience is based on observations of the presence of larger-than-average downvoting without commenting when criticism on these issues is voiced.

I'm not referring to that, I'm questioning whether talking about near-term stuff needs to be anywhere else. This whole thing is not about "where can we argue about cause prioritization and the flaws in Open Phil," it is about "where can we argue about bed nets vs cash distribution". Those are two different things, and just because a forum is bad for one doesn't imply that it's bad for the other. You have been conflating these things in this entire conversation.

And I am replying that i don't need to have done so in order to have an argument concerning the type of venue that would profit from discussions on this topic. I don't even see how I could change my mind on this topic (the good practice when disagreeing) because I don't see why one would engage in a discussion in order to have an opinion on the discussion

The basic premise here, that you should have experience with such conversations before opining on their viability, is not easy to communicate to someone who retreats to pure skepticism about it. I leave it to the reader to see why it's a problem that you're positioning yourself as an authority while lacking demonstrable evidence and expertise.

Comment by kbog on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:13:03.427Z · score: -1 (3 votes) · EA · GW

Oh no, this is not just a matter of opinion.

Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate.

There are numerous articles written in the field of philosophy of science aimed precisely to determine which criteria help us to evaluate promising scientific research

Oh, there have been numerous articles, in your field, claimed by you. That's all well and good, but it should be clear why people will have reasons for doubts on the topic.

Comment by kbog on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:05:27.436Z · score: 0 (0 votes) · EA · GW

are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?

As I stated already, "We can presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point to the level of certainty that tells us that the situation is "disturbing". Those policies are costly - they take more time and people to implement." It is, in short, your conceptual argument about how to do EA. So, people disagree. Welcome to EA.

Subjective, unverifiable, etc. has nothing to do with such standards

It has something to do with the difficulty of showing that a group is not conforming to the standards of EA.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T21:56:56.289Z · score: 0 (0 votes) · EA · GW

I think some people just don't participate in this forum much.

Absofuckinglutely, so let's not make that problem worse by putting them into their own private Discord. As I said at the start, this is creating the problem that it is trying to solve.

And perhaps it's worth a shot to try an environment that will feel safe for those who are put-off by AI-related topics/interests/angles.

EA needs to adhere to high standards of intellectual rigor, therefore it can't fracture and make wanton concessions to people who feel emotional aversion to people with a differing point of view. The thesis that our charitable dollars ought to be given to x-risk instead of AMF is so benign and impersonal that it beggars belief that a reasonable person will feel upset or unsafe upon being exposed to widespread opinion in favor of it.

Remember that the "near-term EAs" have been pushing a thesis that is equally alienating to people outside EA. For years, EAs of all stripes have been saying to stop giving money to museums and universities and baseball teams, that we must follow rational arguments and donate to faraway bed net charities which are mathematically demonstrated to have the greatest impact, and (rightly) expect outsiders to meet these arguments with rigor and seriousness. For some of these EAs to then turn around and object that they feel "unsafe", and need a "safe space", because there is a "bubble" of people who argue from a different point of view on cause prioritization is damningly hypocritical.

The whole point of EA is that people are going to tell you that you are wrong about your charitable cause, and you shouldn't set it in protective concrete like faith or identity.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T21:35:04.212Z · score: 1 (1 votes) · EA · GW

as I've explained, I may profit from reading some discussions which is a kind of engagement.

OK, sure. But when I look at conversations about near-term issues on this forum I see perfectly good discussion, and nothing that looks bad. And the basic idea that a forum can't talk about a particular cause productively merely because most of its members reject that cause (even if they do so for poor reasons) is simply unsubstantiated, and hard to believe on conceptual grounds in the first place.

Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain (in which, as philosophers, they've never participated).

This kind of talk has a rather mixed track record, actually. (Source: I've studied economics and read what philosophers write about economic methodology.)

Comment by kbog on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T21:25:25.090Z · score: 0 (0 votes) · EA · GW

Open Phil has a more subjective approach, others have talked about their philosophy here. That means it's not easily verifiable to outsiders, but that's of no concern to Open Phil, because it is their own money.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T21:15:39.565Z · score: 1 (1 votes) · EA · GW

Your reduction of a meta-topic to one's personal experience of it is a non-sequitur

I didn't reduce it. I only claim that it requires personal experience as a significant part of the picture.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T21:09:03.279Z · score: 1 (1 votes) · EA · GW

doesn't mean we cannot form arguments.

But they'll be unsubstantiated.

since when did arguments exist only if we can be absolutely certain about something?

You don't have to be certain, just substantiated.

there is a bubble in the EA community concerning the far-future prioritization which may be overshadowing and repulsive towards some who are interested in other topics

It may be, or it may not be. Even if so, it's not healthy to split groups every time people dislike the majority point of view. "It's a bubble and people are biased and I find it repulsive" is practically indistinguishable from "I disagree with them and I can't convince them".

we are talking here about a very specific context where a number of biases are already entrenched and people tend to be put off by that

Again, this is unsupported. What biases? What's the evidence? Who is put off? Etc.

my best guess is that you are behaving like this because you are hiding behind your anonymous identity

my IRL identity is linked via the little icon by my username. I don't know what's rude here. I'm saying that you need to engage with a topic before commenting on the viability of engaging on it. Yet this basic point is being met with appeals to logical fallacies, blank denial of the validity of my argument, and insistence upon the mere possibility and plausible deniability of your position. These tactics are irritating and lead nowhere, so all I can do is restate my points in a slightly different manner and hope that you pick up the general idea. You're perceiving that as "rude" because it's terse, but I have no idea what else I can say.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T20:51:27.107Z · score: 2 (4 votes) · EA · GW

You're right, I did misread it, I thought the comparison was something against long term causes.

In any case you can always start a debate over how to reduce poverty on forums like this. Arguments like this have caught a lot of interest around here. And just because you put all the "near-term EAs" in the same place doesn't mean they'll argue with each other.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T20:40:59.288Z · score: 0 (0 votes) · EA · GW

But why would I?

First, because you seem to be interested in "talking about near-future related topics and strategies". And second, because it will provide you with firsthand experience of the topic you are arguing about.

I haven't seen any such argument

In the comments above, I wrote: "It's hard to judge the viability of talking about X when you haven't talked about X", and "I'm not sure what you're really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging."

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T20:37:37.883Z · score: 0 (0 votes) · EA · GW

Could you please explain what you are talking about here since I don't see how this is related to what you quote me saying above?

The part where I say "it's POSSIBLE to talk about it" relates to your claim "we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues", and the part where I say "bias MAY exist" relates to your claim "the fact that measuring bias is difficult doesn't mean bias doesn't exist."

having a discussion focusing on certain projects rather than others (in view of my suggestion directly to the OP) allows for such a legitimate focus, why not?

Your suggestion to the OP to only host conversation about "[projects that] improve the near future" is the same distinction of near-term vs long-term, and therefore is still the wrong way to carve up the issues, for the same reasons I gave earlier.

Comment by kbog on Near-Term Effective Altruism Discord · 2018-09-10T20:32:15.933Z · score: 0 (0 votes) · EA · GW

And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally engaged in discussing it in that context

Yes, you can, technically, in theory. I'm recommending that you personally engage before judging it with confidence.

The burden of proof is on you: if you want to make an argument, you have to provide more than just a claim.

This kind of burden-of-proof-shifting is not a good way to approach conversation. I've already made my argument.

So far, you are just stating something which I currently can't make any sense of.

What part of it doesn't make sense? I honestly don't see how it's not clear, so I don't know how to make it clearer.

Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic

They can, I'm just saying that it will be pretty unreliable.

Comment by kbog on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T20:21:00.910Z · score: 0 (0 votes) · EA · GW

Open Phil gave $5.6MM to Berkeley for AI, even though Russell's group is new and its staff and faculty are still fewer in number than MIRI's staff. They gave $30MM to OpenAI, and $1-2MM to many other groups. Of course EAs can give more to particular groups; that's because we're EAs, and we're willing to give a lot of money to wherever it will do the most good in expectation.