Posts

Three Biases That Made Me Believe in AI Risk 2019-02-13T23:22:20.707Z · score: 34 (38 votes)

Comments

Comment by beth-1 on EA Forum 2.0 Initial Announcement · 2019-07-12T21:08:51.197Z · score: 9 (3 votes) · EA · GW

I don't have any specific instances in mind.

Regarding your accounting of cases, that was roughly my recollection as well. But while the posts might not address the second concern directly, I don't think that the two concerns are separable. The actual mechanisms and results might largely overlap.

Regarding the second concern you mention specifically, I would not expect those complaints to be written down by any users. Most people on any forum are lurkers, or at the very least they will lurk a bit to get a feel for what the community is like and what it values before participating. This makes people with oft-downvoted opinions self-select out of the community before ever letting us know that this is happening.

The hovering is helpful, thank you.

Comment by beth-1 on EA Forum 2.0 Initial Announcement · 2019-07-11T21:35:05.434Z · score: 11 (6 votes) · EA · GW

Are there any plans to evaluate the current karma system? Both the OP and multiple comments expressed worries about the announced scoring system, and we still regularly see people complain about voting behaviour. It would be worth knowing whether the concerns from a year ago have turned out to be correct.

Related to this, I have a feature request. Would it be possible to break down scores in a more transparent way, for example by the number of upvotes and downvotes? The current system gives authors very little insight into how much people like their posts and comments. The lesson to learn from getting both many upvotes and many downvotes is very different from the lesson to learn if nobody bothered to read and vote on your content.

Comment by beth-1 on [Link] "The AI Timelines Scam" · 2019-07-11T05:06:56.785Z · score: 1 (15 votes) · EA · GW

Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.

I was talking with a colleague the other day about an AI organization that claims:
AGI is probably coming in the next 20 years.
Many of the reasons we have for believing this are secret.
They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.

To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard even among places that don't participate in the "strong AI" scam. Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms, but also in how startups use human labor to pretend they have advanced AI, or in how short self-driving car timelines are a major part of Uber's value proposition.

The emperor has no clothes. Everyone in the field, when told, likes to think they were already aware of this, but it remains helpful to point it out explicitly at every opportunity.

Comment by beth-1 on Defining Meta Existential Risk · 2019-07-09T20:05:58.582Z · score: 7 (6 votes) · EA · GW

This is mostly a problem with an example you use. I'm not sure whether it points to an underlying issue with your premise:

You link to the exponential growth of transistor density. But that growth is really restricted to just that: transistor density. Growing your number of transistors doesn't necessarily grow your capability to compute the things you care about, both from a theoretical perspective (potential fundamental limits in the theory of computation) and from a practical perspective (our general inability to write code that makes use of much circuitry at the same time + the need for dark silicon + Wirth's law). Other numbers, like FLOP/s, don't necessarily mean what you'd think either.

Moore's law does not posit exponential growth in the amount of "compute". It is not clear that the exponential growth of transistor density translates into exponential growth of any quantity you'd actually care about. I think it is rather speculative to assume that it does, and even more so to assume that it will continue to.

Comment by beth-1 on I find this forum increasingly difficult to navigate · 2019-07-05T22:07:39.284Z · score: 5 (6 votes) · EA · GW

These are some issues that actively frustrate me to the point of driving me away from this site.

  • Loading times for most pages are unbearably long. So are most animations (like the menu that opens when you click your username at the top right).
  • Many features break badly when Javascript is turned off.
  • The text field for the bio is super small and cannot be rescaled.
  • Super upvotes have their use, but the super downvote just encourages harsh voting behaviour.
  • The contrast on the collapse-comment button is minimal; the same goes for a number of other places.
  • Basic features take too much effort to navigate to. Going to all posts means either two clicks (the hamburger menu, then All Posts) or clicking a link that cannot always be seen without scrolling (which is a mess, because the page height changes once recent comments have finished loading).

Comment by beth-1 on Effective Altruism is an Ideology, not (just) a Question · 2019-07-02T06:37:21.992Z · score: 1 (1 votes) · EA · GW

Sure it is, but I know a lot more about myself than I do about other people. I can make a good guess about the impact on myself, versus only a worse guess about the impact on others. It's a bias/variance trade-off of sorts.

I'd say the two are valuable in different ways, not that one is necessarily better than the other.

Comment by beth-1 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T10:34:51.211Z · score: 2 (2 votes) · EA · GW

Any technology comes with its own rights struggles. Universal access to super-longevity, the tension between allowing births and exploding overpopulation if everyone were to live many times longer, and em rights, just to name a few. New tech will hardly have any positive effect if these social issues are resolved the wrong way.

Comment by beth-1 on Needed EA-related Articles on the English Wikipedia · 2019-06-29T01:23:09.848Z · score: 9 (4 votes) · EA · GW

Can you make a case as to why the two have enough notability separately to deserve their own separate Wikipedia pages?

Comment by beth-1 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T22:16:40.989Z · score: 2 (7 votes) · EA · GW

Regarding 1), if I were to guess which events of the past 100 years have had the most positive impact on my life today, I'd say the defeat of the Nazis, the long peace, trans rights, and women's rights. Each of those carries a major socio-political dimension, and the last two arguably didn't require any technological progress.

I very much think that socio-political reform and institutional change are more important for positive long-term change than technology. Would you say that my view is not empirically grounded?

Comment by beth-1 on Effective Altruism is an Ideology, not (just) a Question · 2019-06-28T21:53:11.953Z · score: 1 (10 votes) · EA · GW
it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology.

Can you expand a bit on this statement? I don't see how you can accuse only other ideologies of being full of groupthink and having the right politics, when most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted. When I personally try to advocate against the idea that AI Safety is an effective cause, I experience quite a bit of social disapproval for that within EA.

I think the points you're complaining about affect EA just as much as any other ideology, but they are hard to see when you are in the midst of it. Your own politics and groupthink don't feel like politics and groupthink; they feel like the way the world simply is.

Let me try to illustrate this using an example. Plenty of people accuse any piece of popular media with a PoC/female/LGBT protagonist of being overly political, seemingly thinking that white cishet male protagonists are the uniquely non-political choice. Whether you like this new trend or not, it is absurd to think that one position here is political and the other isn't. But your own view always looks apolitical from the inside. For EA, this phenomenon might be compounded by the fact that there is no singular opposing ideology.

Comment by beth-1 on [Link] Book Review: The Secret Of Our Success | Slate Star Codex · 2019-06-07T22:37:43.616Z · score: 18 (10 votes) · EA · GW

It is most apparent in this piece of the review:

He also points out that Tanzanian natives using their traditional farming practices were more productive than European colonists using scientific farming. I’ve had to listen to so many people talk about how “we must respect native people’s different ways of knowing” and “native agriculturalists have a profound respect for the earth that goes beyond logocentric Western ideals” and nobody had ever bothered to tell me before that they actually produced more crops per acre, at least some of the time. That would have put all of the other stuff in a pretty different light.

He remains focused on expected crops per acre, even though every case study in the book illustrates that no single variable like that can encompass the multitude of uses the acre in question has. I don't think I could describe it better than Reddit user u/TheHiveMindSpeaketh does:

The point of the book is not to point and laugh at the technocrats who failed to squeeze the most X out of Y because they didn't listen to the noble savages. The point is that 'how do we squeeze the most X out of Y' is a bad way to position yourself in relation to your surroundings. The point is that technocrats often succeed in squeezing more X out of Y over a relevant period of time via their techniques, but that treating a forest like a timber-maximizer is already missing the [..] point because a forest is also a home for woodland creatures, and a source for medicinal herbs and fruits and berries, and a nice place to take a hike and stare at the stars. The point is that the mistake was not made at the level of what was implemented, the mistake was made at the level of what was valued, and the implementation mistake was an inevitable downstream consequence of that. The point is that even if traditional Tanzanian farming methods didn't produce more crops per acre, they might still be preferable, because they are more sustainable or less time-intensive or etc, but that these benefits become unintelligible to the technocrat who has already committed to a value system where land is only judged by its yield per acre.

I personally think this is an important question for EAs to grapple with: can we reason abstractly about doing good without that abstraction causing mistakes at the level of what we value? Scott's technocrats surely did not think they were making that mistake, but they were. If we believe that we are somehow different, that is kind of arrogant.

Comment by beth-1 on [Link] Book Review: The Secret Of Our Success | Slate Star Codex · 2019-06-07T10:01:12.798Z · score: 6 (4 votes) · EA · GW

For a different take on the consequences of being "rational", I would highly recommend James C. Scott's book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. The SSC summary of the book is pretty good, but when he gives his own opinion on it he seems to have missed the point of the book entirely.

Comment by beth-1 on The Case for Superintelligence Safety As A Cause: A Non-Technical Summary · 2019-06-03T06:45:50.692Z · score: 2 (3 votes) · EA · GW

Thank you for your response.

Yes, that is what I meant. If you could convince me that AGI Safety were solvable with increased funding, and only solvable with increased funding, that would go a long way towards convincing me that it is an effective cause.

In response to your question about giving up: if AGI is a long way off from being built, then helping others now is still a useful thing to do, no matter which of the scenarios you describe comes to pass. Sure, extinction would be bad, but at least from some person-affecting viewpoints I'd say extinction is not worse than existing animal agriculture.

Comment by beth-1 on The Case for Superintelligence Safety As A Cause: A Non-Technical Summary · 2019-06-02T15:40:23.582Z · score: 5 (2 votes) · EA · GW

Let me try to rephrase this part, as I consider it to be the main part of my argument and it doesn't look like I managed to convey what I intended to:

AI Safety would be a worthy cause if a superintelligence were powerful and dangerous enough to be an issue but not so powerful and dangerous as to be uncontrollable.

The most popular cause evaluation framework within EA seems to be Importance/Neglectedness/Tractability. AI Safety enthusiasts tell a convincing story about importance and neglectedness, and make an effort at arguing for tractability as well.

But here is the thing: all arguments given in favour of AI being risky (to establish importance) can be rephrased as arguments against tractability. Similarly for neglectedness.

I'll illustrate this with a caricature, but it takes little effort to transfer this line of thought to the real arguments being made. Let's say the pro-AIS argument is "AGI will become infinitely smart, so it can out-think all humans and avoid all our security measures. Hence AGI is likely to escape any restrictions we put on it, so it will be able to tile the universe with paperclips if it wants to". Obviously, if it can outsmart any security measure, then no sufficient security exists, AI Safety research will never lead to anything, and the problem is intractable.

AI Safety is only effective if you can simultaneously argue for each of importance/neglectedness/tractability without detracting from the others. Moreover, your arguments have to address the exact same scenarios. It is not enough for AIS to be important with 50% probability and tractable with 50% probability; these two properties have to be likely to hold simultaneously. A coin flip has a 50% probability of heads and a 50% probability of tails, but they will never happen at the same time.
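
To make the coin-flip point slightly more precise, this is just the standard bound on a joint probability, writing I for importance and T for tractability (my own shorthand, not notation from the OP):

$$\max\bigl(0,\; P(I) + P(T) - 1\bigr) \;\le\; P(I \cap T) \;\le\; \min\bigl(P(I),\; P(T)\bigr)$$

With P(I) = P(T) = 0.5, the joint probability can sit anywhere between 0 and 0.5, and it is the joint probability that the case for the cause rests on.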

AI Safety can only be an effective cause (on the margin) if solving it is possible (tractability) but not trivial (importance/neglectedness). I think this is a narrow window to hit, and current arguments are all way off-target.

Comment by beth-1 on Two AI Safety events at EA Hotel in August · 2019-05-21T22:39:30.841Z · score: 7 (5 votes) · EA · GW

Same for the unconference; it should be this link.

Comment by beth-1 on The Case for Superintelligence Safety As A Cause: A Non-Technical Summary · 2019-05-21T21:46:39.504Z · score: 4 (6 votes) · EA · GW

Thank you for this nice summary of the argument in favour of AI Safety as a cause. I am not convinced, but I appreciate your write-up. As you asked for counterarguments, I'll try to describe some of my gripes with the AI Safety field. Some have to do with how there seems to be little awareness of results in adjacent fields, making me doubt whether any of it would stand up to scrutiny from people more knowledgeable in those areas. There are also a number of issues I have with the argument itself.

Where does it end? Well, eventually, at the theoretical limits of computation. These theoretical limits are very, very high - without even getting close to the limit, a 10kg computer could do more computation every hour than 10 billion human brains could do in a million years. (And a superintelligence wouldn’t be limited to just 10kg). At that point, we are talking about something that can essentially do anything that is allowed by the laws of physics - something so incredibly smart it’s comparable to a civilisation millions of years ahead of us.

The theoretical limits of computation are just that, limits: we don't know whether it is possible to get anywhere near them for any kind of computation, let alone for general computation. Moreover, having a lot of computational power probably doesn't mean that you can calculate everything. A lot of real-world problems are hard to approximate, in the sense that adding more computational power doesn't meaningfully help you; computing approximate Nash equilibria or finding good layouts in microchip design, for example. It is not clear that having a lot of computing power translates into relevant superior capabilities.

We don’t yet know how to program any high-level human concept like morality, love, or happiness - the difficulty is in nailing down the concept to the kind of mathematical language a computer can understand before it becomes superintelligent.

There is a growing literature on making algorithms fair, accountable and transparent. This is a collaborative effort between researchers in computer science, law and many other fields. There are so many similarities between this and the professed goals of the AI Safety community that it is strange that no cross-fertilization is happening.

The problem is Instrumental Convergence.

You can't just ask the AI to "be good", because the whole problem is getting the AI to do what you mean instead of what you ask. But what if you asked the AI to "make itself smart"? On the one hand, instrumental convergence implies that the AI should make itself smart. On the other hand, the AI will misunderstand what you mean, and hence not make itself smart. Can you point the way out of this seeming contradiction?

So a superintelligence could be super powerful and super dangerous if and when we are able to build it.

AI Safety would be a worthy cause if a superintelligence were powerful and dangerous enough to be an issue but not so powerful and dangerous as to be uncontrollable. A solution has to be necessary, but it also has to exist. Thus, there is a tension between scale and tractability here. Both Bostrom and Yudkowsky only ever address one thing at a time, never acknowledging this tension.

If it takes off slow enough, we’ll have time to figure out how to make it safe after we create the first superintelligence, which would be very handy indeed. Unfortunately, it turns out nobody agrees on that either.

Most estimates of take-off speed start counting from the point at which the AI is superintelligent. Why wait until then? A computer can be reset, so if you had a primitive AGI specimen you'd have unlimited tries to spot problems and make it behave.

I'd say that a 0.0001% chance of a superintelligence catastrophe is a huge over-estimate. Hence, AI Safety would be an ineffective cause area if you hold a person-affecting view. If you don't, then at least this opens the way for the kind of counterarguments used against Pascal's Mugging.

Comment by beth-1 on Which scientific discovery was most ahead of its time? · 2019-05-16T13:58:05.919Z · score: 14 (9 votes) · EA · GW
ahead of their time, in the sense that if they hadn't been made by their particular discoverer, they wouldn't have been found for a long time afterwards?

This definition is surprisingly weak, and in fact includes some scientific results that were way past their time. One striking example is Morley's trisector theorem, an elegant fact in Euclidean plane geometry that had been overlooked for 2000 years. If not for Morley, it might have remained unknown for millennia longer.

Comment by beth-1 on Quantum computing concerns? · 2019-05-07T18:20:53.457Z · score: 10 (5 votes) · EA · GW

1. The mechanics of cryptographic attack and defense are more complicated than you might imagine. This is because (a) there is a huge difference between the attack capabilities of nation states and those of other malign actors. Even if the NSA, with its highly skilled staff and big budget, is able to crack your everyday TLS traffic, that doesn't mean your bank transactions aren't safe against petty internet criminals. And (b) state secrets typically need to be safe against the computers of 20+ years in the future, as you don't want enemy states to capture your traffic now and decrypt it as soon as slightly better hardware is available.

2. NIST is running a project at this moment to standardize a post-quantum cryptographic protocol. Cryptographers from many countries around the world are collaborating on this. The tentative timeline lists completion of the draft standards in 2022-2024.

Hence, experts worldwide estimate that strong quantum computers will not be deployed, even by intelligence agencies, until well into the 2040s. Consumer targets will stay safe for even longer than that.

Comment by beth-1 on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-03T10:59:48.799Z · score: 1 (1 votes) · EA · GW

I remember EA-aligned vegan YouTuber Unnatural Vegan making a video about this argument last week in response to a recent Vox article. She argues that the meat industry is very elastic, but I don't think she cites any specific sources. Since she normally does tend to cite sources, I suspect those numbers are hard to come by.

Comment by beth-1 on How to Get the Maximum Value Out of Effective Altruism Conferences · 2019-04-25T06:11:27.477Z · score: 2 (2 votes) · EA · GW

3b justifies 3a, as does the fact that I have a much easier time paying attention to a talk in person. With video, there is too much temptation to play it at 1.5x speed and aim for an approximate understanding. Though I guess watching the video together with other people also helps.

As for 3b, in my experience asking questions adds a lot of value, both for yourself and for other audience members. The fact that you have a question is a strong indication that the question is good and that other people are wondering the same thing.

Comment by beth-1 on How to Get the Maximum Value Out of Effective Altruism Conferences · 2019-04-24T22:50:47.688Z · score: 2 (2 votes) · EA · GW

I like your list. Here is my conference advice, contradicting some of yours, based mostly on my experience with academic conferences:

1. Focus on making friends. Of course it would be good to have productive discussions and make useful connections, but it is most important to know some friendly faces and feel comfortable. For me it works best to talk about unrelated things like hobbies, not about work or EA or anything like that.

2. Listening to talks is exhausting, so don't force yourself to attend too many of them. It is fine to pick just the 2-3 most interesting talks on a day and skip everything else.

3a. Attending a talk in person is vastly preferable to watching the video.

3b. Ask questions at talks. If you ask less than one question over the course of a multi-day conference, you are doing something wrong.

Comment by beth-1 on On AI and Compute · 2019-04-12T16:32:52.937Z · score: 3 (1 votes) · EA · GW

The issue is that FLOPS cannot accurately represent computing power across different computing architectures, in particular between a single CPU and a computing cluster. As an example, let's compare 1 computer of 100 MFLOPS with a cluster of 1000 computers of 1 MFLOPS each. The latter option has 10 times as many FLOPS, but there is a wide variety of computational problems for which the former will always be much faster. This means that FLOPS don't meaningfully tell you which option is better; it will always depend on how well the problem you want to solve maps onto your hardware.

In large-scale computing, the bottleneck is often the communication speed of the network. If the calculations you have to do don't decompose neatly into roughly independent tasks, the different computers have to communicate a lot, which slows everything down. Adding more FLOPS (computers) won't prevent that in the slightest.
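
As a toy illustration of that bottleneck, here is a sketch with made-up numbers; the linear communication model and all quantities are illustrative assumptions, not measurements of real hardware:

```python
# One 100 MFLOPS machine versus a cluster of 1000 machines at 1 MFLOPS each.

def single_machine_seconds(total_mflop):
    return total_mflop / 100.0          # one machine at 100 MFLOPS

def cluster_seconds(total_mflop, sync_rounds, seconds_per_sync):
    # Raw compute is 1000 machines * 1 MFLOPS = 1000 MFLOPS, but every
    # synchronisation round stalls the whole cluster.
    return total_mflop / 1000.0 + sync_rounds * seconds_per_sync

job = 1e6  # a job requiring 10^6 MFLOP in total
print(single_machine_seconds(job))         # 10000 s
print(cluster_seconds(job, 0, 0.5))        # 1000 s: embarrassingly parallel, the cluster wins
print(cluster_seconds(job, 100_000, 0.5))  # 51000 s: communication-bound, the cluster loses
```

The same 1000 MFLOPS of nominal compute is either a 10x speed-up or a 5x slow-down, depending entirely on how much the machines need to talk to each other.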

You cannot extrapolate FLOPS estimates without justifying why communication overhead doesn't make the estimated quantity meaningless on parallel hardware.

Comment by beth-1 on Salary Negotiation for Earning to Give · 2019-04-10T19:03:39.021Z · score: 3 (2 votes) · EA · GW

I don't think that 11% figure is correct. It depends on how long you would stay at the company if you got the job, and on how long you would be unemployed if the offer were rescinded.
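
Here is a rough sketch of the dependence I mean; the raise, tenure, rescission probability and job-search time are all made-up numbers for illustration, not estimates from the post:

```python
def expected_gain_of_negotiating(raise_per_year, years_at_company,
                                 p_offer_rescinded, months_unemployed,
                                 monthly_salary):
    # Upside: the negotiation succeeds and you keep the raise for your whole tenure.
    upside = (1 - p_offer_rescinded) * raise_per_year * years_at_company
    # Downside: the offer is pulled and you spend some months without income.
    downside = p_offer_rescinded * months_unemployed * monthly_salary
    return upside - downside

# The same negotiation looks very different depending on tenure and job-search time:
print(expected_gain_of_negotiating(5000, 4, 0.02, 3, 6000))  # about 19240
print(expected_gain_of_negotiating(5000, 1, 0.02, 9, 6000))  # about 3820
```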

Comment by beth-1 on On AI and Compute · 2019-04-10T18:35:56.090Z · score: 1 (1 votes) · EA · GW

Without commenting on your wider message, I want to pick on two specific factual claims that you are making.

AlphaZero went from a bundle of blank learning algorithms to stronger than the best human chess players in history...in less than two hours.

Training time of the final program is a deeply misleading metric, as these programs have been through endless reruns and tests to get the setup right. I think it is most honest to count total engineering time.

I know people are wary of Kurzweil, but he does seem to be on fairly solid ground here.

Extrapolating FLOPS is inherently fraught, as is the very idea that FLOPS are a useful unit. The problem is best illustrated by the following CS proverb: "A supercomputer is a device for turning computational complexity into communication complexity." In particular, estimates of the complexity of imitating a small, mostly separate part of a brain don't scale linearly to estimates of imitating the much more interconnected whole.

Comment by beth-1 on What open source projects should effective altruists contribute to? · 2019-03-29T01:26:08.334Z · score: 7 (4 votes) · EA · GW

The EA forum doesn't seem like an obvious best choice. Just because it is related to EA does not make it effective, especially considering the existence of discussion software like Reddit, Discourse, and phpBB.

I'd say it mostly depends on what kind of skills and career capital you are aiming for. There are a number of important (scientific) software packages with either zero or one maintainers, which could be useful to work on either upstream or downstream.

Personally, I am presently just doing (easy) fixes for bugs that I run into myself. But I am considering either officially maintaining a driver that I keep patching for my own use anyway, or contributing to some decentralized web project.

It might not be super relevant for you specifically, but I do want to plug Google Summer of Code as a wonderful opportunity for all university students aged 18 and older. (Application deadline: April 9th.)

Comment by beth-1 on Three Biases That Made Me Believe in AI Risk · 2019-02-14T04:22:27.354Z · score: 2 (10 votes) · EA · GW

I used to believe pretty much exactly the argument you're describing, so I don't think I will change my mind by discussing this with you in detail.

On the other hand, the last sentence of your comment makes me feel that you're equating my not agreeing with you with my not understanding probability. (I'm talking about my own feelings here, irrespective of what you intended to say.) So, I don't think I will change your mind by discussing this with you in detail.

I don't feel motivated to go back and forth on this thread, because I think we will both end up feeling like it was a waste of time. I want to make it clear that I do not say this because I think badly of you.

I will try to clear up the bits you pointed out as confusing. In the Language section, I am referring to MIRI's writing and Bostrom's Superintelligence, as well as most IRL conversations and forum discussions I've seen. "Bits" are an abstraction akin to "log-odds"; I made them up because not every statement in that post is a probabilistic claim in a rigorous sense, and the blog post was mostly written for myself. I really do estimate that there is only a small chance that AI is risky in a way that would lead to extinction, that this risk can be prevented, and that it is possible to make meaningful progress on such prevention within the next 20 years, along with some more qualifiers that I believe are necessary to support the cause right now.
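
In case it helps, this is roughly the conversion I have in mind when I write "bits"; it is my own informal usage, nothing more:

```python
import math

def log_odds_in_bits(p):
    """Probability -> log-odds, measured in bits (base-2 logarithm)."""
    return math.log2(p / (1 - p))

print(log_odds_in_bits(0.5))    #  0.0 bits: even odds
print(log_odds_in_bits(0.2))    # -2.0 bits: odds of 1 to 4
print(log_odds_in_bits(0.999))  # ~10 bits: odds of about 1000 to 1
```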

Comment by beth-1 on Three Biases That Made Me Believe in AI Risk · 2019-02-14T02:05:03.145Z · score: 4 (4 votes) · EA · GW

Thank you for your response and helpful feedback.

I'm not making any predictions about future cars in the Language section. "Self-driving cars" and "pre-driven cars" are the exact same thing. I think I'm grasping at a point closer to Clarke's third law, which also doesn't give any obvious falsifiable predictions. My only prediction is that thinking about "self-driving cars" leads to more wrong predictions than thinking about "pre-driven cars".

I changed the sentence you mention to "If you want to understand present-day algorithms, the "pre-driven car" model of thinking works a lot better than the "self-driving car" model of thinking. The present and past are the only tools we have to think about the future, so I expect the "pre-driven car" model to make more accurate predictions." I hope this is clearer.

Your remark on "English that's precise enough to translate into code" is close, but not exactly what I meant. I think that it is a hopeless endeavour to aim for such precise language in these discussions at this point in time, because I estimate that it would take a ludicrous amount of additional intellectual labour to reach that level of rigour. It's too high of a target. I think the correct target is summarised in the first sentence: "All sentences are wrong, but some are useful."

I think that I literally disagree with every sentence in your last paragraph on multiple levels. I've read both pages you linked a couple months ago and I didn't find them at all convincing. I'm sorry to give such a useless response to this part of your message. Mounting a proper answer would take more time and effort than I have to spare in the foreseeable future. I might post some scraps of arguments on my blog soonish, but those posts won't be well-written and I don't expect anyone to really read those.

Comment by beth-1 on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-26T12:36:55.100Z · score: 5 (4 votes) · EA · GW

My troubles with this method are twofold.

1. SHA256 is a hashing algorithm. Its security is well vetted for certain kinds of applications and certain kinds of attacks, but "randomly distribute the first 10 hex digits" is not one of those applications. The post does not include so much as a graph of the distribution of what past drawing results would have been with this method, so CEA hasn't really justified why the result would be uniformly distributed (see the sketch after point 2 for the kind of check I have in mind).

2. The least-significant digits in the IRIS data can probably be tampered with by adversaries. It is hard to check them, and IRIS has no reason to secure their data pipeline against attacks that might cost tens of thousands of dollars, because there are normally no stakes whatsoever attached to those bits.
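
To make point 1 concrete, below is a rough sketch of the kind of check I would want to see. It assumes the drawing takes the first 10 hex digits of the SHA256 hash of the published data and maps them onto [0, 1); the real protocol may differ in its details.

```python
import hashlib
import random

def draw(seed_data: bytes) -> float:
    """Map the first 10 hex digits of SHA256(seed_data) onto [0, 1)."""
    first_10_hex = hashlib.sha256(seed_data).hexdigest()[:10]
    return int(first_10_hex, 16) / 16**10

# Crude uniformity check on random inputs. What I would actually want to see is
# the same histogram computed on realistic IRIS-style inputs, not random bytes.
samples = [draw(random.randbytes(32)) for _ in range(100_000)]
buckets = [0] * 10
for x in samples:
    buckets[int(x * 10)] += 1
print(buckets)  # roughly 10,000 per bucket if the mapping is uniform
```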

Random.org is exactly in the business that we're looking for, so they'd be a good option in terms of institutional guarantees. Otherwise, any big lottery in any country will work as a source of randomness: the prizes there are bigger, which means that, even if those lotteries could be corrupted, nobody would waste that ability on rigging the donor lottery.

Comment by beth-1 on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T07:52:02.893Z · score: 0 (2 votes) · EA · GW

I'd like to see some justification for using this approach over the myriad of more responsible ways of generating random draws.