Posts

How do independent researchers get access to resources? 2022-08-12T07:51:37.866Z
EA Israel Community Survey - 2021 2021-05-24T18:56:13.198Z
Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z

Comments

Comment by Guy Raveh on Samotsvety Nuclear Risk update October 2022 · 2022-10-04T09:29:59.763Z · EA · GW

Thanks for the post. I appreciate your effort to attach your reasoning to the forecasts - I think the reasoning is much more informative, especially given the huge amount of uncertainty.

A small suggestion for improvement would be to number the probabilities and quoted reasoning, so as to make it easier to match between them.

Comment by Guy Raveh on Samotsvety Nuclear Risk update October 2022 · 2022-10-04T09:27:37.473Z · EA · GW

I'm willing to bet* the forum will support it - and I think it's a bad idea.

*Not, like, on the actual platform

Comment by Guy Raveh on Call to Action: Help your high school gain sponsorship to run a charity election · 2022-10-03T09:44:31.034Z · EA · GW

I think my high school might like this kind of thing. Any thoughts on whether this would work well in a non-English-speaking country?

High schoolers generally have good enough English here to engage with general texts, but obviously it's harder for them and takes them longer to read, write, etc. So basically the question is: how many hours of work do you think this requires from each student (if they were a native speaker)?

If it's not too high an amount, I'll reach out to my school and pitch it as an opportunity for English class.

Comment by Guy Raveh on [deleted post] 2022-10-02T22:53:58.067Z
  1. This is not necessarily between Democrats and Republicans - there's a split within the Republican party too.

  2. "Dangerous for EA" is a consideration, but a bad enough threat for democracy, assuming this is one, can still be a stronger consideration.

I'm not educated enough on American politics to know if the assumption is correct. But on a global scale, it certainly looks like democracy is backsliding (and has been for decades). It's true locally here in Israel, it's true in several European countries, and it looks like it's true on average in the world in general, going by Freedom House democracy scores.

Comment by Guy Raveh on William MacAskill - The Daily Show · 2022-09-29T21:31:12.174Z · EA · GW

Ok, I'm really confused about the downvotes here. If someone cares to explain, I'd be grateful.

Comment by Guy Raveh on William MacAskill - The Daily Show · 2022-09-29T13:20:23.273Z · EA · GW

As another point, I'm really glad to see how well Trevor Noah understood this, and how intelligently he tried to confront Will's argument with the prevalent progressive views.

Comment by Guy Raveh on William MacAskill - The Daily Show · 2022-09-29T13:13:14.170Z · EA · GW

I'm not a native speaker and found his accent very easy to understand. But yeah, info on Americans might be valuable.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-29T08:31:20.362Z · EA · GW

Without opining on the actual discussion: I don't think the logic here is sound. The fact that many agreement votes can mean one thing doesn't mean many disagreement votes aren't indicative of something similar. You could imagine causal paths leading to each of those outcomes.

Comment by Guy Raveh on William MacAskill - The Daily Show · 2022-09-29T02:06:24.721Z · EA · GW

I think this went really well, although MacAskill could have made another point, and I'm not sure why he chose not to: that people living paycheck to paycheck, unsure where their rent will come from, aren't really expected to do these things. These are the obligations of people in stable lives, and those who don't yet have stability should focus on obtaining it for themselves first.

Comment by Guy Raveh on I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him? · 2022-09-29T01:05:39.906Z · EA · GW
  1. How does he view the relationship between AI safety researchers and AI capabilities developers? Can they work in synergy while having sometimes opposite goals?

  2. What does he think the field of AI safety is missing? What kinds of people does it need? What kinds of platforms?

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-29T00:53:27.156Z · EA · GW

I'm willing to discuss this over Zoom, or face to face once I return to Israel in November.

What I think my main points are:

  • We don't seem to be anywhere near AGI. The amount of compute might very soon be enough, but we also need major theoretical breakthroughs.
  • Most extinction scenarios that I've read about or thought about require some amount of bad luck, at least if AGI is born out of the ML paradigm.
  • AGI is poorly defined, so it's hard to reason about what it would do once it comes into existence, or whether you could even describe that as a binary event.
  • It seems unlikely that a malignant AI would succeed in deceiving us until it is capable of preventing us from shutting it off.

I'm not entirely convinced of any of them - I haven't thought about this carefully.

Edit: there's a doom scenario that I'm more worried about, and it doesn't require AGI - and that's global domination by a tyrannical government.

Comment by Guy Raveh on The Onion Test for Personal and Institutional Honesty · 2022-09-29T00:37:11.281Z · EA · GW

Edit: I want to highlight that I do appreciate the compassion that I think is part of the model in your post, and I don't mean this comment as a personal attack but rather a very specific criticism.

Huh, I'm not sure why I didn't voice my disagreement initially. My vote was because the phrases you suggested come across as arrogant and patronising, in my opinion.

I think it's sometimes obvious that you'd not tell someone you care about something that would hurt them; and at other times whether you should tell them or not is something that needs to be established explicitly according to their preferences. If it's not your private information, it should be an extremely rare occasion anyway.

In either case, you're also hiding the information without giving any indication of what they can expect it to be like, which perhaps contradicts your model.

I admit my disagreement partially has to do with rejecting the concept of infohazards, which I find arrogant and patronising in general.

Comment by Guy Raveh on Smart Movements Start Academic Disciplines · 2022-09-29T00:27:19.561Z · EA · GW

Or maybe just "priorities research". You could have different researchers focusing on different levels, some on global problems and some on local ones.

Pros:

  • Easier to digest for most people, who also value solving local issues
  • We do want prioritisation to also happen at national/municipal levels and not just globally
    • Perhaps this could even free up budget for more foreign aid, although I'm a bit doubtful

Cons:

  • Maybe national level research would be much more popular, and thus we'd lose an important part of what we want this to achieve
  • National interests often conflict with global ones

Comment by Guy Raveh on Reasoning Transparency · 2022-09-28T17:26:16.286Z · EA · GW

Disclaimer: I wrote this while tired, not entirely sure it's coherent or relevant

It's less productive for communicating ideas that are not as analytic and reductionist, or more subjective. One type of example would be ones that are more like an ideology, a [theory of something], a comprehensive worldview. In such cases, trying to apply this kind of reductionist approach is bound to miss important facets, or important connections between them, or a meaningful big picture.

Specific questions I think this would be ill-suited for:

  • Should you be altruistic?
  • What gives life meaning?
  • Should the EA movement have a democratic governance structure? (Should it be centralised at all?)
  • Is capitalism the right framework for EA and for society in general?

It should be noted that I'm a mathematician, so for me it usually is a comfortable way of communication. But for less STEM-y or analytical people, who can communicate ideas that I can't, I think this might be limiting.

Comment by Guy Raveh on Reasoning Transparency · 2022-09-28T15:22:36.392Z · EA · GW

I think it's sometimes a strength and sometimes a weakness. It's useful for communicating certain kinds of ideas, and not others. Contrary to Lizka, I personally wouldn't want to see it as part of the core values of EA, but just as one available tool.

Comment by Guy Raveh on 7 traps that (we think) new alignment researchers often fall into · 2022-09-28T12:50:44.121Z · EA · GW

From my perspective, new and useful innovations in the past, especially in new fields, came from people with a wide and deep education and skill set that take years to acquire, and from fragmented research where no one is necessarily thinking of a very high-level terminal goal.

How sure are you that advice like "don't pursue proxy goals" or "don't spend years getting a degree" is useful for generating a productive field of AI alignment research, and not just for generating people who are vaguely similar to existing researchers who are thought of as successful? Or who can engage with existing research but will struggle to step outside its box?

After all:

  1. Many existing researchers who have made interesting and important contributions do have PhDs,
  2. And it doesn't seem like we're anywhere close to "solving alignment", so we don't actually know that being able to engage with their research without a much broader understanding is really that useful.

Comment by Guy Raveh on Red Teaming CEA’s Community Building Work · 2022-09-27T15:11:33.752Z · EA · GW

Ok, I now get what you mean about the electorate. But I think (it's been some time) my point was about responsibilities to the community rather than about following through.

Regarding the last point, I'm a bit confused because in parallel to this thread we're discussing another one where I quoted this specific bit exactly, and you replied that it's not about who should govern CEA, but one meta-level up from that (who decides on the governance structure).

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-27T14:53:25.576Z · EA · GW

Hi, I upvoted because I appreciate that you took the time to give a detailed answer. I'm going to reply more thoroughly, but for now I'll highlight this:

Then we move onto (paraphrasing very slightly) "CEA clearly aren't trying to be representative of the movement". I think that "representative" could mean lots of things here, and again I agree with some versions but not others:

By this I'm referring to the decision-makers in CEA being representative of the community. Not participants of EA events.

I still don't know how exactly you choose participants, and I think the problem there is not necessarily the way you choose, but the fact that nobody seems to know what it is. But this is way less important to me than the general decision-making and transparency in CEA.

Comment by Guy Raveh on Red Teaming CEA’s Community Building Work · 2022-09-27T14:45:09.284Z · EA · GW

I'm not sure I follow.

My sense is that the board is likely to remain fairly stable, and fairly consistently interested in this.

Would you trust a governing body on the basis of someone you don't even personally know saying that their sense is that it's alright?

all of your arguments about board members would also seem like they could apply to any electorate.

Only for a limited time period - elected officials have to stand for re-election, and separation and balance of powers help keep them in check in the meantime. Changes in the community are also reflected by new elections.

I personally think that community democracy would be the wrong governance structure for CEA, for reasons stated elsewhere

Could you please point to that 'elsewhere'? I don't think I've encountered your views on the matter.

Comment by Guy Raveh on Switzerland fails to ban factory farming – lessons for the pursuit of EA-inspired policies? · 2022-09-27T13:50:16.048Z · EA · GW

I think your comment is full of mistakes and misinterpretations, and at least one of them is:

Extensive, or organic agriculture shows absolutely no health benefits whatsoever, is extremely harmful for the environment as it requires far more land, and uses outdated techniques and chemicals which pose a much larger risk to our health and the environment.

As far as I understand, the initiative would've adopted the welfare standards of organic agriculture, without any of the other characteristics of organic food that cause the things you mentioned.

Comment by Guy Raveh on Summaries are underrated · 2022-09-27T09:27:36.088Z · EA · GW

Do we have a list of summaries? I'd like to add this comment.

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-27T09:25:32.297Z · EA · GW

Wow, thanks for this well written summary of expert reviews that I didn't know existed! Strongly upvoted.

Comment by Guy Raveh on Smart Movements Start Academic Disciplines · 2022-09-26T13:14:15.824Z · EA · GW

I think another example is neoliberals, who did not "establish" a discipline, but took over much of the existing discipline of economics, and it helped them gain very wide influence.

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T12:07:21.980Z · EA · GW

To be clear, I wrote "superforecasters" not because I mean the word, but because I think the very notion is controversial like you said - for example, I personally doubt the existence of people who can be predictably "good at reasoning under uncertainty" in areas where they have no expertise.

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T10:03:30.683Z · EA · GW

I think it's kinda weird and unproductive to focus a very large prize on things that would change a single person's views, rather than be robustly persuasive to many people.

E.g. does this imply that you personally control all funding of the FF? (I assume you don't, but then it'd make sense to try to convince all FF managers, trustees etc.)

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T09:57:37.933Z · EA · GW

I think this would be better than the current state, but really any use of "superforecasters" is going to be extremely off-putting to outsiders.

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T09:55:21.756Z · EA · GW

very valuable... to be able to see upvotes, comments, and criticisms from EA Forum, Less Wrong, and Alignment Forum, which is where many of the subject matter experts hang out.

I think it's the opposite. Only those experts who already share views similar to the FF (or more pessimistic) are there, and they'd introduce a large bias.

Comment by Guy Raveh on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T09:52:31.513Z · EA · GW

I've also seen online pushback against the phrasing as a conditional probability: commenters felt putting a number on it is nonsensical because the events are (necessarily) poorly defined and there's way too much uncertainty.

Comment by Guy Raveh on 9/26 is Petrov Day · 2022-09-26T08:41:37.737Z · EA · GW

Also maybe forgo the Britney Spears line?

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-26T08:39:24.262Z · EA · GW

The added point 14 doesn't have the problems I talked about in my other comment (rather the opposite). But contrary to your point about contests, I think the OpenPhil Cause Exploration Prize has helped to improve this! It produced many dozens of high-quality, object-level posts which were novel, interesting, productive and hope-inspiring.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-26T08:11:47.507Z · EA · GW

I'm saying I've never spent more than one hour on the application form.

Comment by Guy Raveh on Switzerland fails to ban factory farming – lessons for the pursuit of EA-inspired policies? · 2022-09-25T23:12:40.909Z · EA · GW

I admire your ability to persevere and use this to ask important questions and learn lessons - personally my sadness in response currently overwhelms most of this ability.

The only thought I have is that maybe we need to invest in EA becoming not just an intellectual and practical project, but also a mass movement with appeal to the public.

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T21:47:00.693Z · EA · GW

Hi, thanks for responding!

I should probably have been a bit more charitable in thinking why you wrote it specifically like this.

minority opinions like these:

These might be minority opinions in the sense that they have some delta from the majority opinions, but they still form a tiny cluster in opinion space together with that majority.

You don't often hear, for example:

  • People who think there's no significant risk from AI
  • People who think extinction is only slightly worse than a global catastrophe that kills 99.99% of the population
  • People who think charities are usually net negative, including those with high direct impact
  • Socialists

Or other categories which are about experience rather than views, like:

  • Psychologists
  • People who couldn't afford time off to interview for an EA org
  • People who grew up in developing countries (the movement seems to have partial success there, but do these people work in EA orgs yet?)

Comment by Guy Raveh on The $100,000 Truman Prize: Rewarding Anonymous EA Work · 2022-09-25T20:47:57.808Z · EA · GW

I think that you're assuming the judges will give awards to bad / damaging actions.

I'm assuming the judges will give prizes to actions that fit the outline of the examples in the post. If 80% of them seem bad/damaging, how should I trust that the judges will only issue prizes based on the one better example?

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T20:37:23.810Z · EA · GW

Views vs. "Other Things"

I can imagine a world where the things you wrote, like "Do I learn something from talking to that person", are the sole measure of "posting quality". I don't personally think such a world is favorable (e.g. I'd rather someone who often posts smart things stay off the forum if they promote bigoted views). But I also don't think that's the world we're in.

People cannot separate judgement of these things from judgement of a person's views, even if they think they can. In practice, forum posts are often judged by the views they express ("capitalism is bad" is frowned upon), and even worse, by their style of reasoning (STEM-like arguments and phrasing is much more accepted, quantification and precision are encouraged even when inappropriate). Object-level engagement is appreciated over other forms, disregarding that it is only sometimes right to engage this way.

As I see it, the vision of a rational, logical, strongly truth-seeking forum is an illusion, and this illusion is used to drive out people with more diverse backgrounds or who come from underrepresented schools of thought.

High Standards

I personally have very high standards. There are many posts I want to write, but I really want them to be thorough and convincing, and to engage with relevant materials. You can see the result - I have written none! Is this actually helpful?

I think there can be value in posts that reiterate old content, perhaps even when they leave out important bits or have problematic logic. I have two reasons:

  1. The forum guides the movement not only through building a common knowledge base, but also through representing the growing community's views. If, for example, 8 years ago someone had written that it's acceptable to work for a tobacco company in order to donate to high impact charities - how would you know how many current EAs share that view? The view itself is not an empirical question, and the old post's karma tells you nothing about this. A new post, letting the community reengage with the ideas, might.

  2. As noted in the OP and elsewhere, EAs love to criticise EA. I'm in favor of that - there are lots of problems, and we need to notice and fix them. Alas, many are noticed but then not fixed. If 8 years ago someone had written about how diversity of experience is important, but nowadays the movement is still composed almost entirely of people from Western countries, and most community building resources also go there - it means no meaningful action is being taken to fix the problem, so it needs to be reiterated.

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T20:12:05.857Z · EA · GW

I suspect OP doesn't want more posts from employees at EA orgs because they are such employees -- I understood OP as wanting higher quality posts, wherever they come from.

Indeed, this is why I wrote "a higher concentration of posts with views correlated to those of EA org employees". It doesn't matter whether there's causality here - encouraging the correlation is itself a problem.

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T18:44:37.055Z · EA · GW

Sorry for not explaining myself well enough. But I still stand behind my interpretation. Does my new comment help?

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T18:40:20.470Z · EA · GW

Re OP's point 3:

I tried not to imply that OP directly opposes diversity (my comment was initially phrased more harshly and I changed it before publishing) - so I'm sorry that's still how I came across.

And I don't really get what you mean by competence differences etc. There's no single axis of competence that makes people's posts on the forum more valuable, and similarly no single axis for getting hired by EA orgs.

There might be some common ones. But even then, I think my logic stands: notice that OP talks about EA orgs in particular, meaning OP does want to see a higher concentration of posts with views correlated to those of EA org employees. But that means a lower concentration of posts from people whose views don't directly align with EA orgs - which would cause a cycle of blocking more diverse views.

Edit: I forgot to add, OP could have phrased this differently, saying that people with productive things to say (which I assume is what they may have meant by "better takes") would be busier doing productive work and have less time to post here. Which I don't necessarily buy, but let's roll with it. Instead, they chose to focus on EA orgs in particular.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-25T14:51:40.098Z · EA · GW

I’d be surprised if Scott Alexander would have included Constance’s comment in his post if he’d realised that she’d not spent any time on the applications that were rejected (obviously because she didn’t realise she was meant to!)

This is a minor point, but I did notice Constance's revised application had long answers with lots of content, and as she said she spent two hours on it. Is this usual/expected?

I usually just write a few sentences for each question, and I don't believe I've ever spent more than an hour on one.

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T12:04:17.170Z · EA · GW

I strongly downvoted this post.

It reads mostly like "There are too many people I disagree with coming into the forum, and that's a bad thing."

It is very, very elitist. Both in the phrases you chose:

worse takes

worse distribution

People who have better thoughts

quality of person

And in the actual arguments. Especially point 3 - you want the distribution of styles and opinions (what you think is "quality of thought") to be as close as possible to that of people already employed by EA organisations - which would mean blocking diversification as much as possible.

You also assume things about the EA community (or maybe the expected impact of things), which I'm entirely not sure are right, like:

  • that "we" want more object-level content on the forum (rather than, say, people doing object-level work mostly in their jobs). This one could actually be measured, though
  • that "rational thinkers" are important for the forum. I'd agree, for example, that some limited applications of rationality are wanted - but I do not expect people to be, or even try to be, rational.

Comment by Guy Raveh on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-25T11:45:53.766Z · EA · GW

I suspect a strong selection bias, since the survey was posted in a comment on your shortform, which would mostly be read by people who know you.

Edit: I suggest reading Thomas' reply before voting on this comment.

Comment by Guy Raveh on Prioritizing the Arts in response to AI automation · 2022-09-25T11:41:59.732Z · EA · GW

Speaking personally, I'm a non-professional pianist and as much as I enjoy my professional work, I get a whole other kind of satisfaction and fulfilment from making music. It's an opportunity to express myself, and to connect with others I play with. When I go long without music, I start losing my mind - it's like having pent up, repressed feelings that you can't let out.

So while I always know professionals can do a better job than me playing whatever piece I'm trying to play, art is still a major part of how I give meaning to my life.

By analogy, I really agree that art could be an outlet that would let people feel life is meaningful, even when AI does it better than them.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-25T10:58:09.730Z · EA · GW

I want to offer my own perspective, from when I first applied to an EAG. I don't mean it to invalidate other people's perspectives and feelings about this, but for me it has helped to not give that much importance to those application results.

My perspective is that conferences are a limited resource, that the organizers are trying to distribute in an impactful way. This means choosing a group of participants who are, together, most likely to help each other gain positive impact. So being rejected doesn't mean I'm unworthy - it means that the organizers think others can gain more from the conference than I can. Which for me is basically fine.

Is it actually the case? It's hard to tell, since indeed there's no transparency, which is a problem. But at least assuming for myself that it's true has helped me personally.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-25T10:39:48.533Z · EA · GW

FWIW, I don't feel like a newcomer and I write a lot of contrarian (but honest) comments. I don't generally feel like being massively downvoted gains me status. I'm often afraid I'm lowering my chances of ever getting hired by an EA org.

Comment by Guy Raveh on The $100,000 Truman Prize: Rewarding Anonymous EA Work · 2022-09-24T19:19:24.184Z · EA · GW

Most of the examples you gave seem bad:

  1. A political figure lies about Sam's accomplishment to manipulate people into accepting it.

  2. Greg hides his involvement with a controversial field from the public or the elected officials and avoids scrutiny.

  3. An organisation hides the involvement of an unwanted person from the public (who are potential donors) and from partners. Max claims he is reformed, but how do the prize judges know? Is it really their job to decide? (This is actually relevant to ongoing EA-adjacent research.)

  4. Again it is quite unclear why disclosing Steve's involvement would undermine the project - but if it does, why does this justify hiding it? Maybe it's really bad to accept Steve's contributions - again, is it for you to judge?

Edit: there are lots of downvotes. Would someone care to explain why this is a "bad comment" or what you disagree with?

Comment by Guy Raveh on Criticism of the 80k job board listing strategy · 2022-09-24T15:16:03.952Z · EA · GW

How would you plan this exactly? Comments per job posting or per organisation?

On the one hand, job postings come and go, so if you think anyone working at [Effective Consultancy] is doing a bad thing, you don't want to have to keep looking for their postings and re-commenting.

On the other hand, maybe you think their [Consultancy Safety] team is currently doing good work, and you want to endorse it temporarily while not endorsing the rest of EC.

Comment by Guy Raveh on Summarizing the comments on William MacAskill's NYT opinion piece on longtermism · 2022-09-23T22:29:27.745Z · EA · GW

Nice!

Most of the sceptical ideas look quite good to me, except for the doomerism.

These really made me chuckle:

  • 1 - This idea is un-American
  • 1 - This is all the fault of boomers
  • 1 - Stop blaming boomers

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T20:27:39.906Z · EA · GW
  1. To be clear, I'm grateful for much of the work done by CEA and I've really enjoyed the conferences I've been to.

  2. I guess here what I mean by "elitist" diverges from what Constance meant. Because indeed you're getting more participants and there's strong pushback against that. On the other hand, decision-makers are still the same small group.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T19:53:45.998Z · EA · GW

Thanks for saying this!

I'll be happy to hear what you think when you have the time.

Comment by Guy Raveh on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T16:50:50.640Z · EA · GW

Good point.