Is there such a thing as a 'Meta Think Tank'? 2022-04-30T04:03:58.145Z
Interpreting the Systemistas-Randomistas debate on development strategy 2022-03-29T16:53:07.010Z
Is the current definition of EA not representative of hits-based giving? 2021-04-26T04:37:14.560Z
Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? 2021-04-06T16:07:04.459Z
My preliminary research on the Adtech marketplace 2021-03-30T04:42:44.231Z
What are some resources to learn how Technology affects longtermism in Policymaking? 2020-06-12T14:50:45.677Z


Comment by Venkatesh on How would you draw the Venn diagram of longtermism and neartermism? · 2022-05-26T16:46:53.531Z · EA · GW

I do not think it's about discount rates. I was recently corrected on this point here. It looks like conservatives and moderates who think closer to the present have other, better reasons, like population axiologies or tractability concerns or something along those lines.

Comment by Venkatesh on How would you draw the Venn diagram of longtermism and neartermism? · 2022-05-26T05:27:29.585Z · EA · GW

There is ambiguity in the terminology here, so here is how I visualize it with my own terminology. It's not a Venn diagram, but this is how I see it.


Comment by Venkatesh on The Many Faces of Effective Altruism · 2022-05-19T01:31:16.509Z · EA · GW

I thoroughly enjoyed this! The tone of the writing matched perfectly with the idea that is being conveyed.

If I may add a category:

  1. Desi EA - Someone not from a developed country kinda feeling out of place and totally inadequate to do anything about most mainstream EA cause areas. Mostly English-speaking educated elite from developing countries who possibly watch a lot more Hollywood than their local genres. (Also has some inability to parse slang. I honestly didn't understand what the moniker "IDW" and "A-aesthetic" meant although I think I understood the explanation)
Comment by Venkatesh on EA Forum feature suggestion thread · 2022-05-16T13:06:11.899Z · EA · GW

Recently, LessWrong created this feature. C'mon, EA Forum!

Comment by Venkatesh on EA Forum feature suggestion thread · 2022-05-08T13:26:01.260Z · EA · GW

Please let me search within my bookmarks.

In general, I read something and bookmark it if I liked it. Then that thing comes up in conversation, and I go into my bookmarks to find it so that I can share it with the other person mid-convo quickly, but I can't retrieve it from the bookmarks list as fast as I thought I could! This happens to me in almost every session as a facilitator of the EA Virtual Programs!

Comment by Venkatesh on When did the EA Forum get so good?! · 2022-05-06T11:33:23.461Z · EA · GW

On the topic of saving posts - I personally use the bookmarks feature quite a bit. Just wanted to mention it in case someone wasn't aware. The one issue I have is that I can't search within my bookmarks.

One can bookmark posts by clicking on the 3 dots just below the title of the post and then clicking on Bookmark. Then the Bookmarks can be accessed from the dropdown menu that appears underneath the username.

Comment by Venkatesh on EA is more than longtermism · 2022-05-04T18:14:55.270Z · EA · GW
  1. So EA isn’t “just longtermism,” but maybe it’s “a lot of longtermism”? And maybe it’s moving towards becoming “just longtermism”?

EA has definitely been moving towards "a lot of longtermism".

The OP has already provided some evidence of this with funding data. Another thing that signals to me that this is happening is the way 80k hours has been changing their career guide. Their earlier career guide started by talking about Seligman's factors/Positive Psychology and made the very simple claim that if you want a satisfying career, positive psychology says your life must involve an aspect of helping others in a meaningful way. Then one fine day they made the key ideas page, and somehow longtermism has become the "foundation" of their advice. When the EA org that is most people's first contact with EA makes longtermism the "foundation" of their recommendations, it should definitely mean that EA now wants to move towards "a lot of longtermism".

  1. What if EA was just longtermism? Would that be bad? Should EA just be longtermism?

Yes, it would be bad if EA was just longtermism.

I believe that striving to make the simplest and honest case for a cause area is not a choice but an intellectual obligation. It is irresponsible to put forth unnecessarily complicated ideas and chase away people who might have otherwise contributed to that cause. I think Longtermism is currently an unnecessarily complicated way for us to make the case to convince people to contribute to most of the important EA cause areas. My thoughts come from this excellent post.

I am willing to concede my stance on this 2nd question if you can argue convincingly that:

  1. striving to make the simplest and honest case for a cause area is not an intellectual obligation we need to hold.
  2. there are some cause areas where longtermism is actually the simplest argument one can make to convince people to work on them. Of course, that would mean EA can then only work on those cause areas where this is true, but that might not be so bad if it is still highly impactful.

Some hedging:

I still believe Longtermism is an important idea. If you are a philosopher, I would highly encourage you to work on it. But I just don't think it is as important for EA as EA orgs currently make it seem. This is especially true of Strong Longtermism.

I also think that this could all be a case of nomenclature being confusing. Here is a post talking about this confusion. Maybe those who do Global health & development are also on the longtermism spectrum but are just not as 'gaga' about it as Strong longtermists seem to be. After all, it's not as though expected value bets on the future (maybe not the far future) can't be made in a Global health & development intervention! If we clarify the nomenclature here, then it could be possible that "Longtermism" (or whatever else the clarified nomenclature would call it) could become a clearer & simpler explanation to convince people to contribute to a cause area. Then I would still be fine with EA becoming "Longtermism" (in the clarified nomenclature).

Comment by Venkatesh on Solving the replication crisis (FTX proposal) · 2022-04-26T05:42:54.710Z · EA · GW

I am really happy to see someone doing something about the replication crisis. Sorry that you didn't get funded. I know very little about FTX or grantmaking in general and so I can't comment on the nature of your proposal or how to make it better. But now that I see someone doing something about the replication crisis I have done an update on the Tractability of this cause area and I am excited to learn more!

This excitement led to some small actions from my end:

  1. I visited the Institute for Replication website and found it to be very helpful. I really appreciate the effort that went into making the Teaching tab on the website. I will try to make time in the near future (within a month or so) to go through the resources carefully.
  2. I subscribed to the BITSS YouTube Channel and skimmed through a couple of chapters of the open source textbook, Reproducible Data Science.
  3. I looked for material on the replication crisis elsewhere on this forum. I found this panel discussion from EA Global 2016 and... that's about it! Since, IMO, there isn't enough EA material on this cause area, I put down a comment in the What posts do you want someone to write? thread in the hopes that someone wading through it for ideas will decide to write more about it.

One thing still unclear to me - are there career opportunities here or just volunteer opportunities? In the proposal, you mentioned "reproducibility analysts at selected journals" - I had no idea that was a thing that people did! But it sounds like a very interesting role to me considering the Scale of the problem. How many people do it and is there a high demand for it? What sort of degree does someone need to do it?

All the best with the project! I sincerely hope someone else will fund it and it will be successful.

Comment by Venkatesh on What posts do you want someone to write? · 2022-04-26T05:38:33.373Z · EA · GW

Write about the replication crisis in the 80k hours Problem Profile style. Basically, write about the problem, apply the SNT framework to it, mention orgs currently working on it, mention potential career options for someone who wants to address this problem, etc.

This suggestion came after reading this post.

Comment by Venkatesh on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-20T06:36:05.879Z · EA · GW

From reading this and other comments, I think we should rename longtermists to be "Temporal radicalists". The rest of the community can be "Temporal moderates" or even "Temporal conservatives" (aka "neartermists") if they are so inclined. I attempt to explain why below.

It looks like there is some agreement that long-termism is a fairly radical idea.

Many (but not all) of the so-called "neartermists" are simply not that radical, and that is why they perceive their moniker to be problematic. One side is radical, and many on the other side are just not that radical while still believing in the fundamental idea.

By "radical", I mean believing in one of the extreme ends of the argument. The rest of the community is not on the other extreme end, which is what "neartermism" seems to imply. It looks like many of those not identifying as "longtermists" are simply on neither of the extreme ends but somewhere on the spectrum between "longtermists" and "neartermists". I understand now that many who are currently termed "neartermists" would be willing to make expected value bets on the future even with fairly low discount rates. From the link to the Berger episode that JackM provided (thanks for that BTW!):

"It’s tied to a sense of not wanting to go all in on everything. So maybe being very happy making expected value bets in the same way that longtermists are very happy making expected value bets, but somehow wanting to pull back from that bet a little bit sooner than I think that the typical longtermist might."

So to overcome the naming issue, we must have a way to recognize that there are extreme ends in this argument and also a middle ground. With this in mind, I would rename the current "longtermists" as "Temporal radicalists" while addressing the diversity of opinions in the rest of the community with two different labels, "Temporal moderates" and "Temporal conservatives" (a synonym for "neartermists"). You can even call yourself a 'Temporal moderate leaning towards conservatism' to communicate your position with even more nuance.

PS: Sorry for too many edits. I wanted to write it down before forgetting it and later realized I had not communicated properly.

Comment by Venkatesh on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-19T17:18:24.253Z · EA · GW

Is it possible to have a name related to discount rates? Please correct me if I am wrong, but I guess all "neartermists" have a high discount rate, right?

Comment by Venkatesh on Unsurprising things about the EA movement that surprised me · 2022-03-31T06:54:38.544Z · EA · GW

For me, the big revelation was that EA was not just about causes that are supported by RCTs/empirical evidence. It has this whole element of hits-based giving. In fact, the first time I realized this, I ended up creating a question on the forum about the misleading definition.

Comment by Venkatesh on What complexity science and simulation have to offer effective altruism · 2021-06-10T17:58:57.909Z · EA · GW

Overall, this seems like a weak criticism worded strongly. It looks like the opposition here is more to the moniker of Complexity Science and its false claims of novelty than to the study of the phenomena that fall under the Complexity Science umbrella. This is analogous to a critique of Machine Learning that reads "ML is just a rebranding of Statistics". Although I agree that it is not novel and there is quite a bit of vagueness in the field, I disagree on the point that Complexity Science has not made progress.

I think the biggest utility of Complexity Science comes in breaking disciplinary silos. Rebranding things as Complexity Science brings all the ideas on systems from different disciplines together under one roof. If you are a student, you can learn all these phenomena in one course or degree. If you are a professor in a Complexity department, you can work on anything that relates to complex-systems phenomena. The flip side is that you might end up living in a world of hammers without nails - you would just have a bunch of tools without strong domain knowledge in any of the systems that you are studying.

My take on Complexity Science is that it is a set of tools to be used in the right context. For your specific context, some or none of the tools of Complexity Science can be useful. Where Complexity Science falls apart for me is when it tries to lose all context and generalize to all systems. I think the OP here is trying to stay within context. The post is just saying we can build ABMs to approach some specific EA cause areas. So I am more or less onboard with this post.

On a final note, I agree with your critique of the abuse of Power Laws. Too many people just make a log-log plot, look at the line, and exclaim "Power law!". The Clauset-Shalizi-Newman paper you linked to is the citation classic here. For those who do network theory, instead of trying to prove your degree distribution is a power law, I would recommend doing Graphlet Analysis.
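To make the "don't just eyeball the log-log plot" point concrete, here is a minimal sketch (my own toy code, not from the Clauset-Shalizi-Newman paper) of their maximum-likelihood exponent estimate for a continuous power law, which is the principled alternative to fitting a line by eye. The function names and parameter values are mine, purely for illustration:

```python
import math
import random

def sample_power_law(n, alpha, xmin=1.0, seed=0):
    # Inverse-transform sampling from p(x) ~ x^(-alpha) for x >= xmin
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_alpha(xs, xmin=1.0):
    # Continuous-case maximum-likelihood estimator of the exponent:
    # alpha_hat = 1 + n / sum(ln(x_i / xmin))
    n = len(xs)
    return 1.0 + n / sum(math.log(x / xmin) for x in xs)

xs = sample_power_law(50_000, alpha=2.5)
print(mle_alpha(xs))  # prints an estimate close to the true exponent 2.5
```

The point is that the estimator comes with known statistical properties (standard error roughly (alpha - 1)/sqrt(n)), whereas a least-squares fit on a log-log plot has no such guarantees.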

Comment by Venkatesh on What complexity science and simulation have to offer effective altruism · 2021-06-08T17:02:43.980Z · EA · GW

Thanks a lot for posting this! I have the same feeling as finm in that I wanted to write something like this. But even if I had written it, it wouldn't have been as extensive as this one. Wonderfully done!

To add to the pool of resources that the post has already linked to:

  1. You can meet other people interested in Complexity Science/Systems Thinking here. It is a wonderful community with a good mix of rookies and experts, so even if you are new to Complexity you should feel free to join in. I participated in their latest event and got a lot out of it (specifically on making ABMs with MESA).
  2. If you want a simple (also free!) intro to Complexity, I would recommend Introduction to the Modeling and Analysis of Complex Systems by Hiroki Sayama. If you go to the Wikipedia page on Complex System, the picture in it that portrays several subfields of Complexity was taken from this book. It should be easy to follow if you just know high school math. I have personally only read the Networks part of it (since that is the subfield of Complexity I have mostly worked on) and it was good enough to get my feet wet.
Comment by Venkatesh on What is meta Effective Altruism? · 2021-06-02T11:28:24.385Z · EA · GW

The very vague definition of "Cause Area" is making it hard for me to think about meta EA. It feels like GPR is a cause area and so working on it would be direct impact work but I am not sure. Same goes for EA Movement building. Also, it starts getting trippy if we claim meta-EA is also a cause area!

Maybe we can clarify the definition for cause area within this meta EA framework?

Comment by Venkatesh on Exporting EA discussion norms · 2021-06-02T10:43:04.279Z · EA · GW

Specifics matter. There can be no one discussion norm to get people to be nice to each other.

I think things like discussion norms are highly contextual. The platform in which the discussion is happening, the point being discussed, the people who are involved in the discussion are some of the many factors that could end up mattering. Given these factors, transporting discussion norms from one virtual place to another might not be the right way to think about it.

I think the "EA-like" discussion norm is a function of several things. In addition to the factors mentioned above, the concept of EA itself seems to ask for people to be uncertain and humble.

Consider the following thought experiment - say you took all the same people from the EA Forum and put them all in a Facebook group. Do you think the "EA-like" discussion norms currently here would be maintained? Or imagine putting them all in a forum, not about EA or Philosophy or Sciency stuff. What would happen?

Comment by Venkatesh on Complexity and the Search for Leverage · 2021-05-28T17:25:07.445Z · EA · GW

Thanks for this wonderful article! I absolutely agree that it would be highly beneficial to have a community that is at the intersection of EA and Complexity. I recently participated in an event, where I actually found several other EAs interested in Complexity but unfortunately I couldn't spend enough time to network with them further (I got involved in another project there).

I have also been thinking about how we may use the tools of Complexity to make EA better although I haven't been able to concretely land on anything. Here are some vague thoughts I have. I am not entirely sure if any of these thoughts are worth pursuing so tug at these threads at your own peril!:

  1. I wonder if there is a possibility of creating an Agent-Based Model to understand Global Catastrophic Risks, although I am not entirely sure how to go about doing this. This talk by Luisa Rodriguez here might be a good place to start. She is not building an ABM (at least going by what she said in that talk), but the way she talks about it made me feel like an ABM could help.
  2. Complexity has some roots in Philosophy (A quick Google search took me here). I wonder how the philosophy of EA and that of Complexity would work together.
  3. I wonder if we can deal with flow-through effects better if we had a Complex Systems view. Is this a network shaped problem?

But these are all mostly at a 'wondering-if' stage and one would definitely need help from cleverer people to actually start some concrete work. So having a community around EA & Complexity would be highly beneficial.
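For anyone wondering how small a starting point an ABM can be, here is a toy sketch (plain Python, no MESA) of a shock cascading over a random contact network - the kind of skeleton a risk model might grow out of. Every name and parameter here is made up for illustration, and the dynamics are deliberately simplistic:

```python
import random

def simulate_cascade(n=200, avg_degree=4, p_spread=0.3, n_steps=20, seed=1):
    # Toy agent-based cascade: a 'failure' spreads over a random contact network.
    rng = random.Random(seed)
    p_edge = avg_degree / (n - 1)
    neighbors = {i: [] for i in range(n)}
    # Build an Erdos-Renyi-style random graph
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                neighbors[i].append(j)
                neighbors[j].append(i)
    failed = {0}  # seed the shock at one agent
    for _ in range(n_steps):
        # Each failed agent independently tries to topple each healthy neighbor
        newly = {j for i in failed for j in neighbors[i]
                 if j not in failed and rng.random() < p_spread}
        if not newly:
            break
        failed |= newly
    return len(failed)  # total number of agents the shock reached

print(simulate_cascade())
```

Even a toy like this shows the appeal: you can sweep `p_spread` or the network structure and watch the outcome flip between contained and system-wide, which is exactly the kind of nonlinearity that is hard to see in aggregate equations.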

Comment by Venkatesh on 4/29/2021: GiveWell Board Member, EA Global Speaker, & Hewlett Foundation Program Officer, Norma Altshuler, is Giving Career Advice & Q&A for those interested in Philanthropy or Global Development Careers · 2021-04-30T05:26:16.460Z · EA · GW

Is a recording of this event available?

Comment by Venkatesh on Is the current definition of EA not representative of hits-based giving? · 2021-04-28T14:28:03.338Z · EA · GW

Thanks for linking to the podcast! I hadn't listened to this one before and ended up listening to the whole thing and learnt quite a bit.

I just wonder if Ben actually had some other means in mind besides evidence and reasoning, though. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But just in case something else actually already exists other than evidence and reason, I find it really important to know.

Comment by Venkatesh on Is the current definition of EA not representative of hits-based giving? · 2021-04-28T13:58:00.972Z · EA · GW

I both agree and disagree with you.


  • I agree that the ambiguity over whether hits-based giving or evidence-based giving is better is an important aspect of current EA understanding. In fact, I think this could be a potential 4th point (I mentioned a third one earlier) to add to the definition desiderata: the definition should hint at the uncertainty that is in current EA understanding.
  • I also agree that my definition doesn't bring out this ambiguity. I am afraid it might even be doing the opposite! The general consensus is that both the experimental & theoretical parts of the natural sciences are equally important and must be done. But I guess EAs are actually unsure whether evidence-based giving & careful reasoning-based giving (hits-based) should both be done, or whether we would be doing more good by just focussing on one. I should possibly read up more on this. (I would appreciate it if any of you can DM me any resources you have found on this.) I just assumed EAs believed both must be done. My bad!

Disagreement: I don't see how Will's definition allows for debating said ambiguity, though. As I mentioned in my earlier comment, I don't think the definition distinguishes between the two schools of thought enough. As a consequence, I also don't think it shows the ambiguity between them. I believe a conflict (aka ambiguity) requires at least two things, but the definition doesn't convincingly show there are two things in the first place, in my opinion.

Comment by Venkatesh on Is the current definition of EA not representative of hits-based giving? · 2021-04-27T12:56:10.734Z · EA · GW

Thanks for bringing up Will's post! I have now updated the question's description to link to that.

I actually like Will's definition more. The reason is two-fold:

  1. Will's definition adds a bit more mystery which makes me curious to actually work out what all the words mean. In fact, I would add this to the list of "principal desiderata for the definition" the post mentions: The definition should encourage people to think about EA a bit deeply. It should be a good starting point for research.
  2. Will's definition is not radically different from what is already there - the post says "little more rigorous" - which makes the cost of changing to this definition lesser. (One of the costs of changing something as fundamental as the definition could be giving the community the perception that there has been a significant change in the foundations of EA when there hasn't been any - we are just trying to better reflect what is actually done in EA.)

One critique I have of Will's alternative is that the proposed definition doesn't quite distinguish the two schools of thought. To explain my thinking, here is a slightly more visual representation. Let () represent a bucket:

  • Will's definition and the existing definition make things feel like - (Evidence, Careful reasoning) - it is just one bucket
  • But it should really feel like - (Evidence), (Careful reasoning) - two separate buckets

Apologies if that is too nitpicky, but I don't think it is. I think the distinctness of Evidence and Careful reasoning needs to come out. I guess rephrasing it this way would be better: Effective altruism attempts to improve the world by the use of experimental evidence and/or theoretical reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms. This rephrasing is inspired by the fact that many of the natural sciences split into two - theory and experiment (like Theoretical Physics and Experimental Physics). We are saying EA is also that way, which I think it is. I think this also adds to the Science-aligned point that Will mentions. (I have edited this to say that I don't think this definition is a good one. See my next comment below.)

Comment by Venkatesh on Is the current definition of EA not representative of hits-based giving? · 2021-04-26T18:59:21.100Z · EA · GW
  1. The point about "working through what it really means" is very interesting. (more on this below) But when I read, "high-quality evidence and careful reasoning", it doesn't really engage the curious part of my brain to work out what that really means. All of those are words I have already heard and it feels like standard phrasing. When one isn't encouraged to actually work through that definition, it does feel like it is excluding high variance strategies. I am not sure if you feel this way but "high-quality evidence" to my brain just says empirical evidence. Maybe that is why I am sensing this exclusion of high variance strategies.

  2. You are probably right. But I am worried if that is really a good strategy? By not openly saying that we do things we are uncertain about we could end up coming off as a know-it-all who has it all figured out with evidence! There were some discussions along these lines in another recent post. Maybe having a definition that kind of gives a subtle nod to hits-based giving could help with that?

Your point about 'working through the definition' actually gave me an idea: What if we rephrased to "high-quality evidence and/or careful reasoning". That non-standard phrasing of 'and/or' sows some curiosity to actually work things out, doesn't it? I am making the assumption that the phrase "high-quality evidence" is empirical evidence (as I already said) and the phrase "careful reasoning" includes Expected Value thinking, making Fermi estimates and all the other reasoning tools that EAs use. Also, this small phrasing change is not that radically different from what we already have so the cost of changing shouldn't be that high. Of course the question is, is it actually that much more effective than what we have. Would love to hear thoughts on that and of course other suggestions for a better definition...

Comment by Venkatesh on Is the current definition of EA not representative of hits-based giving? · 2021-04-26T13:31:08.378Z · EA · GW

For evaluating the definition of EA we would only want people who don't know much about EA. So we would need a focus group of EA newcomers and ask them what the definition means to them. Does that sound right?

Comment by Venkatesh on Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? · 2021-04-08T17:34:17.663Z · EA · GW

Consider this - say the EA figured out the number of people the problem could affect negatively, i.e., the scale. Then even if there is only a small probability that the EA could make a difference, shouldn't they have just taken it? Also, even if the EA couldn't avert the crisis despite their best attempts, they would still gain career capital, right?

Another point to consider - IMHO, EA ideas have a certain dynamic of going against the grain. It challenged the established practices of charitable giving that existed for a long time. So an EA might be inspired by this and indeed go against the established central bank theory to work on a more neglected idea. In fact, there is at least some anecdotal evidence to believe that not enough people critique the Fed. So it is quite neglected.

Comment by Venkatesh on Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? · 2021-04-07T03:51:45.667Z · EA · GW

"... I believe personal features (like fit and comparative advantages) would likely trump other considerations..." That is a very interesting point. Sometimes I do have a very similar feeling - the other 3 criteria are there mostly just so one doesn't base one's decision fully on personal fit but consider other things too. At the end of the day, I guess the personal fit consideration ends up weighing a lot more for a lot of people. Would love to hear from someone in 80k hours if this is wrong...

Editing to add this: I wonder if there is a survey somewhere out there that asked people how much do they weigh each of the 4 factors. That might validate this speculation...

Comment by Venkatesh on Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? · 2021-04-07T03:44:14.462Z · EA · GW

Thanks for linking to that OpenPhil page! It is really interesting. In fact, one of the pages that page links to talks about ABMs that rory_greig mentioned in his comment.

Comment by Venkatesh on Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened? · 2021-04-07T03:22:48.866Z · EA · GW

As someone interested in Complexity Science I find the ABM point very appealing. For those of you with a further interest in this, I would highly recommend this paper by Richard Bookstaber as a place to start. He also wrote a book on this topic and was also one of the people to foreshadow the crisis.

Also if you are interested in Complexity Science but never got a chance to interact with people from the field/learn more about it, I would recommend signing up for this event.

Comment by Venkatesh on The case of the missing cause prioritisation research · 2021-04-02T07:32:04.471Z · EA · GW

Sorry for digging up this old post. But it was mentioned in the Jan 2021 EA forum Prize report published today and that is how I got here.

This comment assumes that Cause Prioritization (CP) is a cause area that requires people with width (worked across different cause areas) rather than depth (worked on a single cause area) of knowledge. That is, they need to know something about several cause areas instead of deeply understanding one of them. Would love to hear from CP researchers or others who would disagree.

  1. Maybe CP is an excellent path for some people in mid/late career. I think there could be some people in the middle of their career who have width rather than depth of knowledge. I might be wrong but it feels like the current advice for mid-career folks from 80k hours (See this 80k hours podcast episode discussion for example) seems to focus on people with skill depth alone. Further, I also think 80k hours may actually be creating people who have skill width by encouraging people to experiment with working on different cause areas until they find the best personal fit. What if we could tell them - "Experimented a lot? Have a lot of width? Try CP!"

  2. I also feel like it would be difficult for people in their early career to rationalize working on CP. Personally, as someone in their early career, I feel like I don't fully understand even one of the cause areas of interest to EAs properly. How can I then hope to understand multiple of them, find those not yet known, and on top of it prioritize them all!? Now, there is good reason to believe EA is a relatively young movement (majority age between 25-34), and since young people can't rationalize working on CP, we are seeing relatively less research on this.

  3. Maybe as EAs grow older eventually CP research will gain steam. Maybe their depth could also give them some width. At a later stage, current EAs working on a specific cause area could feel, "Having done specialized work all these years, I am beginning to see some ways I can generalize this stuff. Maybe this generalization is the next big impactful thing I can do" and then get into CP. Maybe some EAs already realized this and have even planned their career so that they can do CP at a later stage. So this whole thing could just be a matter of time. But that doesn't mean we should not worry - what if at the stage when EAs want to generalize we don't have the structures in place for them to pursue it?

Comment by Venkatesh on Announcing "Naming What We Can"! · 2021-04-01T10:39:21.406Z · EA · GW

May I suggest that you also name people who strongly identify with the ideas of some of these organizations? For instance, 64,620 hourists; Glomars; Dr.Phils; The InCredibles (CrediblyGood);

Also if FHI is Bostrom's squad then they should rename their currently boringly named "Research Areas" page to Squad Goals.

Happy April Fools! :-)

Comment by Venkatesh on My preliminary research on the Adtech marketplace · 2021-03-30T14:22:16.063Z · EA · GW

Hi tamgent! Thanks for the suggestion. I have edited the post to add my thoughts on relevance to EA. I am no expert at cause prioritization, so I have tried my best to make an argument. Would love to hear your thoughts.

Comment by Venkatesh on Open Thread #39 · 2018-03-26T06:40:34.583Z · EA · GW

Nope. It's been a long time now and I had almost forgotten about it! I guess this means we should start one...

Comment by Venkatesh on Open Thread #39 · 2017-12-04T21:22:59.583Z · EA · GW

Right. I sent a message via the contact page in the EA Hub Website. Maybe I will get an update on what is going on.

Comment by Venkatesh on Open Thread #39 · 2017-12-04T19:45:47.511Z · EA · GW

Is there an Effective Altruism wiki? I found this one, but the URL that it asks you to go to doesn't take you anywhere.

I am sorta new to the EA movement. I think contributing to a wiki will help me learn more. Plus, as a non-native English speaker trying to improve my English writing skills, I think contributing to a wiki could be useful to me. So, is there a wiki? If not, shouldn't we start one or improve the aforementioned wikia page?