Posts

Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z · score: 41 (20 votes)
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z · score: 31 (10 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z · score: 36 (11 votes)
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z · score: 71 (35 votes)
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z · score: 33 (10 votes)
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z · score: 14 (6 votes)
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z · score: 11 (5 votes)
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z · score: 6 (2 votes)
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z · score: 2 (4 votes)
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z · score: 2 (4 votes)
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z · score: 41 (43 votes)
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z · score: 9 (11 votes)
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z · score: 8 (8 votes)
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z · score: 4 (3 votes)
The first .impact Workathon 2015-07-09T07:38:12.143Z · score: 6 (6 votes)
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z · score: 2 (2 votes)
Gratipay for Funding EAs 2014-12-24T21:39:53.332Z · score: 5 (5 votes)
Why "Changing the World" is a Horrible Phrase 2014-12-24T00:41:50.234Z · score: 7 (7 votes)

Comments

Comment by oagr on Introducing Foretold.io: A New Open-Source Prediction Registry · 2019-10-17T11:28:00.537Z · score: 2 (1 votes) · EA · GW

Thanks Soeren!

Comment by oagr on Introducing Foretold.io: A New Open-Source Prediction Registry · 2019-10-17T11:27:17.193Z · score: 6 (4 votes) · EA · GW

Thanks!

I believe the items in the "other useful features" section above are features Metaculus doesn't offer. Also, I've written this comment on the LessWrong post discussing things further:

https://www.lesswrong.com/posts/wCwii4QMA79GmyKz5/introducing-foretold-io-a-new-open-source-prediction?commentId=i3rQGkjt5CgijY4ow

Comment by oagr on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-14T23:35:08.362Z · score: 4 (2 votes) · EA · GW

If forecasters are giving forecasts for similar things over different time horizons, their resolution should very obviously decrease with time. A good example of this is time series forecasting, where uncertainty grows the further the projection extends into the future.

[Image: time-series projection of bike accidents, with prediction intervals widening into the future]

To cite my other comment here: the tricky part, from what I could tell, is calibration, but this is a narrower problem. More work could definitely be done to test calibration over forecast time. My impression is that it doesn't fall dramatically, and probably not enough to make a very smooth curve. I feel like if calibration reliably fell for some forecasters, and those forecasters learned that, they could adjust accordingly. Of course, if the only feedback cycles are 10-year forecasts, that could take a while.

Image from the Bayesian Biologist: https://bayesianbiologist.com/2013/08/20/time-series-forecasting-bike-accidents/
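As a minimal numeric illustration of the resolution point above (my own sketch, not from the linked post or image): for a simple random-walk series, the predictive interval widens with the square root of the horizon, so resolution mechanically decreases with time. The per-step noise level below is an arbitrary placeholder.

```python
import numpy as np

sigma = 1.0                         # assumed per-step noise (hypothetical)
horizons = np.array([1, 5, 10, 25, 100])

# For a random walk x_{t+1} = x_t + eps, the h-step-ahead forecast of x_{t+h}
# has standard deviation sigma * sqrt(h), so the 95% interval keeps widening:
for h, sd in zip(horizons, sigma * np.sqrt(horizons)):
    print(f"h={h:>3}: 95% interval = ±{1.96 * sd:.2f}")
```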

Comment by oagr on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-14T23:27:07.878Z · score: 12 (4 votes) · EA · GW

Happy to see this focus. I still find it quite strange how little attention the general issue has gotten from other groups and how few decent studies exist.

I feel like one significant distinction for these discussions is that of calibration vs. resolution. This was mentioned in the footnotes (with a useful table), but I think it may deserve more attention here.
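For concreteness, here is a minimal sketch of the distinction (my own illustration with synthetic data, not from the post or its footnotes). The Murphy decomposition splits the Brier score into a reliability term, measuring (mis)calibration, and a resolution term, measuring how far forecasts usefully move from the base rate:

```python
# Calibration vs. resolution via the Murphy decomposition of the Brier score.
# All forecasts/outcomes below are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1,000 binary questions; the forecaster is roughly calibrated.
true_probs = rng.uniform(0.05, 0.95, size=1000)
outcomes = (rng.uniform(size=1000) < true_probs).astype(float)
forecasts = np.clip(true_probs + rng.normal(0, 0.05, size=1000), 0.01, 0.99)

def brier_decomposition(forecasts, outcomes, n_bins=10):
    # Brier = reliability - resolution + uncertainty.
    # Reliability (calibration error): lower is better.
    # Resolution (informativeness relative to the base rate): higher is better.
    bins = np.clip((forecasts * n_bins).astype(int), 0, n_bins - 1)
    base_rate = outcomes.mean()
    reliability = resolution = 0.0
    for k in range(n_bins):
        mask = bins == k
        if not mask.any():
            continue
        weight = mask.sum() / len(forecasts)
        f_k = forecasts[mask].mean()   # mean forecast in bin k
        o_k = outcomes[mask].mean()    # observed frequency in bin k
        reliability += weight * (f_k - o_k) ** 2
        resolution += weight * (o_k - base_rate) ** 2
    return reliability, resolution, base_rate * (1 - base_rate)

rel, res, unc = brier_decomposition(forecasts, outcomes)
print(f"reliability (calibration): {rel:.4f}")   # near 0 for a calibrated forecaster
print(f"resolution:                {res:.4f}")
print(f"brier score:               {rel - res + unc:.4f}")
```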

If long-term calibration is expected to be reasonable, then I would assume we could get much of the important information about forecasting ability from the resolution numbers. If forecasters are confident in predictions over a 5-20+ year time frame, this would be evident in correspondingly high-resolution forecasts. If we want to compare these to baselines, we could set them up now and compare resolution numbers.

We could also have forecasters do meta-forecasts: forecasts about forecasts. I believe the straightforward resolution numbers should provide the main important data, but there could be other things you may be interested in. For example, "What average level of resolution could we get on this set of questions if we were to spend X resources forecasting them?" If the forecasters were decently calibrated, the main way this could go poorly is if the predictions for these questions had low resolution, but if so, that would become apparent quickly.

The much trickier thing seems to be calibration. If we cannot trust our forecasters to be calibrated over long time horizons, then the resolution of their forecasts is likely to be misleading, possibly in a highly systematic and deceptive way.

However, long-term calibration seems like a relatively constrained question to me, and one with a possibly pretty positive outlook. My impression from the table and spreadsheet is that, in general, calibration was shown to be quite similar for short- and long-term forecasts. Also, it's not clear to me why calibration would be dramatically worse on long-term questions than on specific short-term questions that we could test cheaply. For instance, if we expected forecasters to be poorly calibrated on long-term questions because the incentives are poor, we could try having them forecast very short-term questions with similarly poor incentives. I recall Anthony Aguirre speculating that he didn't expect Metaculus forecasters' incentives to change much for long-term questions, but I forget where this was mentioned (it may have been a podcast).

Having some long-term studies seems quite safe as well, but I'm not sure how much extra benefit they will give us compared to more rapid short-term studies combined with large sets of long-term predictions by calibrated forecasters (which should come with resolution numbers).

Separately, I missed the footnotes on my first read-through, but think they may have been my favorite part of the piece. The link is a bit small (though clicking on the citation numbers brings it up).

Comment by oagr on Leverage Research: reviewing the basic facts · 2019-10-09T16:24:32.002Z · score: 6 (3 votes) · EA · GW

Yep, understood, and thanks for clarifying in the above comment. I wasn't thinking you thought many of them were racist, but did think that at least a few readers may have gotten that impression from the piece.

There isn't too much public discussion on this topic and some people have pretty strong feelings on Leverage, so sadly sometimes the wording and details matter more than they probably should.

Comment by oagr on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-04T11:07:18.973Z · score: 26 (13 votes) · EA · GW

Seconded. I'm quite happy with the honesty. My impression is that lots of people in positions of power/authority can't really be open online about their criticisms of other prestigious projects (or at least, don't feel like it's worth the cost). This means that a lot of the most important information is closely guarded within a few specific social circles, which makes it really difficult for outsiders to know what's going on.

I'm not sure what the best solution is, but having at least some people in-the-know revealing their thoughts about such things seems quite good.

Ideally I'd want honest & open discussions that go both ways (for instance, a back-and-forth between evaluators and organizations), but don't expect that any time soon.

I think my preference would be for the EA community to accept norms of honest criticism and communication, but would note that this may be very uncomfortable for some people. Bridgewater has the most similar culture to what I'm thinking of, and their culture is famously divisive.

Comment by oagr on EA Meta Fund Grants - July 2019 · 2019-09-16T21:44:10.367Z · score: 8 (4 votes) · EA · GW

Thanks for the response, that's roughly in line with what I expected. I guess this seems like an obvious example of an EA area with a funding gap, and a counterargument to the occasional "we have all the money we could want" claim.

Comment by oagr on EA Meta Fund Grants - July 2019 · 2019-09-13T16:09:21.631Z · score: 31 (13 votes) · EA · GW

Thanks for the update!

Just curious: do you have a rough idea of how much more money you would have been comfortable granting, if you had had more? These projects seem quite good to me, but I imagine that many could have absorbed more funding.

Comment by oagr on Leverage Research: reviewing the basic facts · 2019-09-09T22:18:50.200Z · score: 68 (28 votes) · EA · GW

"which makes me think that it's likely that Leverage at least for a while had a whole lot of really racist employees."

"Leverage" seems to have employed at least 60 people at some time or another in different capacities. I've known several (maybe met around 15 or so), and the ones I've interacted with often seemed like pretty typical EAs/rationalists. I got the sense that there may have been few people there interested in the neoreactionary movement, but also got the impression the majority really weren't.

I just want to flag that I really wouldn't want EAs generally to think that "people who worked at Leverage are pretty likely to be racist," because this seems quite untrue and quite damaging. I don't have much information about the complex situation that is Leverage, but I do think that the sum of the people ever employed by them still holds a lot of potential. I'd really not want them to get or feel isolated from the rest of the community.

Comment by oagr on Funding chains in the x-risk/AI safety ecosystem · 2019-09-09T15:38:09.869Z · score: 9 (6 votes) · EA · GW

The same thing about the edge widths came to my mind. More specifically, I'd suggest adding labels to the edges stating rough funding amounts. Perhaps ideally it would be an interactive application.

Also, just curious: was there any reason for having Patrick in particular at the top of this? I imagine there were other donors who gave to many of these things.

Comment by oagr on Latest EA Updates for August 2019 · 2019-08-31T12:09:45.560Z · score: 15 (7 votes) · EA · GW

Happy to see the updates, that's a lot of things!

I was surprised to hear that Blockchain was useful in global aid, and checked out the article. It makes some claims, but also seems pretty grandiose and non-rigorous. Some fun quotes:

  • "This is in contrast to a traditional database (such as Excel)"
  • "Blockchain, however, ensured unparalleled transparency through its publicly accessible immutable ledger"
  • "it is at least safe to say that social assistance programmes may never be the same again."

It also had one other weird bit: "In effect, the giant system, named ‘Building Blocks’, acted as a database, storing the information of refugees in a secure manner whilst not actually utilising the main unique features of the underlying blockchain technology."

I'm not sure whether later projects actually did use the blockchain technology.

I'd be dubious of the specific claims made by this article.


Note: To be clear, I'm optimistic about technology use, but skeptical of the specific need for blockchain technologies in cases like these.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-08-20T20:21:48.396Z · score: 3 (2 votes) · EA · GW

"coaching to entrepreneurs starting on projects" is another one; it could be that there is a lot of coaching you could do, and if so, I would expect that there is still more value there in total than with Asana. By "portfolio of similar wins" I meant other similar things. The items in my original list would count. Also, maybe helping them with other software or services as well. There are lots besides Asana.

(My previous list:)

  • nonprofit sponsorship (as described above)
  • operations support
  • coaching / advice (there are lots of things to provide help here with)
  • contractor support

Comment by oagr on Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering · 2019-08-15T20:16:12.421Z · score: 4 (3 votes) · EA · GW

Related: I think people do directly make choices that hint at this. Examples would include spending large amounts of resources on drugs and sex on the positive side, and (I'd expect) large amounts of resources to avoid torture and short-duration-but-painful situations on the negative side.

From listening to the Feeling Good podcast, one common theme is that many people in America have deep fears of becoming homeless, and work very hard to avoid that specific outcome. Much of this is irrational, but some of it is quite justified.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-08-15T14:27:25.334Z · score: 3 (2 votes) · EA · GW

That's quite a reply, thanks!

That does convince me both that it could be useful and, more importantly, that you specifically have the expertise and interest for your work on it to be useful to others.

That said, I would point out that it seems like a "nice small win"; I would be more excited about it being part of a portfolio of similar wins.

It does cover "operations support" and "coaching/advice", but only very specific parts of them.

Kudos for working on this though and helping out those other orgs. I'm excited to see where things go as they continue.

Comment by oagr on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-07T23:19:17.777Z · score: 8 (5 votes) · EA · GW

Thanks!

Some quick thoughts:

  1. One quick way to get people to not take you seriously is a bad cost-effectiveness estimate. There's a much bigger risk in doing a sloppy/overconfident job than there is benefit in having a high number at the end of it (in EA circles). These estimates also have a reputation for both producing amazing numbers and being very wrong, so while I support attempts, I'd also recommend lots of clarification, hedging, and consideration of ways the number could be poor. I think the default expectation is for the number to not be great; but even if the median estimate isn't good, it's possible that upon further investigation it could be better than expected, which could be quite worthwhile.

  2. "to reach all chronic sufferers" -> I'd recommend targeting 30%-60% of sufferers. The last several percent would be much more expensive.

  3. I'm quite skeptical of the click -> cure stats in particular. For-profit websites often see around a 1% rate of people going from click -> purchase, and the equivalent step here (actually pursuing treatment) could be significantly more work than a purchase.

  4. Is this equation taking into account that the "cure" could last for many years? Would the result be in "QALYs per year"? (See the quick unit check after this list.)

  5. I'm sure you've answered this elsewhere, but why the American focus? Would it be possible in India or similar?

  6. This estimation seems like something that Charity Entrepreneurship would have a lot more experience in. The program seems quite similar to some of their others.

  7. I'd suggest reading up on the mini-fiasco of the leafleting research, if you haven't yet, to make sure not to repeat some of the mistakes made around that. Some context: https://animalcharityevaluators.org/blog/ace-highlight-updated-leafleting-intervention-report/ https://acesounderglass.com/2015/04/24/leaflets-are-ineffective-tell-your-friends/ https://medium.com/@harrisonnathan/the-problems-with-animal-charity-evaluators-in-brief-cd56b8cb5908

  8. Consider using Guesstimate for clarity, but I'm biased :)

  9. Kudos for the efforts, and good luck!
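Here's the quick unit check mentioned in point 4, as a minimal sketch (every number below is a hypothetical placeholder of mine, not taken from the original post). If the cure persists, the result is total QALYs over the effect's duration, not QALYs per year:

```python
# Hypothetical back-of-envelope model; all inputs are illustrative placeholders.
ad_spend = 10_000          # USD
cost_per_click = 2.0       # USD per ad click
click_to_cure = 0.01       # fraction of clickers who end up treated (cf. point 3)
qaly_gain_per_year = 0.2   # QALYs gained per treated person per year
years_effect_lasts = 5     # how long the "cure" persists

people_cured = (ad_spend / cost_per_click) * click_to_cure
total_qalys = people_cured * qaly_gain_per_year * years_effect_lasts
print(f"people cured: {people_cured:.0f}")             # 50
print(f"total QALYs:  {total_qalys:.0f}")              # 50 QALYs (not QALYs/year)
print(f"USD per QALY: {ad_spend / total_qalys:,.0f}")  # 200
```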

Comment by oagr on What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) · 2019-08-04T22:05:15.859Z · score: 3 (2 votes) · EA · GW

Yep, good point. I think the fact that EA Funds (and possibly other programs) fund a bunch of other individuals indicates that there could be room for more fellowship-like programs.

I'd hope that EA Funds and similar could always act as "escape valves" for areas overlooked by specific programs. I'm definitely not suggesting that these programs replace EA Funds, but rather that they be provided as well.

Comment by oagr on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-04T20:10:30.958Z · score: 9 (5 votes) · EA · GW

One quick thought: it could be a neat experiment to spend $2k on Facebook ads targeting people with these issues, pointing to a specific webpage that discusses how these people could get treatment. That said, I of course realize some of the treatments may not be legal, so it could be tricky.

Comment by oagr on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-04T20:08:45.618Z · score: 8 (5 votes) · EA · GW

For what it's worth, this seems like a pretty big deal to me if true. Is there any quick QALY estimate or similar for how much things could be improved if everyone had quick access to DMT or similar?

Comment by oagr on EA Forum Prize: Winners for June 2019 · 2019-07-27T12:21:56.427Z · score: 4 (3 votes) · EA · GW

Makes sense. I'm excited for the comment prize.

I think the main "organization" posts I'm thinking of are almost like a different class; they are using the EA Forum as an academic journal rather than as a blog. There could then be room for some self-selection, like a separate category/website where people self-select for a different kind of feedback. I'm going to be chatting with people about this.

Comment by oagr on EA Forum Prize: Winners for June 2019 · 2019-07-27T01:46:31.272Z · score: 2 (1 votes) · EA · GW

[comment deleted]

Comment by oagr on EA Forum Prize: Winners for June 2019 · 2019-07-26T22:25:27.086Z · score: 3 (2 votes) · EA · GW

A question for the prize winners (if they read this and have time):

Did you find this award helps to motivate you, and do you have thoughts on if the prize should be changed in the future?

Comment by oagr on EA Forum Prize: Winners for June 2019 · 2019-07-26T22:23:11.899Z · score: 3 (2 votes) · EA · GW

Yep, I'd generally agree with that. One possible distinction is that I could see value in recognizing posts that have high EV but don't necessarily match "intellectual progress" in one way or another.

My comment applied to the fact that all three winners were tough for most people to compete with. However, there is the similar point that the Information Security Careers post in particular is odd because it was useful largely because of the reputation of the writers (I'd agree this seemed necessary).

Comment by oagr on Editing available for EA Forum drafts · 2019-07-26T17:18:08.041Z · score: 2 (1 votes) · EA · GW

That makes me really happy. I really would like to see more experimentation with, and use of, delegation of delegable tasks around EA; kudos for setting that up.

Comment by oagr on EA Forum Prize: Winners for June 2019 · 2019-07-26T08:47:47.127Z · score: 38 (17 votes) · EA · GW

(To be clear, I don't mean this as a complaint, but as an emergent observation that calls for possible changes.)

I think these winners were quite reasonable. That said, I find it a bit awkward that these posts are even competing with the more common blog posts. I could imagine this being pretty frustrating for almost anyone who isn't either in an EA org or getting paid by an EA group to spend a significant amount of time working on a piece. If these winners were all valid entries, then I have little hope for almost any "casual" entry to have a chance here.

On a related note, if the norm is to rate the "top serious EA organization documents," this seems quite difficult to do, for a few reasons. For one, "Information security careers for GCR reduction" seems like a very different class of thing to me than "Invertebrate Sentience". Second, if we keep doing this, I'd imagine we'd eventually want some domain experts; or at least, a somewhat different ranking/setup than for the many small posts.

I feel like it would be pretty fair to either exclude major EA orgs from this competition in the future, or have a separate tier, like the "best emerging artist" award (but for writing.)

Just a thought for future prizes.

Comment by oagr on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-07-24T12:26:21.446Z · score: 20 (7 votes) · EA · GW

I think one cool thing this piece does is use a pretty wide range of approaches to estimating the value of this program. As such, I'd be particularly curious to get feedback from others here on what parts people find reliable or questionable.

[Disclaimer: I submitted comments to this post earlier]

Comment by oagr on Editing available for EA Forum drafts · 2019-07-24T12:20:10.220Z · score: 6 (4 votes) · EA · GW

This seems like a really good idea to me, happy to hear you are doing this!

I think that if many people have similar issues, it could be useful to find an assistant from UpWork or similar to help you. There are some people who are reasonably inexpensive and could provide useful writing feedback (though not EA-specific feedback). I personally would be hesitant to use much of your time, but am more willing if much of the work were outsourced (and would likely be happy to pay the relevant amount).

Comment by oagr on Defining Effective Altruism · 2019-07-22T10:37:44.579Z · score: 2 (1 votes) · EA · GW

Thanks for the spreadsheet by the way. How have those groups been going? It seems like an interesting project.

Comment by oagr on Defining Effective Altruism · 2019-07-22T10:35:53.278Z · score: 4 (2 votes) · EA · GW

"So, I wouldn't recommend more EAs just make an uncritical try of doing so again, if for no other reason than it strikes me as a waste of time and effort."

I could imagine that making a spin-off could be pretty simple. It could take a lot of time and effort, though, to keep all the parts integrated. While this may not have been worth it yet, if at some point in the future others estimate the costs of keeping things uniform to be high, spin-offs seem pretty reasonable to me.

"The heuristic I use to think about this is to leave the management of the relationship between the EA community and 'Group X' to members of the EA community who are part of Group X."

In general I agree, though I could imagine many situations where people from CEA or similar may want to be somewhat involved to make sure things don't go wrong.

In this case, I'd assume that William MacAskill is in a really good position to appeal to much of academia. I didn't mean "absolutely all" of academia before, sorry if that wasn't clear.

Comment by oagr on Defining Effective Altruism · 2019-07-21T23:35:16.696Z · score: 2 (1 votes) · EA · GW

I've been thinking more that we may want to split up "Effective Altruism" into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.

As mentioned in this piece, the community's take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.

Comment by oagr on Defining Effective Altruism · 2019-07-21T23:31:02.660Z · score: 7 (5 votes) · EA · GW

My quick take on why this was downvoted: someone may have glanced at it quickly and assumed you were being negative toward MIRI or EA.

I think by being "science-aligned", the post means using the principles and learnings of the scientific method and similar tools, rather than agreeing with "the majority of scientists" or similar.

The mainstream scientific community also seems likely to be skeptical of EA, but that doesn't mean that EA would have to be similarly skeptical of itself.

That said, whether one actually follows the scientific method and similar for some practices, especially in cases where they aren't backed by many other communities, could of course be rather up for debate.

Comment by oagr on Defining Effective Altruism · 2019-07-21T23:22:16.678Z · score: 13 (4 votes) · EA · GW

Hm... I appreciate what you may be getting at, but I think that post itself doesn't exactly say maximizing is bad; rather, it says the specific thing one chooses to maximize probably isn't exactly the best possible thing (though it could still be the best possible guess).

In many areas maximizing as a general heuristic is pretty great. I wouldn't mind maximizing income and happiness within reasonable limits. But maximization can of course be dangerous, as is true for many decision functions.

To say it's usually a bad idea would be to assume a reference class of possible things to maximize, which seems hard for me to visualize.

Comment by oagr on There are *a bajillion* jobs working on plant-based foods right now · 2019-07-17T17:29:03.254Z · score: 4 (2 votes) · EA · GW

Great to see so many jobs opening up! I imagine we have a limited pool of strong Effective Altruist candidates for all these opportunities, and also that some opportunities would require people with EA expertise much more than others. Similarly, I imagine some are better for career capital than others.

Do you have a sense of whether any clusters of jobs within this are particularly high-impact or good for EAs?

Comment by oagr on Changes to EA Funds management teams (June 2019) · 2019-07-10T18:42:30.263Z · score: 14 (8 votes) · EA · GW

I could appreciate that first impression, but this really doesn’t seem that dramatic to me. The new “teams” only started in the last year or so, so there was bound to be some turnover as people tried things. The Long Term and Meta Funds both only had one person leave (of 5 in each case). The Animal Welfare fund is changing, but really just because a new one has started, which seems like a healthy thing (people have more choices now.)

Comment by oagr on What's the best structure for optimal allocation of EA capital? · 2019-07-08T18:05:59.168Z · score: 5 (3 votes) · EA · GW

I've heard different things, it's been harder to pin this down.

"Approximately half of the money we’re receiving from Open Philanthropy, we expect to regrant to promising work in the community, either directly through EA Grants or via separate grants to local groups." https://www.centreforeffectivealtruism.org/blog/announcing-grant-from-the-open-philanthropy-project/

My impression is that this money is being used for EA Grants and Community Grants (I believe it's called), but not EA Funds. I don't fully understand the details here, but imagine they probably have decent reasons for this.

Carl Shulman has a $5m fund for regranting. https://www.openphilanthropy.org/giving/grants/centre-for-effective-altruism-new-discretionary-fund

I believe BERI also got money to regrant, but it's possible it's not from Open Phil. You could see their list of grants here: https://existence.org/grants/

Comment by oagr on What's the best structure for optimal allocation of EA capital? · 2019-07-05T21:57:02.415Z · score: 2 (1 votes) · EA · GW

Right now there is quite a bit of this happening through Open Phil. My guess would be that they would want to see more happen, but don't feel like the existing groups could efficiently take much more money from them at the moment.

This isn't to discourage other donors; extra donor diversity seems pretty useful as well.

Comment by oagr on EA Forum Prize: Winners for April 2019 · 2019-06-30T21:41:38.915Z · score: 3 (2 votes) · EA · GW

"Eventually, I’d like to see every major research article in EA find its way to the Forum, ideally with summaries and authorial responses. This report is an excellent example of how to bring interesting work to a broad audience."

-> What would you think about putting them on the Forum, but without a nice summary or authorial responses? For instance, as simple link posts? There are a bunch of EA/safety papers coming out from time to time, and it seems quite nice to have discussion here, even without the other details. This wouldn't be too hard to do; it would just require someone to keep tabs on the main papers and the like and make link posts for all of them.

I imagine a further version of this would make it easy to go from a document URL to the corresponding EA Forum post, to see comments.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-27T05:53:50.740Z · score: 3 (2 votes) · EA · GW

I'm honestly not too sure, but could imagine a bunch of different things.

  • nonprofit sponsorship (as described above)
  • operations support
  • coaching / advice (there are lots of things to provide help here with)
  • contractor support

Why are you thinking of Asana Business? Like, you would provide free Asana Business accounts?

Comment by oagr on EA Forum Prize: Winners for April 2019 · 2019-06-27T05:30:01.606Z · score: 5 (3 votes) · EA · GW

I'd second this. I think great comments/feedback are underrepresented at this point. I do recognize it would be tricky, though.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-27T05:27:04.702Z · score: 2 (1 votes) · EA · GW

Interesting. This came from chats I had with an attorney. That said, they were based in SF, so maybe their prices were higher. I also asked how much it would cost to do "everything", which I think meant more than strictly filing the IRS Form 1023. I believe there's a lot of work that could be done by either yourself or the attorney, and I would hope that in many cases we could generally lean more on the attorney for that work.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-26T05:26:42.923Z · score: 4 (2 votes) · EA · GW

That sounds like it was a pretty effective spend then. That is pretty good evidence.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-26T05:25:34.169Z · score: 2 (1 votes) · EA · GW

One difference is that orgs can share contractors, but not employees. For instance, my designer only spends around 1-3 hours per week with me, so has lots of time to help other groups, like EA groups. I'm thinking of low-time, skill-specific workers (the jobs in that list would each only involve a few hours per month or similar).

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T15:01:09.814Z · score: 5 (3 votes) · EA · GW

I know there was one Discord group for rationalists learning math that seemed pretty useful.

Separately, there's been a decent amount of work on AI specifically. I was personally matched with a few other people near me to learn about ML, and we did have several sessions, which was pretty nice.

I'd encourage more experiments here. One common thread (you'll likely see in several responses) is that I'd encourage you to think small at first, as in an MVP. Maybe some early versions would look like Slack/Discord groups for one specific niche.

If you (anyone reading this) are interested, I recommend applying to the EA Funds or similar. I imagine some interventions here may be quite cheap, but useful. I'd be happy to review your application or discuss if you do.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T14:57:39.841Z · score: 2 (1 votes) · EA · GW

Just curious: do you have opinions on the EAHub, and/or thoughts on how it could do better? It seems to be attempting some of the content work.

https://eahub.org

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T14:56:10.567Z · score: 4 (3 votes) · EA · GW

While I like this idea, I'm kind of surprised it reached the top here. I'm curious: have other commenters come across many people for whom this would have been helpful? Can people describe these cases a bit, along with what kind of loan setup would have been most useful?

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T14:54:40.901Z · score: 2 (1 votes) · EA · GW

There are typically many contractors to choose from, and it's difficult to evaluate their quality. For instance, if you want a virtual assistant, it may take several interviews and trials until you find one you like.

I'm not sure what kinds of suppliers you are referring to. If it's something simple like "buy paper from this company on Amazon", that's typically easier: the reviews are more indicative of performance, and there are often fewer alternatives.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-21T17:22:14.986Z · score: 4 (3 votes) · EA · GW

I would like to think it's worth it, and wasn't suggesting that this would make the project not worth it; I just wanted to make the costs clearer.

Given that the costs are significant (though possibly justifiable), one easier approach would be to start with a small solution.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T19:58:51.382Z · score: 4 (5 votes) · EA · GW

I think this could be cool, but agree that the people should be pretty good. Perhaps it would work better with 1-3 "ambassadors" who are specifically chosen for being able to do this well, and do it full-time.

I wouldn't be as enthusiastic about random people trying this with lots of important people online, due to the complexity.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T19:55:40.466Z · score: 2 (1 votes) · EA · GW

My current guess is that there's not actually that much room for high-impact startups. It's really, really hard to successfully create a C-corp, let alone optimize for strategy at the same time. Now that there is significant money available for nonprofit ventures, the startup route seems much less of a draw than it used to be.

I have almost no ideas for what very useful startups would look like; at least, none that I wouldn't expect could be more effective as nonprofits (at least for the first few years).

Happy to be proven wrong of course! Also happy to provide feedback on specific ideas if people are interested.

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T19:52:36.582Z · score: 4 (3 votes) · EA · GW

One thing I'd note is that Guesstimate took quite a bit of time (maybe around 1.5 years of engineering/PM time). My guess is that if you were to pay someone to make something similar, it could cost quite a bit (>$200k).

It may be possible to start with a simple Python library with some visualizations or similar.
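As a rough sketch of what that starting point could look like (my own illustration; the toy model and its 90% ranges are arbitrary examples, not anything Guesstimate ships): Guesstimate-style Monte Carlo propagation of uncertain quantities fits in a few dozen lines.

```python
# Minimal Guesstimate-style Monte Carlo sketch; all inputs are illustrative.
import numpy as np

N = 100_000
rng = np.random.default_rng(42)

def lognormal_90ci(low, high, size=N):
    # Sample a lognormal matching a given 90% confidence interval, a common
    # choice for positive quantities (similar to how Guesstimate treats ranges).
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 1.645 = z for 90% CI
    return rng.lognormal(mu, sigma, size)

# Toy model: cost of building a similar tool = hours * hourly rate.
hours = lognormal_90ci(500, 3000)   # 90% CI on engineering/PM hours
rate = lognormal_90ci(50, 150)      # 90% CI on USD per hour
cost = hours * rate

print(f"median cost:  ${np.median(cost):,.0f}")
print(f"90% interval: ${np.percentile(cost, 5):,.0f} to ${np.percentile(cost, 95):,.0f}")
```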

Comment by oagr on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T19:50:03.573Z · score: 2 (1 votes) · EA · GW

+1 for originality. Perhaps, if it were deemed positive in expectation, it could start as just a well-written blog post on the EA Forum or similar.