Posts

Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:44.627Z
Forecasting Prize Results 2021-02-19T19:07:11.379Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Open Communication in the Days of Malicious Online Actors 2020-10-06T23:57:35.529Z
Ozzie Gooen's Shortform 2020-09-22T19:17:54.175Z
Expansive translations: considerations and possibilities 2020-09-18T21:38:42.357Z
How to estimate the EV of general intellectual progress 2020-01-27T10:21:11.076Z
What are words, phrases, or topics that you think most EAs don't know about but should? 2020-01-21T20:15:07.312Z
Best units for comparing personal interventions? 2020-01-13T08:53:12.863Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T22:19:32.155Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z
The first .impact Workathon 2015-07-09T07:38:12.143Z
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z
Gratipay for Funding EAs 2014-12-24T21:39:53.332Z
Why "Changing the World" is a Horrible Phrase 2014-12-24T00:41:50.234Z

Comments

Comment by Ozzie Gooen (oagr) on What would you do if you had half a million dollars? · 2021-07-20T14:32:10.489Z · EA · GW

I think that looking at their track record is only partially representative. They used to follow a structure where they would recommend donation opportunities to particular clients. Recently they've set up a fund that works differently: people donate to the fund, and the fund then makes donations at its discretion. My guess is that this will help a bit with this issue, but not completely. (Maybe they'll even be extra conservative, to prove to donors that they will match their preferences.)

 

Another (minor) point is that Longview's donations can be fungible with the LTFF's. If they spend $300K on something that the LTFF would have otherwise spent money on, then the LTFF has $300K more to spend on whatever it wants. So if Longview can donate to, say, only 90% of interesting causes, at up to $10Mil per year, the last 10% might not be that big of a deal.

Comment by Ozzie Gooen (oagr) on What would you do if you had half a million dollars? · 2021-07-19T22:06:08.161Z · EA · GW

Thanks for the thoughts here!

I'd note that the LTFF definitely invests money into some global priorities research, and some up-and-coming cause areas. Longview is likely to do so as well. 

Right now we don't seem to have many options for donating to funders that will re-grant to non-longtermist (or only broadly longtermist) experimental work. In this particular case, Patrick is trying to donate to longtermist causes, so I think the funding options are acceptable, but I imagine this could be frustrating to non-longtermists.

Comment by Ozzie Gooen (oagr) on What would you do if you had half a million dollars? · 2021-07-19T22:00:12.721Z · EA · GW

Thanks for the comment, this raises a few good points. 

> Longview is trying to attract existing philanthropists who may not identify as Effective Altruists, which will to some extent constrain what they can grant to as granting to something too “weird” might put off philanthropists.

Good point. I got the impression that their new, general-purpose pool would still be fairly longtermist, but it's possible they will have to make sacrifices. We'll ping them about this (or if any of them are reading this, please do reply directly!).

> If you find someone / a team who you think is better than the LTFF grant team then fine, but I’m sceptical you will.

To be clear, one of the outcomes could be that this person decides to give to the LTFF. These options aren't exclusive. But I imagine that in this case they shouldn't have that much work to do; they would essentially be making a choice from the options we list above.

Comment by Ozzie Gooen (oagr) on Ozzie Gooen's Shortform · 2021-07-04T03:32:46.035Z · EA · GW

No prominent ones come to mind. There are some very junior folks I've recently seen discussing this, but I feel uncomfortable calling them out.

Comment by Ozzie Gooen (oagr) on EA needs consultancies · 2021-07-02T04:58:35.165Z · EA · GW

I just want to flag that one sort of "regular" consulting I'd love to see in EA is "really good" management consulting. My read is that many of our management setups (leadership training, leadership practices, board membership administration) are fine but not world-class. As we grow, it's increasingly important to do a great job here.

My impression is that the majority of "management consultants" wouldn't be very exciting to us, but if there were some who were somewhat aligned, or who think in similarly nerdy ways, they could be highly valuable.

Comment by Ozzie Gooen (oagr) on EA needs consultancies · 2021-07-02T04:55:05.685Z · EA · GW

Hi Peter, 

Thanks for the ideas here.

My guess is that this is going to be a bit difficult. My impression is that the needs EA organizations know they have are fairly specific; they look like "really great research into key questions", or sometimes very tactical things like bookkeeping or simple website development. "Consultant" is a really broad class of thing and really needs to be narrowed down in conversation.

Generally, organizations don't have that much time to experiment with non-obvious contractor arrangements. This includes time brainstorming ways they might be useful. If one is having a lot of trouble getting integrated (as a possible contractor), the best method I know of is to just work in one of these organizations for a while and develop a close understanding, or perhaps try to write blog posts on topics that are really useful to these groups and see if these pick up.

Around having things like a directory, I expect the ones that work will be more narrow. There are a few smaller "contractor hubs" around, or "talent agencies", that assist with hiring contractors and charge some fee on top. I think this is a pretty good model for low-level work, and I'd like to see more of it. It does require people with either really good understandings of EA needs (or the relationships), or a really good ability to do some super-obviously useful work (like accounting).

If anyone is interested in doing consulting, one easy way to indicate so would be to post a comment in this thread, or there could be a new thread for such work.

> Some advanced market commitments (i.e., organisations publicly committing to pay for consulting services if they are offered) might also be helpful.

My guess is that this would be a tough sell, but I appreciate the idea.

> The EA infrastructure probably helps but most people don't know much about how to set up and run an organisation

One (small) positive is that I think contractor setups can be some of the easiest to get started with. If you're just doing contracting by yourself, and maybe with one other person, you don't even need to set up a formal business; you could just do it directly. The big challenges are in finding clients and providing value. You don't need much scale at first, but those things are challenges.

I imagine it could be considered nice for organizations to hire more new contractors than would otherwise make sense, as that would be effectively subsidizing the industry. 

Comment by Ozzie Gooen (oagr) on EA needs consultancies · 2021-07-01T03:32:12.374Z · EA · GW

Yea,

My hunch is that "EAs doing consulting for non-EA companies" looks very different from "EAs doing consulting for EA orgs", but I'd be happy to be wrong.

Comment by Ozzie Gooen (oagr) on EA needs consultancies · 2021-06-30T18:45:49.610Z · EA · GW

> Re: reluctance. Can you say more about the concern about donor perceptions? E.g. maybe grantmakers like me should be more often nudging grantees with questions like "How could you get more done / move faster by outsourcing some work to consultants/contractors?" I've done that in a few cases but haven't made a consistent effort to signal willingness to fund subcontracts.

Contractors are known to be pricey and have a bit of a bad reputation in some circles. Research hires have traditionally been dirt cheap (though that is changing). I think if an org spends 10-30% of its budget on contractors, it would be treated with suspicion. It feels like a similar situation to how a lot of charities tried to have insanely low overheads (and many outside EA still do). 

I think that grantmakers / influential figureheads making posts like yours above, and applying some pressure, could go a long way here. It should be obvious to the management of the nonprofit that the funders won't view them poorly if they spend a fair bit on contractors, even if sometimes this results in failures. (Contract work can be risky for clients, though perhaps less risky than hiring.)

> What do you mean about approval from a few parties? Is it different than other expenditures?

At many orgs, regular expenditures can be fairly annoying. Contracting engagements can be more expensive and more unusual, so new arrangements sometimes have to be figured out. I've had some issues around hiring contractors myself in previous startups for a similar reason: the founders would occasionally get cold feet, sometimes after I had agreed to an arrangement with a contractor.

> doesn't seem too problematic so long as Open Phil isn't institutionally opposed to subgranting/subcontracting

I agree. The main thing for contractors is the risk of losing opportunities. So if there were multiple possible clients funded by one group, each making separate decisions, and that one group were unlikely to stop funding all of those subgroups at once, things should be fine.

> Re: prices. Seems like an education issue.

Agreed

> I'm struggling to parse "Many contractors that organizations themselves come from those organizations." Could you rephrase?

Sorry, this was vague. I meant cases where:
1) Person A is employed at Organization B.
2) Person A leaves employment.
3) Person A later (or immediately) joins Organization B as a contractor. 

I've done this before. The big benefit is that person A has established a relationship with Organization B, so this relationship continues to do a lot of work (similar to what you describe). 

> One person I spoke to recently suggested that programs like RSP could be a good complement to consultancy work because it allows more people to hang out and gain context on how potential future clients

Yep, this is what I was thinking about above in point (3) at the bottom. Having more methods to encourage interaction seems good. There's been a bit of discussion of having more coworking between longtermists in the Bay Area, for example; the more we have things like that, the better I'd expect things to be (both because of the direct connections, and because it could make it much easier to integrate more people, particularly generalists).

Comment by Ozzie Gooen (oagr) on Ozzie Gooen's Shortform · 2021-06-30T06:16:49.847Z · EA · GW

Yep, agreed. Right now I think there are very few people doing active work on this in longtermism (outside of a few orgs that have people doing it for that specific org), but this seems very valuable to improve upon.

Comment by Ozzie Gooen (oagr) on Ozzie Gooen's Shortform · 2021-06-30T03:12:55.849Z · EA · GW

> There seem to be several longtermist academics who plan to spend the next few years (at least) investigating the psychology of getting the public to care about existential risks.
 

This is nice, but I feel like what we really could use are marketers, not academics. Those are the people companies use for this sort of work. It's somewhat unusual that marketing isn't much of a respected academic field, but it's definitely a highly respected organizational one.

Comment by Ozzie Gooen (oagr) on EA needs consultancies · 2021-06-30T01:41:06.014Z · EA · GW

I've been thinking a bit about EA consultancy solutions for a while. A few thoughts:

1. I think many EA orgs are much more resistant to outsourcing large amounts of work than they should be. A few years back, I had a surprising amount of trouble getting groups to pay even token amounts for Guesstimate, and I have seen other groups refrain from making payments. This seems due to multiple reasons: they often aren't sure how their donors would view this (often somewhat expensive) spending, this sort of spending often needs approval from a few parties, and in many situations it just isn't allowed (university rules).

2. Right now the market for large EA consulting seems mostly limited to OpenPhil. If this is the case, I imagine the value proposition is precarious for the contractors. Often the main benefit of hiring a contractor over an employee is the ease of firing/ending contracts, but this is obviously quite undesirable for the contractor. When you have only one client, being an employee is generally a better deal than being a contractor (except that contractors are sometimes paid significantly more to compensate). See the recent ridesharing contractor debate as an example.

3. As mentioned in (2), contractors generally cost a fair bit more (~1.3x to 2x) than an employee per hour worked. This is because they also need to pay for work benefits, the time between jobs, and the costs of finding new work. As long as all parties are fine with this, it can work, but it's something to be aware of. I think a lot of organizations balk when they see contractor prices for kinds of work they're not used to buying.
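As a rough illustration of where the ~1.3x to 2x figure in point (3) can come from, here is a minimal sketch; every number in it is an invented assumption, not data from any actual organization:

```python
# Rough sketch of why contractor rates often run ~1.3x to 2x an employee's
# hourly cost. All numbers are illustrative assumptions, not real figures.

def employee_hourly_cost(salary, benefits_rate=0.25, paid_hours=2000):
    """Employer's effective hourly cost for a salaried employee."""
    return salary * (1 + benefits_rate) / paid_hours

def contractor_break_even_rate(salary, benefits_rate=0.25,
                               downtime_fraction=0.2,
                               sales_overhead_fraction=0.1,
                               paid_hours=2000):
    """Hourly rate a contractor must charge for a comparable package,
    covering self-funded benefits, gaps between contracts, and the
    time spent finding new clients."""
    target_income = salary * (1 + benefits_rate)
    billable_hours = paid_hours * (1 - downtime_fraction - sales_overhead_fraction)
    return target_income / billable_hours

salary = 100_000  # hypothetical
emp = employee_hourly_cost(salary)
con = contractor_break_even_rate(salary)
print(f"employee ~${emp:.0f}/hr, contractor ~${con:.0f}/hr ({con / emp:.2f}x)")
# -> employee ~$62/hr, contractor ~$89/hr (1.43x) with these assumptions
```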

 

If we'd like to move in the direction of an "Effective Altruist Economy/Market", some things that might help kickstart this would be:

1. Setting expectations that contractors will cost money, but are often a good move, all things considered. I imagine it could eventually become common knowledge that contracting relationships are often worthwhile. This would prevent the awkwardness around funders seeing big contractor line-items.

2. Subsidizing contractor rates for small or medium-sized client organizations. For example, EA Funds pays out $0.40 for each $1 paid to a contractor by one of these organizations, for research work. In theory there could be some sort of quadratic funding setup for group purchases (see the sketch after this list).

3. Many contractors that organizations themselves come from those organizations. In general, having better systems to facilitate engagement with core Effective Altruists and promising other people will lead to better understandings of needs, which will enable more new consulting groups. I think that understanding the internal needs is really important, but also very difficult.
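Here's a minimal sketch of what the subsidy idea in point (2) could look like, with a flat $0.40-per-$1 match plus a quadratic-funding-style variant. Neither mechanism actually exists; the rates, pool size, and payment numbers are all invented for illustration:

```python
# Illustrative sketch of the contractor-subsidy idea in point (2).
# Neither mechanism exists; the rates, pool, and payments are assumptions.
from math import sqrt

def flat_match(payments, rate=0.40):
    """Flat top-up: $0.40 for each $1 an org pays a contractor."""
    return rate * sum(payments)

def quadratic_match(payments, pool):
    """Quadratic-funding-style match: proportional to
    (sum of sqrt(payment))^2 - sum(payments), capped by the pool, so the
    same total spread across many orgs gets matched more generously."""
    ideal = sum(sqrt(p) for p in payments) ** 2 - sum(payments)
    return min(ideal, pool)

payments = [2_000, 5_000, 1_000]  # three orgs hire the same contractor
print(flat_match(payments))                            # 3200.0
print(round(quadratic_match(payments, pool=20_000)))   # 13625
```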

Comment by Ozzie Gooen (oagr) on Opinion: Digital marketing is under-utilized in EA · 2021-06-27T02:23:26.588Z · EA · GW

If the intervention is more, "We should just have some people with domain expertise in digital marketing to help EA organizations", that's much easier to integrate. 

Comment by Ozzie Gooen (oagr) on Opinion: Digital marketing is under-utilized in EA · 2021-06-27T02:22:31.762Z · EA · GW

I've looked a bit into this.

I think there's a lot of potential value here. On the other hand, such work would have to be done fairly carefully.

A large publicity push to get people into the longtermist community could easily backfire, similar to the problem of expanding the EA community too quickly. The specific longtermist concerns (AGI risks, biosafety risks) could also be net harmful if presented sloppily.

Quality discussion and targeting would help with these concerns, but right now I think only a few potential people would actually be trusted and capable to do such work.

If anyone is reading this and would be interested in pursuing this, let me know, and I'll try to figure out the right other people to contact. I imagine this could be a good fit for a new project, but it would have to be done with the right team.

I would say though that if a promising team was interested, and if they were trusted by the main longtermists/funders, it seems like a very promising opportunity for funding.

Comment by Ozzie Gooen (oagr) on Shallow evaluations of longtermist organizations · 2021-06-27T00:29:06.078Z · EA · GW

Thanks for both comments here. Personal anecdotes are really valuable, and I assume they would be useful later to people trying to get some idea of the value of CLR.

Sadly, I imagine there's a significant bias for positive comments (I assume that people with negative experiences would be cautious of offending anyone), but positive comments still have signal.

Comment by Ozzie Gooen (oagr) on Shallow evaluations of longtermist organizations · 2021-06-27T00:18:34.670Z · EA · GW

Also, I'd agree that <$1Mil funding decisions aren't the main thing I'm interested in. I think that talent and larger allocations are much more exciting.

For example, perhaps it's realized that one small nonprofit's work is much more valuable than expected, so future donors wind up spending $200Mil on related work down the line. Or there are many systematic effects, like new founders being inspired by trends identified in the evaluations and making better new nonprofits because of it.

Comment by Ozzie Gooen (oagr) on Shallow evaluations of longtermist organizations · 2021-06-27T00:16:33.487Z · EA · GW

+1, to both the questions and the answers.

In an ideal world we'd have intense evaluations of all organizations that are specific to all possible uses, done in communications styles relevant to all people.

Unfortunately this is an impossible amount of work, so we have to find some messy shortcuts that get much of the benefit at a decent cost.

I'm not sure how to best focus longtermist organization evaluations to maximize gains for a diversity of types of decisions. Fortunately, I think that whenever one makes an evaluation for one specific purpose (funding decisions), it winds up relevant for other things (career decisions, organization decisions).

My primary interests at this point are evaluations of the following:

  • How much total impact is an organization having, positive or negative?
  • How can such impact be improved?
  • How efficient is the organization (in terms of money and talent)?
  • How valuable is it to other groups or individuals to read / engage with the work of this organization? (Think Yelp or Amazon reviews)

My guess is that such investigations will help answer a wide assortment of different questions.

To echo what Nuño said, some of my interest in this specific task was in making a fairly general-purpose attempt. I think that gradually making more substantial attempts is a pretty good bet, because a whole lot could either go wrong (this work upsets some group or includes falsehoods) or new ideas could be figured out (particularly by commenters, such as those on this post).

In the longer term my preference isn't for QURI/Nuño to be doing the majority of public evaluations of longtermist orgs, but instead for others to do most of this work. Perhaps this could be something of a standard blog post type, and/or there could be 1-2 small organizations dedicated to it. I think it really should be done independently from other large orgs (to be less biased and more isolated), so it probably wouldn't make sense for this work to be done as part of a much bigger organization.

Comment by Ozzie Gooen (oagr) on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-17T00:25:07.014Z · EA · GW

Some quick thoughts:

  1. This work was meant to be built on. Hopefully there will be more similar work going forward (by both us and others), so much of the purpose here is to lay some foundation and help dip our toes into this sort of evaluation. (It can be controversial or harmful, so we're going slowly). As such, ideas for improvement are most welcome!
  2. I've read the larger review. I'd note that there were few groups that really surprised me. If you go through the list of grantees, and think about what you know of each candidate, I'd bet you can get a roughly similar sense. (This is true for those who read LW/EA Forum frequently.) One of the main purposes of this sort of work is to either find, or try and fail to find, big surprises. From my perspective, I think that groups/individuals who had previously provided value (different from seeming prestigious, to be clear) went on to provide more value, and those that hadn't didn't do as well. 
  3. This work wasn't done with the particular intention of helping to decide between EA Funds. We have been doing some other investigation here, somewhat accidentally (I've been assisting a donor lottery winner to decide). It's a good thing to keep in mind going forward.
  4. It would be great to later have measures of total impact for longtermism. We don't have strong measures now, but would love to help develop these (or further encourage others to). 
Comment by Ozzie Gooen (oagr) on What should the norms around privacy and evaluation in the EA community be? · 2021-06-17T00:13:14.217Z · EA · GW

Just brainstorming:
I imagine we could eventually have infrastructure for dealing with such situations better. 

Right now this sort of work requires:
 

  • Figuring out who in the organization is a good fit for asking about this.
  • Finding their email address.
  • Emailing them.
  • If they don't respond, trying to figure out how long you should wait until you post anyway.
  • If they do respond and it becomes a thread, figure out where to cut off things.
  • If you're anonymous, setting up an extra email account.

Ideally it might be nice to have policies and infrastructure for such work. For example:

  1. Codified practices and norms for responses. Organizations can specify which person is responsible and what their email address is. They also commit to responding in some timeframe.
  2. Services for responses. Maybe there's a middleman who knows the people at the orgs and could help do some of the grunt work of routing signals back and forth.
Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: May 2021 grant recommendations · 2021-06-01T03:10:29.926Z · EA · GW

Some more points here:
1. Hopefully COVID will stop being an issue in the US (fingers crossed), but I can't be completely sure. It's possible new strains will emerge that the current vaccines won't work against, for example.

2. I think there are possibilities of using NOVID in other countries, but I can't say more.

Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: May 2021 grant recommendations · 2021-06-01T03:08:21.080Z · EA · GW

> What do you mean by "advisors" here? Like mentors for the program participants? Or like advisors on the running and strategy of the program?

I was thinking of primarily the former, but the latter would also be useful.

> So not just current university students in Switzerland itself. Additionally, I believe CHERI ended up being open to candidates from elsewhere in Europe (including the UK, not just continental Europe). 

Originally it was primarily focused on the community in Switzerland. My impression is that things changed later on (after the grant was decided), particularly because there was more global demand this summer than expected. 

Comment by Ozzie Gooen (oagr) on Linch's Shortform · 2021-05-11T16:19:31.994Z · EA · GW

I liked this, thanks.

I hear that this is similar to a common problem for many entrepreneurs; they spend much of their time on urgent/small tasks, and not the really important ones. 

One solution recommended by Matt Mochary is to dedicate 2 hours per day of one's most productive time to working on the most important problems. 

https://www.amazon.com/Great-CEO-Within-Tactical-Building-ebook/dp/B07ZLGQZYC

I've occasionally followed this, and mean to do so more. 

Comment by Ozzie Gooen (oagr) on Thoughts on being overqualified for EA positions · 2021-05-04T15:39:28.540Z · EA · GW

Agreed. Also, there are a lot of ways we could pay for prestige, like branding and marketing, that could make things nicer for new employees.

Comment by Ozzie Gooen (oagr) on Thoughts on being overqualified for EA positions · 2021-05-02T02:08:34.528Z · EA · GW

I just wanted to flag one possible failure mode.

I've come across a few people who said that "getting management experience", for the purpose of eventually helping with direct work, was a big part of why they didn't want to do direct work directly. So far I haven't seen these people ever get into direct work. I think it can be great for earning to give, but I am personally skeptical of its applicability to direct work.

From what I've seen, the skills one needs to lead EA organizations are fairly distinct, and doing so often requires a lot of domain-specific knowledge that takes time to develop. Relatedly, much of the experience I see people in management getting seems to be either domain-specific skills not relevant to direct work, or experience managing large teams with skills very different from what seems to be needed in direct work.

For example, in bio safety orgs, the #1 requirement of a manager is a lot of experience in the field, and the same is true (maybe more so) in much of AI safety. 

I think non-direct-work management tracks can be great for earning to give, as long as that's what's intended.

Comment by Ozzie Gooen (oagr) on Thoughts on being overqualified for EA positions · 2021-05-01T16:49:23.202Z · EA · GW

Thanks for this, I feel like I've seen this too.

I'm 30 now, and I feel like several of my altruistic-minded friends in my age group in big companies are reluctant to work in nonprofits for stated reasons that feel off to me. 

My impression is that the EA space is quite small now, but has the potential to get quite a bit bigger later on. People who are particularly promising and humble enough to work in such a setting (this is a big restriction) sometimes rise up quickly.

I think a lot of people look at initial EA positions and see them as pretty low status compared to industry jobs. I have a few responses here:
1) They can be great starting positions for people who want to do ambitious EA work. It's really hard to deeply understand how EA organizations work without working in one, even in (many, but not all) junior positions.
2) One incredibly valuable attribute of many effective people is a willingness to "do whatever it takes" (not in the sense of acting unethically or illegally). This sometimes means actual sacrifice; it sometimes means working positions that would broadly be considered low status. Honestly, I regard this attribute as equally important to many aspects of skill and intelligence. Some respected managers and executives are known for cleaning the floors or providing personal help to employees or colleagues, often because those things were highly effective at that moment, even if they might be low status. (Honestly, much of setting up or managing an organization is highly glorified grunt work.)

Personally, I try to give extra appreciation to people in normally low-status positions; I think they are very commonly overlooked.

---

Separately, I'm really not sure how much to trust the reasons people give for their decisions. I'm sure many people who use the "overqualified" argument would be happy to be setting up early infrastructure with very few users for an Elon Musk venture, or building internal tooling for a few users at well-run, high-paying, and prestigious companies.

Comment by Ozzie Gooen (oagr) on If I pay my taxes, why should I also give to charity? · 2021-04-19T03:34:12.243Z · EA · GW

Like Larks, I'm happy that work is being put into this. That said, I find this issue quite frustrating to discuss, because I think a fully honest discussion would take a lot more words than most people would have time for.

“Since I already pay my fair share in taxes, I don’t need to give to charity”

This is the sort of statement that has multiple presuppositions that I wouldn't agree with.

  • I pay my "fair share" in taxes
  • There's such a thing as a "fair share"
  • There is some fairly objective and relevant notion of what one "needs to do"

The phrase is about as alien to me, and as far from my belief system, as an argument saying,

The alien Zordon transmits that Xzenefile means no charity.

One method of dealing with the argument above would be something like,

"Well, we know that Zordon previously transmitted Zerketeviz, which implies that signature Y12 might be relevant, so actually charity is valid."

But my preferred answer would be,
"First, I need you to end your belief in this Zordon figure".

The obvious problem is that this latter point would take a good amount of convincing, but I wanted to put this out there.

Comment by Ozzie Gooen (oagr) on As an EA, Should I renounce my US citizenship? · 2021-04-19T03:23:53.540Z · EA · GW

I'm not very familiar with investment options in the UK, but there are of course many investment options in the US. I believe that being a citizen of the US helps a fair bit for some of these options. 

My impression is that getting full citizenship of both the US and the UK is generally extremely difficult, so I imagine that ever changing your mind would be quite a challenge.

One really nice benefit of having both citizenships is that it gives you a lot of flexibility. If either country suddenly becomes much more preferable for some reason or another (imagine some tail risk, like a political disaster of some sort), you have the option of easily going to the other. 

You also need to account for how the US might treat you if you do renounce citizenship. My impression is that they can be quite unfavorable to those who do this (particularly if they think it's for tax reasons), whether by coming after these people's assets, making it difficult to come back to the US for any reason, or other things. 

I would be very hesitant to renounce citizenship of either, until you really do a fair amount of research on the cons of the matter.

https://foreignpolicy.com/2012/05/17/could-eduardo-saverin-be-barred-from-the-u-s-for-life/

Comment by Ozzie Gooen (oagr) on "Good judgement" and its components · 2021-04-17T22:03:53.573Z · EA · GW

I've been thinking about this topic recently. One question that comes to mind: How much of Good Judgement do you think is explained by g/IQ? My quick guess is that they are heavily correlated. 

My impression is that people with "good judgement" match closely with the people that hedge funds really want to hire as analysts, or who make strong executives or product managers. 

Comment by Ozzie Gooen (oagr) on Is there evidence that recommender systems are changing users' preferences? · 2021-04-15T03:16:36.772Z · EA · GW

(1) The difference between preferences and information seems like a thin line to me. When groups are divided about abortion, for example, which cluster would that fall into? 

It feels fairly clear to me that the media facilitates political differences, as I'm not sure how else these could be relayed to the extent they are (direct friends/family is another option, but wouldn't explain quick and correlated changes in political parties). 

(2) The specific issue of prolonged involvement doesn't seem hard to believe. People spend lots of time on Youtube. I've definitely gotten lots of recommendations to the same clusters of videos. There are only so many clusters out there.

All that said, my story above is fairly different from Stuart's. I think his is more of "these algorithms are a fundamentally new force with novel mechanisms of preference change". My claim is that media sources naturally change the preferences of individuals, so of course if algorithms have control in directing people to media sources, this will be influential in preference modification. This is where "preference modification" basically means, "I didn't use to be an intense anarcho-capitalist, but then I watched a bunch of the videos, and now I tie in strongly with the movement."

However, the issue of "how much do news organizations optimize for preference modification in order to increase engagement, whether intentionally or unintentionally?" is more vague.

Comment by Ozzie Gooen (oagr) on Is there evidence that recommender systems are changing users' preferences? · 2021-04-13T03:05:10.535Z · EA · GW

There's a lot of anecdotal evidence that news organizations essentially change users' preferences. The fundamental story is quite similar. It's not clear how intentional this is, but there seem to be many cases of people becoming extremized after watching/reading the news (now that I think about it, this seems like a major factor in most of these situations). 

I vaguely recall Matt Taibbi complaining about this in the book Hate Inc. 

https://www.amazon.com/Hate-Inc-Todays-Despise-Another/dp/B0854P6WHH/ref=sr_1_3?dchild=1&keywords=Matt+Taibbi&qid=1618282776&sr=8-3

Here are a few related links:

https://nymag.com/intelligencer/2019/04/i-gathered-stories-of-people-transformed-by-fox-news.html
https://www.salon.com/2018/11/23/can-we-save-loved-ones-from-fox-news-i-dont-know-if-its-too-late-or-not/

If it turns out that news channels change preferences, it seems like a small leap to suggest that recommender algorithms that get people onto news programs lead to changing their preferences. Of course, one should have evidence of the magnitude and so on.

Comment by Ozzie Gooen (oagr) on What are the highest impact questions in the behavioral sciences? · 2021-04-07T15:09:57.422Z · EA · GW

I've done a bit of thinking on this topic, main post here:
https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental

I'm most excited about fundamental research in the behavioral sciences, just ideally done much better. I think the work of people like Joseph Henrich/David Graeber/Robin Hanson was useful and revealing. It seems to me like right now our general state of understanding is quite poor, so what I imagine as minor improvements in particular areas feel less impactful than just better overall understanding. 

Comment by Ozzie Gooen (oagr) on A Comparison of Donor-Advised Fund Providers · 2021-04-05T21:58:13.593Z · EA · GW

This looks really useful, many thanks for the writeup. I'd note that I've been using Vanguard for regular investments and found the website annoying and the customer support quite bad; there would be long periods where they wouldn't offer any support because things were "too crowded". I think most people underestimate the value of customer support, in part because it is most valuable in tail-end situations. 

Some quick questions:
- Are there any simple ways of making investments in these accounts that offer 2x leverage or more? Are there things here that you'd recommend?
- Do you have an intuition around when one should make a Donor-Advised Fund? If there are no minimums, should you set one up once you hit, say, $5K in donations that won't be spent in a given tax year?
- How easy is it for others to invest in one's Donor-Advised Fund? Like, would it be really easy to set up your own version of EA Funds?

Comment by Ozzie Gooen (oagr) on Announcing "Naming What We Can"! · 2021-04-02T03:47:49.181Z · EA · GW

I think the phrases "Research Institute", and in particular "...Existential Risk Institute", are a best practice and should be used much more frequently.

Centre for Effective Altruism -> Effective Altruism Research Institute (EARI)
Open Philanthropy -> Funding Effective Research Institute (FERI)
GiveWell -> Short-termist Effective Funding Research Institute (SEFRI)
80,000 Hours -> Careers that are Effective Research Institute (CERI)
Charity Entrepreneurship -> Charity Entrepreneurship Research Institute (CERI 2)
Rethink Priorities -> General Effective Research Institute (GERI)
Center for Human-Compatible Artificial Intelligence -> Berkeley University AI Research Institute (BUARI)
CSER -> Cambridge Existential Risk Institute (CERI 3)
LessWrong -> Blogging for Existential Risk Institute (BERI 2)
Alignment Forum -> Blogging for AI Risk Institute (BARI)
SSC -> Scott Alexander's Research Institute (SARI)
 

Comment by Ozzie Gooen (oagr) on New Top EA Causes for 2021? · 2021-04-02T00:45:21.022Z · EA · GW

Maybe, Probabilistically Good?

Comment by Ozzie Gooen (oagr) on Some quick notes on "effective altruism" · 2021-03-25T03:09:58.478Z · EA · GW

I think this is a good point. That said, I imagine it's quite hard to really tell. 

Empirical data could be really useful to get here: online experimentation in simple cases, or maybe we could even have some university chapters try out different names and see if we can infer any substantial differences. 

Comment by Ozzie Gooen (oagr) on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-25T02:21:44.106Z · EA · GW

This is really neat. I think in a better world analysis like this would be done by Goodreads and updated on a regular basis. Hopefully the new API changes won't make it more difficult to do this sort of work in the future.

Comment by Ozzie Gooen (oagr) on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T06:39:20.752Z · EA · GW

I'd also note that the larger goals are to scale in non-human ways. If we have a bunch of examples, we could:

1) Open this up to a prediction-market style setup, with a mix of volunteers and possibly inexpensive hires.
2) As we get samples, some people could use data analysis to make simple algorithms to estimate the value of many more documents (toy sketch below).
3) We could later use ML and similar to scale this further.
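As a toy illustration of step (2): fit a very simple model on a handful of human-evaluated posts, then use it to produce cheap first-pass estimates for many more. The single "karma" feature and all the numbers here are invented; a real version would use more features and actual evaluation data:

```python
# Toy sketch of step (2): fit a simple model on a few human-evaluated posts,
# then produce cheap first-pass value estimates for many more. The "karma"
# feature and all numbers are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

karma = [10, 40, 80, 150]       # posts that were evaluated by hand
value = [1.0, 2.0, 5.0, 9.0]    # their human-judged relative value
a, b = fit_line(karma, value)

# First-pass estimates for unevaluated posts, to be checked by forecasters
# or evaluators rather than trusted outright.
for k in [25, 60, 200]:
    print(k, round(a + b * k, 2))   # -> 1.59, 3.66, 11.93
```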

So even if each item were rather time-costly right now, this might be an important step for later. If we can't even do this, with a lot of work, that would be a significant blocker.

https://www.lesswrong.com/posts/kMmNdHpQPcnJgnAQF/prediction-augmented-evaluation-systems

Comment by Ozzie Gooen (oagr) on EA Funds is more flexible than you might think · 2021-03-10T04:40:53.584Z · EA · GW

From where I'm coming from, having seen bits of many sides of this issue, I think average quality matters more than quantity.

Traits of mediocre donors (including "good" donors with few resources):
- Don't hunt for great opportunities
- Produce high amounts of noise/randomness in their results
- Are strongly overconfident in some weird ways
- Have poor resolution, meaning they will not be able to choose targets much better than light common-sense wisdom
- Are difficult, time-consuming, and opaque to work with
- Are not very easy to understand, or not predictable

If one particular person not liking you for an arbitrary reason (uncorrelated overconfidence) stops you from getting funding, that would be the sign of a mediocre donor.  

If we had a bunch of these donors, the chances of funding would go up for some nonprofits. Different donors could be overconfident in different ways, leading to more groups being above or below different bars. Some bad nonprofits would be happy, because the noise could increase their chances of getting funding. But I think this is a pretty mediocre world overall.

Of course, one could argue that a given donor base isn't that good, so more competition is likely to result in better donors. I think competition can be quite healthy and result in improvements in quality. So, more organizations can be good, but for different reasons, and only insofar as they result in better quality.

Similar to Jonas, I'd like to see more great donors join the fray, both by joining the existing organizations and helping them, and by making some new large funds.

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:34:29.429Z · EA · GW

On the first part:
The main problem that I'm worried about isn't that the terminology is different (most of these questions use fairly basic terminology so far), but rather that there is no order to all the questions. This means that readers have very little clue what kinds of things are forecasted.

Wikidata does a good job of having a semantic structure where, if you want any type of fact, you know where to look. Compare its page on Barack Obama to a long list of facts, some about Obama, some about Obama and one or two other people, all somewhat randomly written and ordered. See the semantic web or discussion on web ontologies for more on this subject. 

I expect that questions will eventually follow a much more semantic structure, and correspondingly, there will be far more questions at some points in the future. 
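To make the contrast concrete, here's a minimal sketch of what a more semantically structured question record could look like, loosely in the Wikidata entity/property style. The schema and field names are hypothetical, not anything Metaforecast or the platforms actually use:

```python
# Minimal sketch of a semantically structured forecasting question, loosely
# in the Wikidata entity/property style. The schema is hypothetical, not
# something Metaforecast or any platform currently uses.
from dataclasses import dataclass
from datetime import date

@dataclass
class StructuredQuestion:
    subject: str            # the entity the question is about
    measured_property: str  # the quantity being forecast
    relation: str           # comparison operator, e.g. ">="
    value: float            # threshold the forecast is judged against
    resolve_by: date        # resolution date
    platform: str           # where the forecast lives

q = StructuredQuestion(
    subject="United States",
    measured_property="annual GDP growth",
    relation=">=",
    value=0.03,
    resolve_by=date(2022, 12, 31),
    platform="Metaculus",
)

# With structure like this, "every question about the United States" or
# "every property we have forecasts for" becomes a simple filter rather
# than a free-text search over unordered question titles.
us_questions = [x for x in [q] if x.subject == "United States"]
print(us_questions)
```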

On the second part:
By public dashboards, I mean a rather static webpage that shows one set of questions, but includes the most recent data about them. There have been a few of these done so far. These are typically optimized for readers, not forecasters. 
See:
https://goodjudgment.io/superforecasts/#1464
https://pandemic.metaculus.com/dashboard#/global-epidemiology

These are very different from Metaforecast because they have different features. Metaforecast has thousands of different questions, and allows one to search through them, but it doesn't show historical data and it doesn't have curated lists. The dashboards, in comparison, have these features, but are typically limited to a very specific set of questions.

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:29:47.697Z · EA · GW

This whole thing is a somewhat tricky issue and one I'm surprised hasn't been discussed much before, to my knowledge. 

> But there's not yet enough data to allow that.

One issue here is that measurement is very tricky, because the questions are all over the place. Different platforms have very different questions of different difficulties. We don't yet really have metrics that compare forecasts among different sets of questions. I imagine historical data will be very useful, but extra assumptions would be needed.

We're trying to get at some question-general stat of basically, "expected score (which includes calibration + accuracy) adjusted for question difficulty."

One question this would be answering is: "If Question A is on two platforms, should you trust the one with more stars?"
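A toy sketch of the kind of difficulty adjustment I mean, using Brier scores. The specific adjustment (subtracting each question's cross-platform average score) is just one possible choice, not how the stars are actually computed:

```python
# Toy sketch of a "score adjusted for question difficulty" comparison.
# The adjustment (subtracting each question's cross-platform average Brier
# score) is one possible choice, not how Metaforecast's stars are computed.
from collections import defaultdict

def brier(prob, outcome):
    """Brier score for a binary forecast; lower is better."""
    return (prob - outcome) ** 2

# (platform, question_id, forecast probability, resolved outcome)
forecasts = [
    ("A", "q1", 0.9, 1), ("B", "q1", 0.7, 1),   # an easy question
    ("A", "q2", 0.6, 0), ("B", "q2", 0.4, 0),   # a harder question
]

# Difficulty proxy: the average Brier score across platforms on each question.
per_question = defaultdict(list)
for _, qid, p, y in forecasts:
    per_question[qid].append(brier(p, y))
difficulty = {qid: sum(v) / len(v) for qid, v in per_question.items()}

# Platform skill: average of (score - question difficulty); lower is better.
per_platform = defaultdict(list)
for platform, qid, p, y in forecasts:
    per_platform[platform].append(brier(p, y) - difficulty[qid])
for platform, vals in per_platform.items():
    print(platform, round(sum(vals) / len(vals), 3))
# -> A 0.03, B -0.03: platform B looks better once difficulty is accounted for
```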

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:22:13.839Z · EA · GW

It's possible we have different definitions of ok. 

I have worked with browser extensions before and found them to be a bit of a pain. You often have to do custom work for Safari, Firefox, and Google Chrome. Browsers change the standards, so you have to maintain them and update them in annoying ways at different times.

Perhaps more importantly, the process of trying to figure out which text on different webpages is the important text, and then finding semantic similarities to match questions, seems tricky to do well enough to be worthwhile. I can imagine a lot of very hacky approaches that would just be annoying most of the time.

I was thinking of something that would be used by, say, 30 to 300 people who are doing important work. 

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:18:58.566Z · EA · GW

Thanks! If you have requests for Metaforecast, do let us know! 
 

Comment by Ozzie Gooen (oagr) on Forecasting Prize Results · 2021-02-24T06:57:05.169Z · EA · GW

Good to hear, and thanks for the thoughts!

Another way we could have phrased things would have been,
"This post was useful in ways X,Y, and Z. If it would have done things A,B, and C it would be been even more useful."

It's always possible to have done more. Some of the entries were very extensive. My guess is that you did a pretty good job per unit of time in particular. I'd think of the comments as things to think about for future work.

And again, nice work, and congratulations!

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2021-02-18T04:12:41.722Z · EA · GW

My point was just that understanding the expected impact seems more challenging. I'd agree that understanding the short-term impacts of those kinds of things is much easier, but it's tricky to tell how they will impact things 200+ years from now. 

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-20T06:38:33.039Z · EA · GW

Happy to hear you're looking for things that could scale; I'd personally be particularly excited about those opportunities. 

I'd guess that internet-style things could scale particularly well, like the Forum / EA Funds / online models, etc., but that's also my internet background talking :). That said, things could be different if it makes sense to focus on a very narrow but elite group.

I agree that a group should scale staff only after finding a scalable opportunity.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-20T06:35:00.973Z · EA · GW

Thanks!

Maybe I misunderstood this post. You wrote,

> Therefore, we want to let people know what we're not doing, so that they have a better sense of how neglected those areas are.

When you said this, what timeline were you implying? I would imagine that if there were a new nonprofit focusing on a subarea mentioned here they would be intending to focus on it for 4-10+ years, so I was assuming that this post meant that CEA was intending to not get into these areas on a 4-10 year horizon. 

Were you thinking of more of a 1-2 year horizon? I guess this would be fine as long as you're keeping in communication with other potential groups who are thinking about these areas, so we don't have a situation where there's a lot of overlapping (or worse, competing) work all of a sudden.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-20T06:30:53.245Z · EA · GW

Thanks for the diagrams and explanation!

When I see the diagrams, I think of these as "low-overhead roles" vs "high-overhead roles", where "low-overhead roles" have peak marginal value much earlier than high-overhead roles. If one is interested in scaling work, and assuming that requires also scaling labor, then scalable strategies would be ones with many low-overhead roles, similar to your second diagram of "CEA in the Future".

That said, my main point above wasn't that CEA should definitely grow, but that if CEA is having trouble growing (or is hesitant to, or it isn't ideal for it to), I would expect the strategy of "encouraging a bunch of new external nonprofits" to be limited in potential.

If CEA thinks it could help police new nonprofits, that would also take Max's time or similar; the management time is coming from the same place, it's just being used in different ways and there would ideally be less of it. 

In the back of my mind, I'm thinking that OpenPhil theoretically has access to $10Bil+, and hypothetically much of this could go towards promotion of EA or EA-related principles, but right now there's a big bottleneck here. I could imagine it making sense to be rather okay with wasting a fair bit of money and doing quite unusual things in order to get expansion to work somehow.

Around CEA and related organizations in particular, I am a bit worried that not all of the value of taking in good people is transparent. For example, if an org takes in someone promising and trains them up for 2 years, and then they leave for another org, that could have been a huge positive externality, but I'd bet it would get overlooked by funders. I've seen this happen previously. Right now it seems like there are a bunch of rather young EAs who really could use some training, but there are relatively few job openings, in part because existing orgs are quite hesitant to expand. 

I imagine that hypothetically this could be an incredibly long conversation, and you definitely have a lot more inside knowledge than I do. I'd like to personally do more investigation to better understand what the main EA growth constraints are; we'll see about this. 

One thing we could make tractable progress on is forecasting movement growth or these other things. I don't have anything in mind at the moment, but if you ever have ideas, do let me know, and we could see about developing them into questions on Metaculus or similar. I imagine having a group understanding of total EA movement growth could help a fair bit and make conversations like this more straightforward. 

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-18T23:52:10.499Z · EA · GW

Thanks for all the responses!

I've thought about this a bit more. Perhaps the crux is something like this:

From my (likely mistaken) read of things, the community strategy seems to want something like:
1) CEA doesn't expand its staff or focus greatly in the next 3-10 years.
2) CEA is able to keep essential control and ensure quality of community expansion in the next 3-10 years.
3) We have a great amount of EA meta / community growth in the next 3-10 years.

I could understand strategies where one of those three is sacrificed for the other two, but having all three sounds quite tricky, even if it would be really nice ideally.

The most likely way I could see (3) and (1) both happening is if there is some new big organization that comes in and gains a lot of control, but I'm not sure if we want that. 

My impression is that (3) is the main one to be restricted. We could try encouraging some new nonprofits, but it seems quite hard to me to imagine a whole bunch being made quickly in ways we would be comfortable with (not actively afraid of), especially without a whole lot of oversight. 

I think it's totally fine, and normally necessary (though not fun) to accept some significant sacrifices as part of strategic decision making. 

I don't particularly have an opinion on which of the three should be the one to go.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-18T23:43:42.922Z · EA · GW

Thanks for the details and calculation of GW.

It's of course difficult to express a complete worldview in a few (even long) comments. To be clear, I definitely acknowledge that hiring has substantial costs (I haven't really done it yet for QURI), and is not right for all orgs, especially at all times. I don't think that hiring is intrinsically good or anything.

I also agree that being slow, in the beginning in particular, could be essential. 

All that said, I think something like "ability to usefully scale" is a fairly critical factor in success for many jobs other than, perhaps, theoretical research. I think the success of OpenPhil will be profoundly bottlenecked if it can't find some useful ways to scale much further (this could even be by encouraging many other groups). 

It could take quite a while of "staying really small" to "be able to usefully scale", but "be able to usefully scale" is one of the main goals I'd want to see. 

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-18T23:34:17.288Z · EA · GW

Having been in the startup scene, wisdom there is a bit of a mess.

It's clear that the main goal of early startups is to identify "product market fit", which to me seems like, "an opportunity that's exciting enough to spend effort scaling". 

Startups "pivot" all the time. (See The Lean Startup, though I assume you're familiar) 

Startups also experiment with a bunch of small features, listen to what users want, and ideally choose some to focus on. For instance, Instagram started as a general-purpose app; from this they found out that users just really liked the photo feature, so they removed the other stuff and focused on that. Airbnb started out in many cities, but was later encouraged to focus on one; in part because of their expertise (I imagine), they were able to make a good decision. 

It's a known bug for startups to scale before "product market fit", or scale poorly (bad hires), both of which are quite bad.

However, it's definitely the intention of basically all startups to eventually get to the point where they have an exciting and scalable opportunity, and then to expand. 

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2021-01-17T23:05:11.945Z · EA · GW

> I'm not sure why your instinct is to go by your own experience or ask some other people. This seems fairly 'un-EA' to me and I hope whatever you're doing regarding the scoring doesn't take this approach

From where I'm sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don't really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements. 

I'm quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members in the community. Scientific studies are often very narrow in scope, don't cover the thing we're really interested in, and often they don't even replicate. 

My guess is that if we were to show your previous blog post, as is, to several senior/respected EAs at OpenPhil/FHI and similar, they'd be similarly skeptical to Nuño here. 

All that said, I think there are more easily-arguable proposals around yours (or arguably, modifications of yours). It seems obviously useful to make sure that Effective Altruists have good epistemics and there are initiatives in place to help teach them these. This includes work in Philosophy. Many EA researchers spend quite a while learning about Philosophy. 

I think people are already bought into the idea of basically teaching important people how to think better. If large versions of this could be expanded upon, they seem like they could be large cause candidates that there could be buy-in for. 

For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters or YouTubers or similar to teach people the parts of philosophy that are great for reasoning. We could also target who is most important, and carefully select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops, like creating tests that indicate one's epistemic abilities, and measuring educational interventions on such tests.