Posts

Impact Investing - A Viable Option for EAs? 2018-07-11T02:10:34.666Z

Comments

Comment by Naryan on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-10T19:59:50.720Z · EA · GW

A Metamodern Approach to "Leveling-up" Humanity

ThinkBetter was founded by five EAs in Toronto with the mission of creating a scalable rationality training program and the goal of materially raising the global sanity waterline. Through a series of rapid prototyping cycles and OODA loops, we ended up 'transcending and including' our initial curriculum and strategy, and we are now working with a deeper understanding of the complexity of the challenge.

I'd be interested in starting a discussion to share:

  • Our journey
  • A new approach to learning that we are finding highly effective
  • The emerging ecosystem of individuals, teams, and communities that are all working on different parts of the human development stack
  • And of course - how our perspective on EA is shifting as we learn to 'see' higher levels of complexity
Comment by Naryan on How we promoted EA at a large tech company · 2020-01-28T17:17:10.778Z · EA · GW

Thanks so much for posting about your experience; I anticipate your tips will help me improve my strategy for incorporating EA concepts at the large corporation I work for. I'll chime in with my own experiences in case they are also helpful to others.

I work in a Canadian company with 50k employees. In early 2019, I reached out to our company's charitable giving team, expressing an interest in helping run events to increase charitable giving engagement within my part of the business. The offer was met with enthusiasm and support over two conference calls and several emails. I didn't explicitly mention EA, just that I was well connected and had a fun, systematic way of looking at charitable giving.

As we approached the giving campaign period in the fall, I reached out again with an exciting proposal to run a Giving Game, and asked if it could be included as an official charitable giving campaign event. This didn't end up working out (the reasons are opaque to me; a few emails went by without response), but I'm hopeful for 2020. Instead, I invited folks from my network to attend, and we had a really good 10-person Giving Game.

This was the first Giving Game I've run, and it seemed to land really well with the attendees. One key aspect was showing employees how they could donate to effective charities through RC Forward, directly from their paycheck. I hope to leverage their testimonials to support whatever proposal I come up with this year.

I think there is a lot of potential to incorporate EA concepts into a greater conversation at my company, and see two paths forward:

1. Grow a grassroots conversation by finding people who are enthusiastic enough about EA to actually form a core team. Currently it's just me, and this seems like a work-intensive, long-term goal.

2. Shortcut the process by building a stronger relationship with our charitable giving team. Changes made by this team could be very high leverage - anything from changing the company matching program to include high-impact charities (it currently doesn't), to tweaking default donation options and search functionality, to a paradigm shift in how folks view doing good.

I'm also playing with system-wide influence through a new role that I've taken on, which could transform the company culture. It's still early days, but I'm making meta-moves to create a community that increases empathy, connection, and systematic (rational) thinking.

Hoping my story is helpful for folks here. I'm interested in hearing more anecdotes from anyone else who's looking at EA from the context of a corporation.

Comment by Naryan on Notes on “The Art of Gathering” · 2018-12-06T21:16:57.705Z · EA · GW

Very cool summary, I've sent this to a few groups I'm a part of. I'm selfishly hoping it will lead to even better gatherings in my circles in the future!

Comment by Naryan on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-10-30T19:11:12.633Z · EA · GW

Hi Michael and team,

Thanks for thinking about this topic - I agree that this is an important update for the community, and I think you gave it the treatment it required.

I think the puzzle of wealth/income vs SWB is an interesting one. The finding that relative wealth plays a role in SWB made sense - and led me to hypothesize that countries with lower inequality would be happier.

I found a meta-analysis on the topic which couldn't find a strong correlation. "The association between income inequality and SWB is weak, complex and moderated by the country economic development." - https://www.ncbi.nlm.nih.gov/pubmed/29067589

It is interesting to think about the reduction in happiness due to a neighbour getting a cash transfer (the spillover effect mentioned in source 21).

  • Could this be due to jealousy decreasing one's happiness? Do we need secret cash transfers?

  • Does the reverse also hold true - if your neighbours become poorer, does that make you happier? It seems dangerous to generalize these findings, but this area of research would be quite applicable to the conversations on basic income.

It's a bit of a rabbit-hole, but I'm wondering if you've seen any research that speaks to this?

Comment by Naryan on Is it better to be a wild rat or a factory farmed cow? A systematic method for comparing animal welfare. · 2018-09-18T15:49:55.788Z · EA · GW

Great to see this being looked at. Do you have any examples of this method in use? I'd be interested to see various animals and situations ranked using this method - as it could provide a baseline to quantify the benefits of various interventions.

I also attempted to create my own method of comparing animal suffering while I was calculating the value of going vegetarian. I'll provide a quick summary here, and would love to hear if anyone else has tried something similar.

The approach was to create an internally consistent model based upon my naive intuitions and what data I could find. I spent a while tuning the model so that various trade-offs made sense and didn't lead to incoherent preferences. It is super rough, but it was a first step in my self-examination of ethics.

  1. I created a scale of the value of [human/animal] experience from torture (-1000) to self-actualization (+5) with neutral at 0.
  2. I guessed where various animal experiences fell on the scale, averaged over a lifetime. This is a very weak part of the model - and where Joey's method could really come in handy.
  3. I then multiplied the experience by the lifespan of the animal (as a percentage of human life).
  4. Finally, I added a 'cognitive/subjectivity' multiplier based on the animal's intelligence. This is contentious, but it helps me avoid valuing the long-lived cicada (an insect) the same as a human. This follows from other ethical considerations in my model, but some people prefer to remove this step.

The output of this rough model was to value various animal lives as a percentage of human lives - a more salient/comparable measure for me.
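To make the arithmetic concrete, here's a minimal sketch of the calculation described above; every number in it is a placeholder I've made up for illustration, not a figure from my actual model.

```python
# Rough sketch of the model above: value = experience * lifespan fraction * cognition.
# All numbers are illustrative placeholders, not actual model values.

# (average lifetime experience on the -1000..+5 scale,
#  lifespan as a fraction of a human lifespan,
#  cognitive/subjectivity multiplier)
profiles = {
    "human":              (4.0, 1.00, 1.00),
    "factory farmed cow": (-3.0, 0.07, 0.30),
    "wild rat":           (1.0, 0.02, 0.10),
}

human_value = profiles["human"][0] * profiles["human"][1] * profiles["human"][2]

for animal, (experience, lifespan_frac, cognition) in profiles.items():
    value = experience * lifespan_frac * cognition
    print(f"{animal}: {value / human_value:+.1%} of a human life")
```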

This model was built over about 5 hours, and I'm still updating it as I have more conversations about animal suffering. Would love to hear if anyone else has tried a different strategy!

Comment by Naryan on Why are you here? An origin stories thread. · 2018-08-09T21:02:23.900Z · EA · GW

Pretty cool idea - since I'm new to EA, I hope this will become a neat snapshot for me to look back on in a few years to see how far I've come.

Growing up, I believe I was raised to be a decent member of society - be kind to others, don't litter, help those in need. I never really thought explicitly about ethics, or engaged deeply with any causes. Sure, I'd raise money for cancer at "Relay for Life", but it wasn't because I thought the $100 would make a difference - more because it would be fun to have a camp-out with friends.

In my twenties, my goal was primarily to make money to retire early so I could travel, and maybe volunteer my time to help increase financial literacy, or apply my career experience in a not-for-profit. Fairly ephemeral goals though - I also considered becoming a full-time music producer.

Rationality

When I was 28, I found Less Wrong from a link my friend posted on Facebook. Over the next two years I read every essay in the Rationality sequences, supplemented by a healthy amount of psychology/economics/math/self-help style audiobooks. Reading that material was an enjoyable journey and led to a few minor epiphanies.

  • I love improving my thinking, and upgrading my effectiveness
  • I thought deeply about my ethics for the first time
  • I have a responsibility to improve the world in the biggest and best way possible

Seriously - the last book, "Becoming Stronger", and the sequence "Challenging the Difficult" really motivated me to think much bigger than I had before. Discovering 80,000 Hours around the same time gave me a great template to follow.

Effective Altruism

In May 2018 I attended my first EA meet-up. I recall thinking, "Wow! There are actually other rationalists out there". Up until that point, I'd never really met others who thought or spoke similarly, let alone a whole room full of them. I'm currently enjoying the learning curve, finding more questions than answers.

  • Attending weekly meet-ups at the Fox & Fiddle with EA Toronto
  • Hosting games nights, going on hikes, watching debates
  • Independently tackling cause prioritization, clarifying my ethics and their implications for where I should dedicate my effort
  • Excited to attend the EA Summit 2018!

I'm currently working with a team of amazing EAs towards my top cause priority, and hope to launch this autumn.

Comment by Naryan on A Critical Perspective on Maximizing Happiness · 2018-08-03T15:59:05.111Z · EA · GW

What a coincidence - I just started reading the book "The Happiness Advantage" by Shawn Achor. While I'm only on the second chapter, the gist seems to be: Happiness is not a product of success, but rather a precursor. Happy people are more likely to succeed.

If this premise is true, then I think positive psychology would have an edge over stoicism when looking forward, while stoicism might be a better technique when thinking about events in the past.

Evaluate neutrally the things you cannot change, but focus on the future states you prefer. I wish I had some actual evidence to back this up, but this way of thinking has worked for me so far.

Comment by Naryan on Impact Investing - A Viable Option for EAs? · 2018-07-13T21:43:51.950Z · EA · GW

Hey Jamie, thanks for linking me up with those additional resources - it's a refreshing perspective on the topic after combing through so many non-EA articles.

Continuing the conversation from your blog post on impact investing, I really like the perspective that the appeal of impact investing depends on how funding-constrained a cause or company is. If it has no problem raising money for free or at low cost, it has no need to promise a high return. Conversely, where it is hard to raise capital, companies should be more willing to offer higher returns to attract investment. For someone interested in that area, it may still be better to donate rather than invest for a return; but if you think there are better causes elsewhere, you could invest for medium good plus high profit, then take the earnings to a cause with even better social utility.

From your general thoughts: 1) I'm trying to extrapolate this concept into a general principle about donating vs investing. The hard question looks something like this:

If you compare your best cause/charity vs an index fund earning 7%, under what circumstances are you ambivalent between directing your money to either?

I don't have an answer to that question for myself, but here is the sidestep:

  • Finding either a better charity or a better investment opportunity ought to change your preference

  • If the market is efficient, and any social good tagged onto the investment would reduce the financial return, then you'd be wise not to invest in any impact investment whose social utility was worse than your best charity.

  • If you think markets are inefficient and it's possible to earn greater-than-average returns (by skill), or if you think the charity market is inefficient (less worthy causes get more funding than your most worthy cause), then you'd theoretically be able to find impact investments that would benefit you.
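As a rough numerical framing of the original question - every input below is a made-up assumption, not a real estimate - you're ambivalent roughly when the investment's growth factor exactly offsets any drop in how effectively you expect to be able to donate later:

```python
# Toy comparison of "donate now" vs "invest, then donate later".
# All inputs are illustrative assumptions.

amount = 10_000          # dollars available today
market_return = 0.07     # annual index-fund return assumed in the question
years = 10               # investment horizon

effectiveness_now = 1.0      # units of good per dollar donated today (normalised)
effectiveness_later = 0.55   # assumed good per dollar in `years` time
                             # (e.g. the best opportunities may be partly gone)

good_if_donate_now = amount * effectiveness_now
good_if_invest = amount * (1 + market_return) ** years * effectiveness_later

print(f"Donate now:         {good_if_donate_now:,.0f} units of good")
print(f"Invest then donate: {good_if_invest:,.0f} units of good")

# Indifference point: future effectiveness equals today's effectiveness
# discounted by the investment growth factor.
breakeven = effectiveness_now / (1 + market_return) ** years
print(f"Ambivalent if future effectiveness is about {breakeven:.2f} per dollar")
```

With these toy numbers, investing narrowly wins; drop the assumed future effectiveness just below ~0.51 and donating now wins instead.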

2) I do think it would be really cool to have an EA-themed impact fund: an investment vehicle that targets at-market returns while investing in particularly effective cause areas. I'd set it up so that the fund invested in securities matching the preferences of the investors. If half the investors really valued animal rights, 50% of the holdings would be in that area. I wonder if any of the 250 EAs in the FB group have any expertise in setting up something like this...

Re: Specific suggestions: I'm not super up on my knowledge of charity evaluations, but for climate change it seems that the common currency is $/tonne of CO2. For World Tree, the estimate looks like this: 1 acre costs $2500 CAD and sequesters 103 tonnes/year (I wasn't able to find a third-party number on this). The lifespan of the trees is 50 years, for a total of 5150 tonnes per acre.
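Spelled out as arithmetic - using only the figures above, so this naive division isn't the company's own stated carbon cost, and it ignores risk, discounting, and the financial return:

```python
# Naive $/tonne check from the figures quoted above (no third-party verification;
# ignores risk, discounting, and the expected financial return).
cost_per_acre_cad = 2500
tonnes_per_year = 103
lifespan_years = 50

total_tonnes = tonnes_per_year * lifespan_years      # 5150 tonnes per acre
cost_per_tonne = cost_per_acre_cad / total_tonnes    # ~$0.49 CAD per tonne

print(f"{total_tonnes} tonnes per acre, ~${cost_per_tonne:.2f} CAD/tonne")
```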

I'm having a bit of a rationality crisis here though; Halstead recently posted the new research on climate change charities, which found that the Coalition for Rainforest Nations can reduce carbon for an estimated $0.12/tonne. Should I cancel my World Tree investment and additionally take out a loan to fund this initiative since it is so much more effective? It's really tough being half a rationalist... I want to do good now but also good in the future.

Next steps: figure out my own utility function, while searching for those sweet impact investments that the market has overlooked.

Comment by Naryan on Impact Investing - A Viable Option for EAs? · 2018-07-11T19:10:16.129Z · EA · GW

I think you hit the nail on the head - the current set of impact investment platforms and products for a retail investor is fairly uninspiring. Can they stack up against the best EA charitable causes? The odds are against it.

I did allocate some of my retirement funds (currently in equity) towards buying a few acres with World Tree, which I think is a step in the right direction - more impact, and likely higher returns (ask me again in 10 years). I know mental bucketing of finances is some kind of bias, but keeping my charitable donations separate from my retirement fund will lead me to a future of financial security, rather than donating everything to my favourite charity today.

From a utilitarian view - I'd love to hear more perspectives on the trade-off between traditional investing and charitable giving. Is this an optimization problem? Or is there a strong argument against one or the other?

Comment by Naryan on Impact Investing - A Viable Option for EAs? · 2018-07-11T18:54:09.429Z · EA · GW

I agree that markets are inefficient, but believe that the inefficiency results in opportunities that are both worse than average and better than average. Since I suspect most investors under-value the social impact, this would result in impact investments that are more attractive than average to someone who does value the impact as well as the return.

Generally when I was looking to invest, I looked for options that I expected to outperform the market average at a set risk level, and I didn't assess social utility in that calculation (assuming I could donate the return more effectively, as you suggest). I'm not sure if this logically follows, but if my choice is between an effective charity and an impact investment, generally the effective charity would do more good. But if I'm considering my retirement fund, I believe the right impact investment could be better than a comparable equity investment - I just need to remember to include the social utility in my valuation.

Comment by Naryan on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-10T21:06:14.358Z · EA · GW

Fantastic post! It's a significant upgrade from the "terminal/instrumental values" mental model I was previously using.

When I first joined EA, I looked at the annual survey of EAs and was surprised to see so much variation in how EAs ranked the importance of the major causes. I thought that the group would be moving towards a consensus, and that each individual member would be able to trace their actions up towards their understanding of the most important causes.

Personally, I tried to build up my own understanding of the cause priority from strong foundations, doing my best to answer meta questions like "do I value all people equally" and "how do I weight animal suffering vs human happiness". From there, I worked my way down the V2ADC, trying to meta-analyze the research on causes, eventually coming to an area that I felt confident was the best place to add value.

I think with a bit more nuance, the EA survey could serve as a good feedback mechanism to see where on the chain we all see ourselves, and whether the sum of the parts adds up to anything resembling a consistent whole. Will the EA community end up converging in beliefs and strategy? Is it an elephant in the room to say that half of the people working on cause X ought to shift to cause Y because the people up the chain are confident that it is a better move for the community? Even if the exploratory folks at the bottom raised their evidence up the chain, would we have enough corrigibility to pivot? (Love that word, totally gonna use it more!)

Comment by Naryan on Open Thread #40 · 2018-07-10T11:01:10.010Z · EA · GW

This field is really interesting, and there is a lot of research out there on it. The Global Impact Investing Network (GIIN) is a good starting place, but I've spent about a week pulling together stats from several sources to build my view on this space, and the Canadian options in particular.

I do like World Tree in particular, because it produces high-impact social utility, has a high expected financial return, and I can actually buy in without being accredited. Unfortunately, for people with less than $1M, the options for impact investing are very slim at the moment.

Typical options include Green Bonds with a 4-5% return over 5 years, or investments in smaller community funds with a fairly small impact.

Check out a few Canadian options at OpenImpact

Comment by Naryan on Open Thread #40 · 2018-07-10T10:51:19.379Z · EA · GW

The key word is "safely". This kind of investment would be considered high risk - this company only started this program three years ago, and the first trees haven't yet produced profit. Additionally, the 10 year duration is unattractive for many investors, and there isn't really a market for this type of wood in North America yet. They need to offer a big reward in order to entice investors to fund their venture at this early stage.

I suspect other early stage ventures would have a similar high-risk, high potential return profile, which is why they are typically limited to accredited investors.

Comment by Naryan on Open Thread #40 · 2018-07-09T22:19:58.005Z · EA · GW

Impact Investing from an EA Perspective

This is just a teaser, since I don't have enough karma for a full post yet!

Picture a scale with charity on one side (good social utility, -100% financial return) and investing on the other (zero social utility, 7% financial return). Impact investing is a space that can give similar risk-adjusted market returns as traditional investments, but also provides social utility.

In my research, I've found several factors that make me excited about this area:

  • Impact investing is about 5% the size of charitable donations ($22B vs $410B in 2016), and is growing much faster (17% vs 4% annually)

  • Impact investing makes up only 0.16% of the total capital markets - huge room for growth

  • Philanthropic enterprises with sustainable business models can use existing capital markets to get funded on a large scale

  • Due to the market's current inability to accurately value the 'social utility' provided, there are many greatly under-valued investment opportunities that provide social utility similar to that of comparable charities

I've got more detail, logic, and sources in the full post, but in the meantime, I'll tell you about one example opportunity that I've zoomed in on.

WorldTree is a company that lets you buy an acre of fast-growing Empress Splendor trees. Its goal is to generate income from the harvest of the trees and offset the carbon footprint of investors:

  • $2500 CAD minimum investment, enough to plant 1 acre of trees
  • One acre is enough to offset your lifetime carbon footprint
  • The timber is sold after 10 years, conservative return to the investor is $20k

From an EA perspective, I compared the stated carbon cost of World Tree ($1.72/tonne) to Cool Earth ($1.34/tonne) and traditional carbon offset programs ($10/tonne). This investment could yield a 23% annual return, while the Cool Earth 'investment' would be a loss of 100%. On its surface, this example does look quite promising when counting both the social utility generated and the future utility my $20k could provide in 10 years' time.
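For anyone checking the arithmetic, here's a minimal sketch reproducing the headline figures; the only assumption I've added is that the $20k arrives as a single payout at year 10, with no interim cash flows:

```python
# Reproduces the figures above; assumes a single $20k payout at year 10.
investment = 2500      # CAD buy-in for one acre
payout = 20_000        # conservative estimated return after 10 years
years = 10

annual_return = (payout / investment) ** (1 / years) - 1
print(f"Implied annual return: {annual_return:.1%}")   # ~23.1%

# Stated carbon costs ($/tonne) for comparison
carbon_costs = {"World Tree": 1.72, "Cool Earth": 1.34, "Typical offset program": 10.00}
for name, cost in carbon_costs.items():
    print(f"{name}: ${cost:.2f}/tonne")
```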

Looking forward to posting a more detailed write-up on the space once I'm able, and to hearing your feedback on these ideas!