Posts

Greg_Colbourn's Shortform 2021-11-18T09:21:38.864Z
What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short? 2021-11-12T21:59:07.383Z
EA Hotel Fundraiser 2: Current guests and their projects 2019-02-04T20:41:18.823Z
EA Hotel with free accommodation and board for two years 2018-06-04T18:09:09.845Z

Comments

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-19T14:35:27.732Z · EA · GW

I don't think mathematics should be a crux. As I say below, it could be generalised to being offered to anyone whom a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem). Or perhaps “Fields Medalists, Nobel Prize winners in Physics, or recipients of equivalent prizes in Computer Science, or Philosophy[?], or Economics[?]”. And we could include additional criteria, such as being able to intuit what is being alluded to here. Basically, the idea is to headhunt the very best people for the job, using extreme financial incentives. We don't need to artificially narrow our search to one domain, but maths ability is a good heuristic as a starting point.

Comment by Greg_Colbourn on WilliamKiely's Shortform · 2021-11-18T19:39:19.078Z · EA · GW

Gift Cards are live now at https://www.tisbest.org/redefinegifting

Comment by Greg_Colbourn on Greg_Colbourn's Shortform · 2021-11-18T09:21:39.041Z · EA · GW

[Half-baked global health idea based on a conversation with my doctor: earlier cholesterol checks and prescription of statins]

I've recently found out that I've got high (bad) cholesterol, and have been prescribed statins. What surprised me was that my doctor said they normally wait until the patient has a 10% chance of heart attack or stroke in the next 10 years before they do anything(!) This seems crazy in light of the amount of resources put into preventing things with similar (or lower) risk profiles, such as Covid or road traffic accidents. Would reducing that threshold to, say, 5%* across the board (i.e. worldwide) be a low-hanging fruit? Say, by adjusting guidelines set at a high level. Or have I just got this totally wrong? (I've done ~zero research, apart from searching givewell.org for "statins", which didn't turn up anything relevant.)

*my risk is currently at 5%, and I was pro-active about getting my blood tested.

Comment by Greg_Colbourn on What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short? · 2021-11-14T10:19:25.534Z · EA · GW

I think the main problem is that you don't know for sure that they're close to AGI, or that it is misaligned, beyond saying that all AGIs are misaligned by default, and what they have looks close to one. If they don't buy this argument -- which I'm assuming they won't, given they're otherwise proceeding -- then you probably won't get very far.

As for using force (let's assume this is legal/governmental force), we might then find ourselves in a "whack-a-mole" situation, and how do we get global enforcement (/cooperation)?

Comment by Greg_Colbourn on What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short? · 2021-11-13T21:15:16.279Z · EA · GW

Imagine it's just the standard AGI scenario where the world ends "by accident", i.e. the people making the AI don't heed the standard risks, or solve the Control Problem, as outlined in books like Human Compatible and Superintelligence, in a bid to be first to make AGI (perhaps for economic incentives, or perhaps for your ** scenario). I imagine it will also be hard to know who exactly the actors are, but you could have some ideas (e.g. the leading AI companies, certain governments, etc.).

Comment by Greg_Colbourn on What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short? · 2021-11-13T16:58:44.109Z · EA · GW

Ok, changed to $10B.

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-12T06:42:45.233Z · EA · GW

Good idea about the fellowship. I've been thinking that it would need to come from somewhere prestigious. Perhaps CHAI, FLI or CSER, or a combination of such academic institutions? If it was from, say, a lone crypto millionaire, they might risk being dismissed as a crackpot, and by extension risk damaging the reputation of AGI Safety. Then again, perhaps the amounts of money just make it too outrageous to fly in academic circles? Maybe we should be looking to something like sports or entertainment instead? Compare the salary to that of e.g. top footballers or musicians. (Are there people high up in these fields who are concerned about AI x-risk?)

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T18:43:33.943Z · EA · GW

Yes, the concern is optimisation during training. My intuition is along the lines of "sufficiently large pile of linear algebra with reward function -> basic AI drives maximise reward -> reverse engineers [human behaviour / protein folding / etc] and manipulates the world so as to maximise its reward -> [foom / doom]".

I wouldn't say "personality" comes into it. In the above scenario the giant pile of linear algebra is completely unconscious and lacks self-awareness; it's more akin to a force of nature, a blind optimisation process.

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T16:16:22.830Z · EA · GW

It explains how a GPT-X could become an AGI (via world modelling). I think then things like the basic drives would take over. However, maybe it's not the end-result model that we should be looking at as dangerous, but rather the training process? An ML-based (proto-)AGI could do all sorts of dangerous (consequentialist, basic-AI-drives-y) things whilst trying to optimise for performance in training.

Comment by Greg_Colbourn on How many people should get self-study grants and how can we find them? · 2021-11-11T15:28:31.950Z · EA · GW


My original idea (quote below) included funding people at equivalent costs remotely. Basically no one asked about that. I guess because not many EAs have that low a living cost (~£6k/yr). And not that many could without moving to a different town (or country), and there isn't much appetite for that / coordination is difficult. 

Maybe we need a grant specifically for people to work on research remotely that has a higher living cost cap? Or a hierarchy of such grants with a range of living costs that are proportionally harder to get the higher the costs are.

For consistency, grants should be given in general to any EA whose living costs are low enough... Providing grants only at low levels of living costs [can be] a better investment for the donor, but it also leaves a gap for people who don’t want to - or aren’t in a position to - radically alter their lifestyles but still need their living costs covered in order to do (more) useful EA things. Given that EA (as a movement) seems to have plenty of cash at the moment, perhaps there is room for this space to grow. I can envisage a hierarchy where the bigger grants have more stringent demands and more competition. I guess this is just the current non-profit (and for profit!) landscape, but for individuals instead of organisations.

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T14:54:37.631Z · EA · GW

Here is an argument for how GPT-X might lead to proto-AGI in a more concrete, human-aided, way: 

...language modelling has one crucial difference from Chess or Go or image classification. Natural language essentially encodes information about the world—the entire world, not just the world of the Goban, in a much more expressive way than any other modality ever could.[1] By harnessing the world model embedded in the language model, it may be possible to build a proto-AGI.

...

This is more a thought experiment than something that’s actually going to happen tomorrow; GPT-3 today just isn’t good enough at world modelling. Also, this method depends heavily on at least one major assumption—that bigger future models will have much better world modelling capabilities—and a bunch of other smaller implicit assumptions. However, this might be the closest thing we ever get to a chance to sound the fire alarm for AGI: there’s now a concrete path to proto-AGI that has a non-negligible chance of working.

Crossposted from here

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T14:49:15.485Z · EA · GW

Steve Omohundro 

...Google and others are using Mixture-of-Experts to avoid some of that cost: https://arxiv.org/abs/1701.06538

Matrix multiply is a pretty inefficient primitive and alternatives are being explored: https://arxiv.org/abs/2106.10860

These stand out for me as causes for alarm. Anything that makes ML significantly more efficient as an AI paradigm seems like it shortens timelines. Can anyone say why they aren't cause for alarm? (See also)

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T14:05:20.551Z · EA · GW

Eliezer Yudkowsky 

...Throwing more money at this problem does not obviously help because it just produces more low-quality work

Maybe you're not thinking big enough? How about offering the world's best mathematicians (e.g. Terence Tao) a lot of money to work on AGI Safety. Say $5M to work on the problem for a year. Perhaps have it open to any Fields Medal recipient. (More)
 

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T13:40:40.557Z · EA · GW

I think the issue is more along the lines of the superhuman-but-not-exceedingly-superhuman AGI quickly becoming an exceedingly-superhuman AGI (i.e. a Superintelligence) via recursive self-improvement (imagine a genius able to think 10 times faster, then using that time to make itself think 1,000 times faster, etc.). And AGIs should tend toward consequentialism via convergent instrumental goals (e.g.).

Or are you saying that you expect the superhuman France/China/Facebook AGI to remain boxed?

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T13:23:16.657Z · EA · GW

Has anyone tried GPT3-ing this to see if it comes up with any interesting ideas?

Comment by Greg_Colbourn on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T13:18:46.077Z · EA · GW

Deepmind would have lots of penalty-free affordance internally for people to not publish things, and to work in internal partitions that didn't spread their ideas to all the rest of Deepmind.

Companies like Apple and Dyson operate like this (keeping their IP tightly under wraps right up until products are launched). Maybe they could be useful recruiting grounds?

Comment by Greg_Colbourn on Make a $100 donation into $200 (or more) · 2021-11-10T21:32:07.552Z · EA · GW

CEEALAR now has its own page, separate from PPF (our US fiscal sponsor, which donations are still routed through). The benefit of this is that donations can be matched separately (with the PPF fundraisers, you can only get a match for one of them).

Comment by Greg_Colbourn on How many people should get self-study grants and how can we find them? · 2021-11-10T17:34:08.211Z · EA · GW

(Or, if it's better for EA to stay smaller, then more dedicated self-starting __s should be funded to work on __ full-time.)

Comment by Greg_Colbourn on How many people should get self-study grants and how can we find them? · 2021-11-10T17:32:40.710Z · EA · GW

I imagine A < B in terms of numbers of people; and B ≈ C, given you are pre-selecting for "self-starting EAs". I think just being dedicated enough to EA to want to spend a year working on it full time is a reasonably strong signal that you would have something to contribute, given that dedication seems to require a strong understanding in the case of EA. And self-starting + dedication + a strong understanding + working full-time on trying to have impact at the margin should = impact at the margin.

Obviously there is then the important detail of how big the impact is, relative to the salary. CEEALAR tries to keep costs to a minimum as a way of raising this ratio, but it's plausible that much higher salaries (grants) could produce more impact/$. 

I think more dedicated self-starting EAs should be funded to work on EA full-time. Bs can be identified by offering grants and seeing who applies (this is already happening to some degree, but could be expanded). Once identified, we vet them to try and estimate whether it's cost effective to throw a year's salary at them.

Comment by Greg_Colbourn on Forecasting transformative AI: the "biological anchors" method in a nutshell · 2021-11-10T17:10:27.814Z · EA · GW

with the exception of "mixture-of-experts models" that I think we should disregard for these purposes, for reasons I won't go into here

This is taken from a footnote. On clicking the link and reading the abstract, it immediately jumped out to me as something we should potentially be quite concerned about (i.e. the potential to scale models by ~1000x using the same compute!), so I'm curious about the reasons for disregarding it that you didn't go into in the post. Can you go into them here?

Using the "Cited by" feature on Google Scholar, I've found some more recent papers, which give the impression that progress is being made with mixture-of-experts models (which could potentially dramatically speed up timelines?). Also, naively, is this not kind of how the brain works? Different subsets of neurons (brain areas) are used for different tasks (vision, hearing, memory, etc.). Emulating this with ML models seems like it would be a huge step forward for AI.

Comment by Greg_Colbourn on How to make the best of the most important century? · 2021-11-10T13:50:50.934Z · EA · GW

Spreading ideas and building communities.

Holden, have you considered hosting seminars on the Most Important Century? (And incentivising important people to attend?) I've outlined this idea here.

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-09T11:03:47.116Z · EA · GW

(Sorry if some of my ideas are fairly big budget, but EA seems to have quite a big budget these days)

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-09T11:03:29.695Z · EA · GW

Maybe if such a thing is successfully pulled off, it could be edited into a documentary TV series, say with clips from each week's discussion taken from each of the groups, and an overarching narrative in interludes with music, graphics, stock footage (plane on runway) etc.

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-09T10:44:01.553Z · EA · GW

Note there have already been Most Important Century seminars hosted. I missed this one. Would be interested to hear how it went.

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-09T10:42:36.557Z · EA · GW

Are there any prior examples of this kind of thing? (I haven't found any with a quick search.)

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-09T10:40:19.011Z · EA · GW

Introduce important people* to the most important ideas by way of having seminars they are paid to attend. I recommend Holden Karnofsky’s Most Important Century series for this as it is highly engaging, very readable, and has many jumping-off points to go into more depth; but other things are also very good. The format could be groups of 4, with a moderator (and optionally the authors of the pieces under discussion on the sidelines to offer clarity / answer questions). It could be livestreamed for accountability (on various platforms to diversify audience). Or not livestreamed for people who care about anonymity, but under the condition that anonymised notes of the conversations are made publicly available. And it should be made clear that the fee is equivalent to a “speaker’s fee” and people shouldn’t feel obliged to “toe the party line”, but rather speak their opinions freely. The fee should be set at a level where it incentivises people to attend (this level could be different for different groups). In addition to (or in place of) the fee, there could be prestige incentives like having a celebrity (or someone highly respected/venerated by the particular group) on the panel or moderating, or hosting it at a famous/prestigious venue. Ideally, for maximum engagement with the ideas and ample time for discussion, the seminar should be split over multiple weeks (say one for each blog post), but I understand this might be impractical in some cases given the busy schedules important people tend to have.

(Note that this is in a similar vein to another one of my ideas, but at a much lower level.)

*important people would include technical AI/ML experts, AI policy experts, policy makers in general, and public intellectuals; specifically those who haven’t significantly engaged with the ideas already.

Comment by Greg_Colbourn on EA Infrastructure Fund: Ask us anything! · 2021-11-07T14:54:17.191Z · EA · GW

"can we build a structure that allows separation between, and controlled flow of talent and other resources, different subcommunities?"

Interesting discussion. What if there was a separate brand for a mass movement version of EA?

Comment by Greg_Colbourn on What high-level change would you make to EA strategy? · 2021-11-07T14:51:43.864Z · EA · GW

Some related discussion here.

Comment by Greg_Colbourn on What high-level change would you make to EA strategy? · 2021-11-07T13:51:07.785Z · EA · GW

I've sometimes wondered whether it would be good for there to be a distinct brand and movement for less hardcore EA, one that is less concerned with prestige, less elitist, more relaxed, and with more mainstream appeal. Perhaps it could be thought of as the Championship to EA's Premier League. I think there are already examples, e.g. Probably Good (alternative to 80,000 Hours), TLYCS and OFTW (alternatives to GWWC), and the different tiers of EA investing groups (rough and ready vs careful and considered). Places where you feel comfortable only spending 5 minutes editing a post, rather than agonising about it for hours; where you feel less pressure to compete with the best in the world; where you are less prone to analysis paralysis or perfect being the enemy of the good; where there is less stress, burnout and alienation; where ultimately the area under the impact curve could be comparable, or even bigger...? Perhaps one of the names mentioned here could be used.

[Note I expect pushback on this, and considered posting anonymously, but I'm posting in the spirit of the potential broader movement. Apologies if I've offended anyone by insinuating they are "only" Championship material. That was not my intention - the Championship is still a very high standard in absolute terms!]

Comment by Greg_Colbourn on List of EA-related organisations · 2021-11-07T12:42:35.090Z · EA · GW

Given the EA funding saturation situation, it would be great to see this list include room for more funding (RFMF), or link to a spreadsheet similar to GiveWell's (that Ben Todd tweeted about), but for all EA-related orgs.

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-04T12:48:01.240Z · EA · GW

True. But maybe the limiting factor is just the consideration of such ideas as a possibility? When I was growing up, I wanted to be a scientist, liked space-themed Sci-Fi, and cared about many issues in the world (e.g. climate change, human rights); but I didn't care about having or wanting money (in fact I mostly thought it was crass), or really think much about it as a means to achieving ends relating to my interests. It wasn't until reading about (proto-)EA ideas that it clicked.

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-04T08:08:15.174Z · EA · GW

Interesting. I wonder: many people say they aren't motivated by money, but how many of them have seriously considered what they could do with it other than personal consumption? And how many have actually been offered a lot of money -- to do something different to what they would otherwise do, that isn't immoral or illegal -- and turned it down? What if it was a hundred million, or a billion dollars? Or, what if the time commitment was lower - say 6 months, or 3 months?

Comment by Greg_Colbourn on Liberty in North Korea, quick cost-effectiveness estimate · 2021-11-03T14:21:53.425Z · EA · GW

Note that LINK is on every.org, so for a short time you can get a donation up to $100 matched (and double your cost-effectiveness estimate).

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-03T12:41:11.910Z · EA · GW

This could be generalised to being offered to anyone whom a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem).

Comment by Greg_Colbourn on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-03T12:32:54.002Z · EA · GW

Offer the world's best mathematicians (e.g. Terence Tao) a lot of money to work on AGI Safety. Say $5M to work on the problem for a year. Perhaps have it open to any Fields Medal recipient.

I imagine that they might not be too motivated by personal consumption, but with enough cash they could forward goals of their own. If they'd like more good math to be done, they could use the money to offer their own scholarships, grants, and prizes, or found their own institutes. (If $5M isn't enough -- I note Tao at least has already won $millions in prizes -- I imagine there is enough capital in the community to raise a lot more. Let them name their price.)

[Previously posted as a comment on MIRI’s Facebook page here.]

Comment by Greg_Colbourn on Make a $100 donation into $200 (or more) · 2021-11-02T14:10:17.231Z · EA · GW

EDIT: this didn't work (the transfer got reverted from Wise back to Revolut). 

It might be possible to use Wise (formerly TransferWise). I've done it; it seems to have gone through on every.org but not left my Wise account yet. First you have to add cash to your USD balance (I did this using Revolut to minimise conversion losses), then you get a routing number and account number on the "Inside the US" tab of "Your USD account details"; you can then use this to add the bank account to every.org (which takes a few hours; you need to confirm the amounts of a couple of small test transactions they send).

Comment by Greg_Colbourn on Make a $100 donation into $200 (or more) · 2021-11-01T20:18:20.501Z · EA · GW

New additions since the start of the match: ALLFED and CEEALAR
 

Comment by Greg_Colbourn on Make a $100 donation into $200 (or more) · 2021-11-01T16:34:33.131Z · EA · GW

The $100 for sharing has been dropped to $10. See update here.

Comment by Greg_Colbourn on Make a $100 donation into $200 (or more) · 2021-11-01T16:28:32.530Z · EA · GW

Your profile is set to private, so I can only see it if I'm following you (I've sent a follow request).

Comment by Greg_Colbourn on Make a $100 donation into $200 (or more) · 2021-11-01T16:25:35.763Z · EA · GW

The funds clearly aren't going to last a month. I think another 24-48 hours would be good going given the trajectory. I guess they haven't factored in EAs being attracted to it like moths to a (heart-filamented) lightbulb :)

Comment by Greg_Colbourn on List of EA funding opportunities · 2021-11-01T14:47:23.261Z · EA · GW

CEEALAR:  "We make grants to individuals and charities in the form of providing free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses, at our hotel in Blackpool, UK."

Comment by Greg_Colbourn on [Creative Writing Contest] All the People You Could Come to Love · 2021-10-27T08:06:33.974Z · EA · GW

I liked the story, thanks for writing it! One thing about the nuclear calculations of the US that I am confused about though - surely under any scenario with the satellite-destroying weapon there would be Mutually Assured Destruction? i.e. as soon as the US shot down the satellites, China would launch their nuclear arsenal. They likely would not wait to find out what happened. They may even have a "Dead Man's Switch" already rigged for the satellite network. That the US didn't see this (at least initially, as is the premise) seemed a little unrealistic. Although I guess in real life governments often do crazy things in high-stakes situations (cf. Covid).

Comment by Greg_Colbourn on The humbling art of catching golden fish · 2021-10-14T07:03:06.291Z · EA · GW

The seeds of EA are there, but are very subtle: thinking about how best to catch the fish; Sun's concern for the worms and the fish.

Maybe it would be good to have another section where Barlow is thinking about his life and applies those lessons to it and vows to do more effective/altruistic things? And maybe he even renounces fishing the next summer? (It's kind of ironic that the "evidence and reasoning" part was related to an activity directly at odds with a major EA cause area. Maybe Barlow could recognise this.)

Comment by Greg_Colbourn on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-03-28T22:40:01.253Z · EA · GW

Started watching Next. Think it's great and will recommend people watch it if they want to understand what the big deal is with AI safety/alignment. However, it's frustrating for UK viewers - Episodes 1-3 are available on Disney+, and Episodes 6-10 are available elsewhere, but where are episodes 4 & 5!? Will try YouTube TV with a VPN..

Comment by Greg_Colbourn on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-03-28T22:33:45.374Z · EA · GW

I thought Seaspiracy was great - I started watching it without realising what it was, and it started with the filmmaker wanting to make a documentary about the oceans, then getting concerned about plastic waste (e.g. straws and bottles), and then it just kept going as he went down the rabbit hole. Seemed like a very EA kind of progression :)

Comment by Greg_Colbourn on [deleted post] 2021-03-17T19:51:49.356Z

I realise that. But I wouldn't be surprised if the median household in the developed world had at least one spare room (this was one of the reasons why the "bedroom tax" was so unpopular in the UK).

Comment by Greg_Colbourn on Opportunity for EA orgs: $5k/year in ETH (tech setup required) · 2021-03-17T11:46:04.646Z · EA · GW

Great, thanks. Do they accept UK charities? We are potentially interested at CEEALAR.

Comment by Greg_Colbourn on [deleted post] 2021-03-16T15:59:30.747Z

Great to see the write-up of expenditure!

Housing: this is kind of pretend, because we actually built two extra rooms onto our house, which had a high up-front cost but will be useful for years and will eventually make the house sell for more. I’m instead substituting the cost at which we currently rent our spare bedroom ($900/month times 2 bedrooms)

I think it's unusual for people to rent out their spare rooms, and it's good that you have done so and provided yourselves with more income/reduced your living costs. By that metric I imagine that many people (especially home owners) have higher housing costs than they think. Maybe EAs are more likely to think about this and maximise the efficiency of their housing. But at the limit, every loft, basement and garage not converted is counterfactual lost earnings. Or, indeed, you could say that real estate investment in general is profitable, and people should do more of it. But then so are other things. So any profits "left on the table" through suboptimally investing money are also potential "costs"... (and then here things get tricky, in determining what the optimal investments are. And we're pretty much back to the foundation of EA! Optimal allocation of resources).
 

[As I've said elsewhere in this thread, I don't think children are a special case of expensive. They are one of several things that can be expensive (see also: location, career choice, suboptimal investment, tastes, hobbies), and for most people, who aren't already maximising their financial efficiency (frugality; investments), it's a matter of prioritisation as to the relative expense of having them.]

Comment by Greg_Colbourn on [deleted post] 2021-03-16T11:26:53.255Z

why not do so without kids and get roommates to save costs instead? Or rent a smaller place in Manchester?

Indeed. I would recommend that for anyone trying to be frugal so they can save/donate more (especially if they can work remotely). My point is, however, that unless you are already living a maximally frugal lifestyle, it's possible to reduce your living costs in other areas such that having children needn't be financially expensive. Children aren't necessarily a special case of "expensive living costs". It's ultimately a matter of prioritisation.

Comment by Greg_Colbourn on Opportunity for EA orgs: $5k/year in ETH (tech setup required) · 2021-03-15T12:32:18.243Z · EA · GW

Who is behind it? I can't see any names attached, and a charity would usually need names for due diligence before being able to accept donations of that size. I guess it's OK if they want public anonymity, as long as they will reveal names privately.