Posts

Free money from New York gambling websites 2022-01-24T22:50:06.381Z

Comments

Comment by Robi Rahman (robirahman) on Anti-squatted AI x-risk domains index · 2022-08-12T18:14:21.452Z · EA · GW

I'm slapping myself on the forehead for not thinking of this earlier, especially after seeing what happened to ea.org. We should do this for other cause areas too. And some funder should give you a retroactive grant for this or buy the domains from you.

Comment by Robi Rahman (robirahman) on Why does no one care about AI? · 2022-08-08T15:01:13.204Z · EA · GW

You think there's an x-risk more urgent than AI? What could be? Nanotech isn't going to be invented within 20 years; there aren't any asteroids about to hit the earth; climate tail risks only come into effect next century; and deadly pandemics or supervolcanic eruptions are inevitable on long timescales but aren't common enough to be the top source of risk in the time until AGI is invented. The only way anything is more risky than AI within 50 years is if you expect something like a major war leading to usage of enough nuclear or biological weapons that everyone dies, and I really doubt that's more than 10% likely in the next half century.

Comment by Robi Rahman (robirahman) on Why does no one care about AI? · 2022-08-08T14:45:26.337Z · EA · GW
  1. No argument about AGI risk that I've seen argues that it affects the underprivileged most. In fact, arguments emphasize how every single one of us is vulnerable to AI and that AI takeover would be a catastrophe for all of humanity. There is no story in which misaligned AI only hurts poor/vulnerable people.

You're misunderstanding something about why many people are not concerned with AGI risks despite being sympathetic to various aspects of AI ethics. No one concerned with AGI x-risk is arguing it will disproportionately harm the underprivileged. But current AI harms are from things like discriminatory criminal sentencing algorithms, so a lot of the AI ethics discourse involves fairness and privilege, and people concerned with those issues don't fully appreciate that misaligned AGI 1) hurts everyone, and 2) is a real thing that very well might happen within 20 years, not just some imaginary sci-fi story made up by overprivileged white nerds.

There is some discourse around technological unemployment putting low-skilled employees out of work, but this is a niche political argument that I've mostly heard from proponents of UBI. I think it's less critical than x-risk, and if artificial intelligence gains the ability to do diverse tasks as well as humans can, I'll be just as unemployed as a computer programmer as anyone else is as a coal miner.

Comment by Robi Rahman (robirahman) on What work has been done on the post-AGI distribution of wealth? · 2022-07-06T21:14:38.266Z · EA · GW

Maybe you're asking about intra-country wealth distributions (like: how will AI affect each country's Gini coefficient?), but I've heard that ETIRI at the World Bank has done some research on how AI will affect wealth, unemployment, and trade from an international standpoint. Let me see if I can find some links to their work.

Comment by Robi Rahman (robirahman) on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-03T02:36:47.982Z · EA · GW

There are tons of people vaguely considering working on alignment, and not a lot of people actually working on alignment.

Comment by Robi Rahman (robirahman) on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-10T19:14:26.402Z · EA · GW

Sorry, yes, didn't mean to imply Charles He was only talking about catering. I was just using that as an example of EAs following vegan diets in a way that costs more money, as opposed to costlessly. This post by Jeff Kaufman is relevant, https://www.jefftk.com/p/two-kinds-of-vegan :

"Go vegan!", you hear, "it's cheaper, more environmentally sustainable, and just as healthy and delicious!" The problem is, these aren't all true at the same time.

Comment by Robi Rahman (robirahman) on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-10T18:10:01.893Z · EA · GW

The specific points being made in those quotations aren't mutually exclusive. Onni Aarne is saying you can make very inexpensive adjustments to your diet that greatly reduce animal suffering, and Charles He is saying that EA events spend extra money on catering to satisfy the constraint of making it vegan. I think both claims are correct.

Comment by Robi Rahman (robirahman) on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-10T15:48:31.543Z · EA · GW

The two are disanalogous from an offsetting perspective: Eating (factory farmed) animal products relatively directly results in an increase in animal suffering, and there is nothing that you can do to "undo" that suffering, even if you can "offset" it by donating to animal advocacy orgs. By contrast, if you cause some emissions and then pay for that amount of CO2 to be directly captured from the atmosphere, you've not harmed a single being.

This is only reasonable if you believe that causing x units of suffering and then preventing x units of suffering is worse than causing 0 suffering and allowing the other x units of suffering to continue. Actually, it's probably wrong even with that premise. Suppose Alice spends a day doing vegan advocacy, so that ten people who would have each eaten one hamburger don't eat them, but then she goes home and secretly eats ten hamburgers while no one is watching. Meanwhile, Brian leaves his air conditioner running while he's away from home, emitting 1 ton of CO2, then realizes his mistake, feels guilty, and buys 1 ton of carbon offsets. In either case, there's no more net harm than if both of them had done none of these actions, but according to your argument, Alice's behavior is worse than Brian's? Personally I consider these harms fungible and therefore Alice was net zero even if the hamburgers she ate came from a different cow than the ones the other people would've eaten.

Comment by Robi Rahman (robirahman) on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-10T15:30:57.385Z · EA · GW

[Objection] from Robi Rahman: “A person choosing to eat 1kg less chicken results in 0.6 kg less expected chicken produced in the long run, which averts 20 days of chicken suffering. A comparable sacrifice would be to turn off your air conditioning for 3 days, which in expectation reduces future global warming by 10^(-14) °C and reduces suffering by zero.”

Without quibbling with the precise numbers, I think this is fundamentally a point about the importance of the two cause areas.

Actually, what I meant was fundamentally not a point about the importance of either cause area. I think that even if total harms from climate are greater than total harms from factory farming, the marginal harm reduction from changing individual behavior on diet is probably greater than the marginal harm reduction from changing personal energy consumption. I still think you're right overall that individual action on animal welfare is over-emphasized relative to individual action on climate or political/technology interventions on animal welfare, but this is one possible justification for the behavior of a lot of EAs I've met who put lots of effort into changing their diet but none into reducing their energy usage.
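To make the quoted comparison concrete, here is the chicken-side arithmetic as a sketch; the 0.6 supply-elasticity factor and the suffering-days-per-kg figure are illustrative assumptions chosen to match the quoted numbers, not measured values:

```python
# The cumulative supply elasticity (0.6) and the suffering-days-per-kg
# figure (33) are illustrative assumptions, not measured values.

def chicken_suffering_averted(kg_not_eaten, elasticity=0.6, suffering_days_per_kg=33):
    """Expected days of chicken suffering averted by eating less chicken."""
    kg_not_produced = kg_not_eaten * elasticity   # long-run supply response
    return kg_not_produced * suffering_days_per_kg

print(round(chicken_suffering_averted(1.0)))  # 20 days averted per kg skipped
```

The climate-side counterpart (3 days without air conditioning averting ~10^-14 °C of warming) rounds to zero marginal suffering reduction, which is the asymmetry the quote is pointing at.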

Comment by Robi Rahman (robirahman) on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-10T15:11:09.109Z · EA · GW

However, whether this means EAs are not allocating their resources appropriately requires consideration of the marginal cost-effectiveness of: [1, 2, 3, 4, 5]

Actually, it doesn't require knowing all of those! If you find that among two of those options, one is more cost-effective than the other, but resources are going to the less effective one, you already know the overall allocation is suboptimal (even though the optimal allocation is probably some entirely different option).

That's a main point of this essay, which I think is underappreciated throughout EA: we put more effort into reducing harms from dietary animal product consumption than can be justified on a consequentialist basis relative to how little we emphasize individual actions on climate change and policy/technological interventions for animal welfare.

Comment by Robi Rahman (robirahman) on The Role of Individual Consumption Decisions in Animal Welfare and Climate are Analogous · 2022-06-10T15:04:01.684Z · EA · GW

I question your economic analysis of meat-eating:

your reduction in demand for meat makes them cheaper for others, which will lead some to increase in their consumption...

Following this logic the amount of consumption of any given non-essential good would never change.

You've misunderstood the line you quoted. It's only saying that other people's meat consumption will increase by some fraction of the amount you've reduced your consumption, not that people will increase their consumption by however much you reduce yours.

Comment by Robi Rahman (robirahman) on Who wants to be hired? (May-September 2022) · 2022-05-28T16:14:35.703Z · EA · GW

Intermediary German, Basic Spanish & Russian

Intermediate, not intermediary.

Comment by Robi Rahman (robirahman) on Who wants to be hired? (May-September 2022) · 2022-05-28T16:14:10.457Z · EA · GW

Location: Boston, Massachusetts

Remote: yes

Willing to relocate: yes

Skills: computer programming, databases, machine learning, statistics

Resume: https://www.linkedin.com/in/robirahman/

Email: robirahman94@gmail.com

Comment by Robi Rahman (robirahman) on St. Petersburg Demon – a thought experiment that makes me doubt Longtermism · 2022-05-24T21:21:49.785Z · EA · GW

Have you heard of this idea? Unique entity ethics: http://allegedwisdom.blogspot.com/2021/05/unique-entity-ethics-solves-repugnant.html

Comment by Robi Rahman (robirahman) on Introducing the ML Safety Scholars Program · 2022-05-20T00:42:06.612Z · EA · GW

Someone referred me to apply to be a TA for this program. How would you like such people to contact you - should I email you, or is there another form for that?

Comment by Robi Rahman (robirahman) on Norms and features for the Forum · 2022-05-16T14:12:28.199Z · EA · GW

Ah, you're right, I misinterpreted it since the epistemic status suggestion said time per post and that one didn't.

Comment by Robi Rahman (robirahman) on Norms and features for the Forum · 2022-05-16T13:51:29.473Z · EA · GW

Speculative: NLP for claim detection: the site asks you for your probabilities about the main claims. Time cost: 30 mins.

You think it'd take only 30 minutes to implement a feature that detects claims in forum posts? I'm not a web developer but that strikes me as wildly optimistic.

Comment by Robi Rahman (robirahman) on Why Helping the Flynn Campaign is especially useful right now · 2022-05-11T13:26:21.021Z · EA · GW

If you think pandemic response is the key issue, Dr. Harder is a highly experienced doctor who used to run the Oregon Medical Board. Medical and policy experience: maybe you still think your guy will be better, but by how much?

The FDA has hundreds of highly experienced doctors and still had such a disastrous response to the pandemic that they probably caused millions of extra deaths. They completely blocked challenge trials and delayed vaccine deployment by six months. What matters is not whether the people in government are doctors; it's the policies on how the government behaves when an important problem arises. And crucially, the key issue isn't pandemic response, it's pandemic prevention. Carrick Flynn is the only congressional candidate I know of who's running on that.

Comment by Robi Rahman (robirahman) on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-09T21:17:55.704Z · EA · GW

Mormonism is an obvious example of a religion that people join because Mormons have well-functioning families? I'm skeptical that's a main reason for the growth of Mormonism compared to their high birthrate or their amount of missionary effort.

Comment by Robi Rahman (robirahman) on EA needs a hiring agency and Nonlinear will fund you to start one · 2022-04-19T00:17:03.631Z · EA · GW

Did anyone ever end up creating a hiring agency?

Comment by Robi Rahman (robirahman) on What are the strongest arguments against working on existential risk? (EA Librarian) · 2022-03-11T04:23:37.785Z · EA · GW

Even a comparatively low pure discount rate of 1% implies most future value is concentrated in the next hundred years

This is not correct! Suppose the human population grows at a constant rate for 1000 years. If you discount the moral worth of future people by 1% per year, but the growth rate is anything above 1%, most of the value of humanity is concentrated in the last hundred years, not the first hundred years.

There's this very surprising, maybe counterintuitive moral implication of cosmopolitanism where if you think future people have moral value and you believe in discount rates of 1-3%, you should basically disregard any present-day considerations and make all of your decisions based solely on how they affect the distant future, but if you use a discount rate of 5%, you should help one person today rather than a billion trillion people a thousand years from now.[1]

  1. ^

    https://www.wolframalpha.com/input?i=1000000000000000000000*0.95%5E1000.0
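A minimal sketch of the growth-versus-discount point, with an illustrative 2% annual population growth rate (the growth figure is my assumption for the example):

```python
# Each year's population carries moral weight (growth factor)^t, discounted
# by (discount factor)^t. The 2% growth rate is an illustrative assumption.
growth, discount, years = 1.02, 0.99, 1000

weights = [(growth * discount) ** t for t in range(years)]
first_century, last_century = sum(weights[:100]), sum(weights[-100:])
print(last_century > first_century)   # True: value concentrates at the end

# Footnoted 5% case: a billion trillion people in 1000 years are worth
# less than one person today.
print(1e21 * 0.95 ** 1000 < 1)        # True (the product is about 0.05)
```

Whenever the growth rate exceeds the discount rate, each successive year's weight is larger than the last, so the final century dominates; flip the inequality and the present dominates overwhelmingly.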

Comment by Robi Rahman (robirahman) on Some thoughts on vegetarianism and veganism · 2022-02-16T01:30:49.185Z · EA · GW

I'm strongly in favor of 'welfaretarianism'! It's been my diet* for a few years now and I'm really glad you invented a name for it. I've been telling people for ages that I agree you shouldn't eat animals that suffer while farmed if it causes more of them to exist, but people don't really internalize the logical conclusion of this, that it's good to eat animals if it causes happy animals to exist (assuming you don't subscribe to negative utilitarianism or the person-affecting view) or existing animals to become happier. Hypothetically, if it were more profitable to sell meat from happy chickens than from battery-cage chickens, all factory farms would switch over to raising happy chickens, though this will probably never happen due to costs and I don't think consumers are willing to pay that much more.

*I don't actually eat any meat from happily-farmed animals because I don't know how you would find such a thing, but I'd be willing to eat it if it existed. In practice this resulted in me going from omnivore to lacto-vegetarian by cutting out meat products in order of most to least suffering per calorie.

Comment by Robi Rahman (robirahman) on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2022-01-28T14:11:45.711Z · EA · GW

I'm guessing they went to Colorado because they were on the west coast and it was the closest state with legal sports betting.

Comment by Robi Rahman (robirahman) on Dismantling Hedonism-inspired Moral Realism · 2022-01-28T00:14:45.459Z · EA · GW

Thanks for writing this sequence. I have gotten some weird looks in my discussion group last year for not being a moral realist - wish I'd had this link handy back then!

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-28T00:12:08.210Z · EA · GW

EV is the same, you're just reducing volatility (risk is maybe a better word?) by guaranteeing the outcome either way.

Oh, I see. Yes, this is what I thought - the EV doesn't change but you can reduce your exposure to particular outcomes.

Comment by Robi Rahman (robirahman) on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2022-01-27T22:59:03.726Z · EA · GW

I'm glad you didn't see my post until now! I am a bit lazy and your writeup is a little more detailed than mine :) plus, I didn't emphasize enough that this is available to people outside of New York, although I think the recent NY legalization is why it is so especially lucrative right now.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-26T14:38:26.179Z · EA · GW

To my knowledge, you are generally right. The situation is exceptional this month because the market was just legalized and these sites are trying to grab market share while it's brand new.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-26T14:32:54.644Z · EA · GW

Yeah, here are some examples. I was in NYC from 5pm Saturday to 7pm Sunday.

I signed up for the BetRivers match promo. They asked me to verify my identity by uploading a photocopy of my driver's license (I used my passport, which was accepted). Upon depositing $250, they gave me $250 in matching funds. The soonest upcoming sports game was the Kansas City Chiefs vs the Buffalo Bills, which was at -110 moneyline odds (implying the sportsbook rated the Chiefs roughly 52% favorites). FiveThirtyEight had them as 65% favored, so I bet on the Chiefs. They won, so I got a $200 payout. I withdrew $700 (my original $250, the matching $250, and the bet payout). If the Chiefs had lost, I would've withdrawn my original $250 and quit. Definitely don't gamble your own money, that's how they get you.

I also used the Caesars promo to bet on the San Francisco 49ers vs Green Bay Packers game, that the total score would be under 47.5 and the 49ers would lose by less than 3.5 points. This required a $1500 deposit and no document verification. The final score was SF 13-10 GB so I won $3967.

The total amount of money I had to temporarily deposit for these two was $1750. As you say, it's usually a deposit equal to the bonus amount. You can do them all serially if you only have $1500, or simultaneously if you have $6000 to spare and want to finish the process quickly. And you are right, I only screenshotted five; I'll edit that.
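For anyone unfamiliar with American odds notation, here's the generic conversion behind quotes like -110 (this is the standard formula, not anything specific to these sportsbooks):

```python
def implied_probability(american_odds):
    """Win probability implied by an American moneyline quote (vig included)."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

def winnings(stake, american_odds):
    """Profit (excluding returned stake) on a winning bet."""
    if american_odds < 0:
        return stake * 100 / -american_odds
    return stake * american_odds / 100

print(round(implied_probability(-110), 3))  # 0.524
print(winnings(220, -110))                  # 200.0
```

A -110 line means you stake $110 to win $100, so the book's implied probability is 110/210 ≈ 52.4%, which is why FiveThirtyEight's 65% made the Chiefs bet attractive.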

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-26T06:07:33.636Z · EA · GW

It is not negative EV. You are either misunderstanding the offer or someone's example.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-26T02:05:09.851Z · EA · GW

I used the maximum EV you can get without risking any of your own money. If you want to have less exposure to outcomes, you can choose lower-EV bets with higher likelihood of payouts, but if you're doing this for charity, it doesn't really make sense to do that.

For example, suppose you have a $1000 free bet. It doesn't pay out the principal if you win, just returns payouts for whatever you bet on. You can bet on an outcome that is 91% likely, in which case you have a 91% chance of winning $100 and a 9% chance of winning nothing, for an EV of $91. Or you could bet on an outcome that is 1% likely, in which case you have a 1% chance of winning $100000 and a 99% chance of winning nothing, for an EV of $1000. If you're donating your winnings to charity regardless of the outcome, you should do the riskier bet, but if you have sharply declining marginal utility of money, you might want the safer bet with lower average payout.
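The free-bet arithmetic above can be sketched as follows, assuming fair odds (a real sportsbook's vig would lower these numbers slightly):

```python
def free_bet_ev(credit, win_prob):
    """EV of a free-bet credit that pays winnings only (stake not returned),
    at fair odds; it simplifies to credit * (1 - win_prob)."""
    winnings_if_win = credit * (1 - win_prob) / win_prob
    return win_prob * winnings_if_win

print(round(free_bet_ev(1000, 0.91)))  # 90: the safe bet keeps little of the EV
print(round(free_bet_ev(1000, 0.01)))  # 990: the long shot captures nearly all of it
```

Because the stake is never returned, the EV equals the credit times the probability of losing, so the longer the shot, the more of the credit's face value you extract.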

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-26T01:00:45.479Z · EA · GW

Why is it optimal to size the hedge bet such that you get the same payout for either outcome? Why does that have greater EV than if the bets are skewed in either direction?

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-26T00:58:06.148Z · EA · GW

Yes, this was something I was thinking earlier. I was in an ideal position to take advantage of the offers without getting hooked because: I hate sports, have good background knowledge of probability, don't like gambling, and don't live in NY and thus couldn't make any more bets even if I wanted to. If I'm the one taking these offers rather than a NY resident who might end up with a gambling problem, that's a very good social outcome in addition to the donation directed to charity.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-25T23:32:26.081Z · EA · GW

Physical presence is enough. They have geolocation applets on their site and prohibit you from placing bets unless you are in an eligible location. I live in Massachusetts but went to NYC for about a day. I used my passport for identification because I didn't bring my driving license.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-25T23:31:23.607Z · EA · GW

Correct. They might ban you and confiscate your money if you use a VPN to obfuscate your location.

Comment by Robi Rahman (robirahman) on Prediction Bank: A way around current prediction market regulations? · 2022-01-25T04:44:29.137Z · EA · GW

This pays the winners far too little to make it worthwhile to put any money in. It wouldn't have much more liquidity than a moneyless prediction book.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-25T03:18:06.624Z · EA · GW

Oh, good catch, I will edit that. What is the EV from those offers, then? It seems to still be nearly +$1000 and nearly risk-free with the following approach: bet $1000 on something very unlikely (EV of the payout will be $1000, but as something like a 1% chance of $100000). If you lose, bet the $1000 of site credit on something extremely likely, so you end up with $1000 cash. Then withdraw that.

For BetMGM, repeat the same approach but do 5x very safe bets of $200 with the site credits.

That still works for $1000 expected profit, right?
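Under the stated assumption that a lost first bet is refunded as site credit convertible back to roughly its face value in cash, the EV of the two-stage strategy works out like this (fair odds assumed; a real book's vig lowers it a bit):

```python
# Stage 1: bet $1000 of real money on a ~1% long shot at fair odds.
# Stage 2 (on a loss): per the offer as described, the $1000 of site
# credit is assumed convertible back to ~$1000 cash via a near-certain
# bet, so a loss is roughly break-even.
p = 0.01
stake = 1000

profit_if_win = stake * (1 - p) / p        # ~$99,000 if the long shot hits
net_if_lose = 0                            # stake lost, but credit recovers it
ev = p * profit_if_win + (1 - p) * net_if_lose
print(round(ev))  # 990: close to the advertised ~$1000
```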

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-25T03:11:02.310Z · EA · GW

I'd guess a couple more weeks. The Caesars one was just reduced from $3000 to $1500. They were most generous right at the beginning of legal online betting but are decreasing the incentives as everyone who will end up signing up has done so.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-25T01:48:39.468Z · EA · GW

Yes, that's correct. Optimal usage of that offer is to deposit $1,500, use the free bet on an unlikely event that resolves very soon, and then withdraw your deposit quickly, plus winnings if any.

Comment by Robi Rahman (robirahman) on Free money from New York gambling websites · 2022-01-25T00:45:41.229Z · EA · GW

Good point, thanks! I added a note in the FAQ.

Comment by Robi Rahman (robirahman) on Reflections on EA Global London · 2021-12-14T20:19:02.264Z · EA · GW

I disagree with your point about participants not being cautious enough about covid. Last I heard (someone correct me if this was later updated), four attendees tested positive during or after the conference, out of about 900 participants. That is an impressively low rate, and indicates that the safety measures worked well! I want to commend the organizers for doing a great job addressing covid issues: they had lots of rapid tests available for us, gave lots of advice about travel safety, and didn't do anything excessive or unwarranted by the risk level, like cancelling the conference or social-distancing the discussions.

Comment by Robi Rahman (robirahman) on Nines of safety: Terence Tao’s proposed unit of measurement of risk · 2021-12-12T20:24:18.627Z · EA · GW

I've heard this for e.g. server uptime as well.

Comment by Robi Rahman (robirahman) on The problem with person-affecting views · 2021-12-06T15:32:28.476Z · EA · GW

I thought the person-affecting view only applies to acts, not states of the world. I don't hold the PAV but my impression was that someone who does would agree that creating an additional happy person in World A is morally neutral, but wouldn't necessarily say that Worlds B and C aren't better than World A.

Comment by Robi Rahman (robirahman) on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-30T18:51:53.333Z · EA · GW

I'd love to read such a post.

Comment by Robi Rahman (robirahman) on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-30T18:48:55.603Z · EA · GW

The steady-state population assumption is my biggest objection here. Everything you've written is correct yet I think that one premise is so unrealistic as to render this somewhat unhelpful as a model. (And as always, NPV of the eternal future varies a crazy amount even within a small range of reasonable discount rates, as your numbers show.)

Comment by Robi Rahman (robirahman) on A Red-Team Against the Impact of Small Donations · 2021-11-25T23:49:18.189Z · EA · GW

If the financial capital is $46B and the population is 10k, the average person's career capital is worth roughly $5M of direct impact (as opposed to the money they'll donate)? I have a wide confidence interval, but that seems reasonable. I'm curious to see how many people currently going into EA jobs will still be working in them 30 years later.

Comment by Robi Rahman (robirahman) on A Red-Team Against the Impact of Small Donations · 2021-11-25T23:44:25.455Z · EA · GW

Sorry, I didn't mean to imply that biorisk does or doesn't have "fast timelines" in the same sense as some AI forecasts. I was responding to the point about "if [EA organization] is a good use of funds, why doesn't OpenPhil fund it?" being answered with the proposition that OpenPhil is not funding much stuff in the present (disbursing 1% of their assets per year, a really small rate even if you are highly patient) because they think they will find better things to fund in the future. That seems like a wrong explanation.

Comment by Robi Rahman (robirahman) on A Red-Team Against the Impact of Small Donations · 2021-11-25T20:59:57.631Z · EA · GW

> At face value, [an EA organization] seems great. But at the meta-level, I still have to ask, if [organization] is a good use of funds, why doesn't OpenPhil just fund it?

Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.

This is highly implausible. First of all, if it's true, it implies that instead of funding things, they should just do fundraising and sit around on their piles of cash until they can discover these opportunities.

But it also implies they have (in my opinion, excessively) high confidence that the hinge-of-history and astronomical-waste arguments are all wrong, and that transformative AI is farther away than most forecasters believe. If someone is going to invent AGI in 2060, we're really limited in the amount of time available to alter the probabilities that it goes well vs. badly for humanity.

When you're working on global poverty, perhaps you'd want to hold off on donations if your investments are growing by 7% per year while GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today.
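The bednet arithmetic above can be sketched as follows (the rates are the illustrative ones from the text):

```python
# Investments grow 7%/yr while bednet cost-effectiveness declines roughly
# with poor-country GDP growth (2%/yr). Both rates are illustrative.
bednets_today = 100
bednets_next_year = bednets_today * 1.07            # 107 nets after a year
impact_ratio = bednets_next_year * (1 - 0.02) / bednets_today
print(round((impact_ratio - 1) * 100, 1))           # 4.9 (~5% more impact by waiting)
```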

For x-risks this seems totally implausible. What's the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private-sector AI R&D has been growing by 27% per year since 2015, and I really don't think alignment progress has outpaced that. If time until AGI is limited and short, then we're actively falling behind. I don't think their investments or effectiveness are increasing fast enough for this explanation to make sense.

Comment by Robi Rahman (robirahman) on Open Thread: Spring 2022 · 2021-11-08T16:56:14.536Z · EA · GW

I noticed something at EAG London which I want to promote to someone's conscious attention. Almost no one at the conference was overweight, even though the attendees were mostly from countries with overweight and obesity rates ranging from 50-80% and 20-40% respectively. I estimate that I interacted with 100 people, of whom 2 were overweight. Here are some possible explanations; if the last one is true, it is potentially very concerning:

1. effective altruism is most common among young people, who have lower rates of obesity than the general population
2. effective altruism is correlated with veganism, which leads to generally healthy eating, which leads to lower rates of diseases including obesity
3. effective altruists have really good executive function, which helps resist the temptation of junk food
4. selection effects: something about effective altruism doesn't appeal to overweight people

It's clearly bad that EA has low representation of religious adherents and underprivileged minorities. Without getting into the issue of missing out on diverse perspectives, it's also directly harmful in that it limits our talent and donor pools. Churches receive over $50 billion in donations each year in the US alone, an amount that dwarfs annual outlays to all effective causes. I think this topic has been covered on the forum before from the religion and ethnicity angles, but I haven't seen it for other types of demographics.

If we're somehow limiting participation to the 3/10ths of the population whose BMI is under 25, are we needlessly keeping out 7/10ths of the people who might otherwise work to effectively improve the world?

Comment by Robi Rahman (robirahman) on Managing COVID restrictions for EA Global travel: My plans + request for other examples · 2021-10-20T14:30:52.979Z · EA · GW

Additional suggestion: don't just have a photo of your vaccine card on your phone; physically bring it or scan and print a copy.

Comment by Robi Rahman (robirahman) on Managing COVID restrictions for EA Global travel: My plans + request for other examples · 2021-10-19T21:08:53.610Z · EA · GW

Thanks for the writeup! I'm following this process but going to the UK a few days earlier, so I'll try this out and provide results before you leave.

I ordered a 2-day covid test and received a booking reference number. My flight arrives in London on Friday, so tomorrow morning I will fill out the passenger locator form.

Edit, 2021-10-20: Submitted all my info to the UK gov website and got a passenger locator form. I'll update tomorrow when boarding the plane.

2021-10-21: will be departing from the US for the UK on Thursday evening.

2021-10-22: will be arriving in London on Friday morning.