Posts

Bill Gates book on pandemic prevention 2022-05-01T10:42:22.446Z
The Effective Altruism culture 2022-04-17T01:23:05.855Z
A tough career decision 2022-04-09T00:46:57.582Z
Pitching AI Safety in 3 sentences 2022-03-30T18:50:28.240Z
The role of academia in AI Safety. 2022-03-28T00:04:21.239Z
Meditations on careers in AI Safety 2022-03-23T22:00:11.836Z
Science policy as a possible EA cause area: problems and solutions 2022-01-23T00:32:21.716Z
Should the EA community have a DL engineering fellowship? 2021-12-24T13:43:39.366Z
Reflections on EA Global London 2021-12-12T00:56:10.372Z
How to get more academics enthusiastic about doing AI Safety research? 2021-09-04T14:10:14.528Z
1h-volunteers needed for a small AI Safety-related research project 2021-08-16T17:51:42.132Z
Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD. 2020-09-20T18:32:12.688Z

Comments

Comment by PabloAMC on Levelling Up in AI Safety Research Engineering · 2022-09-02T08:37:37.759Z · EA · GW

The HuggingFace RL course might be an alternative in the Deep Learning - RL discussion above: https://github.com/huggingface/deep-rl-class

Comment by PabloAMC on The great energy descent (short version) - An important thing EA might have missed · 2022-09-01T11:46:10.297Z · EA · GW

Yeah, perhaps I was being too harsh. However, the baseline scenario should be that current trends will go on for some time, and they predict at least cheap batteries and increasingly cheaper H2.

I mostly focused on these two because the current problem with green energy sources is more about storage than production; photovoltaics is currently the cheapest source in most places.

Comment by PabloAMC on The great energy descent (short version) - An important thing EA might have missed · 2022-09-01T10:27:07.753Z · EA · GW

I think I quite disagree with this post: batteries are improving quite a lot, and if we also manage to improve hydrogen production and usage, things should work pretty well. Finally, nuclear fusion no longer seems so far away. Of course, I agree with the author that this transition will take quite a long time, especially in developing countries, but I expect it to work out well anyway. One key argument of the author is that we are limited in the amounts of different metals available, but Li is very common on Earth, even if not super cheap, so I am not totally convinced by this. Similar thoughts apply to land usage.

Comment by PabloAMC on Mexico EA Fellowship · 2022-08-30T21:48:05.422Z · EA · GW

In the Spanish community we often have conversations in English, and I think at least 80% of the members are comfortable with both.

Comment by PabloAMC on Who's going to EAG DC? · 2022-08-14T15:02:08.359Z · EA · GW

I am, and I am interested in technical AI Safety.

Comment by PabloAMC on Another call for EA distillers · 2022-08-03T20:43:57.355Z · EA · GW

Point 1 is correct, but there is a difference: to do research you often need to live near a research group, whereas distillation is more open to remote and asynchronous work.

Comment by PabloAMC on If we ever want to start an “Amounts matter” movement, covid response might be a good flag/example · 2022-07-29T23:45:32.578Z · EA · GW

Thanks for the answer. The problem is that this is likely pointing in the wrong direction. Immigration by itself has quite large benefits for immigrants, and almost all studies of its impact find positive or no effects for locals. In "Good Economics for Hard Times", Duflo and Banerjee mention only one case where locals ended up worse off: during the USSR era, Hungarian workers were allowed to work in East Germany but not to live there, forcing them to spend their money at home. Overall, it is well known that open borders would probably boost worldwide GDP by at least 50%, possibly 100%. I sincerely think that criticising Germany for this policy requires being worried only about very short-term costs, which seems more like an ideological response than a reasonable choice.

Comment by PabloAMC on If we ever want to start an “Amounts matter” movement, covid response might be a good flag/example · 2022-07-27T16:03:36.100Z · EA · GW

I think it is wrong to say that the Syrian refugee crisis might have cost Germany 0.5T. My source: https://www.igmchicago.org/surveys/refugees-in-germany-2/. To be fair, though, I have not found a more recent analysis, and I am far from an expert.

Comment by PabloAMC on Some unfun lessons I learned as a junior grantmaker · 2022-05-26T17:52:12.856Z · EA · GW

My intuition is that grantmakers often have access to better experts, but you could always reach out to those experts directly at conferences if you know who they are.

Comment by PabloAMC on Some unfun lessons I learned as a junior grantmaker · 2022-05-26T17:49:04.696Z · EA · GW

No need to apologize! I think your idea might be even better than mine :)

Comment by PabloAMC on Some unfun lessons I learned as a junior grantmaker · 2022-05-23T19:54:35.400Z · EA · GW

Mmm, that's not what I meant. There are good and bad ways of doing it. In 2019 someone reached out to me before EA Global to check whether it would be OK to get feedback on an application I had rejected (as part of some team). And I was happy to meet and give feedback. But I think there is no harm in asking.

Also, it's not about networking your way in; it's about learning, for example, why people did or did not like a proposal, or how to improve it. So I think there are good ways of doing this.

Comment by PabloAMC on Some unfun lessons I learned as a junior grantmaker · 2022-05-23T17:34:40.167Z · EA · GW

A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.

I also think that it would be worth exploring ways to give feedback with as little time cost as possible.

Comment by PabloAMC on EA and the current funding situation · 2022-05-10T13:33:50.127Z · EA · GW

I don't think we have ever said this, but it is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right.

See also the link by Michael above.

Comment by PabloAMC on EA and the current funding situation · 2022-05-10T13:32:17.869Z · EA · GW

I don't think we have ever said this, but it is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right. See also https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1

Comment by PabloAMC on EA and the current funding situation · 2022-05-10T10:50:20.743Z · EA · GW

My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third-world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, and this can lead to a culture where "cool AI Safety stuff" is the only thing worth doing.

Comment by PabloAMC on EA is more than longtermism · 2022-05-03T16:59:29.650Z · EA · GW

Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to justify the kind of things we currently do, since work on pandemics or AI Safety can also be justified in terms of preventing global catastrophes.

That being said, I'd very much prefer the EA community's bottom line to be about doing "the most good" rather than subscribing to longtermism or any other cool idea we might come up with. Those are all subject to change and debate, whereas doing the most good shouldn't be.

Additionally, it might be worth highlighting, especially when talking with people unfamiliar with the movement, that we deeply care about the suffering of all people alive today. Quoting Nate Soares:

One day, we may slay the dragons that plague us. One day we, like the villagers in their early days, may have the luxury of going to any length in order to prevent a fellow sentient mind from being condemned to oblivion unwillingly. If we ever make it that far, the worth of a life will be measured not in dollars, but in stars. That is the value of a life. It will be the value of a life then, and it is the value of a life now.

Comment by PabloAMC on Bill Gates book on pandemic prevention · 2022-05-01T20:19:08.553Z · EA · GW

Without thinking much about it, I'd say yes. I'm not sure buying a book will get it more coverage in the news, though.

Comment by PabloAMC on The Effective Altruism culture · 2022-04-17T11:41:40.557Z · EA · GW

I would not put it as strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still (rarely) have moments that feel a bit disrespectful. And really, this is the kind of thing that could push new people out of the movement.

Comment by PabloAMC on The Effective Altruism culture · 2022-04-17T10:00:32.554Z · EA · GW

Hey James!

I think there are degrees, like everywhere: we can focus our community-building efforts on more elite universities without rejecting or being dismissive of people already in the community on the basis of their potential impact.

Comment by PabloAMC on EA: A More Powerful Future Than Expected? · 2022-04-16T12:10:33.680Z · EA · GW

I agree with the post, and the same point has been noticed before.

However, there is also a risk here: as a community, we have to strive to avoid being elitist, and we should be welcoming to everyone, even those whose personal circumstances are not ideal for changing the world.

Comment by PabloAMC on A tough career decision · 2022-04-11T16:34:05.732Z · EA · GW

Thanks!

Comment by PabloAMC on A tough career decision · 2022-04-11T16:33:14.608Z · EA · GW

Thanks!

Comment by PabloAMC on A tough career decision · 2022-04-11T07:31:01.643Z · EA · GW

Hey Sjlver! Thanks for your comments and for sharing your experience. That's my assessment too; I will try. I have also been considering how to create an EA community at the startup. Any pointers? Thanks!

Comment by PabloAMC on A tough career decision · 2022-04-11T07:27:53.497Z · EA · GW

Thanks, Juan!

Comment by PabloAMC on A tough career decision · 2022-04-09T11:48:19.311Z · EA · GW

Thanks for sharing Jasper! It's good to hear the experience of other people in a similar situation. 🙂 What do you plan to do? Also, good luck with the thesis!

Comment by PabloAMC on A tough career decision · 2022-04-09T09:45:52.826Z · EA · GW

Thanks a lot Max, I really appreciate it.

Comment by PabloAMC on Issues with centralised grantmaking · 2022-04-06T10:08:24.814Z · EA · GW

So viewpoint diversity would be valuable. Definitely. This is particularly valuable in a community that also pivots around cause neutrality, so I think it would be good to have people with different opinions on which cause areas are best to support.

Comment by PabloAMC on Issues with centralised grantmaking · 2022-04-05T22:03:37.276Z · EA · GW

I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may run the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?

On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and whether it is sufficient for some level of efficiency.

Comment by PabloAMC on Issues with centralised grantmaking · 2022-04-04T16:22:40.368Z · EA · GW

One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only the top venture capitalists.

Comment by PabloAMC on Unsurprising things about the EA movement that surprised me · 2022-03-30T22:23:24.633Z · EA · GW

Makes sense, and I agree

Comment by PabloAMC on Unsurprising things about the EA movement that surprised me · 2022-03-30T18:37:38.065Z · EA · GW

EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now.

Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause neutral, and that's good.

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-29T19:27:21.900Z · EA · GW

Indeed! My plan was to move back to Spain after the postdoc, because there is already one professor interested in AI Safety and I could build a small hub here.

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-29T19:26:14.255Z · EA · GW

Thanks, acylhalide! My impression was that I should work in person more at the beginning; once I know the tools and the intuitions, this can be done remotely. In fact, I am pretty much doing my Ph.D. remotely at this point. But since it's a postdoc, I think the speed of learning matters.

In any case, let me say that I appreciate you poking into assumptions, it is good and may help me find acceptable solutions :)

Comment by PabloAMC on The role of academia in AI Safety. · 2022-03-28T22:58:16.854Z · EA · GW

Hey Lukas!

If the concrete problems are too watered down compared to the real thing, you also won't solve AI alignment by misleading people into thinking it's easier.

Note that even MIRI sometimes does this:

  1. We could not yet create a beneficial AI system even via brute force. Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other carbon atoms. (Pretend we don’t care how it makes the diamond, or what it has to take apart in order to get the carbon; the goal is to study a simplified problem.) Let’s say that the Jupiter-sized computer is running python. How would you program it to produce lots and lots of diamond? As it stands, we do not yet know how to program a computer to achieve a goal such as that one.

It would be fair to say that this comes from an exposition of the importance of AI Safety, rather than from a research proposal itself. But in any case, humans always solve complicated problems by breaking them up, because otherwise it is terribly hard. Of course, there is a risk that we oversimplify the problem, but researchers generally know where to stop.

Perhaps you were focusing more on vaguely related things such as fairness, but I'm arguing for making the real AI Safety problems concrete enough that academics will tackle them. And that's the challenge: knowing where to stop simplifying. :)

some original-thinking genius reasoners can produce useful shovel-ready research questions for not-so-original-thinking academics

Don't discount the originality of academics; they can also be quite cool :)

I think the best judges are the people who are already doing work that the alignment community deems valuable.

I agree!

If EAs who have specialized on this for years are so vastly confused about it, academia will be even more confused.

Yeah, I think this is right. That's why I wanted to pose this as concrete subproblems, so that academics do not have to face the confusion we still have around it :)

Independently of the above argument that we're in trouble if we can't even recognize talent, I also feel pretty convinced that we can on first-order grounds. It seems pretty obvious to me that work tests or interviews conducted by community experts do an okay job at recognizing talent.

Yeah, I agree. But also notice that Holden Karnofsky believes that academic research has a lot of aptitude overlap with AI Safety research skills, and that an academic research track record is the highest-fidelity signal for whether you'll do well in AI Safety research. So perhaps we should not discount it entirely.

Thanks!

Comment by PabloAMC on The role of academia in AI Safety. · 2022-03-28T18:49:51.276Z · EA · GW

Yes, I do indeed :)

You can frame it, if you want, as: founders should aim to expand the range of academic opportunities and engage more with academics.

Comment by PabloAMC on The role of academia in AI Safety. · 2022-03-28T18:04:12.099Z · EA · GW

Hi Steven,

Possible claim 2: "We should stop giving independent researchers and nonprofits money to do AGI-x-risk-mitigating research, because academia is better." You didn't exactly say this, but sorta imply it. I disagree.

I don't agree with possible claim 2. I just say that we should promote academic careers more than independent research, not that we should stop giving independent researchers money. I don't think money is the issue.

Thanks

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-28T15:47:14.264Z · EA · GW

Sure, acylhalide! Thanks for proposing ideas. I've done a couple of AI Safety camps and one summer internship. I think the issue is that to make progress I need to become an expert in ML as well, which I am not right now. That was my main motivation for this, and it's perhaps the reason why I think it is beneficial to do some kind of in-person postdoc, even if I could work part of the time from home. But it's also true that long-distance relationships are costly, so that's the issue.

Comment by PabloAMC on The role of academia in AI Safety. · 2022-03-28T15:22:05.388Z · EA · GW

Hey Simon, thanks for answering!

We won't solve AI safety by just throwing a bunch of (ML) researchers on it.

Perhaps we don't need to win over ML researchers (although I think we should at least try), but I think it is more likely that we won't solve AI Safety if we don't get more concrete problems in the first place.

AGI will (likely) be quite different from current ML systems.

I'm afraid I disagree with this. For example, if it were true, interpretability from Chris Olah or the Anthropic team would be automatically doomed; Value Learning from CHAI would also be useless, and the forecasts we use to convince people of the importance of AI Safety equally so. Of course, this does not prove anything, but I think there is a case to be made that Deep Learning currently seems to be the only viable path we have found that might get to AGI. And while I think the agnostic approach of MIRI is very valuable, I think it would be foolish to bet all our work on the truth of this statement. That bet might still make sense if we were much more bottlenecked on people than on research lines, but I don't think that's the case; I think we are more bottlenecked on concrete ideas for how to push our understanding forward. Needless to say, I believe Value Learning and interpretability are very suitable for academia.

we rather need breakthroughs

Breakthroughs only happen when one understands the problem in detail, not when people float around vague ideas.

We much rather need a few Paul Christiano level researchers that build a very deep understanding of the alignment problem and then can make huge advances, than we need many still-great-but-not-that-extraordinary researchers.

Agreed. But I think there are great researchers in academia, and perhaps we could profit from that. I don't think we have any method to spot good researchers in our community anyway. Academia can sometimes help with that.

(1) focus on what you can do with current ML systems, instead of focusing on more uncertain longer-term work, and (2) goodhart on some subproblems that don't take that long to solve.

I think this is a bit exaggerated. What academia does is ask for well-defined problems and concrete solutions, and that's what we want if we want to make progress. It is true that some Goodharting will happen, but I think we would be closer to the optimum if we were Goodharting a bit than we are right now, unable to measure much progress. Notice also that Shannon and many other people who produced breakthroughs did so in academic settings.

we need some other way of creating incentives to usefully contribute to AI safety

I think arguing for the importance of AI Safety is enough, as long as people don't feel they have nothing to contribute because things are too vague or too far from their expertise.

Comment by PabloAMC on The role of academia in AI Safety. · 2022-03-28T09:23:41.639Z · EA · GW

I think it is easier to convince someone to work on topic X by arguing that it would be very positive than by warning them that everyone could literally die if they don't. If someone comes to me with that kind of argument, I will get defensive really quickly, and they'll have to spend a lot of effort to convince me there is even a slight chance they're right. And even if I have the time to listen to them all the way through and give them the benefit of the doubt, I will come out with awkward feelings, not exactly the ones that make me want to put effort into their topic.

Perhaps we should be thinking about this from the opposite perspective. How can we extend the range of what can be published in academia?

I don't think this is a good idea. There are a couple of reasons why academic publishing is so stringent: to avoid producing blatantly useless articles and to measure progress. I argue we want to play by the rules here, both because otherwise we would risk being seen as crazy people and because we want to publish sound work.

Comment by PabloAMC on The role of academia in AI Safety. · 2022-03-28T09:14:05.080Z · EA · GW

Thanks Dan!

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-27T10:32:03.390Z · EA · GW

My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer

I think it would be an AGI very capable of chemistry :-)

one might even wonder what learnable quantum circuits / neural networks would entail.

Right now they just mean lots of problems :P More concretely, there are some results indicating that quantum NNs (or variational circuits, as they are called) are not likely to be more efficient at learning classical data than classical NNs are, although I agree this is still somewhat up in the air.

Does alphafold et al render the quantum computing hopes to supercharge simulation of chemical/physical systems irrelevant?

By chemistry I mean electronic-structure simulation. Other than that, proteins are quite classical; that's why AlphaFold works well, and why it is highly unlikely that neurons have any quantum effects involved in their functioning.

Or would a 'quantum version of alphafold' trounce the original?

For this I even have a published article showing that the answer is (probably) no: https://arxiv.org/pdf/2101.10279.pdf (published in https://iopscience.iop.org/article/10.1088/2058-9565/ac4f2f/meta)

Where will exponential speedups play a role in practical problems? Simulation? Of just quantum systems, or does it help with simulating complex systems more generally? Any case where the answer is "yes" is worth thinking about the implications of wrt AI safety.

My intuition is no, but even if that were the case, it is unlikely to be an issue for AI Safety: https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial

Thanks in any case, Mantas :)

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-27T10:23:01.998Z · EA · GW

From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.

I think this is right.

You don't say that some of the top AI safety orgs are trying to hire you.

I was thinking of trying an academic career. So yeah, not really anyone seeking me out; it was more me trying to go to Chicago to learn from Victor Veitch and change careers.

Then you have to consider how useful quantum algorithms are to existential risk.

I think it is quite unlikely that this will be so. I'm 95% sure that QC will not be used in advanced AI, and even if it were, it is quite unlikely to matter for AIS: https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial Perhaps I could be surprised, but do we really need someone watching out in case this turns out to be valuable? My intuition is that if that were to happen, I could learn whatever development has happened quite quickly with my current background. I could spend, say, 1-3 hours a month, and that would probably be enough to stay on watch.

One thing you should consider is that most of the impact is likely to be at the tails. For instance, the distribution of impact for people is probably power-law distributed (this is true in ML in terms of first author citations; I suspect it could be true for safety specifically).

In fact, the reason I wanted to go into academia, apart from my personal fit, is that the AI Safety community is currently very tilted towards industry. I think there is a real risk that, between blog posts and high-level ideas, we could end up with a reputation crisis. We need to be seen as a serious scientific research area, and for that we need more academic research and much better definitions of the concrete problems we are trying to solve. In other words, if we don't get past the current 'preparadigmatic' state of the field, we risk reputation damage.

Then you have to think about how likely quantum computing is likely to make you really rich (probably through equity, not salary).

Good question. I have been offered 10k stock options with a value of around $5 to $10 each. Right now the valuation of this startup is around $3B. What do you think?

Also, have you considered 80k advising?

I want to talk to Habiba before making a decision, but she was busy this week with EAGx Oxford. Let's see what she thinks.

Thanks Thomas!

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-26T18:19:12.516Z · EA · GW

Ah, dang. And how difficult would it be to do reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months and testing your fit, and then if you don't feel like this is clearly the best use of your time you can (I image) very easily get another job offer in the quantum computing field?

The thing that worries me is working on some specific technical problem, not being able to make sufficient progress, and feeling stuck. But I think that would happen only after more than 2 months, perhaps after a year. I'm thinking of it more in academic terms; I would like to target academic-quality papers. But perhaps if that happens I could go back to quantum computing or some other boring computer science job.

(Btw I'm still somewhat confused why AI safety research is supposed to be in much friction with working remotely at least most of the time.)

The main reason is that if I go to a place where people are working on technical AI Safety, I will get up to speed with the AI/ML part faster. So it'd be for learning purposes.

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-26T16:31:53.510Z · EA · GW

Unfortunately, this is not feasible: I am finishing my Ph.D. and have to decide what I am doing next within the next couple of weeks. In any case, my impression is that to pose good questions I need a couple of years of understanding of the field, so that problems are tractable, state of the art, concretely defined...

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-26T15:39:17.046Z · EA · GW

Have you considered doing this for a while if you think it's possibly the most important problem, i.e. for example trying to develop concrete problems that can then be raised to the fields of ML and AI?

Indeed, I think that would be a good objective for the postdoc. I also think this is the kind of thing we need to do to make progress in the field, and my intuition is that aiming for academic papers should help increase quality.

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-25T23:52:19.817Z · EA · GW

Thanks for making concrete bets @aogara :)

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-25T23:43:43.381Z · EA · GW

Thanks for your comments, Ryan :) I think I would be OK if I try and fail; of course I would much prefer succeeding, but I think I am happier knowing I'm doing the best I can than comparing myself to some unattainable level. That being said, there is some sacrifice, as you mention, particularly in having to learn a new research area and in spending time away, both of which you understand :)

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-24T11:32:57.131Z · EA · GW

Thanks Chris! Not much: the duration and amount of funding. But the projects I applied with were similar, so in a sense I was arguing that independent evaluations of a proposal might provide more signal about its perceived usefulness.

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-24T08:51:49.139Z · EA · GW

I submitted an application about using causality as a means for improved value learning and interpretability of NNs: https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation My main reason for putting forward this proposal is that I believe the world models humans operate with are somewhat similar to causal models, with some high-level variables that AI systems might be able to learn. So using causal models might be useful for AI Safety.
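As a purely illustrative sketch of what I mean by a causal model with high-level variables (the variables, probabilities, and equations here are made up for the example; they are not from the proposal itself):

```python
import random

# Toy structural causal model (SCM) with a few high-level variables.
# Hypothetical example: "rain", "sprinkler", and "wet_grass" stand in for
# the kind of high-level variables an AI system might learn.

def sample(do_sprinkler=None):
    rain = random.random() < 0.3                       # exogenous cause
    # Structural equation, unless we intervene with do(sprinkler = value)
    sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
    wet_grass = rain or sprinkler                       # common effect
    return {"rain": rain, "sprinkler": sprinkler, "wet_grass": wet_grass}

print(sample())                    # observational sample
print(sample(do_sprinkler=True))   # interventional sample, do(sprinkler=True)
```

The hope, roughly, is that an AI system representing the world with objects like this, rather than only low-level correlations, would be easier to interpret and to align with human values.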

I think there are also some external reasons why it makes sense as a proposal:

  • It is connected to the work of https://causalincentives.com/
  • Most negative feedback I have received is because the proposal is still a bit too high level, and most people believe this is something worth trying out (even if I am not the right person).
  • I got approval from LTFF, and got to the second round of both FLI and OpenPhil (still undecided in both cases, so no rejections).

I think the risk of me not being the right person to carry out research on this topic is greater than the risk of this not being a useful research agenda. On the other hand, so far I have been able to do research well even when working independently, so perhaps the change of topic will turn out ok.

Comment by PabloAMC on Meditations on careers in AI Safety · 2022-03-24T08:37:26.885Z · EA · GW

Hey Mantas! So while I think there is a chance that photonics will play a role in future AI hardware, unfortunately, my expertise is quite far from the hardware itself. Up to now, I have been doing quantum algorithms.

The problem, though, is that I think quantum computing will not play an important role in AI development. It may seem that the quadratic speedup that quantum computing provides for a range of problems is good enough to justify using it. However, once you take into account hardware requirements such as error correction, you lose some 10 orders of magnitude of speed, which makes QC unlikely to help with generic problems.
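To make the back-of-the-envelope reasoning explicit (writing N for the number of elementary classical steps a problem needs, and taking the 10 orders of magnitude as my rough, illustrative estimate of the error-correction overhead):

$$T_{\text{classical}} \approx N, \qquad T_{\text{quantum}} \approx 10^{10}\,\sqrt{N}, \qquad T_{\text{quantum}} < T_{\text{classical}} \iff \sqrt{N} \gtrsim 10^{10} \iff N \gtrsim 10^{20},$$

so the quantum machine only wins on instances that would take on the order of 10^20 elementary classical steps, which rules out most generic workloads.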

Where QC shines is in analyzing and predicting the properties of quantum systems, as in chemistry and materials science. This is by itself very useful, and it may bring about new batteries, new drugs... but it is different from AI.

Also, there might be some applications in cryptography, but one can already use quantum-resistant classical cryptography, so I'm not very excited about cryptography as an application.