I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.
post by MichaelA
In 2017, I did my Honours research project on whether, and how much, fact-checking politicians’ statements influenced people’s attitudes towards those politicians, and their intentions to vote for them. (At my Australian university, “Honours” meant a research-focused, optional, selective 4th year of an undergrad degree.) With some help, I later adapted my thesis into a peer-reviewed paper: Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. This was all within the domains of political psychology and cognitive science.
During that year, and in a unit I completed earlier, I learned a lot about:
- how misinformation forms
- how it can become “sticky”
- how it can continue to influence beliefs, attitudes, and behaviours even after being corrected/retracted, and even if people do remember the corrections/retractions
- ways of counteracting, or attempting to counteract, these issues
- E.g., fact-checking, or warning people that they may be about to receive misinformation
- various related topics in the broad buckets of political psychology and how people process information, such as impacts of “falsely balanced” reporting
The research that’s been done in these areas has provided many insights that I think might be useful for various EA-aligned efforts. For some examples of such insights and how they might be relevant, see my comment on this post. These insights also seemed relevant in a small way in this comment thread, and in relation to the case for building more and better epistemic institutions in the effective altruism community.
I’ve considered writing something up about this (beyond those brief comments), but my knowledge of these topics is too rusty for that to be something I could smash out quickly and to a high standard. So I’d like to instead just publicly say I’m happy to answer questions related to those topics.
I think it’d be ideal for questions to be asked publicly, so others might benefit, but I’m also open to discussing this stuff via messages or video calls. The questions could be about anything from a super specific worry you have about your super specific project, to general thoughts on how the EA community should communicate (or whatever).
Some caveats:
- In 2017, I probably wasn’t adequately concerned about the replication crisis, and many of the papers I was reading were from before psychology’s attention was drawn to it. So we should assume some of my “knowledge” is based on papers that wouldn’t replicate.
- I was never a “proper expert” in those topics, and I haven’t focused on them since 2017. (I ended up with First Class Honours, meaning that I could do a fully funded PhD, but decided against it at that time.) So it might be that most of what I can provide is pointing out key terms, papers, and authors relevant to what you’re interested in.
- If your question is really important, you may want to just skip to contacting an active researcher in this area or checking the literature yourself. You could perhaps use the links in my comment on this post as a starting point.
- If you think you have more expertise, or more recent expertise, in these or related topics, please do make that known, and perhaps just commandeer this AMA outright!
(Due to my current task list, I might respond to things mostly from 14 May onwards. But you can obviously comment & ask things before then anyway.)
Comments sorted by top scores.
comment by Ramiro ·
2020-06-22T17:07:14.363Z
I'd like to have read this before having our discussion:
In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.
But their recommendations sound scary:
First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.
↑ comment by MichaelA ·
2020-06-23T01:02:13.783Z
Interesting article - thanks for sharing it.
Why do you say their recommendations sound scary? Is it because you think they're intractable or hard to build support for?
↑ comment by Ramiro ·
2020-06-23T01:18:15.048Z
Sorry, I should have been more clear: I think "treating attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners" is hard to build support for, and may imply some risk of abuse.
comment by Ramiro ·
2020-05-11T17:58:44.430Z
I've seen some serious stuff on epistemic and memetic warfare. Do you think misinformation on the web has recently been, or is currently being, used as an effective weapon against countries or peoples? Is it qualitatively different from good old conspiracies and smear campaigns? Do you have some examples? Can standard ways of counteracting it (e.g., fact-checking) work effectively in the case of an intentional attack (my guess: probably not; an attacker can spread misinformation more effectively than we can spread fact-checking - and warning about it will increase mistrust and polarization, which might be the goal of the campaign)? What would be your credences on your answers?
↑ comment by MichaelA ·
2020-05-25T06:53:15.559Z
Unfortunately, I think these specific questions are mostly about stuff that people started talking about a lot more after 2017. (Or at least, I didn't pick up on much writing and discussion about these points.) So it's a bit beyond my area.
But I can offer some speculations and related thoughts, informed in a general sense by the things I did learn:
- I suspect misinformation at least could be an "effective weapon" against countries or peoples, in the sense of causing them substantial damage.
- I'd see (unfounded) conspiracy theories and smear campaigns as subtypes of spreading misinformation, rather than as something qualitatively different. But I think today's technology allows for spreading misinformation (of any type) much more easily and rapidly than people could previously.
- At the same time, today's technology also makes flagging, fact-checking, and otherwise countering misinformation easier.
- I'd wildly speculate that, overall, the general public are much better informed than they used to be, but that purposeful efforts to spread misinformation will more easily have major effects now than previously.
- This is primarily based on the research I've seen (see my other comment on this post) that indicates that even warnings about misinfo and (correctly recalled!) corrections of misinfo won't stop that misinfo having an effect.
- But I don't actually know of research that's looked into this. We could perhaps call this question: How does the "offense-defense" balance of (mis)information spreading scale with better technology, more interconnectedness, etc.? (I take the phrase "offense-defense balance" from this paper, though it's possible my usage here is not in line with what the phrase should mean.)
- My understanding is that, in general, standard ways of counteracting misinfo (e.g., fact-checking, warnings) tend to be somewhat but not completely effective in countering misinfo. I expect this would be true for accidentally spread misinfo, misinfo spread deliberately by e.g. just a random troll, or misinfo spread deliberately by e.g. a major effort on the part of a rival country.
- But I'd expect that the latter case would be one where the resources dedicated to spreading the misinfo will more likely overwhelm the resources dedicated towards counteracting it. So the misinfo may end up having more influence for that reason.
- We could also perhaps wonder about how the "offense-defense" balance of (mis)information spreading scales with more resources. It seems plausible that, after a certain amount of resources dedicated by both sides, the public are just saturated with the misinfo to such an extent that fact-checking doesn't help much anymore. But I don't know of any actual research on that.
↑ comment by MichaelA ·
2020-05-25T06:58:10.267Z
One thing that you didn't raise, but which seems related and important, is how advancements in certain AI capabilities could affect the impacts of misinformation. I find this concerning, especially in connection with the point you make with this statement:
warning about it will increase mistrust and polarization, which might be the goal of the campaign
Early last year, shortly after learning about EA, I wrote a brief research proposal related to the combination of these points. I never pursued the research project, and have now learned of other problems I see as likely more important, but I still do think it'd be good for someone to pursue this sort of research. Here it is:
AI will likely allow for easier creation of fake news, videos, images, and audio (AI-generated misinformation; AIGM) [note: this is not an established term]. This may be hard to distinguish from genuine information. Researchers have begun exploring potential political security ramifications of this (e.g., Brundage et al., 2018). Such explorations could valuably draw on the literatures on the continued influence effect of misinformation (CIE; e.g., Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012), motivated reasoning (e.g., Nyhan & Reifler, 2010), and the false balance effect (e.g., Koehler, 2016).
For example, CIE refers to the finding that corrections of misinformation don’t entirely eliminate the influence of that misinformation on beliefs and behaviours, even among people who remember and believe the corrections. For misinformation that aligns with one’s attitudes, corrections are particularly ineffective, and may even “backfire”, strengthening belief in the misinformation (Nyhan & Reifler, 2010). Thus, even if credible messages debunking AIGM can be rapidly disseminated, the misinformation’s impacts may linger or even be exacerbated. Furthermore, as the public becomes aware of the possibility or prevalence of AIGM, genuine information may be regularly argued to be fake. These arguments could themselves be subject to the CIE and motivated reasoning, with further and complicated ramifications.
Thus, it’d be valuable to conduct experiments exposing participants to various combinations of fake articles, fake images, fake videos, fake audio, and/or a correction of one or more of these. This misinformation could vary in how indistinguishable from genuine information it is; whether it was human- or AI-generated; and whether it supports, challenges, or is irrelevant to participants’ attitudes. Data should be gathered on participants’ beliefs, attitudes, and recall of the correction. This would aid in determining how much the issue of CIE is exacerbated by the addition of video, images, or audio; how it varies by the quality of the fake or whether it’s AI-generated; and how these things interact with motivated reasoning.
Such studies could include multiple rounds, some of which would use genuine rather than fake information. This could explore issues akin to false balance or motivated dismissal of genuine information. Such studies could also measure the effects of various “treatments”, such as explanations of AIGM capabilities or how to distinguish such misinformation from genuine information. Ideally, these studies would be complemented by opportunistic evaluations of authentic AIGM’s impacts.
One concern regarding this idea is that I’m unsure of the current capabilities of AI relevant to generating misinformation, and thus of what sorts of simulations or stimuli could be provided to participants. Thus, the study design sketched above is preliminary, to be updated as I learn more about relevant AI capabilities. Another concern is that relevant capabilities may currently be so inferior to how they’ll later be that discoveries regarding how people react to present AIGM would not generalise to their reactions to later, stronger AIGM.
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Anderson, H. (2018). The malicious use of artificial intelligence: forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
- Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106-131.
- Koehler, D. J. (2016). Can journalistic “false balance” distort public perception of consensus in expert opinion? Journal of Experimental Psychology: Applied, 22(1), 24-38.
- Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.
↑ comment by Ramiro ·
2020-05-25T17:17:03.710Z
I think "offense-defense balance" is a very accurate term here. I wonder if you have any personal opinion on how to improve our situation on that. I guess when it comes to AI-powered misinformation through media, it's particularly concerning how easily it can overrun our defenses - so that, even if we succeed in fact-checking every inaccurate statement, it'll require a lot of resources and probably lead to a situation of widespread uncertainty or mistrust, where people, incapable of screening reliable info, will succumb to confirmation bias or peer pressure (I feel tempted to draw an analogy with DDoS attacks, or even with the lemons problem).
So, despite everything I've read about the subject (though not very systematically), I haven't seen feasible, well-written strategies to address this asymmetry - except for some papers on moderation in social networks and forums (even so, it's quite time-consuming, unless moderators draw up clear guidelines - like in this forum). I wonder why societies (through authorities or self-regulation) can't agree to impose even minimal reliability requirements, like demanding captcha tests before spreading messages (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source - something even newspapers refuse to do (my guess is that they are afraid this norm would compromise source confidentiality and their protections against lawsuits). If people had this as an established practice, one could easily screen for (at least grossly) unreliable messages by checking their source (or pointing out its absence), besides deterring them.
↑ comment by MichaelA ·
2020-05-26T02:06:58.661Z
I think I've got similar concerns and thoughts on this. I'm vaguely aware of various ideas for dealing with these issues, but I haven't kept up with that, and I'm not sure how effective they are or will be in future.
The idea of making captcha requirements before things like commenting very widespread is one I haven't heard before, and seems like it could plausibly cut off part of the problem at relatively low cost.
I would also quite like it if there were much better epistemic norms widespread across society, such as people feeling embarrassed if people point out they stated something non-obvious as a fact without referencing sources. (Whereas it could still be fine to state very obvious things as facts without sharing sources all the time, or to state non-obvious things as fairly confident conjectures rather than as facts.)
But some issues also come to mind (note: these are basically speculation, rather than drawing on research I've read):
- It seems somewhat hard to draw the line between ok and not ok behaviours (e.g., what claims are self-evident enough that it's ok to omit a source? What sort of tone and caveats are sufficient for various sorts of claims?)
- And it's therefore conceivable that these sorts of norms could be counterproductive in various ways. E.g., lead to (more) silencing or ridicule of people raising alarm bells about low probability high stakes events, because there's not yet strong evidence about that, but no one will look for the evidence until someone starts raising the alarm bells.
- Though I think there are some steps that seems obviously good, like requiring sources for specific statistical claims (e.g., "67% of teenagers are doing [whatever]").
- This is a sociological/psychological rather than technological fix, which does seem quite needed, but also seems quite hard to implement. Spreading norms like that widely seems hard to do.
- With a lot of solutions, it seems not too hard to imagine ways they could be (at least partly) circumvented by people or groups who are actively trying to spread misinformation. (At least when those people/groups are quite well-resourced.)
- E.g., even if society adopted a strong norm that people must include sources when making relatively specific, non-obvious claims, there could then perhaps be large-scale human- or AI-generated sources being produced, and made to look respectable at first glance, which can then be shared alongside the claims being made elsewhere.
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation. I'd guess that those more general approaches may better avoid the issue of difficulty drawing lines in the appropriate places and being circumventable by active efforts, but may suffer more strongly from being quite intractable or crowded. (But this is just a quick guess.)
↑ comment by Ramiro ·
2020-05-28T14:35:19.878Z
We could probably also think of things like more generally improving critical thinking or rationality as similar broad, sociological approaches to mitigating the spread/impacts of misinformation.
Agreed. But I don't think we could do that without changing the environment a little bit. My point is that rationality isn’t just about avoiding false beliefs (maximal skepticism), but about forming them adequately, and it’s way more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a wikipedia entry, in a newspaper, in a whatsapp message...
The core issue isn't really “statements that are false”, or people who are actually fooled by them. The problem is that, if I’m convinced I’m surrounded by lies and nonsense, I’ll keep following the same path I was on before (because I have a high credence that my beliefs are OK); it will just fuel my confirmation bias. Thus, the real problem with fake news is an externality. I haven’t found any paper testing this hypothesis, though. If it is right, then most articles I’ve seen arguing that “fake news didn’t affect political outcomes” might be wrong.
You can fool someone even without telling any sort of lies. To steal an example I once saw on LW (still trying to find the source): imagine a random sequence of 0s and 1s; now, an Agent feeds a Principal information about the sequence, like “digit 1 in position n”. To make the Principal believe the sequence is mainly made of 1s, all the Agent has to do is select which information to report, like “digit 1 in positions n, m, and o”.
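That selective-reporting dynamic can be made concrete with a short simulation (a hypothetical sketch; the variable names and parameters are mine, not from the original LW example):

```python
import random

random.seed(0)

# A genuinely random binary sequence: roughly half 0s, half 1s.
sequence = [random.randint(0, 1) for _ in range(1000)]

# The Agent never lies: every report is a true statement of the form
# "position i holds digit 1". The deception is purely in the selection.
reported_positions = [i for i, digit in enumerate(sequence) if digit == 1][:20]

# A Principal who treats the reports as a random sample estimates the
# share of 1s from the reported digits alone - which are all 1s.
naive_estimate = sum(sequence[i] for i in reported_positions) / len(reported_positions)
true_share = sum(sequence) / len(sequence)

print(f"Principal's estimate of the share of 1s: {naive_estimate:.2f}")  # 1.00
print(f"Actual share of 1s: {true_share:.2f}")  # roughly 0.50
```

The point the simulation illustrates is that fact-checking each individual report would find nothing to correct; the bias lives entirely in which true statements get reported.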
But why would someone hire such an agent? Well, maybe the Principal is convinced most other accessible agents are liars; it’s even worse if the Agent already knows some of the Principal's biases, and easier if Principals with similar biases are clustered in groups with similar interests and jobs - like social activists, churches, military staff, and financial investors. Even denouncing this scenario does not necessarily improve things; I think, at least in some countries, political outcomes were affected by there being common knowledge of statements like “military personnel support this; financial investors would never accept that”. If you can convince voters they’ll face an economic crisis or political instability by voting for candidate A, they will avoid doing so.
My personal anecdote on how this process may work for a smart and scientifically educated person: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my “rationality skills” in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn’t extrapolate to the atmosphere. I was astonished that I had overlooked this point so far (well, maybe it was mentioned in passing in a science class), and that he didn’t take two minutes to google it (and find out that, yes, “greenhouse” is an analogy; the point is that CO2 deflects radiation back to Earth) - but maybe I wouldn’t have done so myself if I didn’t already know that CO2 is pivotal in keeping Earth warm. However, after days of this, there was no happy ending: our discussion basically concluded with me pointing out that a) he couldn’t provide any scientific paper backing his overall thesis (even though I would have been happy to pay him if he could); and b) he would raise objections against “anthropogenic global warming” without even caring to put a consistent credence on them - like first pointing to alternative causes for the warming, and then denying the warming itself. He didn't really believe (i.e., assign a high posterior credence to the claim) that there was no warming, nor that it was a random anomaly, because these claims would be ungrounded, and so a target in a discussion. Since then, we've barely spoken.
P.S.: I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate - so mitigating what I've been calling the "lemons problem in news". But who rates the raters? Besides the risk of capture, I don't know how to make people actually trust the agencies in the first place.
↑ comment by MichaelA ·
2020-05-28T23:33:33.224Z
Your paragraph on climate change denial among a smart, scientifically educated person reminded me of some very interesting work by a researcher called Dan Kahan.
An abstract from one paper:
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservativism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either unreflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
Two other relevant papers:
↑ comment by MichaelA ·
2020-05-28T23:28:44.580Z
Parts of your comment reminded me of something that's perhaps unrelated, but seems interesting to bring up, which is Stefan Schubert's prior work on "argument-checking", as discussed on an 80k episode:
Stefan Schubert: I was always interested in “What would it be like if politicians were actually truthful in election debates, and said relevant things?” [...]
So then I started this blog in Swedish on something that I call argument checking. You know, there’s fact checking. But then I went, “Well there’s so many other ways that you can deceive people except outright lying.” So, that was fairly fun, in a way. I had this South African friend at LSE whom I told about this, that I was pointing out fallacies which people made. And she was like “That suits you perfectly. You’re so judge-y.” And unfortunately there’s something to that.
Robert Wiblin: What kinds of things did you try to do? I remember you had fact checking, this live fact checking on-
Stefan Schubert: Actually that is, we might have called it fact checking at some point. But the name which I wanted to use was argument checking. So that was like in addition to fact checking, we also checked argument.
Robert Wiblin: Did you get many people watching your live argument checking?
Stefan Schubert: Yeah, in Sweden, I got some traction. I guess, I had probably hoped for more people to read about this. But on the plus side, I think that the very top showed at least some interest in it. A smaller interest than what I had thought, but at least you reach the most influential people.
Robert Wiblin: I guess my doubt about this strategy would be, obviously you can fact check politicians, you can argument check them. But how much do people care? How much do voters really care? And even if they were to read this site, how much would it change their mind about anything?
Stefan Schubert: That’s fair. I think one approach which one might take would be to, following up on this experience, the very top people who write opinion pieces for newspapers, they were at least interested, and just double down on that, and try to reach them. I think that something that people think is that, okay, so there are the tabloids, and everyone agrees what they’re saying is generally not that good. But then you go to the highbrow papers, and then everything there would actually make sense.
So that is what I did. I went for the Swedish equivalent of somewhere between the Guardian and the Telegraph. A decently well-respected paper. And even there, you can point out these glaring fallacies if you dig deeper.
Robert Wiblin: You mean, the journalists are just messing up.
Stefan Schubert: Yeah, or here it was often outside writers, like politicians or civil servants. I think ideally you should get people who are a bit more influential and more well-respected to realize how careful you actually have to be in order to really get to the truth.
Just to take one subject that effective altruists are very interested in, all the writings about AI, where you get people like professors who write the articles which are really very poor on this extremely important subject. It’s just outrageous if you think about it.
Robert Wiblin: Yeah, when I read those articles, I imagine we’re referring to similar things, I’m just astonished. And I don’t know how to react. Because I read it, and I could just see egregious errors, egregious misunderstandings. But then, we’ve got this modesty issue, that we’re bringing up before. These are well-respected people. At least in their fields in kind of adjacent areas. And then, I’m thinking, “Am I the crazy one?” Do they read what I write, and they have the same reaction?
Stefan Schubert: I don’t feel that. So I probably reveal my immodesty.
Of course, you should be modest if people show some signs of reasonableness. And obviously if someone is arguing for a position where your prior that it’s true is very low. But if they’re a reasonable person, and they’re arguing for it well, then you should update. But if they’re arguing in a way which is very emotive – they’re not really addressing the positions that we’re holding – then I don’t think modesty is the right approach.
Robert Wiblin: I guess it does go to show how difficult being modest is when the rubber really hits the road, and you’re just sure about something that someone else you respect just disagrees.
But I agree. There is a real red flag when people don’t seem to be actually engaging with the substance of the issues, which happens surprisingly often. They’ll write something which just suggests, “I just don’t like the tone” or “I don’t like this topic” or “This whole thing makes me kind of mad”, but they can’t explain why exactly.
↑ comment by MichaelA ·
2020-05-28T23:21:37.599Z
I think you raise interesting points. A few thoughts (which are again more like my views rather than "what the research says"):
- I agree that something like the general trustworthiness of the environment also matters. And it seems good to me to both increase the proportion of reliable to unreliable messages one receives and to make people better able to spot unreliable messages and avoid updating (incorrectly) on them and to make people better able to update on correct messages. (Though I'm not sure how tractable any of those things are.)
- I agree that it seems like a major risk from proliferation of misinformation, fake news, etc., is that people stop seeking out or updating on info in general, rather than just that they update incorrectly on the misinfo. But I wouldn't say that that's "the real problem with fake news"; I'd say that's a real problem, but that updating on the misinfo is another real problem (and I'm not sure which is bigger).
- As a minor thing, I think when people spread misinfo, someone else updates on it, and then the world more generally gets worse due to voting for stupid policies or whatever, that's also an externality. (The actions taken caused harm to people who weren't involved in the original "transaction".)
- I agree you can fool/mislead people without lies. You can use faulty arguments, cherry-picking, fairly empty rhetoric that "feels" like it points a certain way, etc.
↑ comment by MichaelA ·
2020-05-28T23:23:00.617Z
I wonder if fact-checking agencies could evolve into some sort of "rating agencies"; I mean, they shouldn't only screen for false statements, but actually provide information about who is accurate
Not sure if I understand the suggestion, or rather how you envision it adding value compared to the current system.
Fact-checkers already do say both that some statements are false and that others are accurate.
Also, at least some of them already have ways to see what proportion of a certain person's claims that the fact-checker evaluated turned out to be true vs false. Although that's obviously not the same as what proportion of all a source's claims (or all of a source's important claims, or whatever) are true.
But it seems like trying to objectively assess various sources' overall accuracy would be very hard and controversial. And it seems like one way we could view the current situation is that most info that's spread is roughly accurate (though often out of context, not highly important, etc.), and some is not, and the fact-checkers pick up claims that seem like they might be inaccurate and then say if they are. So we can perhaps see ourselves as already having something like an overall screening for general inaccuracy of quite prominent sources, in that, if fact-checking agencies haven't pointed out false statements of theirs, they're probably generally roughly accurate.
That's obviously not a very fine-grained assessment, but I guess what I'm saying is that it's something, and that adding value beyond that might be very hard.
comment by MichaelA ·
2020-05-11T09:39:56.747Z · EA(p) · GW(p)
I felt unsure how many people this AMA would be useful to, if anyone, and whether it would be worth posting.
But I’d guess it’s probably a good norm for EAs who might have relatively high levels of expertise in a relatively niche area to just make themselves known, and then let others decide whether it seems worthwhile to use them as a bridge between that niche area and EA. The potential upside (the creation of such bridges) seems notably larger than the downside (a little time wasted writing and reading the post before people ultimately decide it’s not valuable and scroll on by).
I’d be interested in other people’s thoughts on that idea, and on whether it’d be worth more people doing “tentative AMAs” if they’re “sort-of” experts in some particular area that isn’t known to already be quite well represented in EA (e.g., probably not computer science or population ethics). E.g., maybe someone who did a Master’s project on medieval Europe could do an AMA, without really knowing why any EAs would care, and then just see if anyone takes them up on it.
↑ comment by MichaelA ·
2020-05-12T00:51:10.218Z · EA(p) · GW(p)
It's now occurred to me that a natural option to compare this against is having something like a directory listing EAs who are open to 1-on-1s on various topics, where their areas of expertise or interest are noted. Like this [EA · GW] or this [EA(p) · GW(p)].
Here are some quick thoughts on how these options compare. But I'd be interested in others' thoughts too.
Relative disadvantages of this "tentative AMA" approach:
- Less centralised; you can't see all the people listed in one place (or a small handful of places)
- Harder to find again later; this post will soon slip off the radar, unless people remember it or happen to search for it
- Maybe directs a disproportionate amount of attention/prominence to the semi-random subset of EAs who decide to do a "tentative AMA"
- E.g., for at least a brief period, this post is on the frontpage, just as an AMA from Toby Ord, Will MacAskill, etc. would be, even though their AMAs would be much more notable and relevant for many EAs. If a lot of people did “tentative AMAs”, that’d happen a lot. Whereas a single post where all such people can comment or add themselves to a directory would only “take up attention” once, in a sense.
- On the other hand, the karma system provides a sort of natural way of sorting that out.
Relative advantage of this "tentative AMA" approach:
- More likely to lead to public answers and discussion, rather than just 1-on-1s, which may benefit more people and allow the discussion to be found again later
comment by MichaelA ·
2020-05-11T09:37:57.281Z · EA(p) · GW(p)
To get the ball rolling, and give examples of some insights from these areas of research and how they might be relevant to EA, here’s an adapted version of a shortform comment [EA(p) · GW(p)] I wrote a while ago:
Potential downsides of EA's epistemic norms (which overall seem great to me)
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect of misinformation, and related areas, which might suggest downsides to some of EA's epistemic norms. Examples of the norms I'm talking about include just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong.
From memory, this paper reviews research on the continued influence effect (CIE), and I perceived it to be high-quality and a good intro to the topic.
From this paper's abstract:
Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning--giving detailed information about the continued influence effect (CIE)--succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning--reminding people that facts are not always properly checked before information is disseminated--was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether. (emphasis added)
This seems to me to suggest some value in including “epistemic status” messages up front, but also that doing so doesn’t make it totally “safe” to make posts before having familiarised oneself with the literature and checked one’s claims. (This may suggest potential downsides to both this comment and this whole AMA, so please consider yourself both warned and warned that the warning might not be sufficient!)
Similar things also make me a bit concerned about the “better wrong than vague” norm/slogan that crops up sometimes, and also make me hesitant to optimise too much for brevity at the expense of nuance. I see value in the “better wrong than vague” idea, and in being brief at the cost of some nuance, but it seems a good idea to make tradeoffs like this with these psychological findings in mind as one factor.
Here are a couple other seemingly relevant quotes from papers I read back then (and haven’t vetted since then):
- "retractions [of misinformation] are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation]." (source) (see also this source)
- "we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a "false balance"], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions." (emphasis added) (source)
- This seems relevant to norms around "steelmanning" and explaining reasons why one's own view may be inaccurate. Those overall seem like very good norms to me, especially given EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine "controversy" or climate change. But it does seem those norms could perhaps lead to overweighting of the counterarguments when they're actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But that's all just my own speculative generalisations of the findings on "falsely balanced" coverage.
↑ comment by MichaelA ·
2020-05-19T11:29:46.927Z · EA(p) · GW(p)
Two more examples of how these sorts of findings can be applied to matters of interest to EAs:
- Seth Baum has written a paper, Countering Superintelligence Misinformation, drawing on this body of research. (I stumbled upon this recently and haven't yet had a chance to read beyond the abstract and citations.)
- In a comment [EA(p) · GW(p)], Jonas Vollmer applied ideas from this body of research to the matter of how best to handle interactions about EA with journalists