"Epistemic maps" for AI Debates? (or for other issues)

post by Harrison Durland (Harrison D) · 2021-08-30T04:59:29.608Z · EA · GW · 1 comment

This is a question post.




Intro: Epistemic Maps

For a while I have been interested in an idea I’ve been calling an epistemic map (or "epistemap", for lack of a better name). The core idea is a virtual space for diagramming concepts, entities, and the links between them, with an emphasis on semantically-rich[1] relationships—for example, claim X relies on assumption Y; claim X conflicts/contrasts with claim Y; study X relied on dataset Y and methodology Z to make claim W; and so on. Beyond this, there are many potential variations in what it might look like, but my particular interest is in collaborative or semi-collaborative work, as I discuss further below. Here are some snapshots from a toy model I made for an internship project, where I primarily focused on diagramming some of the published literature on the debated link between poverty and terrorism.

A zoomed-in snapshot of part of the map (1 of 3).
A zoomed-in snapshot of part of the map (2 of 3).
A zoomed-in snapshot of part of the map (3 of 3). (Opening each image in a new tab should make the labels more legible.)
(It seems that even after opening this image in a new tab and zooming in, the text will probably still be blurry, hence the zoomed-in snapshots above. I made this map with a free/lite/outdated version of Semantica Pro, which I was able to get from someone working on it many years ago.)

Since thinking of this concept, I’ve searched largely in vain for an example implementation, or even a theoretical description, of the kind of concept I have in mind. Like I said, I don't even know exactly what to call this concept; I’ve seen a few related things, like mind-mapping, “computer-based collaborative argumentation”, and literature mapping in general, but I don’t think I’ve seen anything that really captures what I’m getting at. [The following list was added in an update months later] In particular, some of the key features that I have in mind include:

  1. Node and link system with a free/open structure rather than a mandatory origin node or similar hierarchical structure. (So many mind mapping tools I've looked at fail this by forcing you to use a hierarchical structure)
  2. The links are labeled to describe/codify the relationship, rather than just being generic, unlabeled links that denote "related". (I have found many programs which fail this basic criterion, and I am generally skeptical of the value of analyses that rely on such unlabeled-link data and try to go beyond some rudimentary claims about it)
  3. An emphasis on "entities" or just "nodes" in a very broad sense that includes claims, observations, datasets, studies, people (e.g., authors), etc. rather than just concepts (as in many mindmaps I've seen).
  4. Some degree of logic functionality (e.g., basic if/then coding) to automate certain things, such as "Claims X, Y, and Z all depend on assumption W; Study Q found that W is false / assume that W is false; => flag claims X, Y, and Z as 'needs review'; => flag any claims that depend on claims X, Y, or Z...".
  5. The ability to collaborate or at least share your maps and integrate others'. This opens a rather large can of worms regarding implementation, as I describe later, but one key point here is about harmonizing contributions across a variety of researchers by promoting universal identifiers for "entities", at least including datasets, studies, and people (see point 3). This should make it easier/more beneficial for two researchers to combine their personal epistemaps, which might each include different references to the Global Terrorism Database (GTD), for example.
  6. (Preferably some capability for polyadic relationships rather than strictly dyadic relationships—although I recognize this may not be as easy/neat as I imagine given that I don't think I've seen this done smoothly in any program, let alone those for concept mapping)
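To make feature 4 a bit more concrete, here is a minimal sketch of how labeled "depends_on" links could drive that kind of automatic flagging. All of the names and the API here are hypothetical illustrations, not any existing tool's interface:

```python
# A minimal sketch (hypothetical, not any existing tool's API) of
# feature 4: nodes joined by labeled links, where a falsified
# assumption flags everything that transitively depends on it.
from collections import defaultdict, deque

class Epistemap:
    def __init__(self):
        # edges[source] holds (label, target) pairs,
        # e.g. ("depends_on", "assumption W")
        self.edges = defaultdict(list)

    def add_link(self, source, label, target):
        self.edges[source].append((label, target))

    def dependents_of(self, node):
        """All nodes that (transitively) depend on `node`."""
        # Invert the "depends_on" edges, then breadth-first search.
        rev = defaultdict(list)
        for src, links in self.edges.items():
            for label, tgt in links:
                if label == "depends_on":
                    rev[tgt].append(src)
        flagged, queue = set(), deque([node])
        while queue:
            for dep in rev[queue.popleft()]:
                if dep not in flagged:
                    flagged.add(dep)
                    queue.append(dep)
        return flagged

m = Epistemap()
m.add_link("claim X", "depends_on", "assumption W")
m.add_link("claim Y", "depends_on", "assumption W")
m.add_link("claim V", "depends_on", "claim X")
m.add_link("study Q", "disputes", "assumption W")
# If study Q undermines W, flag X and Y, then V (which depends on X):
print(sorted(m.dependents_of("assumption W")))
# ['claim V', 'claim X', 'claim Y']
```

The same traversal could mark the flagged claims as "needs review" rather than just returning them; the point is only that labeled links make this kind of automation possible, where generic "related" links do not.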

Overall, I don't think I've seen any programs or tools that even meet all of criteria 1 through 4 (let alone 5 and 6), although a few seem to decently handle 1, 2, and 3 even though they don't seem to be designed for the purpose I have in mind. (In particular, these include Semantica Pro, Palantir, Cytoscape, Carnegie Mellon's ORA, and a few others.)

I’ve been really interested to get a sense of whether a project like this could be useful for working through complicated issues that draw on a variety of loose concepts and findings rather than siloed expertise and/or easily-transferable equations and scientific laws (e.g., in engineering). I recognize that transferring an overall argument or finding (e.g., the effect of poverty on opportunity cost, which relates to willingness to engage in terrorism) will not be as easy, objective/defensible, or conceptually pure as copy-pasting the melting point of some rubidium alloy as determined by some experimental studies (for example). However, I find it hard to believe that we can't substantially improve upon our current methods in policy analysis, social science, etc., and I have some reasons to think that maybe, just maybe, methods similar to what I have in mind could help.

Unfortunately, however, I’m not exactly sure where to begin, and I have some skeptical priors along the lines of “If this were a good idea, it’d already have been done.” (I have a few responses to such reasoning—e.g., “this probably requires internet connectivity/usage, computing, and GUIs on a level only really seen in the past 10-20 years in order to be efficient”, and “this probably requires a dedicated community motivated to collaborate on a problem in order to reach a critical mass where adoption becomes worthwhile more broadly”—but I’m still skeptical.)

Potential benefits for AI debates (and other issues in general)

Although I am not involved in AI research, I occasionally read some of the less-technical discussions/analyses, and it definitely seems like there is a lot of disagreement and uncertainty on big questions like the probability of fast takeoff and of successful AI alignment. Thus, it seems quite plausible that a system like the one I am describing could help improve community understanding by:

(I have a few other arguments, but for now I’ll leave it at those points)


Implementation (basically: I have a few idea threads but am really uncertain)

In terms of “who creates/edits/etc. the map”, the short answer is that I have a few ideas, such as a very decentralized system akin to a library/buffet, where people can make contributions in “layers” that other people can choose to download and use, with the option for organizations to "curate" sets of layers they find particularly valuable/insightful. However, I have not yet spent a lot of time thinking about exactly how this would work, especially since I want to probe it for other issues first and avoid overcomplicating this question-post. This leads me to my conclusion/questions:



Ultimately, I can go into more detail if people would like; I have some other thoughts scattered across various documents/notes, but I’ve found it a bit daunting to try to collect it all in one spot (i.e., I’ve been procrastinating writing a post on this for months now). I’ve also worried that everything might be a bit much for the reader. Thus, I figured it might be best to just throw out the broad/fuzzy idea and get some initial feedback, especially along the lines of the following questions:

  1. Does something like this already exist, and/or what is the closest concept that you can think of?
  2. (With the note that I have not spent all that much time thinking about specific implementation details) Does this sound like a feasible/useful concept in general? E.g., would there be enough interest/participation to make it worthwhile? Would the complexity outweigh/restrict the potential benefits?
  3. Does it sound like it could be useful for problems/questions in AI specifically?
  4. Are there any other problem areas (especially those relevant to EA) where it seems like this might be particularly helpful?
  5. (Do you have alternative name suggestions?)
  6. (Any other feedback you’d like to provide!)
  1. ^ i.e., conceptual or labeled; see the illustrations provided.


answer by Aryeh Englander (iarwain) · 2021-08-30T11:26:09.262Z · EA(p) · GW(p)

Does this look close to like what you're looking for? https://www.lesswrong.com/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction [LW · GW]

If yes, feel free to message me - I'm one of the people running that project.

Also, what software did you use for the map you displayed above?

comment by Harrison Durland (Harrison D) · 2021-08-30T17:19:07.900Z · EA(p) · GW(p)

What you describe there is probably one of the most similar concepts I've seen thus far, but I think a potentially important difference is that I am particularly interested in a system that allows/emphasizes semantically-richer relationships between concepts and things. From what I saw in that post, it looks like the relationships in the project you describe are largely just "X influences Y" or "X relates to/informs us about Y", whereas the system I have in mind would allow identifying relationships like "X and Y are inconsistent claims," "study Z had conclusion/finding X," "X supports Y", etc.

I used a free/lite/outdated version of Semantica Pro which I was able to get from someone working on it many years ago.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-09-01T07:48:46.083Z · EA(p) · GW(p)

(I'm also working on the project.)

We definitely like the idea of doing semantically richer representation, but there are several components of the debate that seem much less related to arguments, and more related to prediction - but they are interrelated.

For example, 
Argument 1: Analogies to the brain predict that we have sufficient computation to run an AI already
Argument 2: Training AI systems (or at least hyperparameter search) is more akin to evolving the brain than to running it. (contra 1)
Argument 2a: The compute needed to do this is 30 years away.
Argument 2b (contra 2a): Optimizing directly for our goal will be more efficient.
Argument 2c (contra 2b): We don't know what we are optimizing for, exactly.
Argument 2d (supporting 2b): We still manage to do things like computer vision.

Each of these has implications about timelines until AI - we don't just want to look at strength of the arguments, we also want to look at the actual implication for timelines.

Semantica Pro doesn't do quantitative relationships which allow for simulation of outcomes and uncertainty, like "argument X predicts progress will be normal(50%, 5%) faster." On the other hand, Analytica doesn't really do the other half of representing conflicting models - but we're not wedded to it as the only way to do anything, and something like what you suggest is definitely valuable. (But if we didn't pick something, we could spend the entire time until ASI debating preliminaries or building something perfect for what we want.)
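The quantitative half described here could be sketched roughly as follows: each argument implies a distribution over some quantity (here, a speed-up factor), and a credence in each argument mixes those distributions via Monte Carlo. All the numbers and names below are hypothetical illustrations, not figures from the MTAIR project or from Analytica:

```python
# A rough sketch of combining argument-level uncertainty with
# quantitative predictions via Monte Carlo. Numbers are invented
# purely for illustration.
import random

# (credence, mean speed-up, std dev) per argument — e.g.
# "argument X predicts progress will be normal(50%, 5%) faster".
arguments = [
    (0.6, 0.50, 0.05),
    (0.4, 0.10, 0.10),
]

def sample_speedup(rng):
    # Pick an argument in proportion to credence, then sample
    # its predicted speed-up.
    r, acc = rng.random(), 0.0
    for credence, mu, sigma in arguments:
        acc += credence
        if r <= acc:
            return rng.gauss(mu, sigma)
    return rng.gauss(*arguments[-1][1:])  # numerical-edge fallback

rng = random.Random(0)
draws = [sample_speedup(rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
print(f"expected speed-up ~ {mean:.2f}")  # ≈ 0.6*0.50 + 0.4*0.10 = 0.34
```

The resulting distribution of draws, not just its mean, is what matters for timelines: a mixture of conflicting arguments can be bimodal even when its average looks moderate, which is exactly the kind of thing a purely qualitative argument map would hide.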

It seems like what we should do is have different parts of the issue represented in different/multiple ways, and given that we've been working on cataloging the questions, we'd potentially be interested in collaborating.

Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2021-09-01T19:30:09.168Z · EA(p) · GW(p)

Yeah, I definitely felt that one of the downsides of Semantica Pro (or at least, my version of it) was the lack of quantitative or even logical (if-then) functionality, which in my mind would be a crucial feature. For example, I would want to see some kind of logical system that flags claims that depend on an assumption/claim/study that is shown to be flawed (and thus the flagged claims may need to be reevaluated). In my most recent research project, for example, I found a study which used a (seemingly/arguably) flawed experimental design for testing prediction market incentive structures, produced a finding which was seemingly counterintuitive (at least before taking into account the flaws in that design), and then went on to be cited by ~50-100 other studies, with some of them even referencing it as the basis for their experimental design.

(Venting aside) I'm definitely interested in exploring the idea further.

answer by Paal Fredrik Skjørten Kvarberg · 2021-08-30T11:29:41.272Z · EA(p) · GW(p)

Hi! I've also been thinking and working a bit on this idea. Here are some brief answers to your questions. 

  1. Yes, something like this exists. There are many projects pursuing different varieties of the idea you are sketching. Perhaps the smoothest computer program for this is https://www.swarmcheck.ai/. An older, more complicated piece of software is https://www.norsys.com/. It is also possible to use https://roamresearch.com/ or https://obsidian.md/ for similar applications. https://www.kialo-edu.com/ also does something similar. As you are probably well aware, there is a host of related forecasting initiatives. In analytical philosophy, theorists have been discussing the structure of an ideal graph like this for a while. See J. S. Ullian and W. V. O. Quine's The Web of Belief (1970) for a short intro to relevant concepts.
  2. I tend to think so. Everything depends on implementation though. It is not feasible if usage is tedious or complex.
  3. I have given this a bit of thought, and now tend to think that it would be quite useful to get an overview of the main arguments.  
  4. I think it could be useful to most areas, and particularly to see interconnections between cause areas. 
  5. The idea is typically referred to as mind-mapping or argument-diagrams. For clarity, it is probably best to use these names. However, the name I like the most (even though it is not practical) is 'logical knowledge graph', because the edges in the graph structure would constitute logical inferences, and the nodes would be propositions. I also like 'digital knowledge infrastructure'. 
  6. This is a cool project! I, for one, would love to see more thought invested in this. 
comment by Harrison Durland (Harrison D) · 2021-08-30T17:52:26.286Z · EA(p) · GW(p)

I'm glad to hear you are interested, and I appreciate the resources/links!

Re (1): I'm a big fan of Kialo (even though I have not been active in community discussions in the past few years). I actually have been working on a forum post that would highlight its potential use for some issues in the EA community. Still, Kialo is much more narrowly focused on pro/con argumentation structure rather than diagramming concepts/observations/etc. more broadly. Additionally, I have seen Bayesian networks (although I can't remember if I saw Norsys), but these tend to lack the semantically-richer relationship descriptions such as "study X has finding Y" and "study X has some methodological feature/assumption Z." This issue also seems to apply to the other platforms you mentioned (although SwarmCheck seems like it could potentially be a nice tool for some similar issues, perhaps as a more detailed form of Kialo). In hindsight, I should probably have made this emphasis on semantic/etc. relationships clearer in my question/post; I'll make an edit regarding that.

Re (5): I used to sometimes call this concept a mind map or argument diagram, but I don't like doing that anymore since I don't think those terms really capture the idea. While "collaborative mind map" might come somewhat close, many of the mind maps and descriptions of mind maps that I have seen put a strong emphasis on hierarchical relationships (e.g., a central overarching topic node that branches out into individual categories), whereas the system I have in mind would not require such a hierarchical structure. It also is not just an argument diagram, since it would extend beyond the individual arguments/claims themselves, potentially including details and observations not expressed in argument form.

Replies from: paal-fredrik-skjorten-kvarberg
comment by Paal Fredrik Skjørten Kvarberg (paal-fredrik-skjorten-kvarberg) · 2021-09-05T11:13:32.214Z · EA(p) · GW(p)

Good! Yeah, I didn't mean to say that any of these captures all the specifics of your idea, but merely that there is a lot of interest in this sort of thing. It's probably worth pursuing this in more detail; I'd be interested in seeing more on this.

1 comment


comment by Bentley Davis · 2022-07-08T13:55:27.521Z · EA(p) · GW(p)

Harrison, I've been working on something similar for over a decade, and I was just thinking a term like "Epistemic Map" might express it well, so I searched for who else is using it and ran into your articles. I am currently hiring internet researchers to produce maps for contentious issues like "Does Concealed Carry reduce crime?" and to build up a resource of examples to test out different visualizations and data models. Here is my latest description in draft form. Here are some examples: Fictional City Decision, vaccine.

I also work with others with similar goals at the Canonical Debate Lab (CDL), which you might find interesting. We have categorized over 100 different attempts at epistemic tools, although the catalog is somewhat out of date.

My data model is pretty different, but we seem to have similar goals. Others in CDL might align closely with what you are looking for. We would love to collaborate in any way you find interesting.

Bentley Davis