Section 37 One function to rule them all?
The preceding sections have been quite involved. Can’t we keep it simple? Can’t we just code every causal claim in the same way, e.g. as some kind of positive influence, and leave it at that? That might save us a lot of time when coding, especially when we don’t really have the necessary information, and it would save us thinking about all the philosophy.
By all means.
37.1 One universal type of function for coding causal claims with exclusively lo/hi variables
There are some obvious candidate functions. Coding a connection between a (packet of) influence variables and a consequence variable could be consistently understood as any one of these:
- Total control
- Barest control
- Necessary
- Sufficient
- Somehow-positive control
Probably only the last makes sense in our context. It’s amazing, though, that there are whole schools of social science built on the exclusive use of sufficient and even necessary (Dul, xx) conditions. However, if we really want to use one universal function for everything, we have to remind ourselves that there is no going back – we couldn’t decide to use just, say, sufficient conditions and then change our minds and ask to code one specific causal link as expressing, say, a necessary connection.
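For readers who think in code, the standard readings of “sufficient” and “necessary” over lo/hi variables can be sketched as simple checks on observed (C, E) pairs. This is only an illustrative sketch, not part of any coding scheme defined here; the function names and the toy data are invented, and hi/lo is represented as True/False.

```python
# Illustrative sketch (not part of the coding scheme in the text):
# checking whether observed lo/hi data on a link C -> E is consistent
# with a "sufficient" or a "necessary" reading of the arrow.
# Values are coded as booleans: True = "hi", False = "lo".

def consistent_with_sufficient(observations):
    """C sufficient for E: whenever C is hi, E is hi."""
    return all(e for c, e in observations if c)

def consistent_with_necessary(observations):
    """C necessary for E: E is hi only when C is also hi."""
    return all(c for c, e in observations if e)

# Toy data: (C, E) pairs from four imagined cases.
data = [(True, True), (False, True), (False, False), (True, True)]

print(consistent_with_sufficient(data))  # True: every hi-C case has hi E
print(consistent_with_necessary(data))   # False: one hi-E case has lo C
```

The point of the universality warning above is visible here: once we commit to, say, `consistent_with_sufficient` as the meaning of every arrow, we cannot re-read one particular arrow with the other function without changing the meaning of the whole map.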
Indeed, there are such universal strategies; here are two.
The purest and least sophisticated strategy to encode causal information, “content-neutral coding”, does no coding of the causal content of the link at all. We interpret an arrow as meaning only
C has some kind of causal influence on E, but we aren’t saying, or don’t know, what.
The next step would be a strategy like “purely-positive coding” in which we do provide some minimal encoding of the content, but always as some kind of “positive” influence, like this:
C has some kind of, in some sense, positive or increasing causal influence on E, but we aren’t saying, or don’t know, more than that.
But we should remember that, in contrast, the (raw) information provided by our respondents may in fact have arbitrary causal content. So they might actually say things like this:
C is almost irrelevant to E
or
C drastically reduces E
or
C is necessary for E
or
C definitely has no causal influence on E
as well as
C has a strong and positive influence on E
etc.
We would struggle to meaningfully encode most of these kinds of information using either “purely-positive” or “content-neutral” coding.
By all means we could use “content-neutral coding” to encode willy-nilly the example statements just given as arrows from C to E (or whatever). But we wouldn’t be able to do much with the resulting network. We would hardly be justified even in aggregating arrows: combining several mentions of the same arrow from different sources and, for example, using the width of the combined arrow in the resulting network to show the number of mentions. That would be to mix apples and pears.
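The aggregation step just mentioned can be made concrete with a small sketch. Under content-neutral coding, counting mentions of the same arrow is mechanically trivial, which is exactly what makes it misleading: the count happily lumps together claims with opposite causal content. The source labels and statements below are invented for illustration.

```python
# Illustrative sketch: naive aggregation of content-neutral arrows,
# counting how many sources mentioned each (cause, effect) pair.
from collections import Counter

# Toy coded statements: (source, cause, effect). Names are invented.
mentions = [
    ("source1", "C", "E"),
    ("source2", "C", "E"),
    ("source3", "C", "E"),  # this source actually said "C reduces E"!
    ("source2", "D", "E"),
]

arrow_counts = Counter((cause, effect) for _, cause, effect in mentions)

# Arrow width in a drawn map might be made proportional to this count,
# even though the three C -> E mentions do not agree on direction of effect.
print(arrow_counts[("C", "E")])  # 3
```

The count of 3 for C → E silently merges an “increases” claim with a “reduces” claim: apples and pears.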
We will have to try to be at least a bit more sophisticated in how we encode causal information as arrows in a causal map.