Questions and Answers
Book Interview in The Cameron Journal (April 14, 2023): https://www.cameronjournal.com/?p=40547
(1) Tell us about this project, what made it compelling for you to work on for 10 years?
(2) What philosophies are the basis of the book?
(3) What is misinformation and why is it dangerous?
(4) What is an idea system?
(5) Could you explain the title?
(6) What is similar between the War on Terror, Mao, and the Witchcraft trials? Why did you pick those?
(7) What is human cognition and why is that important re: misinformation?
(8) American politics seems to be driven by ideology rather than policy or the good of the people; how/why is that possible, and how can we break out of the pattern?
(9) How can we know truth in an era of “filtered” or “modified” information that is essentially deceptive?
(10) How can we get people thinking better if information is unreliable?
(11) How do we create better systems for society if every old system has the seeds of the one before it?
(12) How can we use the advice in your book to educate people better?
Interview with Cameron Cowan @ The Cameron Journal (cameronjournal.com)
Published April 14, 2023
CC: Tell us about this project, what made it compelling for you to work on for 10 years?
GC: Simply put, what sustains my energy is how complex the “big question” is, and how difficult it is to answer it. The big question that interests me is how humans think—which affects the meanings, lives, groups, environments, rules, etc. that they form. The book project—now entitled Revolution and Witchcraft—was a long journey for me in identifying and addressing the numerous small questions that are part of the big question.
Much of the “fun” is in the specifics. Theoretically, a lot of things really don’t come together naturally. In the book, many thinkers and scholars “chime in” to form the theoretical model I have created, but they are not really having one real-time dialogue. Rather, they formed and communicated their thoughts in different contexts, using different concepts and vocabularies. To learn what each of them is saying requires, one, that you at least become acquainted with their intellectual foundations and, two, that you are sufficiently patient to understand the fascinating and perplexing points regarding cognition. Then, the fun part is to see if their points align with one another in a way that is useful to you.
Following this, processing the empirical cases also took time. I started with one subject, the thoughts of the George W. Bush administration. This required that I read every one of Bush’s public statements during his first four-year term, plus many other speeches, writings, and documents related to the War on Terror. After I wrapped up my initial analysis of the case in the form of a doctoral dissertation, I wanted to go further. I sought a model of sorts that I could apply (or “generalize”) to understand human thinking beyond one case. To cut a long story short, I tried to wrap my head around the European witch-hunt case, which involved going through a couple of hundred books written by an entirely new group of scholars. The Mao-era case study turned out to be a much more complex research project than I had thought, and it took me a couple of years to process.
I think a driving motivation as I read through the cases was that I really wanted to “do justice to the cases.” Even when I was just getting into them, I had an inkling that something was amiss. The cases could be processed better if that something were taken into account. That something was finally identified as concerning complex cognition: how humans truly think or could be pushed to think.
CC: What philosophies are the basis of the book?
GC: Social constructionism and dialectical theory would be two “philosophies” that come to mind. I am glad that you have asked that, because I can only reference some of these ideas vaguely in the book in order to maintain readers’ interest.
In relation to social constructionism, the book fundamentally posits that “social realities”—like a witchcraft epidemic, a revolutionary project, or an external threat—are significantly constructed by human practices. When we are inside social realities, they look to us like real natural realities, and the realness is reinforced by people and institutions. But if we look very closely, these realities are always constructed by humans playing an active role in the assemblage: the uses of language, symbols, information, cognitive procedures, and sometimes material reality. This book explores how humans process these elements and co-determine the shape of what is socially real. They do not create ideas merely for entertainment, like we sometimes do. They create and reinforce ideas—sometimes competitively, using bloody means—that are meant to be taken as real representations of the world. My mentor Hugh Mehan repeatedly called these processes the “politics of representation.” I will not go into depth in discussing how this approach borrows from and intersects with the traditions of phenomenology, pragmatism, hermeneutics, philosophy of language, etc., but knowledgeable readers can see the traces of these traditions.
Regarding dialectical theory, discourse and interactional processes to varying extents engage contradictions that exist between “the world” and the ideas. In the book, people can see the surprising ways in which a powerful idea system, activated by people, seems to effectively “resolve” the many contradictions. But these systems contain contradictions of their own, which do not become apparent until they undergo a whole set of events and processes. When their developmental potentials are exhausted and the contradictions become too hard to handle, another new idea system (or a heavily modified form of an idea system) may appear that handles the new contradictions better. The original idea system may then dwindle and wane, becoming a dormant repertoire in the cultural system, waiting to reemerge another day.
CC: What is misinformation and why is it dangerous?
GC: Imagine the most well-meaning human beings or the brightest minds you know. They can still get things wrong if they are given wrong or imperfect information.
In the book, I sometimes use terms that are less crude, such as “filtered information.” I consider this misinformation. Sometimes the social system blocks out a whole range of complex information, allowing only a limited range to be reached and considered. Even if an individual piece of information is technically right, on the whole it is misinformation.
Why is misinformation dangerous? There are many reasons for this.
First, misinformation feeds into many other missteps in thinking. For example, if you have the wrong information to start with, even if you are completely logical in your reasoning later on, you can still be wrong. With the wrong set of information, you can reach the most wrongful conclusion.
Second, misinformation is organized in a way that makes it much more difficult to see the truth. Suppose you are wrong, but there is so much misinformation out there “proving” that you are right that it is very difficult to change your mind, because individuals cannot disprove all that information—you may only do that with a leap of faith. Even if you are not just any individual but in fact control a major institution, it may take enormous work and effort to dispel misinformation, if it can be done at all. You are sometimes talking about challenging many authorities and institutions.
Third, and the major point: misinformation has a multiplying dynamic. Even a small amount of wrong or filtered information, especially when it appears in a particular place, can bridge people’s imaginative and speculative gaps. This is how rumors work. You may not have enough evidence for anything, but you have a huge amount of less-than-solid evidence in some places, which intermixes with information that is solid. Mix this pool of information with everyday exaggeration and verbal distortion, and you can easily affirm the construction of a “social reality” that we talked about. Conversely, if the misinformation is absent at a certain critical juncture, this productive dynamic would be significantly thwarted. The book talks not only about how much misinformation matters in terms of the danger it carries, but also about how the exact contexts (time and place) in which it appears can matter significantly.
CC: What is an idea system?
GC: Here is the full definition of an idea system in the book. An idea system is “a set of ideas, more or less loosely organized, that are related to one another as a system and, as such, will have a distinct set of superordinate operational principles and dynamics that cannot be reduced to the individual ideas or other smaller elements that constitute it” (p. 12).
Let’s say that you and I think: “1+1 = 2.” This is an idea, but it involves more than just the idea. A whole system of stuff goes into producing it: an entire system of symbols, a complete logical system, a history of experience, and so forth.
People in a position of power often know what idea systems are, especially if they have to design and manage them. If you run a family, a subcultural group, a church, a summer program, a major company, a nation-state, a religious denomination, etc., you are likely to know intuitively the difference between the individual ideas in them and the whole apparatus that creates, sustains, and thwarts ideas. You may need a whole system of codes, information processing, preexisting knowledge, and most likely rules and sanctions.
The book concludes with a metaphor: an idea system is a sort of cognitive and informational matrix, echoing the metaphor associated with the film series The Matrix. An idea system is not a mechanical system; its functions are not entirely fixed or pre-determined, not even by the designers or manipulators. Human capacities to transform and create meaning are enormous; people can make new things happen within an established idea system, sometimes using processes borrowed from other idea systems. They can sometimes corrupt or hijack an idea system; this may be liberating but can also be incredibly dangerous.
CC: Could you explain the title?
GC: The book is entitled Revolution and Witchcraft: The Code of Ideology in Unsettled Times. The name of this book is not intended to denigrate revolution or witchcraft beliefs and activities. In the context of this work, “revolution” and “witchcraft” represent two typical ways of creating ideas.
One way is called an empirical or evidentially driven mode of codification. This means that you start with the attitude that some natural facts already exist in reality, and then you try to use empirical information and cognitive processes to discover these facts about the already established reality. The notion of witchcraft in early modern Europe represents this mode of codification in the book. People had an idea that the Devil and witches existed and did scary and abhorrent things, and they tried to locate and process layers of empirical evidence that indicated witchcraft’s existence and specified its characteristics. The merits of ideas are judged by their match with the already existing reality.
Another way is called an ideationally driven mode of codification. This means that you start with knowing that certain idealistic conditions and visions have not yet been realized in the current world, but that the ideals are inherently “just” and “true.” You try to modify the world so that the “just” and “true” ideals can come into being. To do this, you take on the ideals as a kind of precept for looking at the existing conditions of the world, matching them against the ideals. The notion of the Communist Revolution in China represents this mode of codification. People knew the ideal to be largely valid and true, and they tried to transform existing conditions according to the visions. To carry out their tasks efficiently, they thus established chains of ideas, concepts, typologies, etc. to process information and generate data. They might not have known the empirical details necessary to make the ideals come into being, a considerable number of errors might have been made, and the visions might have needed revision, but the point is to create ideas that transform the world in a “better” (more idealistic) direction. The merits of ideas are judged by their usefulness in making that match happen.
By the way, I have been encouraged to make these two modes of codification clearer via visual means. So I have created and posted a set of conceptual diagrams on codeofideology.com; readers can check them out under the “For Instructor” section.
The subtitle “the code of ideology in unsettled times” invites readers interested in ideologically driven movements, policies, and consequences to examine cognition in detail. The word “code” in the book has a specific meaning; it is tied to a computer programming metaphor, but it can be simply defined as a “conventionalized symbol.” Codification in the book basically stands for the thinking processes that lead to idea creation, but with attention paid to how information and (conventionalized) symbols are made to interact in one’s mind. On the one hand, you have a computer program; on the other, users of the program. Part of the thinking process is influenced by how things are programmed, even though subjects who have learned coding have an advantage over users who haven’t in activating the program for their own uses; they can even go beyond simply activating it and actually start to “hack” the program. Sometimes the system maintains its integrity, sometimes it changes significantly, and sometimes it can wreak havoc. The phrase “unsettled times” alludes to situations of chaos in which an idea system undergoes instability.
CC: What is similar between the War on Terror, Mao, and the Witchcraft trials? Why did you pick those?
GC: The three cases all contain an elaborate idea system, even if their contents, setup, and temporal contexts are radically different. But because they are different, they make a good starting point to create some “generalizable” observations about idea systems.
Another similarity is that these are all idea systems that have been responsible for extreme actions and behaviors. Yet another is this: they all carry a stereotype, i.e., that the social actors were crazy, impulsive, or irrational in one way or another.
I have explained the chronological order in which I picked the cases: first Bush, then the witch-hunts, then the Mao-era chaos. I am not sure if I can fully explain why I picked them and not others. I was drawn to accounts and behaviors that fascinated me. It took me a long time to make coherent sense of what the Bush Administration was doing in terms of building and using ideological constructs. But I knew the project was not complete, because when I initially looked at the witch-hunt materials I was not quite able to apply my insights to them. Likewise, when I first had the Mao-era materials in front of me, I could not apply my cumulative insights to them. I sensed that there were differences in terms of the processes that each case brings out, but for a long time I struggled to articulate what those differences were. It was only much later, when I came up with a theory and a coherent language to talk about idea systems, that I was able to classify them into three ideal-typical categories (evidential, ideational, and hybrid codifications). Then, the project in some sense became “complete”—by which I mean complete enough to form closure; I now feel like I have a coherent theory that could be applied to a wide range of cases.
Lastly—and this is a bit of a tangent—I think by the end of my study I felt that those who depicted the social actions simplistically, perhaps with a subjective attitude, should really read the cases with care; the abyss we gaze at can really gaze back into us. The cases may speak to common human conditions that we all possess. In fact, there is a tendency for people to process these cases as a whole “blob/lump” of mixtures of things that vaguely associate with one another. Thus, they are using a rather simplistic cognitive model to look at complex cognition, which is ironic.
CC: What is human cognition and why is that important re: misinformation?
GC: I would distinguish cognition from perception. Suppose you look at a picture of yourself with several friends in high school, and then have many thoughts about it. That the picture is a coherent object would be your perception; the many thoughts that you generate would be cognition. Simply put, cognition involves more work in the “consciousness,” so to speak.
Furthermore, cognition may also be differentiated from processes in the so-called unconscious domain. Say, after you look at that picture, nostalgic feelings and complex emotions rush over you; these outcomes could happen without your being able to explain them. There are gray and intersecting areas, although this issue is not important in my book.
Cognition, perception, and unconscious processes (e.g., emotive processes) each have their importance in the making of ideas. Manipulation of each of these processes involves a different kind of activity. Some people may even argue that ideological manipulators play with the perceptive and unconscious stuff a lot more. But I have focused more on cognition—especially complex rather than basic cognition—because I feel that people underestimate and undertheorize the intellectual activities that take place. People use their “full” intellect to make and shape ideas that in turn activate the “will.” Through cognition, people can create very long chains of ideas inside the social system, leading to very precise actions. Elaborate designs of cognitive pathways can lay the groundwork for very fast-and-easy cognitive, emotive, and perceptive processing. If we only have weak theories and examples in complex cognition but very rich theories and examples in the unconscious, emotive, motivational domains, then we can form the wrong impressions about why things turn out the way they do.
CC: American politics seems to be driven by ideology rather than policy or the good of the people; how/why is that possible, and how can we break out of the pattern?
GC: Not just American politics. In general, I think people—especially powerful people—have become much better at using and propagating ideologies—in terms of both simple and complex forms. This change compounds with technological revolutions; the dwindling of older, more-functional institutions; and—as worded by my friend and beta-reader of the book Noah Faingold—an ethos for “shortcuts.”
Speaking of shortcuts, people would take shortcuts only because there are shortcuts. “Ideology” provides a way for people to make clear sense of things quickly, and if you know the code and have the power to generate or control information, it does not really take much work to create just the right ideas—or a package of materials with which people can create the ideas you intend them to create—good enough to draw votes, make your stance clear, and so on.
Also, in sociology we sometimes use the word “field,” referring to competition. Ideas have become a “field”—meaning that if you do not create ideas faster than others, there are consequences: the ideas of others will get in your way. There is pressure to create ideas quickly to beat others.
How can we break out of the pattern? If by “we” you mean ordinary folk, I think there are only small things we can do (see my responses to your other questions). I think breaking out of a pattern takes many significant institutional reforms, and I am not sure the preconditions for this kind of major reform are present right now. Historically, major crises also create the conditions for changes—changes that mend some of the fundamental conditions that created the crisis. But until we get to that point, prospects of major transformation may be difficult to see.
CC: How can we know truth in an era of “filtered” or “modified” information that is essentially deceptive?
GC: Let’s flip the role a bit. If you only give other people filtered and modified information—and you basically monopolize access to information, having social authorities backing you—how can other people figure out the “truth” from what you give them?
I think the recipients most likely cannot know the truth under such a circumstance. What they can sometimes discern is whether there are cracks, contradictions, and alternative information. By cracks, I am thinking of an analogy something like this (modified from Jean Baudrillard’s Simulacra and Simulation and also the movie Eternal Sunshine of the Spotless Mind): you are looking at something that looks like the sky meeting the sea. Then, somehow, in the middle there are some cracks or holes where the sky is supposed to be, and if you get closer and look at them carefully, you think you are seeing a set of cameras filming you, and that the “sky” is a canvas. Although only a few cracks or holes contain alternative information, it is significant enough to make you question the current taken-for-granted reality. Maybe it is possible to tear open the canvas. But then the scenario is transformed into one in which you have access to non-deceptive information.
CC: How can we get people thinking better if information is unreliable?
GC: There are several general points that I think are particularly relevant to the age we live in. One is: a lot of information is not the truth, and lots and lots of unreliable information does not help. I think there is a change of mentality that comes with having a lot of information at one’s disposal. People think that they can see the truth when they have a lot of information. Going back to the canvas analogy earlier: we used to live in a time when the canvas portrait was not as realistic, and there were probably a lot more cracks and tears. As information technology keeps improving, the canvas image of the sky becomes increasingly realistic, and the cracks and tears become fewer and fewer. So people need to know that the general condition has changed. Because things look like they are true, it is important to pay close attention to glitches—or perhaps to incompatibilities among multiple “truths.” The previous game was that there was not enough information to substantiate a reality, so having a lot of information could prove something, even if the proof was highly incomplete. In today’s game, when some kind of proof can be generated for anything, people need to make a conscious effort to explore the weird glitches in the idea system—e.g., if the canvas image is true, why would this set of cameras suddenly appear in sight?
Beyond this point, a few famous investors have some words of wisdom, I think. Ray Dalio has called for “radical open-mindedness” in gathering and processing information. Warren Buffett and Charlie Munger are famous for advocating that people stay within certain “circles of competence” and not overreach. Perhaps my favorite is a quote attributed to Benjamin Graham: “You can get in way more trouble with a good idea than a bad idea....” We can think about what they represent separately in terms of their class positions and politics. But I think they have these neat maxims because the investing world is filled with a constant flow of unreliable information. These are good maxims for people in general (beyond the purpose of investing), and they encourage people to adopt a different kind of mentality.
Lastly, educationally, I think we need a new kind of curriculum that trains people in “cognitive literacy,” which also ties to social ethics and personal development. I had a section on this in an earlier draft of the book, but I did not publish it because it was not fully fleshed out. In essence, people need to be trained to experience multiple ways of thinking and to be placed in challenging thinking and situational scenarios, including scenarios involving unreliable information, practical consequences, peer pressure, etc. They need a set of terminologies that would allow them to dissect the specifics and talk about their analyses with one another. They also need a set of scenarios that would connect them to the intensity of real-world experiences. Revolution and Witchcraft was originally written with that kind of broad-based educational purpose in mind (at the urging of humanistic educator Prof. John McNeil), though the final product may be too difficult for most educators. Still, the book does a lot of the legwork of putting together complex scenarios and a set of terminologies. Scholars and educators may still be able to integrate this book into their activities in some ways.
CC: How do we create better systems for society if every old system has the seeds of the one before it?
GC: I hope I am interpreting your question right.
Personally, I think we have to make sure that the system we know to be “better” actually works. Most of us only have the power to design radically new systems that operate on a small scale. Even so, there are all kinds of unintended consequences, because our plans are likely to have gaps and flaws. Think about the bugs that come with any new computer program. Some flaws or limitations emerge only when conditions change—say, a new computer virus invented by a competitor, a bug that can be exploited, etc. Any good program (“better system”) requires honest and sometimes tedious work of experimentation, troubleshooting, and improvement—and even with these, many times we may have to abandon new ideas and revert to “the old,” at least temporarily.
By the “seeds of the old system,” I suppose you mean a core part of an old system that is tied to certain ills. Sometimes you can eliminate these selected parts; sometimes you can only work with them or around them. Suppose one day the honest work you have put in yields some fruit and you have a new thing that is good enough; then you may think about eliminating the part that you don’t want. Again, most of us only have the power to create small-scale systems, but such a system can be a “prototype” of sorts.
Lastly, always remember that many systems work better in some local environments than in others. So we have to be open to the possibility that the “better systems” may not be applicable to all environments.
CC: How can we use the advice in your book to educate people better?
GC: What I gave was not so much prescriptive advice. Rather, I wanted people to understand in detail the programming of ideas and how it affects what we see. Many people think in big blobs/lumps of things and assume that little errors do not matter. I am showing them, in different ways, that they matter and how they matter—and that major problems are caused by little errors, some of which are systematically produced. Perhaps what I was hoping for was that the book would stimulate cultural transformation in some ways, particularly in terms of “cognitive culture.”
In the book, I mentioned that fairmindedness can go a long way. I cited Chung Yung, or the “doctrine of the mean,” as a high standard of fairmindedness; but I believe every common person—well, at least the majority of people—has a fairminded voice inside that can be evoked. Even attempting to be consistently (not selectively) fairminded is very difficult in some situations. (I cannot say that I can practice it.) You may have to be willing to go against beliefs popular in society and the information presented to you; you have to be willing to admit fault and listen to others; you may have to adopt a different way of thinking. But my book’s advice is exactly that fairmindedness is very important in our age. It is important for people who are educated and resourceful and people who aren’t. It is important in big and small amounts. It matters.
Questions
Readers are welcome to submit questions to the author. The author may edit some questions and organize them into a question-and-answer section on this website.