The Varol piece was new, and as I read it, it reminded me of the Sivers piece, so I’m pairing them together. I’m a little conflicted about the message. On the one hand, I agree with both writers about the sentiments they are expressing. In Varol’s case, citation often becomes a short-hand for original thinking. Rather than expressing your own unique ideas, you regurgitate what you’ve consumed from others (whether you are citing it or not, as is on display in the Good Will Hunting example). Likewise, Sivers is on to something when he suggests that integrating facts into our mental apparatus should not require us to cite our sources when it’s no longer the appropriate context. It makes sense to cite sources when writing something that will be graded in school, but it is stilted in informal settings.
Where I feel conflicted is when there is a need to trace ideas back to verify the content. I don’t think it’s a new phenomenon, but the rate at which misinformation is thrown out into the void has certainly accelerated in recent years. The internet has optimized itself around three facts of human nature – we like sensation, we like things that are familiar (that accord with what we already believe), and we are less critical of our in-group. Therefore, information bubbles get set up online, which creates a digital environment conducive to the rapid spread of memetic viruses. When you think about it, it’s a marvelous analogy: the online information bubble is a closed environment of like-minded people, which amounts to a rough analogue of an immune system. A memetic virus latches onto one person, who spreads it to people in their network. Since the folks in the network share similar belief structures, the homogeneous group quickly spreads the meme throughout the information bubble. The meme is then incorporated into the belief network of the individuals through repetition and confirmation-biased exposure. It writes itself into the belief web, in the same way viruses incorporate themselves into DNA.
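As a back-of-the-envelope illustration (entirely my own toy model – the numbers and the adoption rule are made up, not drawn from any study), you can sketch how belief homogeneity accelerates the spread of a meme:

```python
import random

random.seed(1)

def spread(n_people, homophily, steps=10):
    """Toy meme spread: each person has a belief alignment in [0, 1].

    A 'meme' aligned at 1.0 starts with one carrier; at each step every
    carrier exposes one random contact, who adopts the meme with a
    probability equal to their own alignment. `homophily` sets how
    like-minded the population is (closer to 1.0 = everyone agrees).
    Returns the fraction of the population carrying the meme.
    """
    alignments = [random.uniform(homophily, 1.0) for _ in range(n_people)]
    carriers = {0}
    for _ in range(steps):
        for person in list(carriers):
            contact = random.randrange(n_people)
            if random.random() < alignments[contact]:
                carriers.add(contact)
    return len(carriers) / n_people

# A homogeneous bubble tends to saturate far faster than a mixed population.
print(f"bubble: {spread(200, homophily=0.9):.0%} carrying the meme")
print(f"mixed:  {spread(200, homophily=0.0):.0%} carrying the meme")
```

The point of the sketch is just the mechanism: when everyone’s alignment is already high, each exposure almost always succeeds, so the meme compounds through the bubble in a handful of steps.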
I’m using the example of a memetic virus, but I think this framework applies equally to more benign examples. Scientists release findings in the form of news releases ahead of peer review, which get amplified and distorted by the media, and then amplified and distorted again through information bubbles. See here for an example:
At each phase, part of the signal is lost or transformed, like a social media game of telephone. When one person in the chain misunderstands the data, that impacts how the idea gets replicated. Over time, it becomes the digital version of a cancerous mutation of the base information.
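To make the signal-loss idea concrete, here is a toy sketch of the telephone-game dynamic (my own illustration with invented parameters, not drawn from any of the linked pieces):

```python
import random

random.seed(42)

def telephone(value, hops, error_rate=0.2, distortion=0.3):
    """Pass a numeric 'finding' down a chain of re-tellers.

    At each hop, with probability `error_rate`, the re-teller misreads
    the figure and scales it by up to `distortion` in either direction.
    Returns the figure as reported at each link in the chain.
    """
    history = [value]
    for _ in range(hops):
        if random.random() < error_rate:
            value *= 1 + random.uniform(-distortion, distortion)
        history.append(value)
    return history

chain = telephone(50.0, hops=10)
print(f"original figure:     {chain[0]:.1f}")
print(f"after 10 re-tellings: {chain[-1]:.1f}")
```

Even with a modest per-hop error rate, the mutations compound: the figure that emerges from the far end of the chain can bear little resemblance to the original data.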
This is why it’s important that we take care with how information is communicated, because as soon as you print something like “the majority of people believe x,” or “studies showed a y% decrease in the effect,” without proper context for what the data is saying (or its limitations), it gets incorporated into people’s webs of belief. If you are part of the population that already believes something and you read that information, it reinforces your prior beliefs and you continue on replicating the idea.
And so I’m torn. On the one hand, I shouldn’t need to cite my sources when having a casual conversation (à la Sivers), and I shouldn’t be substituting the ideas of others for original thoughts (à la Varol), but at the same time, I want it to be the case that when I encounter a claim, it is verifiable and scrutable. I don’t know what the solution is, other than to flag it and remind myself to be wary of absolutist language.
I recently joined a book club, and last week we met virtually to discuss The Immortal Life of Henrietta Lacks by Rebecca Skloot.
The book has been circling my periphery for some time, coming up in recommended-reads lists for at least a year. When it came time for me to suggest the next read, I chose this book without really knowing much about the subject. I was vaguely aware that Henrietta Lacks’s cells were instrumental to many scientific and medical advances, and that the cells were likely obtained unethically, as was the case for many Black Americans who found themselves under medical scrutiny in the middle of the last century. Since I review research ethics applications on two ethics boards I serve on, and because of the ongoing conversation around Black lives, I thought this would be a good book for us to read and learn from.
In short, the book is fantastic as a piece of writing.
But the story of Henrietta Lacks and her family is heartbreaking. The book paints a vivid portrait of who Henrietta was, and gives intimate glimpses into the lives of her descendants. It also presents a comprehensive history of both the rise of research ethics since the end of World War Two and the many advances science made thanks to Henrietta’s cells. However, those advances were built on cells acquired without proper consent or compensation. For many years after her early death, Henrietta’s name was lost to obscurity outside of her family, yet everyone in the cellular biology community knew her cells because of how abundant they were. In a tragic twist, the very medical advances that gave way to better understandings of radiation, viruses, and vaccines were often not available to the impoverished Lacks family. While the Lackses remained stuck in poverty, others profited.
I highly recommend everyone read this book.
As we discussed the book last week, I realized that this was an example of why it’s important to enlarge the domain of one’s ignorance. Learning about history shouldn’t be an exercise in theory; often we forget that history is presented as an abstraction away from the stories of individual people. If we forget about their individual lives, we can sometimes take the wrong lessons from history. As the saying goes, those who don’t learn from history are doomed to repeat it. In this case, we continue to exploit the voiceless, and profit on the backs of the disenfranchised – those who don’t have the power to speak back.
Reading books like this gives me a greater context for history, and it helps me understand the lived-history of people. I review research projects to understand the ethical consequences of our search for knowledge. If I lack a historical context – the history of how research was and is carried out – then I run the risk of perpetuating the same injustices on the people of today that the research is meant to help.
Research is supposed to be dispassionate, but we must understand and situate it within its proper historical context.
In an allusion to Picard, I close with this: constant vigilance is the price we must pay for progress.
As with many other people right now, I have chosen to go back and re-watch favourite television shows. I decided that with Star Trek: Picard’s recent release, it would be a great time to go back to the beginning (of the modern era, anyway) and revisit Star Trek: The Next Generation. I had probably watched every episode in my teen years, but I had always watched it in syndication, so this is my first time going through the show in order.
Approaching the series in my 30s has been a real treat. I have more life and cultural experience to draw upon as I watch these incredibly well-written episodes play out. I knew the show was amazing, but I never appreciated how well it engages with moral issues.
I want to highlight one excellent episode from the third season – episode 7, “The Enemy.” The characters provide us with a moral issue about autonomy, and a good lesson in leadership.
The story centres on the conflict that arises when the protagonists rescue an enemy officer from an out-of-bounds planet. The officer, from a race of people called Romulans, is gravely wounded and requires a blood transfusion. There is only one member of the crew whose blood could be usable, but that crew member, Worf, has a history with the enemy’s people – Worf’s parents were killed during a Romulan attack when he was a child. Worf, still carrying his anger over their deaths all these years, refuses to give his blood.
Meanwhile, a Romulan ship is en route to recover the officer. A tenuous peace treaty prevents an all-out war, but the Romulans have a history of subterfuge and deceit. It is believed they will cross the border and assume an antagonistic stance to provoke a war. Worf’s captain, Jean-Luc Picard, is seeking any means of avoiding an armed encounter, and decides to plead with Worf to reconsider his decision.
In this moment, it would be expedient for Picard and his crew to order Worf to donate his blood. Picard is about to contend with an adversary who has no issue with breaking a peace treaty by provoking an attack (whether or not his side is initially in the wrong). Picard is seeking to recover a still-stranded crew member on the planet below, keep his ship safe, maintain the territorial sovereignty of the Federation, and maintain tenuous diplomatic relations with a rival power. All of this is threatened because the one solution to his problem – keeping the enemy officer alive – is being blocked by a crew member whose personal history and honour motivate him not to help the enemy.
There is a beautiful scene where Picard appeals to Worf for him to reconsider:
Picard: So, there is no question that the Romulan officer is more valuable to us alive than dead.
Worf: I understand.
Picard: Lieutenant, sometimes the moral obligations of command are less than clear. I have to weigh the good of the many against the needs of the individual and try to balance them as realistically as possible. God knows, I don’t always succeed.
Worf: I have not had cause to complain, Captain.
Picard: Oh, Lieutenant, you wouldn’t complain even if you had cause.
Worf: If you order me to agree to the transfusion, I will obey of course.
Picard: I don’t want to order you. But I ask you, I beg you, to volunteer.
Worf: I cannot.
In silence, Picard slowly walks back around his desk and sits in his chair.
Picard: Lieutenant.
Worf: Sir?
Picard: That will be all.
We then learn from the ship’s Chief Medical Officer that the Romulan has died. Picard has lost the only bargaining chip he had to keep things peaceful with the approaching enemy ship.
Picard could have chosen to order Worf to allow the blood transfusion. Instead, he chooses to respect his crew member’s personal wish and, as a leader, deal with the hand he’s dealt. He also knows that making an order against the personal rights of a crew member under his command sets a dangerous precedent – that anyone is disposable if the captain judges it so. Instead, he accepts that this closes off options. He knows that this places him not just on the back foot, but with his arms tied behind his back as he prepares for the possibility that his ship will be destroyed. However, the burden of command requires him to take these realities as they come and make the best decisions he can. Events beyond his control are being shaped around him, but he strives to do what is right. He’s not perfect, but he becomes a role model in striving to do the right thing.
Even if the right thing might mean his own death and the death of his crew.
It’s a wonderful piece of science fiction that I’m glad to be discovering anew.
Note – this is an experimental posting format. I’ve thought about increasing the number of posts I commit to per week, but I don’t want to add unnecessary work if I’m not willing to stick it out. Let’s be honest: sometimes it’s really hard to get a single post out each Monday that I’m satisfied with, so increasing my posting frequency just for the sake of increasing my output is a terrible idea. I will run a short experiment to see how easy it is for me to get out a Friday Round-up for the next month. If the experiment goes well, I’ll consider making it a part of the regular rotation. You can find the first round-up post here from April 24th.
Have you ever noticed that when you’re thinking about a topic, you seem to notice it everywhere? I first became aware of this back in my university days, when stuff I was learning in lectures seemingly popped up randomly in my non-class time. Turns out, there is a word for that feeling – the Baader-Meinhof phenomenon, also known as the frequency illusion. It’s why you start to see your car’s model everywhere after buying one. I bring this up because today’s articles are all loosely connected with scientific literacy in the digital age (especially as it relates to COVID). The more I read about how to understand research on the pandemic, the more content I noticed about scientific literacy in general. This might be the phenomenon/my bias at play, or maybe the algorithms that govern my feeds are really in tune with my concerns.
Here is my round-up list for the week ending on May 1st:
My round-up for the week started with this short article that was open in one of my browser tabs since last week. There is a lot of information floating around in our respective feeds, and most of it can charitably be called inconclusive (and some of it is just bad or false). We’ve suddenly all become “experts” in epidemiology over the last month, and I want to remind myself that just because I think I’m smart, doesn’t mean I have the context or experience to understand what I’m reading. So, this article kicked off some light reflection on scientific and data literacy in our media landscape.
This next piece pairs nicely with our first link, and includes reporting and discussion of recent flare-ups on Twitter criticizing recent studies. Absent the pressure being applied by the pandemic, what this article describes is something that normally takes place within academic circles – experts putting out positions that are critiqued by their peers (sometimes respectfully, sometimes rudely). Because of the toll the pandemic is exacting on us, these disagreements are likely more heated, and are taken to be more personally driven. I link this article not to cast doubt on the validity of the scientific and medical communities. Rather, I am linking to it to highlight that even our experts are having difficulty grappling with these issues, so it’s foolish to think we lay-people will fare any better in understanding the situation. Therefore, it’s incumbent on journalists to be extra vigilant in how they report data, and to question the data they encounter.
The Ars Technica piece raises a lot of complex things that we should be mindful of. There are questions such as:
Who should we count as authoritative sources of information?
How do we determine what an authoritative source of information should be?
What role does a platform like Twitter play in disseminating research beyond the scientific community?
How much legitimacy should we place on Twitter conversations vs. gated communities and publication arbiters?
How do we detangle policy decisions, economics, political motives, and egos?
How much editorial enforcement should we expect or demand from our news sources?
There are lots of really smart people who think about these things, and I’m lucky to study at their feet via social media and the internet. But even if we settle on answers to some of the above questions, we also have to engage with a fundamental truth about our human condition – we are really bad at sorting good information from bad at scale. Thankfully, there are people like Claire Wardle and her organization First Draft working on this problem, because if we can’t fix the signal-to-noise ratio, having smart people solve important problems won’t amount to much if we either don’t know about their work or can’t act on their findings. I was put onto Claire Wardle’s work through an email newsletter from the Center for Humane Technology this week, where they highlighted a recent podcast episode with her (I haven’t had time to listen to it as of writing, but I have it queued up: Episode 14: Stranger Than Fiction).
All of this discussion about knowledge and our sources of it brought me back to grad school and a course I took on the philosophy of Harry Frankfurt, specifically his 1986 essay On Bullshit. Frankfurt, seemingly prescient of our times, distinguishes between liars and bullshitters. A liar knows a truth and seeks to hide the truth from the person they are trying to persuade. Bullshit as a speech act, on the other hand, only seeks to persuade, irrespective of truth. If you don’t want to read the essay linked above, here is the Wikipedia page.
I hope you find something of value in this week’s round-up and that you are keeping safe.
My wife and I are fortunate to be able to work from home. I started working from home Wednesday of last week, and initially found the transition to be manageable. Thanks to my wife’s discipline, we were keeping our normal sleep schedules, and we are able to maintain our normal working routine from the safety of home.
One thing that really threw me this weekend was when I allowed myself the permission to completely relax my schedule. Because of the pandemic, a lot of my pressing obligations and scheduled social time have been put on hold, meaning I have more free time than is typical. During the work week last week, I kept up with work and tending to things around the house, but by the weekend I decided to jump into playing video games, reading, and podcast listening. I also enjoyed a few beers Friday and Saturday nights while exploring the Borderlands, so going to bed before midnight was quickly forgotten.
This had two interesting consequences. First, by Sunday I physically felt bad. Not sick, but my body felt sluggish, I was tired, I had a headache, and my motivation was sapped. While I didn’t drink to excess, I did wonder aloud to my wife if what I was feeling was a low-grade hangover from my general poor choices over the weekend.
The (amusing) second consequence was that my wife remarked that she now sees why I impose schedules and routines on my daily life. She normally encourages me to relax and play games, but this is the first time in a long time that she has seen me proactively partake in games. It was a little like a crash-and-burn by Sunday night – I don’t do moderation very well and her mock-horror was a good reminder of that.
In reflecting on the last few days of the self-isolation, I have learned that it’s important for me to keep regular routines and impose discipline on my otherwise chaotic whims. I’ve known this about myself for some time, but this weekend helped to reinforce why I prefer keeping routines and habits active. Like a child, I crave structure.
I apologize for the late post this week. I had a few ideas kicking around in my head, but given the updates, I felt this ramble-post was a better attempt to capture some of the zeitgeist, rather than my usual attempt to feign some sort of authority on whatever it is I’m trying to accomplish on this site. Maybe I’ll rant another time about the scummy people who are profiteering through the COVID-19 scare.
Most of the information circulating concerns how individuals can protect themselves from contracting the virus. Obviously this information is spread not to protect any one individual, but as the government’s attempt to flatten the curve and ease the economic and public-health downsides of people’s current behaviours, from clogging up emergency rooms with the sniffles to wholesale runs on grocery store items.
I’m not entirely sure what I should write about this week. It’s pretty hard to form a coherent thought when the majority of my bandwidth is occupied with keeping up with the shifting narrative around what’s going on. Thanks to technology, information (or misinformation) spreads quickly, and we are seeing multiple updates per day as a result. At my place of employment, they took the unprecedented step of shutting down face-to-face curriculum delivery. Unlike the faculty strike from Fall 2017, the College is working to keep the educational process running. While it may be that in the School of Engineering it can be impossible to replicate lab or shop time, the majority of faculty are working hard to translate their delivery to an online format.
So far, our employer has done a good job, in my opinion, with taking prudent steps to a.) keep people meaningfully occupied in their work so that no one has to lose their salary, and b.) do its part to stop the spread of the virus. I’m not saying that things couldn’t be better, but given the circumstances they’ve done a good job.
I’ve been thinking about the purpose of social isolation as a pandemic response. As I said above, the point is less about protecting oneself and is instead about protecting hospitals from being overwhelmed. If we’ve learned anything from countries around the world that are going through the worst right now, it’s that it becomes impossible to protect our vulnerable when there is a shortage of hospital beds. Hospitals are having to triage patients to focus on saving those who can be saved, who have the highest chances of recovering.
It is because of this that I’ve been thinking about the concept of a “brother’s keeper.” It’s not necessarily enough that governments or citizens remain mindful about the well-being of our vulnerable populations. Oftentimes while we are focusing on immediate dangers before us, we tend to not anticipate higher-order consequences of our policies or decisions.
Closing schools is great in theory – children are rabid spreaders of contagions, whether they are symptomatic or not, which means they infect their parents (some of whom are front-line medical workers). But when we close schools, there are second-order consequences: parents struggle with childcare, and children living in poverty lose access to the food supplied at school.
When you close borders, you stop carriers of the virus from entering. But it also means that our international students (who are in some cases being vacated from post-secondary residences as schools work to limit social contact among students) have nowhere to go. Airports are limiting international travel and the cost of tickets is skyrocketing. Stranded in a foreign country, these students are vulnerable and caught in difficult positions with no clear way to keep themselves safe.
By shutting down public spaces, you are helping to keep people from accidentally infecting each other. But when you close down businesses such as restaurants, you cut off people from the economic means they need to support themselves. Sure, the government is offering assistance to persons and businesses alike, but that will provide little comfort to people who can neither travel for groceries, nor pay for the supplies they need.
And let’s not forget what panic purchasing is doing to our supply chain – leaving store shelves cleared out of supplies, which means folks like the elderly are left without.
The hardest part I’m finding in all of this is the feeling of being powerless. You can’t control other people, and so you are forced to anticipate their moves to ensure you won’t be left without. But it’s this kind of thinking that leads to more drastic measures being taken. The virus also makes you feel powerless because you feel like an invisible stalker is coming for you – you don’t know who will be the final vector that leads to you. And you aren’t totally sure if our ritualistic hand-washing and hand-sanitizing is actually keeping us safe, or merely providing comfort. You can’t predict the future, and you can’t be sure you’re doing everything you can; you always feel like there is more you could be doing.
This reminds me of the story of the tinfoil house and pink dragons. A person covers their house in tinfoil, and when asked about it they say it keeps the pink dragons away. When asked if it works, the person shrugs and says “I don’t know, but I haven’t been attacked yet.” Of course, asking “if it works” is the wrong question here because there are no pink dragons. But as Taleb tells us in his book about Black Swans, there are always those highly improbable events with massive downsides that we don’t see coming. Public policy and budgets are created to deal with clear and present dangers, and those policies and budgets are eroded when it’s felt that the money is not being allocated optimally. Therefore, you run into the problem of never being sure whether the resources you spend on prevention actually work – it’s really hard to prove causality for something that never happens.
Instead, we are often left scrambling to get ahead of trouble when we are already flat-footed, which means our vision narrows as we focus on the fires in front of us that need to be put out. Fighting fires is great (even heroic at times), but often the measures we take to deal with a crisis have unanticipated second-order consequences that become difficult to deal with.
I’m not sure how to deal with this, but it makes me wonder about being my brother’s keeper, and what I can do to protect them.
From time to time, I catch myself thinking some pretty stupid stuff for entirely dumb reasons. A piece of information finds a way to bypass any critical thinking faculties I proudly think I possess and worms its way into my belief web. Almost like a virus, which is a great segue.
A perfect example of this happened last week in relation to the COVID-19 news, and I thought it important to share here, both as an exercise in humility to remind myself that I should not think myself above falling for false information, and as my contribution to correcting misinformation floating around the web.
Through a friend’s Stories on Instagram, I saw the following screencap from Twitter:
My immediate thought was to nod my head in approval and take some smug satisfaction that of course I’m smart enough to already know this is true.
Thankfully, some small part at the back of my brain immediately raised a red flag and called for a timeout to review the facts. I’m so glad that unconscious part was there.
It said to me “Hang on… is hand-sanitizer ‘anti-bacterial’?”
I mean, yes, technically it is. But is it “anti-bacterial” in the way the tweet implies? The framing treats the hand-sanitizer’s anti-bacterial properties as exclusively what it was designed for, like antibiotics. For example, you can’t take antibiotics for the cold or flu, because those are not bacterial infections but viral infections.
According to the author on the topic of alcohol-based hand sanitizers (ABHS),
There are some special cases where ABHS are not effective against some kinds of non-enveloped viruses (e.g. norovirus), but for the purposes of what is happening around the world, ABHS are effective. It is also the case that the main precaution to protect yourself is to thoroughly wash your hands with soap and water, and follow other safety precautions as prescribed.
The tweet, while right about the need for us to wash our hands and not over-rely on hand-sanitizers, is generally wrong on the facts. Thanks to a mix of accurate information (bacteria =/= virus) and inaccurate information (“hand sanitizer is not anti-bacterial”), and packaging that appeals to my “I’m smarter than you” personality, I nearly fell for its memetic misinformation.
There are a number of lessons I’ve taken from this experience:
My network is not immune to false beliefs, so I must still guard against accepting information based on in-group status.
Misinformation that closely resembles true facts will tap into my confirmation bias.
I’m more likely to agree with statements coded in a smarmy or condescending tone, which carries greater transmission weight in online discourse.
Appeals to authority (science) resonate with me – because this was coming from a scientist who is tired of misinformation (I, too, am tired of misinformation), I’m more likely to agree with something that sounds like something I believe.
Just because someone says they are a scientist doesn’t make it true, nor does it mean what they are saying is automatically right.
Even if the person is factually a scientist, if they are speaking outside of their primary domain, being a scientist does not confer special epistemological status.
In the aftermath, the tweet was pulled and the person tried to correct the misinformation, but the incident highlights that the norms of Twitter (and social media more broadly) are entirely antithetical to nuance and contextual understanding.
It’s interesting how much information spread (memetics) resembles pathogen spreading. If the harmful thing attacking us is sufficiently designed to sidestep our defenses, whether that’s our body’s immune system or our critical thinking faculties, the invading thing can easily integrate within, establish itself within our web, and prepare to spread.
The one thing that really bums me out about this event is the inadvertent harm that comes to scientific authority. We as a society are caught in a period of intense distrust of the establishment that is coinciding with the largest explosion of information our species has ever seen. The result of this is not that good information is scarce, but rather the signal-to-noise ratio is so imbalanced that good information is getting swept away in the tide. If people grow distrustful of the sources of information that will help protect us, then forget worrying about gatekeepers that keep knowledge hidden; there will be no one left to listen.
During a throwaway thought experiment in his 1641 treatise Meditations on First Philosophy, in which the existence of God and the immortality of the soul are demonstrated, René Descartes posited the idea of an evil genius or demon that systematically deceives us to distort our understanding of the world. Contrary to first-year philosophy students everywhere (a younger version of myself included), Descartes did not actually believe in an evil manipulator holding us back from understanding the nature of the real world. Instead, he was using it as part of a larger project to radically re-conceive epistemology in an era of rapid advancements in science that threatened to overturn centuries of our understanding of the world. He felt that knowledge was built on shaky ground thanks to an over-reliance on the received authorities of Greek antiquity and the Church’s use of Aristotelian scholasticism. Similar to Francis Bacon twenty years earlier, Descartes set out to focus on knowledge that stood independent of received authority.
In the first two Meditations, Descartes considers the sources of our beliefs and how we come to know what we think we know. He wants to find an unshakable truth upon which to build all knowledge, and through an exercise of radical doubt he calls into question many of the core facts we hold: first, that knowledge gained from the senses is often in error; next, that we often can’t distinguish the real from fantasy; and finally, through the use of the evil genius, that perhaps even abstract knowledge like mathematics could be an illusion.
When I teach this to first year students, they either don’t take his concerns seriously because of the force of the impressions the real world gives us in providing sense data for knowledge (a stubbed toe in the dark seems to forcefully prove to us that the external world to our senses is very real), or they take Descartes too seriously and think Descartes really thought that a demon was actively deceiving him. Regardless of which side the student falls on, they will then conclude that Descartes’ concerns are not worth worrying about; that this mode of thinking is the product of an earlier, less sophisticated age.
Unless you are a scholar delving into Descartes’ work, the real purpose of teaching the Meditations is to provide students with a framework to understand how one can go about thinking through complex philosophical problems. Descartes starts from a position of epistemic doubt, and decided to run with it in a thought experiment to see where it took him. The thought experiment is a useful exercise to run your students through to get them to think through their received opinions and held-dogmas.
However, in light of my rant a few weeks back about informed consent and vaccines, I’ve discovered a new contemporary use for thinking about Descartes’ evil genius. In some sense, the evil genius is *real* and takes the form of fear that shortcuts our abilities to learn about the world and revise our held beliefs. Descartes posited that the evil demon was able to put ideas into our heads that made us believe things that were completely against logic. The demon was able to strip away the world beyond the senses and even cast doubt on abstract concepts like mathematics.
Much in the same way Descartes’ demon was able to “deceive” him into believing things that were contrary to the nature of reality, our fear of the unknown and of future harm can cause us to hold beliefs that do not map onto facts about the world. Worse yet, the story we tell about those facts can get warped, and new explanations can be given to account for what we are seeing. This becomes the breeding ground for conspiracy thinking, the backfire effect, and entrenched adherence to one’s beliefs. We hate to be wrong, and so we bend over backwards to contort our understanding of the facts to hold fast to our worldview.
In truth, we are all susceptible to Descartes’ demon, especially those who believe themselves to be above these kinds of faults of logic. In psychology, this is called the Dunning-Kruger effect, and all sorts of reasons are given for why people overestimate their competence. But in the context of an entrenched worldview that is susceptible to fear of the unknown lurks Descartes’ demon, ready to pounce upon us with false beliefs about the world. Its call is strong, its grip is tight, and the demon is there to lull us into tribalism. We fight against those we see as merchants of un-truth, and in a twisted sense of irony, the weapons of truth we wield only affect those already on our side, while those we seek to attack are left unaffected. It becomes a dog-whistle that calls on those who already think and believe as we do.
If we hope to combat this modern Cartesian demon, we’ll need to find a new way of reaching those we see on the other side.
In the ethics of conducting research with human participants, there is the concept of “informed consent.” At its foundation, informed consent is the process of communicating enough information about a research project to a prospective participant that they can decide whether they want to consent to taking part in the study. There is a lot of nuance that can go into selecting what gets communicated: a lot of necessary information needs to be shared, but you don’t want to share so much that the participant is overwhelmed by the volume.
When I review research ethics applications, I am privy to a lot of information about the project. In the course of reviewing the project, I have to make judgement calls about what should be included in the informed consent letters that participants read. It would be counter-productive if the participant had to read all the documentation I am required to read when reviewing an application, so we use certain best practices and principles to decide what information gets communicated as a standard, and what is left in the application.
There are, of course, some challenges that we must confront in this process. As I said, when reviewing a research project, you have to balance the needs of the project with the needs of a participant. All research, by virtue of exploring the unknown, carries with it an element of risk. When you involve humans in a research project, you are asking them to shoulder some of that risk in the name of progress. Our job as researchers and reviewers is to anticipate risk and mitigate it where possible. We are stewards of the well-being of the participants, and we use our experience and expertise to protect them.
This means that one challenge is communicating risk to participants and helping them understand the implications of the risks of the research. In many instances, participants are well aware of the risks posed to their normal, everyday lived experience and how the research intersects with it. The patient living with a medical condition is aware of their pain or suffering, and can appreciate the risks associated with medical interventions. A person living in poverty is acutely aware of what it means to live in poverty, and understands that discussing their experiences can be psychologically and emotionally difficult. Our job (as reviewers and researchers) is to ensure that the participant is made aware of the risk, to mitigate it as much as we can without compromising the integrity of the research program, and to contextualize the risk so that the participant can make choices for themselves without coercion.
The concept of informed consent is hugely important, arguably the most important component of research projects involving humans as participants. It is an acknowledgement that people are ends in themselves, not a means to furthering knowledge or the researcher’s private or professional goals. Indeed, without respect for the autonomy of the participant, research projects are unlikely to move forward even when research funds are available.
All of this is a preamble to discuss the anger I felt when I read a recent CBC report on how anti-vaxxer advocates are using the concept of informed consent as a dog-whistle to their adherents, both to further awareness of their movement and to raise money from well-meaning politicians and the public.
In fairness, I can see the chain of reasoning at play that tries to connect informed consent with concerns about vaccines. For instance, in the article there is a photo of supporters of a vaccine choice group with a banner that reads “If there is a risk there must be a choice.” This sentiment is entirely consistent with the principles of informed consent. The problem with this application is that the risk is not being communicated and understood properly within context, and instead fear, misinformation, and conspiracies that lead to paternalistic paranoia are short-cutting the conversation. Further, the incentive structures that are borne out of the economics of our medical system are doing little to address these fears. Because so little money is flowing from the government to the medical system, doctors are forced to maximize the number of patients they see in a day just to ensure enough money is coming into the practice to pay for space, equipment, staff, insurance, and supplies. Rather than seeking quality face-to-face time with a patient, doctors have to make a choice to limit patient time to just focus on a chief complaint and address questions as efficiently as they can.
I don’t think it’s all the doctors’ fault either. I think we as patients, or more specifically we as a society, have a terrible grasp of medical and scientific literacy. I don’t have a strong opinion about what the root cause of this is, but some combination of underfunded schooling, rapid technological innovation, growing income disparities, entertainment pacification, a lack of mental health support, increasingly complex life systems, and precarious economic living in the average household are all influencing the poor grasp people have of what makes the world around us work. Rather than it being the case that we are hyper-specialized in our worldviews, I think it’s the case that “life” is too complex for the average person to invest time into understanding. Let’s be clear: it is not the case that the average person isn’t smart enough to grasp it (even if sometimes my frustration with people leads me to this conclusion). Instead, I think that people are pulled in so many directions that they don’t have the time or economic freedom to deal with things that don’t immediately pay off for them. People are so fixated on just making it day-to-day and trying not to fall behind that it becomes a luxury to have the leisure time to devote to these kinds of activities.
What this results in, then, is the perfect storm of ignorance and fear that congeals into a tribal call to rebel against the paternalism of a system that is ironically also too cash-strapped to allow the flexibility to educate people on the nature of risk. People don’t have the time and ability to educate themselves, and doctors don’t have the time to share their experiences and knowledge with their patients.
Within this gap, opportunistic charlatans and sophists thrive, capitalizing on people’s fears to push their own agendas. This is why bad actors like the disgraced former doctor Andrew Wakefield and movement leader Del Bigtree are able to charge fees to profit from speaking at anti-vaccination events. I’m not saying a person who spreads a message should do it for free. What I am saying is that they are able to turn a personal profit by preying on people’s fears while doing little to investigate the thing they claim to worry about.
We must find a way to communicate two simultaneous truths:
First, there is an inherent risk in everything; bad stuff happens to good people, and you can do everything right and still lose. Nevertheless, the risks involved when it comes to vaccines are worth shouldering, because the risks themselves are vanishingly small and the net good that comes from vaccination is enormous.
Second, in the 22 years since Wakefield published his study and the 16 years since its retraction, there has not been any credible peer-reviewed evidence that supports the claims made by the anti-vaxx movement. The movement is predicated on fears people have of the probability of something bad happening to them or their loved ones. The motivation behind the fear is legitimate, but the object of the fear is a bogeyman that hides behind whatever shadows it can find as more and more light is cast on this area.
The anti-vaxx ideology knows it cannot address head-on the mounting scientific evidence that discredits its premise, and so it instead focuses on a different avenue of attack.
This bears repeating: the anti-vaxx ideology cannot debate or refute the scientific evidence about vaccination. We know vaccines work. We know how they work; we know why they work. We understand the probabilities of the risk; we know the type and magnitudes of the risks. These things are known to us. Anti-vaxx belief is a deliberate falsehood when it denies any of what we know.
Because of this, the anti-vaxx ideology is shifting to speak to those deep fears we have of the unknown, and instead of dealing with the facts of medicine, it is sinking its claws into the deep desire we have for freedom and autonomy. It shortcuts our rational experience and appeals to the fears evolution has given us to grapple with the unknown – the knee-jerk rejection of things we don’t understand.
Informed consent as a concept is the latest victim of anti-vaxx’s contagion. It’s seeping in and corrupting it from the inside, turning the very principle of self-directed autonomy against a person’s self-interest. It doesn’t cast doubt by calling the science into question. Instead, it casts doubt precisely because the average person doesn’t understand the science, and so that unknown becomes scary to us and we reject or avoid what brings us fear.
Anti-vaxx ideology is a memetic virus. In our society’s wealth, luxury, and tech-enabled friction-free lives, we have allowed this dangerous idea to gain strength. By ignoring it and ridiculing it until now, we have come to a point where it threatens to disrupt social homeostasis. Unless we do something to change the conditions we find ourselves in – unless we are willing to do the hard work – I fear that this ideology is going to replicate at a rate that we can’t stop. It will reach a critical mass, infect enough people, and threaten to undo all the hard work achieved in the past. We have already seen the evidence of this as once-eradicated diseases are popping up in our communities. The immunity and inoculations have weakened. Let’s hope those walls don’t break.