Cross-Domain Knowledge

I’m a huge fan of cross-domain knowledge. Coming from an academic background in philosophy, I feel my greatest strategy for building a career is leaning hard into knowledge and skills learned in one domain or context and applying them in an entirely different area. You get a real confidence boost when you spot patterns that map analogous cases onto each other.

The first time I truly appreciated this was in my days working for the university gambling lab. We were recruiting slot machine players into our study to measure the effects that properties of the machine’s user interface had on gamblers’ cognitive awareness. In other words, did the way the graphics and sounds played on the screen help gamblers understand their relative wins and losses over time? In one study, the simulation participants played during the trial had been modified, but the wrong version of the software was copied onto some of the laptops, and we didn’t realize the mistake until the end of the day. Of the three laptops, two had the right software and one did not. At the end of each session, we uploaded the user data to a secure repository and deleted the local files, which meant that once we were back in the lab, there was no way of knowing which batch of participant files came from the defective software.

We thankfully caught the issue early and limited the damage, but afterwards we still had to figure out which files to exclude from analysis. On the face of it, there was no way of knowing from a participant’s biometric data which simulation they had used. So instead we had to dig into the debug files spat out by each machine to verify whether the simulation had run correctly.

All the files were generated in XML format; however, I had no experience with basic coding or with reading XML files. I had to figure out a way of showing whether the version of the software on a given machine was correct. To me, the XML files were largely gibberish.

But I was able to spot a pattern in the files that reminded me of my formal logic courses from undergrad. While I did poorly in those courses at the time, I retained some of the strategies taught for understanding the syntax of formal logic arguments, specifically how nested arguments worked and how assumptions were communicated. I started to see the same structure in the XML: how nested and doubly nested elements called different files into the program, and where those files were being drawn from.

And there it was. At the bottom of one of the debug files was a list of the files being called on by the simulation. In the broken simulation, the file path to a certain sound that was meant to be played was empty, meaning that when the simulation was supposed to play an auditory cue, there was no file name to look for, and so the simulation simply moved on.

I compared this with the files we knew came from the working simulators and saw that this was the main difference, giving us the key for finding the bad data points and justifiably excluding them from the overall data set. By finding this, I saved an entire day’s worth of data files (a cost savings that covered some thirty participant files, participant remuneration, three research assistants’ wages, per diem costs, travel, and consumable materials on site).
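
If I were facing the same problem today with a bit of coding under my belt, that check could be scripted rather than eyeballed. Here is a minimal sketch of the idea in Python; the element and attribute names (asset, type, path) and the debug_logs folder are hypothetical stand-ins, since I no longer have the lab’s actual debug schema:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def has_missing_audio_asset(debug_file: Path) -> bool:
    """Return True if any audio asset entry in the debug file has an empty path."""
    root = ET.parse(debug_file).getroot()
    for asset in root.iter("asset"):  # hypothetical tag for each file the simulation loads
        if asset.get("type") == "audio" and not (asset.get("path") or "").strip():
            return True  # the auditory cue points at nothing: the broken build
    return False

if __name__ == "__main__":
    # Hypothetical folder of per-session debug files copied from the laptops.
    for f in sorted(Path("debug_logs").glob("*.xml")):
        status = "EXCLUDE (broken build)" if has_missing_audio_asset(f) else "keep"
        print(f"{f.name}: {status}")
```

The point is less the code than the structure-spotting: once you know the debug files list their assets at the bottom, the whole check reduces to one comparison.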

I grant that computer programming is entirely built on the foundations of formal logic and mathematics, so it’s not that I was gaining a unique insight into the problem by bringing knowledge from one separate domain into another. However, this was one of the first times I encountered a problem where I lacked the traditional knowledge and skill to address it, so I came at the problem from another angle. It was a case where I gained confidence in my ability to be resourceful and tap into previous learning to address novel problems.

As I noted above, being trained in academic philosophy has pushed me in this direction of career development. On a superficial level, relying on cross-domain knowledge is a career survival strategy because philosophy doesn’t always teach you skills that are easily applicable to the working world. I have, sadly, never once had to use my understanding of Plato’s arguments in my workplace. But on a deeper level, I think training in philosophy naturally pushes you into this kind of problem-solving. Most of my experience in philosophy involves approaching a thought experiment or line of thinking, considering what it’s trying to tell us, then testing those arguments against counterfactuals and alternative arguments or explanations. To do this well, you have to reduce a problem down to its constitutive parts to tease out the relevant intuitions, then test them, often by porting those intuitions from one context into another to see if they still hold as both valid and sound.

It’s not all that dissimilar to the processes used by engineers or designers to gather data and accurately define the problem they intend to design for. Whereas the engineer applies the tools they’ve been taught fairly linearly to create a design for the problem, my strategy is to use cross-domain knowledge to make connections that might not previously have been apparent. The problem could often be solved more quickly or efficiently if I had the relevant domain knowledge (e.g., an understanding of coding); however, when I lack the specific experience to address it, as a generalist thinker I have to rely on analogical thinking and a wider exposure to ideas to suss out those connections. What I lack in a direct approach, I make up for in novelty and creative, divergent thinking, which has the benefit of sometimes opening up new opportunities to explore.

Stay Awesome,

Ryan

What I Read in 2020

Here we are at the dawning of a new year, which for me means it’s time to post an update on my reading over the last year. You can see my previous lists here: 2019, 2018, 2017, and 2016. It’s hard to believe this is my fifth reading list!

| # | Title | Author | Date Completed | Pages |
|---|-------|--------|----------------|-------|
| 1 | Creative Calling | Chase Jarvis | 22-Jan | 304 |
| 2 | The Age of Surveillance Capitalism | Shoshana Zuboff | 25-Jan | 704 |
| 3 | Animal Farm | George Orwell | 27-Jan | 112 |
| 4 | Alexander Hamilton | Ron Chernow | 02-Feb | 818 |
| 5 | Range | David Epstein | 12-Feb | 352 |
| 6 | The Bookshop on the Corner | Jenny Colgan | 29-Feb | 384 |
| 7 | Call Sign Chaos | Jim Mattis | 12-Mar | 320 |
| 8 | The Hitchhiker’s Guide to the Galaxy | Douglas Adams | 19-Mar | 208 |
| 9 | The Alchemist | Paulo Coelho | 22-Mar | 208 |
| 10 | Guns, Germs, and Steel | Jared Diamond | 06-Apr | 496 |
| 11 | Upstream | Dan Heath | 16-May | 320 |
| 12 | Symposium | Plato | 18-May | 144 |
| 13 | Gulliver’s Travels | Jonathan Swift | 25-May | 432 |
| 14 | Anything You Want | Derek Sivers | 11-Jun | 96 |
| 15 | Extreme Ownership | Jocko Willink & Leif Babin | 18-Jun | 384 |
| 16 | The Code. The Evaluation. The Protocols | Jocko Willink | 23-Jun | 93 |
| 17 | How Will You Measure Your Life | Clayton M. Christensen | 28-Jun | 236 |
| 18 | The Last Wish | Andrzej Sapkowski | 05-Jul | 384 |
| 19 | The Expectant Father | Armin A. Brott & Jennifer Ash | 06-Jul | 336 |
| 20 | The Coaching Habit | Michael Bungay Stanier | 14-Jul | 234 |
| 21 | The Immortal Life of Henrietta Lacks | Rebecca Skloot | 23-Jul | 400 |
| 22 | Working | Robert A. Caro | 08-Sep | 240 |
| 23 | Crime and Punishment | Fyodor Dostoyevsky | 15-Sep | 544 |
| 24 | Every Tool’s A Hammer | Adam Savage | 18-Sep | 320 |
| 25 | Love Sense | Dr. Sue Johnson | 20-Sep | 352 |
| 26 | Natural | Alan Levinovitz | 22-Sep | 264 |
| 27 | The Kite Runner | Khaled Hosseini | 06-Oct | 363 |
| 28 | My Own Words | Ruth Bader Ginsburg | 10-Oct | 400 |
| 29 | Kitchen Confidential | Anthony Bourdain | 20-Oct | 384 |
| 30 | Stillness is the Key | Ryan Holiday | 06-Nov | 288 |
| 31 | The Oxford Inklings | Colin Duriez | 07-Nov | 276 |
| 32 | The Infinite Game | Simon Sinek | 14-Nov | 272 |
| 33 | The Ride of a Lifetime | Robert Iger | 21-Nov | 272 |
| 34 | As a Man Thinketh & From Poverty to Power | James Allen | 26-Nov | 182 |
| 35 | Medium Raw | Anthony Bourdain | 06-Dec | 320 |
| 36 | A Christmas Carol | Charles Dickens | 06-Dec | 112 |
| 37 | The Little Book of Hygge | Meik Wiking | 12-Dec | 288 |
| 38 | Nicomachean Ethics | Aristotle | 30-Dec | 400 |
| | Total | | | 12242 |

Overall, I’m happy with how the year went for reading. In reviewing the list, a few things stood out to me. First, I surpassed my 2019 total by 13 books. While we can certainly have a discussion about the merits and issues of using the number of books read as a key performance indicator of comprehension or progress, it was nice to see that I stepped things up a bit. I was fairly consistent in making my way through the books, with only a dip in April (likely because of the life adjustment that came with working from home) and the silence from mid-July to the start of September thanks to the birth of our son in early August.
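
For the curious, that pacing claim is easy to check by tallying the completion dates from the table above. Here is a quick sketch in Python, with the dates copied as they appear in the list:

```python
from collections import Counter

# Completion dates from the table above ("DD-Mon" format, all in 2020).
dates = [
    "22-Jan", "25-Jan", "27-Jan", "02-Feb", "12-Feb", "29-Feb", "12-Mar",
    "19-Mar", "22-Mar", "06-Apr", "16-May", "18-May", "25-May", "11-Jun",
    "18-Jun", "23-Jun", "28-Jun", "05-Jul", "06-Jul", "14-Jul", "23-Jul",
    "08-Sep", "15-Sep", "18-Sep", "20-Sep", "22-Sep", "06-Oct", "10-Oct",
    "20-Oct", "06-Nov", "07-Nov", "14-Nov", "21-Nov", "26-Nov", "06-Dec",
    "06-Dec", "12-Dec", "30-Dec",
]

counts = Counter(d.split("-")[1] for d in dates)
for month in ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]:
    print(f"{month}: {'#' * counts[month]} ({counts[month]})")
```

The output shows the single April finish and the empty August right around our son’s birth.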

I’m also happy to see that I read fewer self-help and business books last year and instead dove into more fiction, memoirs, and books about history. In my previous roundup, I had commented about wanting to be more intentional with my reading after feeling burnt out on certain genres of books.

One significant change in my reading habits this past year was that I joined a reading group/book club. A friend organized it just as things went into lockdown in March. We meet online every few weeks to discuss books selected in a rotation by the group. I commented earlier that I read 13 more books this year than last, and I’d say the book club was the single biggest reason for the boost in completions (we cleared 12 titles by year’s end). Here are the books that we read:

  1. Call Sign Chaos by Jim Mattis
  2. Symposium by Plato
  3. Gulliver’s Travels by Jonathan Swift
  4. How Will You Measure Your Life by Clayton M. Christensen
  5. The Immortal Life of Henrietta Lacks by Rebecca Skloot
  6. Crime and Punishment by Fyodor Dostoyevsky
  7. The Kite Runner by Khaled Hosseini
  8. Kitchen Confidential by Anthony Bourdain
  9. The Oxford Inklings by Colin Duriez
  10. As a Man Thinketh & From Poverty to Power by James Allen
  11. A Christmas Carol by Charles Dickens
  12. Nicomachean Ethics by Aristotle (finished in the final days of the year, though we haven’t met to discuss it yet)

I’d normally create a separate post about my top reads for the year, but I’ll include it here for simplicity. In chronological order of when I finished, my top 5 reads of the year are:

  1. Alexander Hamilton by Ron Chernow (among my top reads ever; I was fortunate to see the stage play before the shutdown in March)
  2. Call Sign Chaos by Jim Mattis (the first book I chose for the book club; I was struck by how Mattis talks about self-education and reflection)
  3. The Expectant Father by Armin A. Brott & Jennifer Ash (since we were expecting this year, this book was a nice roadmap to know what to expect, and it provided some comfort along the way)
  4. The Immortal Life of Henrietta Lacks by Rebecca Skloot (I recommend everyone read this book; it reminds me of the important work we do on the research ethics boards I sit on, and why we must be critical of research)
  5. My Own Words by Ruth Bader Ginsburg (I started this collection of writings and speeches before RBG died, and after finishing it I was sadly reminded of what we lost with her death).

This was a pretty good year for reading. It felt good to get lost in more fiction, and I’ll have things to say in the future about the value I’m finding in reading as part of a group. In the meantime, Happy New Year, and it’s time to keep tackling my reading backlog.

Stay Awesome,

Ryan

Friday Round-up – August 7, 2020

This was a light week for consuming content that stuck with me, so here is the sole round-up list for the week ending on August 7th:

💭Reflection – Citing our sources – How to Think for Yourself | Ozan Varol blog post and Don’t Quote. Make it Yours and Say it Yourself | Derek Sivers blog post

The Varol piece was new, and as I read it, it reminded me of the Sivers piece, so I’m pairing them together. I’m a little conflicted about the message. On the one hand, I agree with both writers about the sentiments they are expressing. In Varol’s case, citation often becomes a stand-in for original thinking. Rather than expressing your own unique ideas, you regurgitate what you’ve consumed from others (whether you cite it or not, as is on display in the Good Will Hunting example). Likewise, Sivers is onto something when he suggests that once we’ve integrated facts into our mental apparatus, we shouldn’t need to cite our sources when the context no longer calls for it. It makes sense to cite sources when writing something that will be graded in school, but it’s stilted in informal settings.

Where I feel conflicted is when there is a need to trace ideas back to verify the content. Misinformation being thrown out into the void isn’t a new phenomenon, but it has certainly accelerated in recent years. The internet has optimized itself around three facts of human nature: we like sensation, we like what is familiar (what accords with what we already believe), and we are less critical of our in-group. Therefore, information bubbles get set up online, creating a digital environment that’s conducive to the rapid spread of memetic viruses. When you think about it, it’s a marvelous analogy: the online information bubble is a closed environment of like-minded people, which serves as a rough analogue of an immune system. A memetic virus latches onto one person, who spreads it to people in their network. Since the folks in the network share similar belief structures, the homogeneous group quickly spreads the meme throughout the information bubble. The meme is then incorporated into individuals’ belief networks through repetition and exposure that confirms their biases. It writes itself into the belief web, much the same way viruses incorporate themselves into DNA.
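
To make the analogy a little more concrete, here is a toy contagion sketch. It is not a model of any real platform; the network sizes and probabilities are made up purely to illustrate how quickly an idea can saturate a dense, receptive cluster compared with a sparser, more skeptical one:

```python
import random

random.seed(42)

def spread(n_people: int, p_connected: float, p_accept: float, steps: int = 10) -> int:
    """Toy contagion: return how many people hold the meme after a number of rounds."""
    # Random network: each person is connected to each other person with probability p_connected.
    network = {i: [j for j in range(n_people) if j != i and random.random() < p_connected]
               for i in range(n_people)}
    believers = {0}  # patient zero
    for _ in range(steps):
        newly_convinced = set()
        for person in believers:
            for contact in network[person]:
                # Like-minded contacts accept the meme more readily (higher p_accept).
                if contact not in believers and random.random() < p_accept:
                    newly_convinced.add(contact)
        believers |= newly_convinced
    return len(believers)

# Dense, like-minded bubble vs. a sparser, more critical network (made-up parameters).
print("information bubble:", spread(n_people=100, p_connected=0.2, p_accept=0.6), "believers")
print("mixed network:     ", spread(n_people=100, p_connected=0.05, p_accept=0.2), "believers")
```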

I’m using the example of a memetic virus, but I think this framework applies equally to more benign examples. Scientists release findings in the form of news releases ahead of peer review, which get amplified and distorted by the media, and then amplified and distorted again through information bubbles. See here for an example:

At each phase, part of the signal is lost or transformed, like a social media game of telephone. When one person in the chain misunderstands the data, that impacts how the idea gets replicated. Over time, it becomes the digital version of a cancerous mutation of the base information.

This is why it’s important that we take care with how information is communicated, because as soon as you print something like “the majority of people believe x” or “studies showed a y% decrease in the effect” without proper context for what the data is saying (or its limitations), that claim gets incorporated into people’s webs of belief. If you are part of the population that already believes something and you read that information, it reinforces your prior beliefs and you carry on replicating the idea.

And so I’m torn. On the one hand, I shouldn’t need to cite my sources in a casual conversation (à la Sivers), and I shouldn’t substitute the ideas of others for original thoughts (à la Varol); but at the same time, I want the things I encounter to be verifiable and scrutable. I don’t know what the solution is, other than to flag it and remind myself to be wary of absolutist language.

Stay Awesome,

Ryan

Vigilance and the Price of Progress

I recently joined a book club, and last week we met virtually to discuss The Immortal Life of Henrietta Lacks by Rebecca Skloot.

The book has been circling my periphery for some time, coming up in recommended-reads lists for at least a year. When it came time for me to suggest the next read, I chose this book without really knowing much about the subject. I was vaguely aware that Henrietta Lacks’s cells were instrumental to many scientific and medical advances, and that the cells were likely obtained unethically, as was the case for many Black Americans who found themselves under medical scrutiny in the middle of the last century. Since I review applications for two research ethics boards, and because of the ongoing conversation around Black lives, I thought this would be a good book for us to read and learn from.

In short, the book is fantastic as a piece of writing.

But the story of Henrietta Lacks and her family is heartbreaking. The book paints a vivid portrait of who Henrietta was and gives intimate glimpses into the lives of her descendants. It also presents a comprehensive history of both the rise of research ethics since the end of World War Two and the many advances made by science thanks to Henrietta’s cells. However, those advances were made with cells acquired without proper consent or compensation. For many years after her early death, Henrietta’s name was lost to obscurity outside of her family, but everyone in the cellular biology community knew her cells because of how abundant they were. In a tragic twist, the very medical advances that gave way to better understandings of radiation, viruses, and vaccines were often not available to the impoverished Lacks family. While the Lackses remained stuck in poverty, others profited.

I highly recommend everyone read this book.

As we discussed the book last week, I realized that this was an example of why it’s important to enlarge the domain of one’s ignorance. Learning about history shouldn’t be an exercise in theory; often we forget that history is presented as an abstraction away from the stories of individual people. If we forget about their individual lives, we can sometimes take the wrong lessons from history. As the saying goes, those who don’t learn from history are doomed to repeat it. In this case, we continue to exploit the voiceless, and profit on the backs of the disenfranchised – those who don’t have the power to speak back.

Reading books like this gives me greater context for history, and it helps me understand people’s lived history. I review research projects to understand the ethical consequences of our search for knowledge. If I lack a historical context – the history of how research was and is carried out – then I run the risk of perpetuating the same injustices on the people of today that the research is meant to help.

Research is supposed to be dispassionate, but we must understand and situate it within its proper historical context.

In an allusion to Picard, I close with this: constant vigilance is the price we must pay for progress.

Stay Awesome,

Ryan

Friday Round-up – May 1, 2020

Note – this is an experimental posting format. I’ve thought about increasing the number of posts I commit to per week, but I don’t want to add unnecessary work if I’m not willing to stick it out. Let’s be honest: sometimes it’s really hard to get out a single post each Monday that I’m satisfied with, so increasing my posting frequency just for the sake of increasing my output is a terrible idea. I will run a short experiment to see how easy it is for me to get out a Friday Round-up for the next month. If the experiment goes well, I’ll consider making it part of the regular rotation. You can find the first round-up post, from April 24th, here.

Have you ever noticed that when you’re thinking about a topic, you seem to notice it everywhere? I first became aware of this back in my university days, when stuff I was learning in lectures seemingly popped up at random outside of class. It turns out there is a name for that feeling: the Baader-Meinhof phenomenon, also known as the frequency illusion. It’s why you start to see your car’s model everywhere after buying one. I bring this up because today’s articles are all loosely connected to scientific literacy in the digital age (especially as it relates to COVID). The more I read about how to understand research on the pandemic, the more content I noticed about scientific literacy in general. This might be the phenomenon (or my bias) at play, or maybe the algorithms that govern my feeds are really in tune with my concerns.

Here is my round-up list for the week ending on May 1st:

📖Article – What You Need to Know about Interpreting COVID-19 Research | The Toronto Star

My round-up for the week started with this short article, which had been open in one of my browser tabs since last week. There is a lot of information floating around in our respective feeds, and most of it can charitably be called inconclusive (and some of it is just bad or false). We’ve suddenly all become “experts” in epidemiology over the last month, and I want to remind myself that just because I think I’m smart doesn’t mean I have the context or experience to understand what I’m reading. So, this article kicked off some light reflection on scientific and data literacy in our media landscape.

📖Article – Experts Demolish Studies Suggesting COVID-19 is No Worse Than Flu | Ars Technica

This next piece pairs nicely with the first link, and includes reporting and discussion of recent flare-ups on Twitter criticizing recent studies. Absent the pressure being applied by the pandemic, what this article describes is something that normally takes place within academic circles: experts putting out positions that are critiqued by their peers (sometimes respectfully, sometimes rudely). Because of the toll the pandemic is exacting on us, these disagreements are likely more heated and come across as more personally driven. I link this article not to cast doubt on the validity of the scientific and medical communities. Rather, I’m linking to it to highlight that even our experts are having difficulty grappling with these issues, so it’s foolish to think we lay people will fare any better in understanding the situation. Therefore, it’s incumbent on journalists to be extra vigilant in how they report data, and to question the data they encounter.

📽Video – Claire Wardle: Why Do We Fall for Misinformation | NPR/TED

The Ars Technica piece raises a lot of complex things that we should be mindful of. There are questions such as:

  • Who should we count as authoritative sources of information?
  • How do we determine what an authoritative source of information should be?
  • What role does a platform like Twitter play in disseminating research beyond the scientific community?
  • How much legitimacy should we place on Twitter conversations vs. gated communities and publication arbiters?
  • How do we detangle policy decisions, economics, political motives, and egos?
  • How much editorial enforcement should we expect or demand from our news sources?

There are lots of really smart people who think about these things, and I’m lucky to study at their feet via social media and the internet. But even if we settle on answers to some of the above questions, we also have to engage with a fundamental truth about the human condition: we are really bad at sorting good information from bad at scale. Thankfully, people like Claire Wardle and her organization FirstDraft are working on this problem, because if we can’t fix the signal-to-noise ratio, having smart people fix important problems won’t amount to much if we either don’t know about their work or can’t act on their findings. I was put onto Claire Wardle’s work through an email newsletter from the Centre for Humane Technology this week, where they highlighted a recent podcast episode with her (I haven’t had time to listen as of writing, but I have it queued up: Episode 14: Stranger Than Fiction).

📖 Essay – On Bullshit | Harry Frankfurt

All of this discussion about knowledge and our sources of it brought me back to grad school and a course I took on the philosophy of Harry Frankfurt, specifically his 1986 essay On Bullshit. Frankfurt, seemingly prescient of our times, distinguishes between liars and bullshitters. A liar knows the truth and seeks to hide it from the person they are trying to persuade. Bullshit as a speech act, on the other hand, only seeks to persuade, irrespective of truth. If you don’t want to read the essay linked above, here is the Wikipedia page.

I hope you find something of value in this week’s round-up and that you are keeping safe.

Stay Awesome,

Ryan

Appealing to my Smarmy Brain

Photo by Nicolas Picard on Unsplash

From time to time, I catch myself thinking some pretty stupid stuff for entirely dumb reasons. A piece of information finds a way to bypass any critical thinking faculties I proudly think I possess and worms its way into my belief web. Almost like a virus, which is a great segue.

A perfect example of this happened last week in relation to the COVID-19 news, and I thought it important to share here, both as an exercise in humility to remind myself that I should not think myself above falling for false information, and as my contribution to correcting misinformation floating around the web.

Through a friend’s Stories on Instagram, I saw the following screencap from Twitter:

My immediate thought was to nod my head in approval and take some smug satisfaction that of course I’m smart enough to already know this is true.

Thankfully, some small part at the back of my brain immediately raised a red flag and called for a timeout to review the facts. I’m so glad that unconscious part was there.

It said to me, “Hang on… is hand-sanitizer ‘anti-bacterial’?”

I mean, yes, technically it is. But is it “anti-bacterial” in the way the tweet implies? The way the information is framed, it treats the hand-sanitizer’s anti-bacterial properties as exclusively what it was designed for, like an antibiotic. For example, you can’t take antibiotics for the cold or flu, because those are viral infections, not bacterial ones.

Rather than leaving this belief untested, I jumped on ye ol’ Googles to find out more. I found a write-up in the National Center for Biotechnology Information discussing alcohol sanitizers.

According to the author on the topic of alcohol-based hand sanitizers (ABHS),

A study published in 2017 in the Journal of Infectious Diseases evaluated the virucidal activity of ABHS against re-emerging viral pathogens, such as Ebola virus, Zika virus (ZIKV), severe acute respiratory syndrome coronavirus (SARS-CoV), and Middle East respiratory syndrome coronavirus (MERS-CoV) and determined that they and other enveloped viruses could be efficiently inactivated by both WHO formulations I and II (ethanol-based and isopropanol-based respectively). This further supports the use of ABHS in healthcare systems and viral outbreak situations.

There are some special cases where ABHS are not effective against some kinds of non-enveloped viruses (e.g. norovirus), but for the purposes of what is happening around the world, ABHS are effective. It is also the case that the main precaution to protect yourself is to thoroughly wash your hands with soap and water, and follow other safety precautions as prescribed.

The tweet, while right about the need to wash our hands and not overly rely on hand-sanitizers, is factually wrong on the whole. Thanks to a mix of accurate information (bacteria =/= viruses) and inaccurate information (“hand sanitizer is not anti-bacterial”), packaged in a way that appeals to my “I’m smarter than you” personality, I nearly fell for its memetic misinformation.

There are a number of lessons I’ve taken from this experience:

  1. My network is not immune to false beliefs, so I must still guard against accepting information based on in-group status.
  2. Misinformation that closely resembles true facts will tap into my confirmation bias.
  3. I’m more likely to agree with statements that are coded with smarmy or condescending tonality because it carries greater transmission weight in online discourse.
  4. Appeals to authority (science) resonate with me – because this was coming from a scientist who is tired of misinformation (I, too, am tired of misinformation), I’m more likely to agree with something that sounds like something I believe.
  5. Just because someone says they are a scientist, doesn’t make the status true, nor does it mean what they are saying is automatically right.
  6. Even if the person is factually a scientist, if they are speaking outside of their primary domain, being a scientist does not confer special epistemological status.
  7. In the aftermath, the tweet was pulled and the person tried to correct the misinformation, but the incident highlights that the norms of Twitter (and social media more broadly) are entirely antithetical to nuance and contextual understanding.

It’s interesting how much information spread (memetics) resembles pathogen spread. If the harmful thing attacking us is sufficiently designed to sidestep our defenses, whether that’s the body’s immune system or our critical thinking faculties, the invader can easily integrate itself, establish a place within our web, and prepare to spread.

The one thing that really bums me out about this event is the inadvertent harm that comes to scientific authority. We as a society are caught in a period of intense distrust of the establishment that is coinciding with the largest explosion of information our species has ever seen. The result of this is not that good information is scarce, but rather the signal-to-noise ratio is so imbalanced that good information is getting swept away in the tide. If people grow distrustful of the sources of information that will help protect us, then forget worrying about gatekeepers that keep knowledge hidden; there will be no one left to listen.

Stay Awesome,

Ryan

“The Same Fifteen Academic Studies”

Despite my rant a few weeks back on the podcast-book marketing relationship, there are a few authors I will check out when they appear on podcasts I’m subscribed to.  For instance, Ryan Holiday just released his new book and is out promoting it to various podcast audiences.

He appeared on Rolf Potts’s podcast, Deviate, and had a conversation about what it’s like to write a big idea book.  Towards the end of the episode, he makes an off-hand remark about getting ideas from recently published books, and how he chooses not to do this because it tends to result in recycling the same academic studies.  Given how much I rant about animated bibliographies and short-term content bias, I was happy to see some convergence in our ideas – that the thinking behind my amateur attempts at commenting on culture is shared by people I admire and hold in regard.

I’ve transcribed his remarks below, but you can go here to listen to the episode yourself.  Holiday’s comments begin at the 50:37 mark:

Potts: And that’s why you should feel blessed not to be an academic, right, because that’s such a useful model from which to write a book.  The academic world has these different hoops to jump through that often aren’t as useful.  And I would think that, sometimes, there’s types of research, like, do you do much Google or Wikipedia research, or is it mostly books?

Holiday: Yeah, I mean, you have to be careful, obviously, relying on Wikipedia, but yeah you do wanna go get facts here and there, and you gotta check stuff out.  I like to use obituaries.  Let’s say I’m writing about a modern person and they’ve died.  New York Times obituaries, Washington Post obituaries often have lots of really interesting stuff.   Then you can be really confident that dates and places and names are all correct because they’ve been properly fact-checked.  So I like to do stuff like that.  I watch documentaries from time to time.  In this book, there wasn’t really a great book about Marina Abramović, but there was some really great New York Times reporting about her Artist is Present exhibit, and there’s also a documentary with the same name.  So I’m willing to get stuff from anywhere provided I believe it’s verified or accurate, but you can’t be choosy about where your stuff comes from.  And in fact, if you’re only drawing from the best selling books of the last couple of years, just as an aside, an example, I find when I read a lot of big idea business books, it feels like they’re all relying on the same fifteen academic studies.  It’s “the will power” experiment, and “the paradox of choice” experiment, and the “Stanford Prison” experiment!  They think it’s new because it’s new to them, but if they’d read a little bit more widely in their own space, they’d realize that they’d be better off going a bit deeper or treading on some newer ground.

*****
(Note: I’ve lightly edited the transcript to remove filler words and some idiolects).

Stay Awesome,

Ryan

The Value of Probing Assumptions

I was on a consultation call a few weeks ago about an ethics application.  The project was seeking feedback from participants about access to specific mental health information, and in my feedback to their application, I noted that their demographic question concerning the gender of the participant was probably too narrow.  The applicant asked for some advice on how to address the comment.

On the one hand, they considered dropping the question because (a) it didn’t obviously connect to their research question, and (b) this branch of mental health was already well studied in terms of incidence rates along the sex dimension, so they might not learn anything new by asking for the participant’s gender or sex. On the other hand, if they left the question in, they had to decide whether sex or gender should be its focus. Since the mental health topic they were researching was a medical condition, it seemed like (biological) sex was the more salient feature, whereas my feedback suggested that if they chose gender, they would need to ensure the question was inclusive.

While discussing the implications on the phone, I tried to tease out what the purpose of the study was.  Their study was collecting qualitative information about how people access information.  In the context of the demographic information, they weren’t seeking to know how a person’s sex/gender relates to the condition itself.  But, I wondered aloud, it seemed the purpose of the study was to understand how people seek information, which could arguably be influenced by one’s culture, behaviour, socialization, and experience of how the world treats them.  Seen that way, you would want to focus less on a person’s physiology; instead, you might discover interesting differences in how people seek information based on their life experiences.

The applicant noted that they started the phone call intending to drop the question from the survey, and through my line of questions and probes, they were convinced to keep the question and modify it to be more inclusive.

I am not telling this story as a normative push on how we should conduct inquiry (though by reading between the lines, you should get a sense of how I feel about the topic).  Instead, I share this story as an example of why posing good questions is important to remove ambiguity and clarify thought.  One of the goals of our ethics board when we review applications is to make implied premises explicit so that we can be sure of what we take as a given when we set out to study a research question.  We often default to accepted practice and proceed with common tools, but sometimes we don’t think carefully through the implications of what using those tools means.  By leveraging my outsider status, I have an opportunity to get the applicants to explain concepts and lines of reasoning without assuming I share the same understanding of the material that they do.  This helps to spot those areas where the project is weakened by unsupported claims and assumptions.

Stay Awesome,

Ryan