This week’s post is late. The proximal cause is the Tech Shabbat experiment: I was shutting my computer down for the weekend, and weekends are the most common time I write and prepare my posts for publishing. An unintended consequence of the Tech Shabbat, then, is that I didn’t have a post ready for Monday.
However, that is a poor excuse when we consider the distal causes of why the post is late, because the Tech Shabbat was a known event in my calendar. It wasn’t unanticipated, and I knew roughly what participating would entail. I knew, for example, that I had to get my course marking done before the Shabbat if I wanted to give my students their feedback with enough time for them to use it in their next assignment. I was always able to get my grading done before the Tech Shabbat began each week of the experiment, so why did I not do the same for the weekly blog posts?
The Tech Shabbat became a convenient excuse to blame, when really the blame lies with a poor writing habit. Maybe I would have finished the posts had I not participated in the Tech Shabbat, but instead of dwelling in a possible else-world, I should focus on fixing the things I have control over, such as my schedule and how I set my priorities.
Proximal causes are easy to fixate on, and are often more expensive to address (it’s why we spend lots of money on shiny new toys that promise to fix our problems). Distal causes can be harder to spot and require longer, steady investment to overcome.
Almost a decade ago, I co-started a semi-formal group with some friends. It was intended as a mutually beneficial arrangement – we were all just starting out in our careers and felt that getting together monthly to practice public speaking would help us in our jobs. The nature of the group has evolved somewhat now that we are having kids and have grown comfortable in our jobs. Instead of focusing on career skills, we treat the monthly meetup as both social time and a chance to share experiences with each other.
This month, we’ve been challenged to try out the Tech Shabbat as discussed in Tiffany Shlain’s book 24/6: The Power of Unplugging One Day a Week (note: I haven’t read the book). In essence, we pick one day a week to abstain from screens – no smartphones, no computers, no television. It’s not a complete removal from all technology (for instance, I use my smart speaker to stream podcast episodes and listen to live radio), but instead we seek to disconnect from an increasingly interconnected existence.
I have completed three of the Shabbats, with the final one this weekend. Overall, this has been a very positive experience for me. There have been some challenges, and moments where I had to play fast and loose with the rules (like this weekend when I got lost on a hike…).
It’s also not clear if I should abstain from using our smart speaker at home; I’ve been using it to listen to podcast episodes and radio over the internet. I’ll even admit that there are moments of boredom or tedium where I feel a strong pull to give up the challenge and open a social media app. But despite any of these missteps or moments of weakness, I can say without any qualification that I’ve enjoyed the experience. I may look forward to the close of the 24 hours, but I do so with a sense of mental calm. The break gives me a bit of a reset, a chance to journal and bring order to my life. Instead of mindlessly consuming content, I’ve chosen activities that create memories and allow me to be more present in the now.
I’m not sure if I’ll keep the Tech Shabbat once the group activity is over, but it has given me a lot to reflect on. Cal Newport has discussed taking a more hardline stance on cutting unnecessary tech out of our lives. I’m sympathetic to the idea, though in practice I have to balance my quirky experiments with my wife’s needs, and I doubt she would entertain any drastic measures like what Newport suggests. Regardless, just taking the opportunity to pause and reflect is a worthwhile activity, which the Tech Shabbat has afforded me over the month.
The Varol piece was new, and as I read it, it reminded me of the Sivers piece, so I’m pairing them together. I’m a little conflicted with the message. On the one hand, I agree with both writers about the sentiments they are expressing. In Varol’s case, citation often becomes a short-hand for original thinking. Rather than expressing your own unique ideas, you regurgitate what you’ve consumed from others (whether you are citing it or not, as is on display in the Good Will Hunting example). Likewise, Sivers is on to something when he suggests that integrating facts into our mental apparatus should not require us to cite our sources when it’s no longer the appropriate context. It makes sense to cite sources when writing something that will be graded in school, but it feels stilted in informal settings.
Where I feel conflicted is when there is a need to trace ideas back to verify the content. I don’t think it’s a new phenomenon, but it has certainly accelerated in recent years that misinformation is being thrown out into the void at a rapid pace. The internet has optimized itself on three facts of human nature – we like sensation, we like things that are familiar (things that accord with what we already believe), and we are less critical of our in-group. Therefore, information bubbles get set up online, which creates a digital environment that’s conducive to the rapid spread of memetic viruses. When you think about it, it’s a marvelous analogy: the online information bubble is a closed environment where people are like-minded, which amounts to a rough analogue of an immune system. A memetic virus latches onto one person, who spreads it to people in their network. Since the folks in the network share similar belief structures, the homogeneous group quickly spreads the meme throughout the information bubble. The meme is then incorporated into the belief networks of individuals through repetition and exposure reinforced by confirmation bias. It writes itself into the belief web, in the same way viruses incorporate themselves into DNA.
I’m using the example of a memetic virus, but I think this framework applies equally to more benign examples. Scientists release findings in the form of news releases ahead of peer review, which get amplified and distorted through the media, and then amplified and distorted again through information bubbles. See here for an example:
In this thread @mbeisen does a superb job of tracing one particular bit of bullshit back to its source which, to my surprise at least, is not in a university press release. https://t.co/DxBRBc8ADh
At each phase, part of the signal is lost or transformed, like a social media game of telephone. When one person in the chain misunderstands the data, that impacts how the idea gets replicated. Over time, it becomes the digital version of a cancerous mutation of the base information.
This is why it’s important that we take care with how information is communicated, because as soon as you print something like “the majority of people believe x,” or “studies showed a y% decrease in the effect,” without proper context for what the data is saying (or its limitations), that claim gets incorporated into people’s webs of belief. If you are part of the population that already believes something and you read that information, it reinforces your prior beliefs and you continue replicating the idea.
And so I’m torn. On the one hand, I shouldn’t need to cite my sources when having a casual conversation (a la Sivers), and I shouldn’t be substituting original thoughts with the ideas of others (a la Varol), but at the same time, I want whatever I encounter to be verifiable and scrutable. I don’t know what the solution to this is, other than to flag it and remind myself to be wary of absolutist language.
I had a realization recently: I don’t think I’ve gone camping in the last thirteen years. That might not seem like much, but when I reflect on my childhood, it was full of camping. I was in Beavers, Cubs, Scouts, Army Cadets, and the Duke of Edinburgh program. My mom also used to take my sister and me camping during the summers. If I entered Beavers at 5, and my last outdoor adventure was my trip to Kenya in 2007, then I had an almost uninterrupted period of camping that spanned 16 years. At 33, I have only recently crossed the threshold where the years of my life spent not camping outnumber the years I spent camping in youth programs.
Camping was easy when I was in youth programs – so long as you participated, it was almost a default activity. But once I left for post-secondary schooling, it fell by the wayside. Camping didn’t seem very accessible to me – I didn’t have money to spend on equipment or transportation, and I chose to spend my leisure time occupied with other things. Soon, a year became two, then five, and now more than a decade has passed.
It’s not that I haven’t thought about this. A few years back, I decided to become a paying member of the Bruce Trail with the aim of availing myself of the various sections of trail nearby (admittedly, I haven’t done it yet…). My hike along the Path of the Gods route in Italy back in October was my most recent attempt to embrace activity in nature. It was a hard route for me, and finishing it left me with an amazing sense of accomplishment.
Since then, I’ve been mulling it over in the back of my mind. The pandemic has both prevented me from attempting camping this year and given me additional time to think about being more intentional about reconnecting with the outdoors.
One of the problems, I realized, is that my idea of camping is a little skewed. Since camping in my childhood was bound up with intensive adventures, hiking and camping have been intertwined in my mind with multiple days away in the woods – several nights’ stay while travelling a few dozen kilometres a day, sometimes through mountainous terrain far from civilization. Understood this way, camping requires planning, specialized equipment, and lots of experience or paid guides. In other words, camping and hiking require a lot of time and money.
But recently, I’ve been rethinking camping in a new light. Thanks to the magic of YouTube’s algorithm, I stumbled across Steve Wallis’s channel through his video Highway Rest Area Stealth Camping, and I’ve since gone down a deep rabbit hole of his content. In short, he’s a guy out of Alberta who likes to stealth camp (short-term, low-impact camping, sometimes in areas where you aren’t allowed to camp). He will go out for a night, set up a hammock somewhere, and vlog the experience. He buys cost-effective gear from Canadian Tire and insists that camping shouldn’t be complicated or about expensive gear. Watching his vlogs, I realized he’s right: camping and hiking aren’t about long, expensive trips, and they don’t have to be an onerous undertaking.
I’ve since started looking at what kinds of opportunities I can avail myself of nearby. I’ve dug out my old camping gear to see what I’ve got in storage. I ordered an inexpensive hammock online (since I don’t own a tent) and plan to try camping in my backyard for the fun of it. I’ve also started looking at the trails nearby and got out this weekend with the dog. It was a quick jaunt near a river that took an hour and was a short drive from my house. It was a lot of fun, and I felt great afterwards.
All of this has taught me three things: first, if I want to find the time to have adventures, I have to make the time. Second, camping and hiking aren’t the purview of elite outdoors people; they can be enjoyed by anyone who wants them. Third, I should have the courage to try things out solo. It was easy when I was young and under the guidance of adults. Now that I’m an adult myself, I can’t wait around for someone to take my hand. I have to learn to rely on myself and trust that I can do it.
Stay Awesome,
Ryan
Post-script – I wanted to title this post The Accessible Outdoors, but I didn’t want to confuse the topic. I’m not talking about accessibility in the sense of barriers to people’s ability to physically enjoy the outdoors. Sadly, as I write this, I remember reading a piece online about efforts of people to make camping and hiking more accessible to persons with disabilities and persons of colour, but I can’t find the article at this time.
The book has been circling my periphery for some time, coming up in recommended reads lists for at least a year. When it came time for me to suggest the next read, I chose this book without really knowing much about the subject. I was vaguely aware that Henrietta Lacks’s cells were instrumental to many scientific and medical advances, and that the cells were likely obtained unethically, as was the case for many Black Americans who found themselves under medical scrutiny in the middle of the last century. Since I review research ethics applications for the two ethics boards I serve on, and because of the ongoing conversation around Black lives, I thought this would be a good book for us to read and learn from.
In short, the book is fantastic as a piece of writing.
But the story of Henrietta Lacks and her family is heartbreaking. The book paints a vivid portrait of who Henrietta was, and gives intimate glimpses into the lives of her descendants. It also presents a comprehensive history of both the rise of research ethics since the end of World War Two and the many advances made by science thanks to Henrietta’s cells. However, those advances were made with cells acquired without proper consent or compensation. For many years after her early death, Henrietta’s name was lost to obscurity outside of her family, yet everyone in the cellular biology community knew her cells because of how abundant they were. In a tragic twist, the very medical advances that gave way to better understandings of radiation, viruses, and vaccines were often not available to the impoverished Lacks family. While the Lacks family remained stuck in poverty, others profited.
I highly recommend everyone read this book.
As we discussed the book last week, I realized that this was an example of why it’s important to enlarge the domain of one’s ignorance. Learning about history shouldn’t be an exercise in theory; often we forget that history is presented as an abstraction away from the stories of individual people. If we forget about their individual lives, we can sometimes take the wrong lessons from history. As the saying goes, those who don’t learn from history are doomed to repeat it. In this case, we continue to exploit the voiceless, and profit on the backs of the disenfranchised – those who don’t have the power to speak back.
Reading books like this gives me a greater context for history, and it helps me understand people’s lived history. I review research projects to understand the ethical consequences of our search for knowledge. If I lack a historical context – the history of how research was and is carried out – then I run the risk of perpetuating the same injustices on the very people the research is meant to help.
Research is supposed to be dispassionate, but we must understand and situate it within its proper historical context.
In an allusion to Picard, I close with this: constant vigilance is the price we must pay for progress.
I have a vague recollection of when Madison Holleran died by suicide in 2014, though less about her as a person and more because of the conversation it sparked around mental health and how social media can portray a perfect life despite the hidden struggles of the person behind it. I’ve yet to read this book; however, as I was reflecting on this post, I realized that this isn’t a book about a famous person, yet it still stands as a monument to a life. That felt like a strange mental juxtaposition against the ongoing conversation about monuments and what we choose to remember. During a recent conversation with my grandmother, she showed me photos of friends from her past who have since passed away. For nearly every person on the planet, your legacy extends only as far as your genes and the living memories of those who knew you. And yet, sometimes we pulp trees into paper and create a monument that will be read in the future. Monuments are not accidental – they are a reflection of what we choose to remember. Madison’s life was tragically cut short, but at least she remains more than a fragile memory.
There is a lot of misinformation around the effects of wearing a mask. Here is a good quick summary. tldr: it prevents the wearer from spreading germs and it does not prevent one from breathing adequately. I’ve demonstrated this for myself by donning a non-surgical mask for the last two weeks of running on the elliptical. To date, in the 30 masked miles I’ve run (roughly 3.5 hours of exertion), I have yet to experience any symptoms of hypoxia.
Two paragraphs stood out in this post that resonated with me:
By all accounts, COVID-19 is a ridiculously bad time to graduate. It isn’t just a bizarre year from the perspective of the job market. Graduates who have a job will face an unusual first year as part of the workforce. With organizations and the people generally unprepared and dealing with multiple stressors, they’re unlikely to get the training that they need on the job.
These are moments when you realize how big a role dumb luck plays in any professional success we enjoy. It is so easy to attribute things that are going well to our smarts and hard work. But, there’s so much more to any success than that.
Reading this made me reflect on my own career to this point. I finished my undergrad in 2009, the year after the 2008 economic downturn. I was fortunate to be accepted into grad school, where I stretched a 1-year program into a 3-year experience by the time I finished writing my thesis. That put me into the formal job market at the tail end of 2012, four full years after the markets took a dive. I was lucky to enter the working world while the economy was rebounding, and I didn’t have to face the same setbacks and struggles that many of my cohort felt (that is, had I not done my 5th year “victory lap” in high school, I would have finished undergrad a year earlier with my secondary school classmates). In this, I was very fortunate that my choices lined up with good timing – something worth keeping in mind as context.
I have this bad habit of coming up with thoughts for blogs as I’m trying to sleep. I promise myself I’ll remember to jot it down in the morning – that it’s not worth staring at my screen in the darkness when sleep is so close by.
And yet, here I am – kicking myself over the n-th missed idea that never came to fruition.
Perhaps there’s not a lot I can do when inspiration strikes me other than keeping a notebook on hand to capture transient thoughts. However, if the pandemic and working from home have taught me anything about creative activities, it’s that I shouldn’t wait for inspiration to take hold; rather, inspiration should find me already hard at work in the process of making. That is to say, it’s more important that I build regular practice and development into my routines so that I increase the chances of inspiration catching me as I work.
I’m not the first person to suggest this strategy. It’s common advice from many creative folks. What’s new is that I’m seeing the advice in action in my own work: the more I write and practice, the more ideas flow out of me.
If I do this, if I do the work in between the deliverables, I suspect I’ll capture a lot more of those posts from the ether.
I spent a large chunk of my weekend grading essays from my students. Their task was to watch the movie The Road, adapted from the novel by Cormac McCarthy, and write a paper connecting the film to themes and ideas presented in the course. Given the content covered so far, I encourage students to examine the story’s protagonist and argue whether he is a good candidate to be considered a tragic hero as defined by Aristotle.
While grading papers, I mused about Aristotle’s strict criteria for what makes a tragic hero. The tragic hero must be noble and good (though not a paragon of virtue), but must possess a minor flaw of character or error in judgment (hamartia), which, when applied to circumstances, brings about some sort of downfall or negative consequence (an inevitable reversal of circumstances, or peripeteia). It’s not that the character is vicious, merely that their minor flaw is the cause of the negative outcome. However, the negative outcome must be caused by the character (and not, for instance, by the gods), and the consequences of the outcome must be in excess of the original cause. The character must also come to see that they are the reason for their suffering (anagnorisis – the move from ignorance to knowledge). In the context of a narrative telling of the story, this elicits pity and fear, and through them a purification of emotions (catharsis) for the audience.
On the one hand, Aristotle is spelling this all out as a way of formalizing and categorizing types of art (Aristotle was a philosopher and biologist by vocation). He might even have written this down as a set of guidelines for critiquing plays, a way of pointing out what makes some plays good and others not.
But I had another thought. Aristotle’s teacher, Plato, took a dim view of the arts. In his Republic, Plato was comfortable banishing the poets from his ideal city, allowing only art that upheld moral authority. I’m wondering if Aristotle had something like this in mind – that art could be used as a tool for moral education.
Maybe the best examples of art are ones that teach the audience lessons, albeit by a less direct route than, say, fables. If this were true, then we could interpret Aristotle’s criteria the following way: a piece of art is valuable as a moral training tool when the audience can build an emotional connection with the suffering of others. Rather than being a spectacle for them to lose themselves in, the art gives the audience a moral framework to judge themselves against. The tragic figure is like them: not a god or immortal, but an example of a good person trying to do good things. The tragic figure might even be a little aspirational, something the audience can work towards. They aren’t depraved of soul, but they are responsible for their actions, even if those actions have negative consequences.
Instead of blaming their suffering on an external cause, the tragic figure realizes that they are the cause of their own suffering. The audience sees this, sees that they could be this person, and through their emotional connection, learns to empathize with the tragic figure. In a sense, they could be the same person, were the circumstances different. The audience feels the pain, takes pity upon the otherwise good person, and maybe even fears this happening to them.
Given that Aristotle’s ethics was predicated on relative moral excellence, it’s possible that he intended art to be educative, though I don’t have the scholarship background to confirm whether this is true (or plausible). To be clear, I don’t think art must function in this capacity. I think it’s perfectly reasonable to have art for its own sake, or for the creative expression of what’s inside the artist.
Still, the thought of morally educative art is interesting. I’ve often thought about what kinds of art I’d want to expose my own children to in the development of their moral character. What kinds of lessons would I want them to absorb and learn from as they develop an internal sense of ethics and morality?
My Monday post this week is late. Instead of trying to cobble something together, I will share this video from T1J’s YouTube channel published last week. It gave me a lot to think about.
“Now these stories are very complex and nuanced, and American schools generally do a bad job of teaching Black history. But the point I’m making is, it’s not true that Martin Luther King Jr. did some peaceful protests and gave some speeches and then single-handedly changed everyone’s minds. The progress we’ve seen is due to the combined efforts of Black leaders and activists throughout history, some of whom disagreed on the best path forward, but all of whom contributed towards shaping the world and making the world a little better for people of color. Another thing people fail to realize is that Martin Luther King Jr. was very unpopular during his time. So, whether or not something is palatable to the white masses is not a good measure of whether it is the right thing to do.”