I received some sad news last week – my time as a teacher has (for now) come to a close. The Chair has decided to take the course out of the general education elective rotation and will offer a different slate of courses to ensure students have a variety of electives to choose from.
This is not entirely unexpected. I taught my first section in September 2016, and am currently in my 13th straight delivery. In this time, I’ve had a little over 300 students, meaning I’ve graded some 3,000 assignments and 600 essays.
Back in 2016, I snapped this photo of my last day in the classroom for my very first semester of teaching (all the rest of my deliveries have been online).
It has been a great experience and has taught me a lot about empathizing with students and overcoming my biases and assumptions about how one ought to teach. It was also humbling to see some student work come in that, frankly, was better than anything I could have written.
I appreciate the patience my students have shown me these last four years as I have moved cities, gotten married, graded while on my honeymoon, and welcomed our child into the world.
I’m looking forward to a bit of a break from teacher life, but I hope to get another opportunity in the future.
I spent a large chunk of my weekend grading essays from my students. Their task was to watch the movie The Road, adapted from the novel by Cormac McCarthy, and write a paper based on themes and ideas presented in the course. Given the course content covered so far, I encouraged students to examine the story’s protagonist and argue whether he is a good candidate to be considered a tragic hero as defined by Aristotle.
While grading papers, I mused about Aristotle’s strict criteria for what makes a tragic hero. The tragic hero must be noble and good (though not a paragon of virtue), but possesses a minor flaw of character or error in judgment (hamartia) which, when applied to circumstances, brings about some sort of downfall or negative consequence (an inevitable reversal of circumstances, or peripeteia). It’s not that the character is vicious, but merely that their minor flaw is the cause of the negative outcome. However, the negative outcome must be caused by the character (and not, for instance, by the gods), and the consequences of the outcome must be in excess of the original cause. The character must also see that they are the reason for their suffering (anagnorisis – the move from ignorance to knowledge). In the context of a narrative telling of the story, this elicits pity and fear, producing a purification of emotions (catharsis) in the audience.
On the one hand, Aristotle is spelling this all out as a way of formalizing and categorizing types of art (Aristotle was a philosopher and biologist by vocation). He may even have written this down as a set of guidelines for critiquing plays, a way to point out what makes some plays good and others not.
But I had another thought. Aristotle’s teacher, Plato, took a dim view of the arts. In his Republic, Plato was comfortable banishing the poets from his ideal city, and would only allow art that upheld moral authority. I wonder if Aristotle had something like this in mind – that art could be used as a tool for moral education.
Maybe the best examples of art are ones that teach the audience lessons, albeit by a less direct route (than, say, fables). If this were true, then we could interpret Aristotle’s criteria the following way. A piece of art is valuable as a moral training tool when the audience can build an emotional connection with the suffering of others. Rather than being a spectacle for them to lose themselves in, the art gives the audience a moral framework to judge themselves against. The tragic figure is like them: not a god or immortal, but an example of a good person trying to do good things. The tragic figure might even be a little aspirational, something the audience can work towards. They aren’t depraved in the soul, but they are responsible for their actions, even if those actions have negative consequences.
Instead of blaming their suffering on an external cause, the tragic figure realizes that they are the cause of their own suffering. The audience sees this, sees that they could be this person, and through their emotional connection, learns to empathize with the tragic figure. In a sense, they could be the same person, were the circumstances different. The audience feels the pain, takes pity upon the otherwise good person, and maybe even fears this happening to them.
Given that Aristotle’s ethics was predicated on relative moral excellence, it’s possible that he intended art to be educative, though I don’t have the scholarship background to confirm whether this is true (or plausible). To be clear, I don’t think art must function in this capacity. I think it’s perfectly reasonable to have art for its own sake, or for the creative expression of what’s inside the artist.
Still, the thought of morally educative art is interesting. I’ve often thought about what kinds of art I’d want to expose my own children to in the development of their moral character. What kinds of lessons would I want them to absorb and learn from as they develop an internal sense of ethics and morality?
During a throwaway thought experiment in his 1641 treatise, Meditations on First Philosophy in which the existence of God and the immortality of the soul are demonstrated, René Descartes posited the idea of an evil genius or demon that systematically deceives us to distort our understanding of the world. Contrary to what first-year philosophy students everywhere believe (a younger version of myself included), Descartes did not actually think an evil manipulator was holding us back from understanding the nature of the real world. Instead, he was using it as part of a larger project to radically re-conceive epistemology in an era of rapid advancements in science that were threatening to overturn centuries of our understanding of the world. He felt that knowledge was built upon shaky ground thanks to an over-adherence to the received authorities of Greek antiquity and the Church’s use of Aristotelian scholasticism. Similar to Francis Bacon twenty years earlier, Descartes set out to focus on knowledge that stood independent of received authority.
In Meditations One and Two, Descartes considers the sources of our beliefs and how we come to know what we think we know. He wants to find an unshakable truth upon which to build all knowledge, and through an exercise of radical doubt he calls into question many of the core facts we hold: first, that knowledge gained from the senses is often in error; next, that we often can’t distinguish the real from fantasy; and, through the use of the evil genius, that perhaps even abstract knowledge like mathematics could be an illusion.
When I teach this to first-year students, they either don’t take his concerns seriously because of the force of the impressions the real world gives us through sense data (a stubbed toe in the dark seems to forcefully prove that the world external to our senses is very real), or they take Descartes too seriously and think he really believed a demon was actively deceiving him. Either way, the student then concludes that Descartes’ concerns are not worth worrying about; that this mode of thinking is the product of an earlier, less sophisticated age.
Unless you are a scholar delving into Descartes’ work, the real purpose of teaching the Meditations is to provide students with a framework for how one can think through complex philosophical problems. Descartes starts from a position of epistemic doubt and decides to run with it in a thought experiment to see where it takes him. The thought experiment is a useful exercise to run your students through to get them to question their received opinions and held dogmas.
However, in light of my rant a few weeks back about informed consent and vaccines, I’ve discovered a new contemporary use for thinking about Descartes’ evil genius. In some sense, the evil genius is *real* and takes the form of fear that short-circuits our ability to learn about the world and revise our held beliefs. Descartes posited that the evil demon could put ideas into our heads that made us believe things completely against logic. The demon was able to strip away the world beyond the senses and even cast doubt on abstract concepts like mathematics.
Much in the same way Descartes’ demon was able to “deceive” him into believing things that were contrary to the nature of reality, our fear of the unknown and of future harm can cause us to hold beliefs that do not map onto facts about the world. Worse yet, the story we tell about those facts can get warped, and new explanations can be given to account for what we are seeing. This becomes the breeding ground for conspiracy thinking, the backfire effect, and entrenched adherence to one’s beliefs. We hate to be wrong, and so we bend over backwards to contort our understanding of the facts to hold fast to our worldview.
In truth, we are all susceptible to Descartes’ demon, especially those who believe themselves to be above these kinds of faults of logic. In psychology, this is related to the Dunning-Kruger effect, for which all sorts of reasons are given as to why people overestimate their competence. But in the context of an entrenched worldview that is susceptible to fear of the unknown lurks Descartes’ demon, ready to pounce upon us with false beliefs about the world. Its call is strong, its grip is tight, and the demon is there to lull us into tribalism. We fight against those we see as merchants of un-truth, and in a twisted sense of irony, the weapons of truth we wield only affect those already on our side, while those we seek to attack are left unaffected. It becomes a dog-whistle that calls on those who already think and believe as we do.
If we hope to combat this modern Cartesian demon, we’ll need to find a new way of reaching those we see on the other side.
While reading a post from A Learning A Day, I thought I’d keep the irony train rolling by linking to Rohan’s linked post from Derek Sivers about the perceived need to quote an idea’s source. Specifically, I wanted to respond to this point:
2. School teaches us to reference. But we’re not trying to impress a teacher anymore. And every unnecessary fact dilutes our point.
I often reflect on the learning objectives I expect to achieve in a course lesson while teaching. I try to parse out the meaningful things I want students to learn from rote procedural tasks that don’t serve a purpose. The last thing I want to do is to reinforce the wrong lesson or derive the wrong conclusion from a student’s performance (e.g. did a student do well on a test because they understood the material, or because they are good at taking tests?).
Derek’s point above about references is well-taken and got me thinking: why do I want students to cite their sources? I brainstormed a few reasons and listed them below with comments.
I want a student to be mindful of their research process (procedure).
Having gone through writing my master’s thesis, it’s easy to lose track of references and citations if you don’t stay on top of it. This isn’t super relevant to most assignment learning objectives, but it’s a good practice to have before launching into a bigger endeavor or capstone project.
I want a student to critically examine their own knowledge (what do they believe to be true facts, where did that fact come from, and why do they think it’s true).
I’m not sure if making students cite their sources achieves this aim on its own, but I suppose I could use citation requirements to help guide them through this process.
I want a student to be mindful of idea ownership and give credit to people who have done work.
I’ve used this mostly in plagiarism cases where students copied work and submitted it as their own. I try to distinguish between sloppy citing and outright theft, and I remind students that they shouldn’t get marks for work they didn’t do. I’m still undecided if this is a rule of the academy or a legitimate thing to prevent fraudulently passing work off as your own in the future. This point, though, is mostly relevant in academic contexts as opposed to Derek’s notes about doing this during conversations.
I want an easy way to see if the student did the work.
This is a trick I’ve developed to see whether a student giving me their opinion is right by chance, or if they have informed their opinion by doing the course reading. The same result could be gained if students inserted relevant information without citations, but the citations help to highlight this when I’m reading through their submission. In other words, it makes my job easier.
I want to reinforce good academic writing habits.
Using references is part of what it means to write academically, and is used as part of the integrity process. This is only a good reason if my objective is to teach/reinforce academic writing for students.
This is the way it has always been done.
More cynically, requiring citations is part of the tradition, and who am I to question it? It’s not a good reason to require it, but it is what it is. A more sadistic version of this, which I won’t dignify with its own entry in the list, is “I had to do it, so you have to (go through this rite of passage) too!”
I want to remain consistent with departmental policies and culture.
Whether written or unstated, most departments adhere to some level of standards. This was less the case for me in undergrad and it depended largely on the preferences of the prof. By the time of my thesis, I ended up developing a hybrid referencing system that did not strictly follow any of the major citation methods. I received no comments from anyone who reviewed my thesis on my citation practices.
It’s important to trace an idea’s lineage as much as possible to spot fabrication.
If you are going to insert facts or conclusions into your work, it’s important to point to where you found them. Without a citation or an adequate way of accounting for how you know what you purport to know, it’s possible that the information is made up. Being able to trace these things helps, though this is more useful from a scholarship point of view, as I suspect a lay-reader isn’t concerned with checking a text for factual accuracy and instead takes it on authorial authority.
Related – to see if a student is able to either properly reference work, or at the very least charitably restate ideas without dropping important content from the idea.
This perhaps falls under sloppy citation practices, but on occasion students will misunderstand a piece of text and paraphrase or summarize information incorrectly. Knowing where the student is drawing their source from can have pedagogical merit if you take the time to compare the student’s work with the source and discuss the divergence.
Related – when an author cites their sources, a reader can use the bibliography of sources for further reading.
This is perhaps more for book nerds, but I love having references to be able to learn more about things that pique my interest. This is, however, not the context Derek is referencing when he discusses giving citations during a normal conversation. If Derek’s conversation partner was interested and wanted to know more, I’m sure they would ask Derek for more information.
More abstractly, knowledge and academics is a web of mutually reinforcing facts, so academic writing is an extension of that reality.
This one is a bit of a stretch as a reason why a student who is not adding to a body of knowledge should be required to rigorously cite their sources in a pedagogical exercise, but I include this more epistemological point in an effort to be exhaustive.
It’s a symbolic representation that the student (in most contexts) is not generating new or novel work/insights that creates new knowledge, but instead is remixing ideas from other sources.
I think this is a good reminder of what the goal of the assignment should be (students are often far too ambitious in what they think they can reasonably achieve in x-number of pages), but I wouldn’t consider this to be an adequate reason to insist on proper citations.
Like other skills, the act of referencing needs to be practiced.
I’m sympathetic to this, but as Derek is implying, you should be practicing skills that transfer into other domains or that you will need. In most instances, outside of school you don’t need to cite sources.
Citing references is part of the argumentation process. In order to build a successful argument, you must clearly express and state your premises, which includes any premises taken from the work of others (either their premises or their conclusions).
I’m also sympathetic to this as I think everyone should keep in mind that arguments need to be made to help convey ideas. It shows the logical chain from premise to conclusion and seeks to make the implicit explicit, and the unstated stated.
Setting aside a subset of the reasons above, a strict citation requirement is often unnecessarily enforced in the classroom, and is almost never imposed outside the academic setting. I think there are some good pedagogical reasons to have students go through the effort of citing their sources, but you should be intentional as a teacher about when those cases apply. For instance, I am less strict about my students citing sources; instead, I look for them to directly apply material from the course in their assignments (rather than just giving me their opinions).
I enjoyed Derek’s point about how citing sources is a common trope in pop non-fiction, which sounds like a convergence on my ideas concerning animated bibliographies, or Ryan Holiday’s “15 academic studies” comment from a few weeks back. Maybe Derek’s right – we should have more courage to integrate knowledge into our existing schema and be prepared to state things as facts instead of citing our sources. I’m not sure I’m prepared to abandon the practice wholesale, but it has given me something to chew on.
I’ve been reading Scott Young’s recently released book, Ultralearning, and I think it’s a pretty good summary of how one can take on an intense learning project for personal and professional development. It functions like an autodidact’s road map with plenty of good tips, insights, and stories to round things out. Elements of the animated bibliography are present, but I don’t find it contrived in its execution. The stories help frame the chapter and serve as an introduction to the core material.
It’s funny how last week I was talking about mnemonic devices, because after drafting that post I ended up reading about the concept in Chapter 10 of the book as it dealt with ways of supporting retention of material you learn.
In chapter 9 of the book, Young talks about ways of providing feedback in the learning process, whether the feedback is provided from others or feedback you can use in your own learning process. He parses out three kinds of feedback that I found interesting, not only for my own personal use in learning, but also as something I should keep in mind as a teacher.
The three kinds of feedback he outlines are outcome feedback, informational feedback, and corrective feedback. Each type of feedback serves a specific purpose, and you should be mindful of the context the feedback is given, as the wrong type of feedback can set you back in your learning.
Outcome feedback – provides information on whether you are getting answers right or meeting a pre-identified set of learning objectives. It tells you whether you are right, but gives no indication of why (or why you are wrong).
Informational feedback – provides further information to explain the underlying reason why something is right or wrong. It can re-affirm what you have learned and can identify key areas of strength or weakness; however, it does not chart a concrete path forward.
Corrective feedback – provides, as the name indicates, a path forward for the learner in terms of how to overcome deficiencies. It details not only whether one is right or wrong and why, but how to address or avoid being wrong. This type of feedback requires not just comprehension of the material, but sufficient understanding to teach the underlying processes to the learner through explanation, demonstration, suggestion, etc.
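To make the distinction concrete, here is a minimal sketch in Python of one wrong answer receiving each kind of feedback. The quiz question, function names, and remediation hint are all invented for illustration; they are not from Young's book.

```python
# A hypothetical arithmetic quiz question used to illustrate the three
# kinds of feedback described above.
CORRECT_ANSWER = 56  # the answer to "What is 7 * 8?"

def outcome_feedback(answer):
    """Outcome feedback: says whether you are right, nothing more."""
    return "Correct" if answer == CORRECT_ANSWER else "Incorrect"

def informational_feedback(answer):
    """Informational feedback: adds *why* the answer is right or wrong,
    but offers no path forward."""
    if answer == CORRECT_ANSWER:
        return "Correct: 7 * 8 is 56."
    return f"Incorrect: you answered {answer}, but 7 * 8 is 56."

def corrective_feedback(answer):
    """Corrective feedback: explains the error *and* suggests a concrete
    process for getting it right next time."""
    base = informational_feedback(answer)
    if answer == CORRECT_ANSWER:
        return base
    return (base + " Try decomposing the product: 7 * 8 = 7 * (10 - 2) "
            "= 70 - 14. Practice the same trick on 6 * 9 and 8 * 9.")
```

Notice that each function strictly contains the one before it: the corrective message builds on the informational one, which builds on the bare outcome. That nesting mirrors why corrective feedback demands the most understanding from the person giving it.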
As a teacher, it’s important to know what kind of feedback is warranted and under what circumstances. Most of us tend to focus just on outcomes, but students often don’t learn from pure outcome assessment. Rather, you need to take the further steps to go beyond an evaluation and ensure you are addressing the underlying deficiency present in the student’s performance. Outcome assessment is awesome because it’s quick and definitive, but it’s also lazy if your goal is to improve your students. On the other hand, corrective feedback is desirable but it’s labour-intensive and must be done carefully so as not to remove critical thinking from your student – you don’t want them to merely follow your instructions but instead you want to promote their thinking and reasoning through problems without your guidance.
In January of 2008, I was walking through my university campus’s student centre and passed by a table for the UW Campus Response Team, who were recruiting volunteers for the new semester. I doubled back, chatted with the team members, and signed up to participate in their interview process. I had taken first aid courses periodically during my cub scout and army cadet days, plus I had run some basic first aid courses while abroad, so it felt like a good fit.
In retrospect, my “experience” was quite paltry, but I had shown the team managers enough of the “right stuff” that they invited me to join the team and participate in the weekend training course they put on for new recruits. It was an intense crash course in first aid skills well beyond my experience: the training spanned several hours Friday night and all day Saturday and Sunday, before a final scenario test to qualify as a secondary responder.
The material covered was largely derived from emergency first responder courses, along with some material from pre-hospital trauma professions (e.g. fire fighters and paramedics). The training was designed to create heuristics in the responder’s mind to quickly flow through critical details while gathering as much information as possible and building treatment momentum. The last thing you want is for a responder to have to consciously think through what steps to follow, because it shunts cognitive capacity away from situational awareness and into operational procedures.
In an effort to automate one’s thinking, you end up doing a lot of mock scenarios and skill drills. As a responder, you end up creating a script in your mind to follow. The script is based on a common set of things to attend to, which you follow according to handy mnemonics and other memory aids.
The mnemonics provide mental triggers for actions, but you still need to learn the process that goes along with them, and from the start of the training weekend you have only a precious few hours after training concludes each day to encode the information out of working memory and into longer-term storage.
I needed a way to quickly drill myself and aid recall. The system I settled on was to get some washable window markers and write my mnemonic devices out on the bathroom mirror. Every time I used or walked past the washroom, I would attempt to fill in as many of the mnemonics as I could remember, and note where I made mistakes. Through constant repetition, I drilled prompts like:
Mechanism of injury?
Count the casualties
Signs and Symptoms
Past medical history
Last meal/beverage intake
It was a quick and dirty way to give myself immediate feedback on these concepts, which I could readily apply to my first aid treatment during training and eventually on shift. Any time I lost momentum or felt nervous about the judges evaluating me, I would mentally go back to my bathroom mirror and fill in the blanks. I haven’t been on the first aid team in almost a decade, but these concepts easily come back to me, even during my crazy nights at the bar. It’s a testament to the stickiness of the ideas and the effectiveness of the drills.
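The mirror exercise is just active recall: attempt the expansion from memory, then check yourself and note the misses. A minimal Python sketch of that self-test is below. It uses the standard SAMPLE patient-history mnemonic from first aid training as the example; whether that exact mnemonic was on my mirror is an assumption for illustration, and the function name is invented.

```python
# The "SAMPLE" patient-history mnemonic, a standard first aid memory aid.
SAMPLE_HISTORY = {
    "S": "signs and symptoms",
    "A": "allergies",
    "M": "medications",
    "P": "past medical history",
    "L": "last meal/beverage intake",
    "E": "events leading up to the incident",
}

def score_attempt(mnemonic, attempt):
    """Compare a recall attempt (letter -> expansion) against the mnemonic.

    Returns the list of letters the learner missed or got wrong, which is
    exactly the feedback the mirror drill provides: what to re-study.
    """
    return [letter for letter, expansion in mnemonic.items()
            if attempt.get(letter, "").strip().lower() != expansion]
```

The point of the drill is the returned list: it tells you not just that you made mistakes (outcome), but which items to focus on in the next pass (something closer to corrective feedback).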
A recent article talking about knowing when to quit/retire from teaching had me reflecting on my own experiences with quitting. Truthfully, I can’t recall many instances where I quit something. Often, I will drag out experiences long after they have been useful, and instead of quitting as an active decision, I’m more likely to let things fall away through neglect. Perhaps there isn’t a strong difference between the two since my history is littered with things that I eventually stopped doing. I suppose in my mind, the difference comes down to whether I made a decision to stop – whether I took ownership over the act.
The strongest instance where I actively made a decision was when I stopped hosting at the local karaoke bar. I was three or four years into my tenure as a host, and for the most part I enjoyed the experience. I had a regular crew of friends who would come in and make the night interesting. However, towards the end, I grew to resent patrons coming in who weren’t my friends. I worked the slowest night, so if things were quiet, we’d shut down early. But if patrons filtered in and kept purchasing stuff, we’d stay open. Catering to the average customer felt like a chore, rather than chumming with friends with our own song preferences and inside jokes.
I started to dislike going into work, and even to this day I don’t sing much like I did while I was a host. I’ll grab the mic from time to time, but I don’t go out to enjoy karaoke anymore. I still work security at the bar, but I stopped hosting altogether.
I made the decision to stop hosting because a small part of me knew it was time to move on. I learned what I could from the experience, cherished the memories it gave me, but I recognized that I no longer wanted to spend time doing it. I think that’s the critical part in the art of quitting. It’s not about actually quitting or the how. Instead, it’s about recognizing when the time has come and why.
Sometimes we have to slog it out in things we hate. We don’t quit those things because we assign value to the activity (or someone else has assigned value and we are dragged along for the ride). But quitting is more than stopping a thing you don’t like. It’s about recognizing when the thing is no longer of value to you; that it won’t take you where you need it to go. It is the recognition that your time is better suited elsewhere. The art of quitting ultimately comes down to taking an active role in how you choose to spend your time.
I finished reading Complications by Atul Gawande last week and really enjoyed it. It was his first book and covered stories from his apprenticeship phase of becoming a surgeon. I thought back to the first book I read from him, The Checklist Manifesto, and realized that while I enjoyed the topic Manifesto covered, I found it lacking a certain charm that Complications had.
Manifesto felt like a good idea that was stretched a bit too thin to fit the book format, and was heavily supplemented with references to studies done by other researchers. This isn’t meant as a criticism – it was a good book! But what I felt Complications (and his other book Better) had is the first hand reflection on one’s professional development. It’s not just a memoir of one’s life, nor is it a tell-all, but instead it’s a focused meditation on the training, learning, failures, achievement, and lessons one gains from devoting themselves to their vocation.
Over the last three and a half years of reading, I’ve found I really enjoy these kinds of books. I looked over my reading list and pulled a handful of examples at random below. Some of them are about medicine, others are by actors, and a few come from the business world. The common thread is that they are less about the personal biography of the person and more about the development of the professional (for this reason, I didn’t include Elon Musk’s and Enrico Fermi’s biographies, or career retrospectives like the books from James Comey and Hillary Rodham Clinton).
It describes a world bigger than the person telling the story, and their attempt to grapple with the epistemological, ethical, and professional obligations that comes from entering a profession, and where their limits lie. These are not stories about heroes – the stories are about human error and fallibility, and learning to deal with that revelation. It also keeps its eye towards what it means to serve others, and where the profession should go in the future.
Ultimately, these books differ from the animated bibliography in one crucial area. The animated bibliography is often a book that results from a person researching and stitching together the ideas of others. In some cases, these books will require the author to attempt to put the ideas into practice, but in my opinion this is in service of selling the credibility of the book. The books I’m discussing and listing here are different because they are an account of people who are learning by doing. They are applying what they previously learned during formal education and reflecting on the outcomes to see what lessons can be derived. In some sense, the books are autopsies that try to tease out causes, or at least serve as cautionary tales for those who come later.
Last week, I tried a new tactic to engage with my students. I was inspired by two workshops I attended during Conestoga’s annual E3 (Employees for Excellence in Education) Conference. The first workshop covered how to write good assignment prompts, with clarity and purpose in mind, and the second covered strategies for writing for online courses. In the course I manage online, my students were preparing to submit their first major philosophical paper, and historically my students do poorly on the writing side. I largely attribute this to it being their first time trying to write a philosophical paper; their only exposure to this point was either essays in high school or non-philosophy essays for other courses in college. After sitting in on these two workshops, I reflected on what I could do, in an online course, to improve my students’ ability to write. It’s challenging to engage with online students for two reasons:
first, you (almost) never meet your students face to face, so you lose the ability to use tone, voice, inflection, and body language to convey information, and
second, online courses are atemporal, which means you don’t engage with your students at the same time.
An idea I’ve been kicking around for some time is creating a video for my students as an added bit of content for the course. The problem with this option is it’s still fairly static and easy for students to skip if they feel it doesn’t contribute to improving their assessments. It also goes in one direction, where I speak at my camera rather than engaging with the students.
However, I’ve been mulling over another option. I have borrowed a web camera from my podcasting partner, I have a good microphone, and I delivered a webinar with a live Q&A in the middle of May. I considered running a livestream last semester; however, when I offered the option to the students, I had no requests for it. But this semester, I decided to set it up and run it, regardless of whether students attended. At worst, it would be a wasted hour of my time. At best, the benefits would be two-fold: my students would have a chance to interact with me and ask questions about their assignment, and it would give me practice with a new skill set.
I picked a date and time, figured out how to broadcast (in the end, I went with Twitch, but next time I’ll test out YouTube Live) and went for it. I had 4-7 students drop in, which is fairly low engagement, however the questions were really good and I had a lot of fun actively engaging with students again.
One unfortunate thing was I didn’t set up the system to auto-record, so I don’t have a copy of the livestream to review or upload. I ended up recording a second (static) video to cover the main points so that my students had something to reference when they were completing and submitting their essays this past weekend.
It was a good experience and I plan to run at least one livestream per semester moving forward. I have yet to grade the papers, so I don’t know if I had a material impact on their performance, but in time I hope that my students will get better with the added direction I can give them. I also now have a video that I can post to help them think through the process of writing a philosophical paper. If nothing else, it’s good to build handy resources and have them available for your students. My goal is to help my students improve their thinking and writing as a result of taking my course. Even if their papers are 1% better as a result of my direction, it’s worth it.