Engineers love clear problems with delineated right and wrong answers. Data, they think, especially quantified data, is objective and clean. Without painting too broad a stereotype, they don’t like to muck around with soft skills, or the social and political factors in problems. They like to keep engineering pure.
The problem with this view is that it’s not correct – it makes an underlying assumption about what makes something a right or wrong answer to a problem. Most problems that engineers deal with when designing a solution are not value neutral. When we think of problems with clear right or wrong answers, we think of problems that are purely mathematical or that have discrete binary solutions (e.g. “will the object handle the forces that it will be subjected to under normal conditions?”). The secret is that all problems have “right” and “wrong” solutions based on the underlying values you are trying to optimize for.
An engineering problem that optimizes for return on investment might have different solutions than one that optimizes for addressing systemic inequity for particular people. The tradeoffs are not just opportunity costs, but instead are tradeoffs on which values inform the vision of the final outcome of your solution. When you seek to maximize return on investment, the answers are pretty clear – drive down expenses, raise prices as high as the market will bear, communicate the value proposition to the customer, and produce enough goods at the right rate to meet demand without excess goods sitting idle. When you seek to address systemic inequity, your solutions will have decidedly different considerations – your expenses will go up as you pay fair wages, your prices might not maximize your margins, you will be more candid with your customers, and your manufacturing and distribution will likely be slower and more intentional as you make ethical considerations part of your processes. You will also consider all sorts of other externalities that pop up as a result of your solutions, boosting the positives while capping the downsides.
This is not to say that all solutions will be equally easy to implement under any one values system that you choose. However, it’s fallacious to believe that the same answer will always be given for “can we build this?” and “should we build this?” if you aren’t also examining the underlying values that you set in your assumptions.
We should think of our beliefs and the evidence we engage with as if we had a little homunculus TV courtroom in our brain adjudicating whether to admit evidence into the record. Obviously, this is incredibly difficult to pull off in real time, but it’s a nice thought experiment to pause and consider the weight of a claim being made.
This idea came to me while watching a YouTube video covering the recent downfall of a famous hustle influencer. The presenter observed that she would normally not take people’s personal lives into consideration when judging their professional work, but in this case the influencer had sold conferences and products marketed as relationship-coaching courses under the pretense of having a great marriage – a pretense swiftly undermined by her getting a divorce approximately two years later.
I was impressed with this statement by the presenter – she was right! Under normal circumstances, the personal life of a person shouldn’t bear weight on something like this, but given the fact that the evidence under consideration was whether someone was misleading about their personal life and getting others to pay for her “expertise,” it would be grounds to consider this piece of evidence as relevant or bearing weight. My homunculus courtroom judge ruled that the testimony was admissible.
This is a silly thought experiment to anthropomorphize cognitive thought-processes that are otherwise just a black box to me. I suppose it’s a little farfetched to think that we have this much control over our beliefs, but maybe the next time I listen to a claim (or gossip, or something that doesn’t jibe with my experience… or claims that I want to be true…), I will remember my homunculus courtroom and think twice about the claim’s believability.
I was reflecting on Seth Godin’s musings about the number of moons in our solar system. The initial assumptions we use to make predictions about our world can sometimes be orders of magnitude off from truth.
We as humans don’t like to be wrong, but we shouldn’t be overly concerned with our initial assumptions being off the mark. After all, if we knew the truth (whatever “truth” happens to be in this case), there would be no need to start from initial assumptions. It is because we are starting from a place of ignorance that we have to start from an assumption (or hypothesis) in order to move forward.
The problem lies in whether we realize we are making assumptions, and how committed we are to holding on to them. Assumptions made about the physical world can often be value-neutral, but assumptions that intersect with the lived experiences of people always come pre-packaged with history that’s value-loaded. It’s fine to assume that your experiences are shared with others, but that assumption can only be carried so far. At some point, you have to acknowledge that there will be a lot missing from your initial assumptions that needs to be accounted for.
The lesson then is this: when working from an estimation or prediction, be careful with your initial assumptions. It’s fine to begin with your own experiences, but always put an asterisk beside them, because your experience is likely not universal. We must guess, then check. Test, verify, then revise.
Aiming at truth is a noble goal, but we should settle for moving asymptotically closer to it, as that more likely reflects reality.
The Varol piece was new, and as I read it, it reminded me of the Sivers piece, so I’m pairing them together. I’m a little conflicted with the message. On the one hand, I agree with both writers about the sentiments they are expressing. In Varol’s case, citation often becomes a short-hand for original thinking. Rather than expressing your own unique ideas, you regurgitate what you’ve consumed from others (whether you are citing it or not, as is on display in the Good Will Hunting example). Likewise, Sivers is on to something when he suggests that integrating facts into our mental apparatus should not require us to cite our sources when it’s no longer the appropriate context. It makes sense to cite sources when writing something that will be graded in school, but it feels stilted in informal settings.
Where I feel conflicted is when there is a need to trace ideas back to verify the content. I don’t think it’s a new phenomenon, but it has certainly accelerated in recent years: misinformation is being thrown out into the void at a rapid pace. The internet has optimized itself around three facts of human nature – we like sensation, we like things that are familiar (that accord with what we already believe), and we are less critical of our in-group. Therefore, information bubbles get set up online, creating a digital environment that’s conducive to the rapid spread of memetic viruses. When you think about it, it’s a marvelous analogy: the online information bubble is a closed environment of like-minded people, which functions as a rough analogue of an immune system. A memetic virus latches onto one person, who spreads it to people in their network. Since the folks in the network share similar belief structures, the homogeneous group quickly spreads the meme throughout the information bubble. The meme is then incorporated into the belief networks of individuals through repetition and exposure that feeds confirmation bias. It writes itself into the belief web, in the same way viruses incorporate themselves into DNA.
I’m using the example of a memetic virus, but I think this framework applies equally to more benign examples. Scientists release findings in the form of pre-peer-review news releases, which get amplified and distorted through the media, and then amplified and distorted again through information bubbles. See here for an example:
At each phase, part of the signal is lost or transformed, like a social media game of telephone. When one person in the chain misunderstands the data, that impacts how the idea gets replicated. Over time, it becomes the digital version of a cancerous mutation of the base information.
This is why it’s important that we take care of how information is communicated, because as soon as you print something like “the majority of people believe x,” or “studies showed a y% decrease in the effect,” without a proper context of what the data is saying (or its limitations), that gets incorporated into people’s webs of belief. If you are a part of the population that believes something and you read that information, it reinforces your prior beliefs and you continue on in replicating the idea.
And so I’m torn. On the one hand, I shouldn’t need to cite my sources when having a casual conversation (a la Sivers), and I shouldn’t be substituting original thoughts with the ideas of others (a la Varol), but at the same time, when I encounter a claim, I want it to be verifiable and scrutable. I don’t know what the solution to this is, other than to flag it and remind myself to be wary of absolutist language.
I started this blog for two reasons – because I wanted a public way of practicing what I was learning at the time, and to force myself to write consistently. I decided posting once per week was a manageable target, and I’ve been relatively successful for the last few years. Recently, I’ve added the Friday Round-up as a way to force myself to write more and to share interesting content I stumble upon. When I added the Friday posts, I questioned whether it was worth putting in the effort – was I adding value to any part of the process? On some level, I feel it’s worth it, if for nothing else than to force myself to be a bit more reflective on what I consume. However, as Derek Sivers points out, forcing oneself to post rapidly comes with some trade-offs. I imagine Seth Godin (another prolific blog poster) sometimes feels the same way about posting daily – that most of his posts aren’t what he would consider good. The mentalities are a bit different; Godin posts as part of his process, whereby you have to make a lot of crap to find the good stuff. Sivers would rather keep the crap more private to give him time to polish up the gems. I’m not sure which style is better. Both admit to keeping the daily writing practice, which is probably the more important lesson to draw from their examples, but it’s still worth considering.
After drafting the above, I kept reading some bookmarked posts from Sivers’s page and found this one written in 2013 after a friend of his died. It’s a heartbreaking reflection on how one spends their time, which included this:
For me, writing is about the most worthy thing I can do with my time. I love how the distributed word is eternal — that every day I get emails from strangers thanking me for things I wrote years ago that helped them today. I love how those things will continue to help people long after I’m gone.
I’m not saying my writing is helping anyone, but the thought that my words will live beyond me touched something within.
I’ve known the author of this YouTube channel for a few years, and I follow him on ye ol’ Instagrams (I love his scotch and cigar posts). But I didn’t know until last month that he also reviews books as part of the BookTube community. I wanted to share this link to show him some love, and because it reminds me of one of my roomies in undergrad who introduced me to the world (and language) of poker. While I’m a terrible player, I have fond memories of watching my roomie play online, if for nothing else than the humor of him yelling at the screen.
Oh, and I like Maria Konnikova’s writing, so I think I’ll check out her book. Another good book by a poker player about thinking better is Annie Duke’s Thinking in Bets.
I think this video does a good job of interrogating my love of certain kinds of comedic news. I was a late convert to Jon Stewart, and felt crushed when he announced his (much deserved) retirement. While I’ll admit I haven’t given Trevor Noah a fair shake, I pretty much stopped watching the Daily Show after the change-over. Similarly, I’ve watched other shows that riff on the format, whether on cable (such as Samantha Bee), subscription services (like Hasan Minhaj), or online content (I get John Oliver through YouTube). It’s not lost on me that all of the names listed above are Daily Show alumni. My consumption also includes shows that are inspired by the presentation format, like Some More News on YouTube. Still, it’s rare that I consistently follow any one show because I tend to find the material or subjects to be somewhat hollow. The only exceptions to this, as noted by Wisecrack, are Oliver’s and Minhaj’s shows, which I feel to be both smart and wise in the material they present. Rather than punching for the sake of cracking jokes, their shows punch at topics in a way that’s meant to help people who aren’t in on the joke. That is, their shows aren’t just speaking to the in-crowd as a private way of mocking the out-group. This was a great video essay that made me think.
I purchased Hannah Arendt’s The Origins of Totalitarianism as a birthday present for myself a few years ago (I know, I’m weird). I still haven’t cracked into it as of writing, but last week I received an email update from my alma mater, and in it they discussed how one of the faculty members had recently returned from time spent doing research at the Hannah Arendt Center for Politics and Humanities at Bard College. The email also described the regular reading group that occurs, and how it recently moved online to promote physical distancing. I checked out their YouTube page and found this series that I hope to carve out some time to follow along with. Origins is a pretty hefty book, and Arendt is a pretty powerful thinker, so I’m glad to have a resource to help me understand the nuances of her work better.
From time to time, I catch myself thinking some pretty stupid stuff for entirely dumb reasons. A piece of information finds a way to bypass any critical thinking faculties I proudly think I possess and worms its way into my belief web. Almost like a virus, which is a great segue.
A perfect example of this happened last week in relation to the COVID-19 news, and I thought it important to share here, both as an exercise in humility to remind myself that I should not think myself above falling for false information, and as my contribution to correcting misinformation floating around the web.
Through a friend’s Stories on Instagram, I saw the following screencap from Twitter:
My immediate thought was to nod my head in approval and take some smug satisfaction that of course I’m smart enough to already know this is true.
Thankfully, some small part at the back of my brain immediately raised a red flag and called for a timeout to review the facts. I’m so glad that unconscious part was there.
It said to me “Hang on… is hand-sanitizer ‘anti-bacterial’?”
I mean, yes, technically it is. But is it “anti-bacterial” in the way the tweet implies? The framing treats the hand-sanitizer’s anti-bacterial properties as if being anti-bacterial were exclusively what it was designed for, like an antibiotic. For example, you can’t take antibiotics for the cold or flu, because those are viral infections, not bacterial ones.
According to the author on the topic of alcohol-based hand sanitizers (ABHS),
There are some special cases where ABHS are not effective against some kinds of non-enveloped viruses (e.g. norovirus), but for the purposes of what is happening around the world, ABHS are effective. It is also the case that the main precaution to protect yourself is to thoroughly wash your hands with soap and water, and follow other safety precautions as prescribed.
The tweet, while right about the need for us to wash our hands and not overly rely on hand-sanitizers, is generally wrong on the facts. Thanks to a mix of accurate information (bacteria =/= virus) and inaccurate information (“hand sanitizer is not anti-bacterial”), and packaging that appeals to my “I’m smarter than you” personality, I nearly fell for its memetic misinformation.
There are a number of lessons I’ve taken from this experience:
My network is not immune to false beliefs, so I must still guard against accepting information based on in-group status.
Misinformation that closely resembles true facts will tap into my confirmation bias.
I’m more likely to agree with statements coded in a smarmy or condescending tone, because that tone carries greater transmission weight in online discourse.
Appeals to authority (science) resonate with me – because this was coming from a scientist who is tired of misinformation (I, too, am tired of misinformation), I’m more likely to agree with something that sounds like something I believe.
Just because someone says they are a scientist, doesn’t make the status true, nor does it mean what they are saying is automatically right.
Even if the person is factually a scientist, if they are speaking outside of their primary domain, being a scientist does not confer special epistemological status.
In the aftermath, the tweet was pulled and the person tried to correct the misinformation, but the incident highlights that the norms of Twitter (and social media more broadly) are entirely antithetical to nuance and contextual understanding.
It’s interesting how much information spread (memetics) resembles pathogen spreading. If the harmful thing attacking us is sufficiently designed to sidestep our defenses, whether that’s our body’s immune system or our critical thinking faculties, the invading thing can easily integrate within, establish itself within our web, and prepare to spread.
The one thing that really bums me out about this event is the inadvertent harm that comes to scientific authority. We as a society are caught in a period of intense distrust of the establishment that is coinciding with the largest explosion of information our species has ever seen. The result of this is not that good information is scarce, but rather the signal-to-noise ratio is so imbalanced that good information is getting swept away in the tide. If people grow distrustful of the sources of information that will help protect us, then forget worrying about gatekeepers that keep knowledge hidden; there will be no one left to listen.