Is the world really good or bad? Or is there another way to see it?
I remember when I was told that if I wanted a product designed to fit me perfectly, it would be very expensive. Expensive was code for bad. No one wants to pay more. We’re supposed to pay less according to the good/bad view, and less is good.
I remember the bell curves from standardized testing. I was told that “normal kids” scored in the middle. In the good/bad view, “normal kids” was code for good. The kids to the left of the hump were certainly bad at something. The kids to the right of the hump were probably known as weird — which, like expensive, was another way to say bad.
All of my memories have traces of some “that’s good, this is bad” type of segregation, each one pointing out that good, in the cultural, socio-economic, and demographic sense, was really another way of saying average. “Be a good boy so you can grow up to be a good man,” they said. And being a good boy meant “fitting in” and doing what the others were doing. Being a good man meant earning a stable income at a “good company” … that’s code for a company that pays well and hires thousands of people.
I grew up with this bias about the concept of good and bad and how it pertained to fitting in, career, success, customized products, standardized tests, and more. It was so ingrained that even holding this bias was considered good, which meant that not holding this biased, good/bad view of the world was a sign you were weird (again, a bad thing).
All of this changed when I read Language and the Pursuit of Happiness by Chalmers Brothers (Dr. Chelsey, pardon the extra reference … I’ll connect everything shortly). That’s when I learned to set aside the question, “Is this good or bad?” What if I asked instead, “Does this work or not work?” It was my first exercise in how the power of words can reframe a question (without biases, for example) and arrive at a completely new set of outcomes.
So when author Sarah Wachter-Boettcher describes how product designers miss the mark because they can’t relate to their audiences, I’m particularly drawn to how the decision makers deliberately disregarded readily available facts. When the author points out how Northpointe’s COMPAS software misidentifies potential recidivists to the point of being racist, it’s difficult to ignore how stubbornly the company overlooks factual evidence as it denies making mistakes. And as the author reprints excerpts from tech company annual reports that state a desire to diversify hiring yet show no noticeable change in hiring patterns, I’m compelled to notice that these seemingly brilliant people are ignoring both the very facts they are printing and the evidence that there are plenty of “underrepresented” candidates hoping to be hired.
Allison Parrish highlights this willful ignorance of results (results being particular types of facts) in the popular hacker ethic. The people who buy into the ethic are presumed to be some of the smartest around, yet their ethic-based actions have effectively destroyed the ethic itself. They ignore the results of their actions and continue to believe they are its champions.
Estee Beck points this out again in the seemingly willful refusal of social media companies to admit that the word sharing describes something distinctly different from what they are actually asking their users to participate in, which is truly prosumerism (you might say the user is being used).
In each piece, the authors expose biased viewpoints evident in the tech world they know so well. But Beck says it best when it comes to dissolving ignorance of facts, outcomes, and results, writing, “I argue that it is up to educators, especially writing teachers, to sustain critical literacies in their classrooms in service of connecting, and possibly subverting, the market-driven prosumerism for an exchange benefiting humankind without financial incentive.”
In short: Words matter.
Beck implies that sharing, in the social media context, isn’t sharing at all. If users understood this, that literacy might reframe the whole concept.
Changing the words we use can exclude the facts or include them. Changing the words used to frame a problem or solution can determine whether there’s even a problem or solution to begin with.
The story in my head, as I read each article, is that all of the people, focus groups, executives, recruiters, and more looked at their products, hiring practices, marketing campaigns, and more and, at some point, said to themselves, “This is good.”
And that means good in the “how-I-was-raised-to-define-good” sense of the word:
- That means average is good, so build products for the average person.
- That means customization is expensive, and since expensive is bad, not customizing things to each individual must be good.
- That means “people like us” is good, so hiring practices that bring in more people like us must also be good.
- That even means buying into an ethic that destroys authority can be seen as good, because authority means the “top 1%” … essentially not average … and we’re defending the “average man” (See what I did there?) and that’s good.
Every person who built a product or acted with bias in any way quite possibly (or most likely) thought they were doing “good” in the good/bad view of how I was programmed to see the world.
This sense of good has its roots in the dawn of computing. “The Modern History of Computing” is replete with examples of white men solving problems only white men had and probably saying, “This is good.” Because it was cheaper. Because it solved the “average” person’s problem (white men being the “average” person, of course). Computing is rooted in a time when this bias not only existed, it was reinforced in every way.
Today, consumers are diverse and, as Wachter-Boettcher states, the internet now underpins all business in all sectors. Suddenly, in this context, the old way of looking at things starts to break.
Some online entrepreneurs tried on a new lens through which to see things. Let’s suppose some used the aforementioned work/doesn’t work lens. By changing the comparison from good/bad to work/doesn’t work, dramatic shifts of thought began to occur. We saw breakthroughs few sectors other than technology are capable of creating.
Chris Anderson’s “The Long Tail” tells of music retail services that looked at the brick-and-mortar model of promoting only mass-market hits and asked something like, “On the internet, does that work or not work?” It’s easy to think that the store model could simply be shifted online and top hits would remain the most popular. But, as it turns out (and as Wachter-Boettcher points out), people aren’t average. They were only buying what was popular because that’s all that was available in the physical retail model. Music audiences that have every choice imaginable will choose exactly what fits them … because they can.
Is it cheaper? That’s not really the question anymore.
That question is part of the old good/bad paradigm. The new question is, “Does it work or not work?” As Anderson points out, online audiences collectively spend more money on music and movies than they did when they had only a retail option. So yes, it works.
Works/doesn’t-work thinking is just one example of how reframing the question can dissolve bias and lead to better-fitting products and services in an internet-connected world. Can I get exactly what I want instead of the average pop version? Sure, because that works. Can I pay a lot or a little? With lots of choices, pay whatever works for you.
These are answers the good/bad paradigm was incapable of producing, because good/bad thinking, to take just one example, is inherently biased toward what we were taught was good or bad. When our thinking is framed differently, as in the works/doesn’t-work approach, we are compelled to look at facts such as profitability, outcomes, and the effects of our product designs, actions, and policies.
As Wachter-Boettcher, Parrish, and Beck eloquently demonstrate, the words we use to define the models (e.g., designs, policies, practices, and algorithms) we put in place can be more important than the models themselves. As they each point out, the majority of tech leaders describe the things they do and create as “good.” These authors are asking them if it works.
Brothers, Chalmers. 2005. Language and the Pursuit of Happiness. New Possibilities Press.
Wachter-Boettcher, Sarah. 2017. Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. W. W. Norton & Company.
Parrish, Allison. 2016. “Programming is Forgetting: Toward a New Hacker Ethic.” Open Hardware Summit presentation. http://opentranscripts.org/transcript/programming-forgetting-new-hacker-ethic/
Beck, Estee. 2017. “Sustaining Critical Literacies in the Digital Information Age: The Rhetoric of Sharing, Prosumerism, and Digital Algorithmic Surveillance.” https://wac.colostate.edu/books/social/chapter2.pdf
“The Modern History of Computing.” 2000. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/computing-history/
Anderson, Chris. 2004. “The Long Tail.” Wired. https://www.wired.com/2004/10/tail/