It’s a feature, not a bug.

I’m not sure if you’ve noticed it, but there’s a latency to “live” online conversations. No doubt you’ve experienced this. You connect to an online conversation service (WebEx, FaceTime, etc.), you see the faces of the people you want to speak with and there’s a sense of joy … until you begin to speak. Then, because of the slight delay between when you speak and when they hear your voice, you step over each other’s words a bit. Before you know it, you feel like two people standing at an exit door, awkwardly insisting that the other one go first.

Few stories bring the real-world experience of latency to life better than Damon Krukowski’s first installment of the “Ways of Hearing” podcast. It was beautifully produced, and the sounds provide excellent illustrations of his point. The slight delay from digital processing that Krukowski describes by contrasting digital broadcast television with analog radio can also be found, perhaps more noticeably, when comparing video calls with analog phone calls. To be fair, there’s a whole lot more data to crunch in a video call. On top of the audio signal, the system has to present the users’ images moving in sync with that audio. Because it would look awkward to have our voices precede our images, the sound (which is a smaller data set and could be sent more quickly) is made to “wait” for the images to be collected, processed and sent along with it.
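As a rough mental model, here is a minimal sketch in Python (my own illustration with made-up pipeline delays, not how any particular conferencing app is actually built) of why the audio ends up waiting for the video:

```python
# Hypothetical, simplified model of audio/video "lip sync" in a video call.
# The delay numbers are invented for illustration; real apps measure them live.

AUDIO_PIPELINE_MS = 20    # audio is a small data set: quick to capture, encode, send
VIDEO_PIPELINE_MS = 120   # video needs far more capture, encoding, and network time

def presentation_delay_ms() -> int:
    """Both streams are played out only when the slower one (video) is ready."""
    return max(AUDIO_PIPELINE_MS, VIDEO_PIPELINE_MS)

def audio_hold_ms() -> int:
    """How long the already-arrived audio is held back so voices match lips."""
    return presentation_delay_ms() - AUDIO_PIPELINE_MS

if __name__ == "__main__":
    print(f"Audio waits ~{audio_hold_ms()} ms so it plays with its video frame.")
```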

This whole process is remarkably fast but networks, despite fiber optic connectivity, still bottleneck with traffic overload and processing limitations. Thus the delay. Thus, also, the awkward opening moments of a video conversation.

Phone conversations, to be honest, aren’t analog anymore. We can’t tell the difference, though, because network capacity and processing speeds have far outpaced the minimum requirements for transmitting the relatively small amount of data that makes up a digitized human voice. But it only seems small now, in the age of streaming rich-content media.

Remember when downloading a Triscuit-sized video on the Internet took several minutes? Today we begin to stream an entire feature film with no delay. One day — and that day is pretty much here with platforms like Zoom and FaceTime — we’ll watch and listen to each other in full HD video with … no … latency.

Here’s what I’m wondering: Will we miss the latency? Is there some nostalgia to this phenomenon? Will we watch movies in which aged actors pretend to be kids in 2019, talking on the “video phone” of the day, while the studio makes sure the call has a delay when the actors communicate? After all, no one in the 1940s thought of radio static as nostalgic, and engineers worked feverishly to remove it, only to have it become an essential part of any movie featuring an actor “tuning in” a radio. The static and squeal of the radio dial is nostalgic. It’s part of the experience.

The same is true for vinyl records (I honestly didn’t expect their return). I grew up on these things and, sure, I thought the music was great, but I was that kid who was also happy to move beyond cassette tapes to marvel at the amazing clarity of the first publicly available compact discs. I’ll never forget the first CD I played on my Sony Discman (Chicago’s Greatest Hits, if you were wondering). It was static-free. We did it, I thought to myself … we finally created pure, crystal-clear music!

Then what happens? We start to miss the hiss. That crackle when the needle scratched the grooves on a record might have sounded kind of cool after all. And, beyond the noise, the overall warm and fuzzy tone of the music played from vinyl somehow couldn’t be repeated in a digital format. And we actually got a little lonesome for it. Today there’s a whole generation of hipsters who have no recollection of what it was like to grow up listening to vinyl and that crowd gets the warm fuzzies listening to music on a turntable! It’s universal.

This also happened to film, and you’ll probably recognize it when I point it out. I learned it when I landed one of my first jobs out of college as a writer-producer. It was the ’90s, and the cool guys in the video sector wanted to recreate the nostalgic, gritty, off-kilter quality of film. HD wasn’t a thing yet. We all had standard-definition TVs. Local broadcast affiliate stations still dominated airtime. That’s when my boss taught me about something I’d never noticed … film judder.

To make it overly simple: Film stock plays at 24 frames per second (fps). Video plays at 30 frames per second. To make the timing work, roughly every fourth frame of the film needs to be repeated when converting 24 fps film to play in the 30 fps video world (your TV).
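Here’s a minimal sketch, in Python, of that arithmetic (my own illustration; real telecine uses a “3:2 pulldown” across interlaced fields, but the frame-level math works out the same):

```python
# Simplified 24 fps -> 30 fps conversion: every group of 4 film frames
# becomes 5 video frames, so one frame in each group is shown twice.
# That repeated frame is the "judder" described above.

def pulldown_24_to_30(film_frames):
    video_frames = []
    for i, frame in enumerate(film_frames):
        video_frames.append(frame)
        if i % 4 == 3:              # repeat every fourth film frame
            video_frames.append(frame)
    return video_frames

if __name__ == "__main__":
    one_second_of_film = [f"film frame {n}" for n in range(24)]
    video = pulldown_24_to_30(one_second_of_film)
    print(len(video))               # 30 video frames for one second of film
```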

Want to see it for yourself? Play a feature film (an old one, like Gone with the Wind, that you know was shot on film), then pause and move forward one frame at a time (VHS remotes used to always have this feature for some reason). Count as each frame moves to the next. Around the fourth or fifth frame, the repeated frame becomes obvious.

Here’s the funny part: We got used to it, to the point that we liked it. When films were converted to VHS and we were watching at home, that film judder was happening all the time. Audiences couldn’t tell you precisely what was going on, but they associated the judder with feature films and knew it was somehow different from the 30-frame video format they watched each day on the evening newscast. The strange artifact that was, by any measure, a glitch became the cool je ne sais quoi of film-to-video conversion.

So the video guys used a technique to replicate that artifact, the judder. That’s right. They made fake film judder. I remember when we used the effect for a jewelry store commercial. It was completely fake. We could have shot the commercial at 30 frames per second … we just didn’t. We made the video camera record at 24 frames per second so that, when it was converted, it looked more film-like … more cinematic, if you will. And it worked.

So let’s add this all up. Engineers worked the static out of radio, and we wanted it back. Computers were able to remove the fuzzy sound of a vinyl record, only for audiences to want that back, too. Then video could be recorded in perfect sync with our televisions, but videographers replicated an artifact of film because it was nostalgic. So this raises the question: Is video-call latency the static-vinyl-record-film-judder of our time?

And the bigger question: Do artifacts like static, judder and latency make us feel more connected to the content? More analog? More human?

A “Web” of Change

[Photograph: a Universal Turing Machine (Tarantola, Gizmodo)]

The article “The Modern History of Computing” was revealing, because when I think of the computer I do not think in terms of analog. The article put forth the idea that analog is continuous versus “broken into units,” and that helped me better understand the concept. To further my understanding, I thought of a clock: an analog clock moves in continuous motion, while a digital clock changes one unit at a time. No more will the concept of analog baffle me. I had heard of the ‘Universal Turing Machine’ because there is a movie called The Imitation Game that is about Alan Turing. I recall watching the movie, seeing a machine covered in wires and knobs, and wondering how on earth it could calculate. The circuitry explanation helps: power moves from one source to another, and disrupting and rerouting that continuity takes the power to a different location. Anyway, I think the concept is fascinating and can be extrapolated to things other than the computer.

Other than that, most of the technicality of this article, as well as Berners-Lee’s proposal for central data storage at CERN, is really quite over my head. For example, for the non-scientist, the idea that memory can exist in a mercury tube, to be accessed at will, is strikingly phenomenal, and unfathomable. Yet we see its existence, if not currently, then ancestrally, in most modern technology.

Moving on to the internet and its presence in our society: it has undoubtedly produced the most significant shift in how people get and give information. The article “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software” shows the evolution from Web 1.0 to Web 2.0. The direct comparisons show how the terms change to reflect changes in the scale of what is being managed. For example, “page views,” once the litmus test for how far-reaching a website was (think “likes”), gave way to cost-per-click models, in which advertisers pay each time someone clicks one of their links. Again, the web, in its enormity, does not seem to fit inside such a tiny little term as “platform.” The article itself says “Web 2.0 doesn’t have a hard boundary, but rather, a gravitational core.” Wow! That is hard to imagine, but it puts things in perspective for me.

Also discussed is the move from software as a product to software as a service. Current software updates are delivered in what feels like weekly cycles, in a “perpetual beta.” If the consumers of a platform had to update the product themselves, without the platform delivering those updates as a service, the platform’s functionality would likely be compromised. I know that I often don’t want to do the updates that are just sitting on my device waiting to “automatically” load.

The idea that started with Berners-Lee, as a centralized repository where his organization could deposit and access data so that it would not be lost or undiscoverable, has culminated in a complex maze of interconnected devices of all makes and models that continues to extend to the farthest reaches of the earth. As we read in these articles, digital culture, no longer in its infancy but a long way from maturity, is based in a “web” of change.

Works Cited

McCracken, Harry. “The Web at 25: Revisiting Tim Berners-Lee’s Amazing Proposal.” Time, 12 Mar. 2014, http://time.com/21039/tim-berners-lee-web-proposal-at-25/. Accessed 23 Jan. 2019.

O’Reilly, Tim. “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.” 30 Sept. 2005, https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html.

Tarantola, Andrew. Photograph of a Universal Turing Machine. “How to Build Turing’s Universal Machine.” Gizmodo, 15 Mar. 2012, https://gizmodo.com/how-to-build-turing-s-universal-machine-5891399.

“The Modern History of Computing.” Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/computing-history/. Accessed 21 Jan. 2019.

It Starts with a Cell

Thoughts on the connected timelines presented in “The Modern History of Computing,” “The Web at 25: Revisiting Tim Berners-Lee’s Amazing Proposal,” and “What Is Web 2.0.”

To study evolution is to discuss the interactions of countless elemental building blocks coming together to form the most basic of living things — the cell. It has a limited purpose and, simultaneously, it is a recognized miracle of connected, symbiotic reactions and systems. It’s so miraculous that scientists only need to find one living cell on another planet to stake fundamental claims that could alter our understanding of who we are in this universe.

Just one cell is the singular representation of an infinite potential for life. But to accomplish complex life, the cell has to follow a few rules. It needs a code, of sorts, to live by. The code has to be flexible, allowing the cell to adapt to different environments. The code has to promote connectivity and cooperation with other cells. The code has to be able to be passed on and, with the help of other cells, be modified or rewritten over time to adjust to changes in the surroundings. Those are some of the key requirements for a cell to join with other cells and become a tissue, which might work in conjunction with other tissues to create an organ, which might work with other organs to create a system, which may function in a collective of systems to create, for lack of a better word, a being.

Through the course of three collective histories, we see the evolution of computers not only as the creation of machines meant to help humans perform computations but, ultimately, as systems now working together to create their own intelligence. The loom and other rudimentary technologies provide the building blocks and, ultimately, the inspiration for the mechanical functions of a slightly more complex apparatus. Babbage theorizes what’s possible, and his successors begin building with the mechanical technologies of the day. Turing, well ahead of his time, recognizes the need for computers to rewrite their own instructions in order to “think” progressively better. By the ’50s, advances in chemistry and electronics turn a century of computational work into an actual computer. The miracle cell was now real.

But it wasn’t connected.

Tim Berners-Lee, collector of observations and available tools, not only saw the communication value of computer connectivity; he envisioned a shared data set and eventually imagined (and named) a collective identity we now know as the World Wide Web. Every computer, connected to every computer, would be more valuable than any single computer ever could be. With a few common concepts, Berners-Lee connected the cells into a simple (at the time) system.

It comes as no surprise that the first efforts of Web entrepreneurs to profit from the network mimicked the marketing efforts of the other prevailing economic sectors. It’s what they knew. Working on the proven economic principles of the time, the Web 1.0 philosophy began building capital systems based on proprietary code, copyrighted sales models, and strategies designed to own the specific environment of a maximum number of users. Applying ’90s-era economics to a more advanced, organically organized system didn’t work, and the dot-com failures proved that this was an altogether new economy and ecosystem.

From the ashes of the dot-com bust rose something well beyond a new technology. In fact, the newly-prevailing technologies such as RSS feeds and blogs were truly simplified examples of available technology. What changed was how they were introduced (not applied). A top-down, we-provide-value-to-clients model was turned inside out, allowing instead for the clients (users) to act as symbiotic cells in a system where they provide and increase value to each other.

The basic building blocks became the cell. The cell, once connected, became the system. The system, with user-driven input, is now writing, rewriting, and refining its own collective intelligence. Today we stand at the edge of the next curve. Whether it’s called AI or bots or some other trending term, Turing’s “computer brain” — the collection of parts and systems and symbiotic code that thinks and adapts and thinks better the next time — is a reality in its early stages. It’s at this point we begin to ask …

Is what began as a cell, about to become a being?

“The Information”

     Gleick’s “The Information” brings to light a point that was also made by McCracken in his “Web at 25: Tim Berners-Lee’s Amazing Proposal Document.” Two of the greatest forms of technology were conceived without regard for the true impact they would have on future generations. In both pieces, relatively simple concepts evolve into complex yet commonplace items of mainstream technology.

     According to Gleick, while a member of the Bell Labs mathematical research group, Claude Shannon began making connections: “…seeking a framework to connect his many threads, Shannon began assembling a theory for information” (Gleick). Shannon eventually made the term information central to his theory. Gleick credits Shannon with connecting information to entropy and chaos, and with treating information as the alleviation of uncertainty. This theory is thought of as the stimulus for modern information processing. In order for information to be properly decoded and processed, it needs to possess some measure of clarity.
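For the curious, the quantity Shannon defined can be written down in a few lines. Here is a small illustration (mine, not from Gleick’s excerpt) of the entropy formula H = −Σ p·log₂(p), where lower entropy means less uncertainty to alleviate:

```python
# Shannon entropy: the average "surprise" (in bits) per symbol from a source.
import math

def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

if __name__ == "__main__":
    print(entropy([0.5, 0.5]))    # a fair coin: 1.0 bit of uncertainty per toss
    print(entropy([0.99, 0.01]))  # a nearly certain coin: ~0.08 bits
    print(entropy([0.25] * 4))    # four equally likely symbols: 2.0 bits
```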

     Loewenstein gave me a clearer understanding of the theory of information through what he refers to as the “information circle.” According to Loewenstein, “It connotes a cosmic principle of organization and order, and it provides an exact measure of that” (qtd. in Gleick). Each piece of information adds a layer to that organization, which keeps the circle going. The addition of information is never-ending, as the circle analogy suggests.

     In my interpretation of Loewenstein’s theory, I imagine the information circle as a layered cake. Each piece of information is a layer of the cake, and each serves a different but crucial purpose. Obviously, all of the layers are required for the cake to count as a layered cake at all. The organization and placement of each layer is determined by the importance that layer carries. And because layers are added to the top of the cake, there are no layers (no pieces of information) that become useless. Gleick claims that “Hardly any information technology becomes obsolete. Each new one throws its predecessors into relief” (Gleick). In fact, each layer is required in order to continue building.

Works Cited

     McCracken, Harry. “Web at 25: Tim Berners-Lee’s Amazing Proposal Document.” Time, 12 Mar. 2014, time.com/21039/tim-berners-lee-web-proposal-at-25/.

     Gleick, James. “The Information.” The New York Times, 2011, https://www.nytimes.com/2011/03/20/books/review/excerpt-the-information-by-james-gleick.html.

The Web at 25: Revisiting Tim Berners-Lee’s Amazing Proposal

     The first reading I chose was Harry McCracken’s “The Web at 25: Revisiting Tim Berners-Lee’s Amazing Proposal.” Before I could fully engage with the article, I had to know what the acronym CERN stood for. After visiting the organization’s webpage, I was able to discern the meaning of the seemingly elusive acronym. Having a limited background in French, I was able to make sense of the letter placement. It turns out the acronym comes from the lab’s original French name, Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research). In French, the adjective follows the noun, which explains the letter order: Conseil (C), Européen (E), Recherche (R), Nucléaire (N).

     The history of the internet and how it came to be is very interesting, in that one of its pioneers, Berners-Lee, was simply seeking a better storage method for data. He was attempting to store data in a central location that users could retrieve no matter their own location. According to McCracken, Berners-Lee thought “…it would be easier to find if it were all linked together in a way that made it accessible from any computer” (McCracken). Because of Berners-Lee’s insight into the need for documents to be accessible from multiple computers, I was able to complete a project for a course last semester after my laptop died with all of my essays and research documents on it. Without my Google Drive, I would have undoubtedly panicked. Prior to Google Drive, most people, myself included, stored documents on flash drives, and those of us who are a little older even used floppy disks. The most important thing I took from McCracken’s article was that all of the storage methods now common and available to anyone on the internet, enabling users to store and access information from any internet-connected device, exist because of a vision that Berners-Lee had.

     My experience with the internet began long before my near miss with a life-changing loss: I was first introduced to it in 1995. I was stationed in Germany at my first duty station, and I was told that I needed to communicate with my peers via email. Bear in mind that, despite Andreessen and Bina’s announcement, most people did not have access to the internet, and we certainly did not know what email was. While I can recall using a computer in the 1980s, I’m certain it was not connected to the internet, as the nuns who ran the new and impressive computer lab were given the task of updating the computers on a regular basis via floppy disks.

     While Berners-Lee’s initial intention was not for the web to be an avenue for its users to earn a profit, it most certainly has evolved into just that. Despite the published purpose of web pages today, they are all connected by some sort of monetary exchange. Even when web pages are run by non-profit agencies, there is still a monetary exchange of some kind. The exchange could be as direct as a donation of funds, or as indirect as advertisers sponsoring the web page. The sponsors see a profit from the customers who patronize their businesses after viewing the advertisements on the non-profit’s web page.

     McCracken makes a highly debatable claim in closing the article: “… it’s going to make life better for future generations in ways that are unimaginable right now” (McCracken). While this was certainly the case for me last semester, I have students who would definitely beg to differ, as the internet is a source of much pain for them. Their lives have been complicated in unimaginable ways. For them, torture at the hands of their bullies is simply one mouse click away.

 

Works Cited:

     McCracken, Harry. “Web at 25: Tim Berners-Lee’s Amazing Proposal Document.” Time, 12 Mar. 2014, time.com/21039/tim-berners-lee-web-proposal-at-25/.

“CERN Accelerating Science.” CERN, home.cern/about.

Ironic Titles, Forward Thinking, and Less is More

I found “The Modern History of Computing,” dated 2000 and updated in 2006, a bit ironic because of its abbreviated, “get to the facts” presentation of information. Though it makes sense, I did not know that the computer originated from the common calculator. Cambridge mathematics professor Charles Babbage’s calculating machine grew beyond its original purpose of computing mathematical tables. I thought it was very poetic (pun intended) that, while working with Ada Lovelace, Babbage began to open his mind to other possibilities for the machine. The romantic in me thinks that literature somehow factored (pun intended) into this equation (pun intended). Alas, like most great ideas for future technological inventions, whatever Babbage envisioned would not manifest for years and would in fact take directions he may never have intended.

What I noticed throughout this abbreviated but sufficiently thorough historical piece was that, with each level of technological advancement, the intent was not to start over or reinvent the wheel, but to make the existing invention better or to expand on what existed. Babbage’s idea of the analytical machine was later developed by others for wider use, and those machines were analog machines. When analog machines proved less cost-efficient, the machines chosen for use were Turing-style digital machines. And so on…

With “The Web at 25: Revisiting Tim Berners-Lee’s Amazing Proposal,” I remind myself that the greatest minds are British and must have either the gift of foresight or hidden-away time-travel machines. Berners-Lee’s modest proposal was simply about information management: expanding on existing ideas, setting realistic expectations, aiming for compatibility and simplicity, and anticipating a product or service that all businesses would need and want to develop. Again, expanding on what exists and thinking forward never hurt humanity.

Gleick’s “The Information” defines information, analyzes it, and relates it to other references. For example, he begins his piece with the events that resulted in the Mathematical Theory of Communication, which made information measurable and quantifiable. He goes on to give examples of how we have been conditioned to accept scientific and mathematical information as important. On the contrary, we are actually experiencing TMI (too much information), and we grew, and continue to grow, too dependent on that information, which somewhat connects to “The Long Tail.”

I think I connected to “The Long Tail” the most. I have never been a person who NEEDED to see what music or movies everyone else felt were relevant and popular, OR to have polls and rankings TELL me what was relevant and popular. The hit-driven culture has become daunting under current economics. Now, with new data from sales research teams, entertainment companies no longer push only their most lucrative movies and music; they make all of their material available, even the unpopular titles.

history, speed, and acronyms (writing opportunity 1)

by Thursday next week (January 24), each of you will post here with your own thoughtful response(s) to our readings for weeks 1 and 2. this post is an example to get things started. I invite you to shape your own blog writings in ways that suit your own learning/thinking styles, and to create discussion by commenting on and extending others’ ideas as well.

Our first two weeks of readings provide a variety of views on a few important things that came before the internet: global military and academic research endeavors, analog and digital computers, various forms of programming, phone networks, the concept of information as a measurable quantity… and more. These readings begin to give us at least a glimpse into how far the roots of the internet (and the roots of all the digital cultures it has facilitated) reach back towards older practices, older hardware and software, older ideas, and pre-existing communities.

One of my favorite connections between computers and older technologies comes up in the Stanford overview—the detail about Charles Babbage having been inspired by the early-1800s Jacquard loom. If you’ve ever seen a large loom in action, or thought about how a loom + thread + a pattern can create dozens of intricately different woven rugs, then it makes pretty good sense that all those analog interlocking pieces and colors = a machine very like a computer (albeit one with a very focused range of input and output).

The loom is likely not the only relatively ancient technology that prefigures the modern computer. After reading Gleick’s book excerpt in the NYT, I want to see numbers and coins as pseudo-digital technologies, too. If, as I’ve heard many scholars talk about, numbers and accounting were the impetus for the first written language ever, then we owe those technologies quite a lot in terms of culture and the humanities.

Along with this central idea of history/historical precedent, another theme I felt throughout many of these readings was that of speed. The Stanford history overview references speed as a prime value for almost everyone involved in developing early computers. The TIME magazine tribute and the in-depth O’Reilly essay both mention how fast things changed (and continued changing) once the web became part of the average person’s life. Speed and change seem to be key ideas. I wonder how those themes might show up in our other readings later in the semester.

A third theme that I feel might be important to keep in mind is one of public vs. private, and the spectrum between the two extremes. The timelines and milestones of digital history we’ve looked at so far focus largely on government-sponsored projects, military projects, and academic projects. For the most part these are motivated by public good. As McCracken writes in his TIME magazine piece, the network that Tim Berners-Lee proposed in 1989 was something that he “gave to the world.” But as we see in Anderson’s article on The Long Tail, there are plenty of private, capitalist interests online now, making money and changing the world in their own ways. Many of the biggest influences on digital culture today are commercial companies: Google, Apple, Facebook, etc.

Whether we’re talking about public organizations (like governments, universities, or non-profits) or private companies (like businesses, start-ups, service providers, etc), it’s never as simple as “public” or “private,” with no in-between. Even public institutions need financial stability, and even for-profit companies should (hopefully) be thinking at least a little bit about the public good. How do public and private entities work together and influence each other in relation to digital practices, policies, and cultures?

Somewhat related to these questions: I started watching National Geographic’s The Valley of the Boom this week. I wish I’d seen more about this series before I finalized our reading list this semester, but the timing just didn’t work out for me to add it as required viewing. The show adds yet another historical perspective on the internet—this time from a corporate angle. If you’re at all curious, I’d definitely recommend it.

Finally, I have to comment on how many acronyms have cropped up just in these first few readings. Maybe another mini-class-project we could undertake is a short glossary of acronyms and accompanying explanations. So far I’ve noted P2P, HTTP, HTML, XML, XHTML, CSS, AJAX, PHP, MySQL, RSS, NNTP, and I’m sure many more will cross our paths as we continue through the semester. Which of these acronyms are you already familiar with? Which are new to you? Thinking about when and why you’ve learned some of these but not others could help as you brainstorm and plan for our first assignment, the digital literacy narrative.

Welcome

This course explores past, present, and emerging practices and trends in digital culture. We will consider a range of digital tools, platforms, and cultures, closely examining how such tools and practices influence how we create, share, and preserve texts and other cultural artifacts.

This semester, we’ll engage with historical and contemporary theories of digital culture, and you’ll get to practice creating your own digital artifacts.

As we read and explore ideas of digital culture together, we’ll dip in and out of many types of discourse and media, sampling theory and critique on a range of topics – from social networking platforms, open source principles, and fandoms and gaming communities to crowdsourcing and crowdfunding initiatives, vlogs, blogs, web series, podcasts, and social justice movements.