— Clint Schaff (@clintschaff) March 7, 2014
These stats, assembled by We Are Social, may not be exact, but certainly seem plausible.
This is a guest post for the Sweetland Digital Rhetoric Collaborative’s blog carnival on data and tools.
This semester I am teaching a graduate seminar, Techniques in Information Visualization, and although my classes never enroll only cinema students, this one includes graduate students from five disciplines, at both the MA and PhD levels: English, Education, Journalism, Public Planning & Development, and Architecture. This diversity makes class both extremely rich and challenging to plan and lead. In short, it helps all of us get outside of our comfort zones. The lack of a shared vocabulary, for instance, means jargon must be reexamined and either justified or abandoned.
So what does this have to do with data and its management? First, there is no better topic for the type of defamiliarization inherent in a class like this than information visualization. As numerous posts in this carnival have pointed out, thinking differently about what constitutes a datum and how it leads to information is incredibly important. This is true in all fields. Humanities researchers frequently feel their work has little if any data, while many “hard” and social scientists feel that the inclusion of a survey mechanism adds a statistical element to a project, as though self-reporting is transparent and always accurate.
After deciding (and disclosing) what the dataset will be, I find there are two main uses for information visualization: discovery and representation. The very act of visualizing complex datasets can be illuminating—it is often the only way to see connections and trends among results. But once these insights are gleaned, any visualization needs to be “cleaned up,” so to speak, in order to emphasize the insights to be conveyed. Sometimes this means excluding outliers and sometimes this means simplifying certain aspects, but it should always be a rhetorically savvy act, one with intentionality. Both activities—the discovery and the representation—are extremely useful ones. As more of the world is data driven, we must interrogate the basis of the dataset as well as its representation. To this end, I will briefly discuss a tool and a method for visualizing data that can defamiliarize it in the process.
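To see why excluding an outlier is a rhetorical choice rather than a neutral one, consider a minimal sketch (the numbers are invented for illustration): dropping a single extreme value changes the story a summary statistic tells.

```python
# A tiny illustration of why "cleaning up" data is a rhetorical act:
# excluding one outlier changes the story the summary tells.
# The values are invented for the example.

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

values = [3, 4, 5, 4, 3, 42]  # one extreme outlier at the end

with_outlier = mean(values)
without_outlier = mean(values[:-1])

print(round(with_outlier, 2))   # 10.17
print(without_outlier)          # 3.8
```

Neither number is "wrong"; the point is that the choice of which to display should be made with intentionality and disclosed to the audience.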
The first is a tool, but it’s actually so much more: Processing. As the Processing site notes, it is “a programming language, development environment, and online community.” Processing lives at the intersection of math and the visual arts, rendering and displaying data dynamically. Perhaps one of the most prominent projects is We Feel Fine by Jonathan Harris and Sep Kamvar, which scours the internet every ten minutes looking for blog posts that contain the phrases “I feel” or “I am feeling” and then displays this data dynamically, making it visually quite stunning. Processing has been used for rapid prototyping in addition to full-scale projects, and there are numerous examples on its site. It is free and open source with excellent documentation and some great tutorials. There is also the Processing.js extension for flexibility and HTML5 integration. We have been slowly rolling it out in our curriculum (the Media Arts + Practice Division of the School of Cinematic Arts) and it’s been exciting to play with the possibilities. At the very least, Processing can foster algorithmic literacy and allows one to dive in with very little coding. In so doing, it can expand the possibilities for working with both data and code.
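The harvesting step behind a project like We Feel Fine can be suggested in a few lines. This is only a toy sketch in Python, not the project’s actual code: a regular expression pulls the “feeling” word out of “I feel …” / “I am feeling …” clauses, which is the kind of datum the visualization then renders.

```python
import re

# Pattern for "I feel <word>" / "I am feeling <word>" clauses --
# a hypothetical simplification of the kind of harvesting
# We Feel Fine performs on blog posts.
FEEL_RE = re.compile(r"\bI (?:feel|am feeling)\s+(\w+)", re.IGNORECASE)

def extract_feelings(text):
    """Return the 'feeling' words found in a blob of blog text."""
    return [m.group(1).lower() for m in FEEL_RE.finditer(text)]

posts = [
    "Today I feel hopeful about the project.",
    "Honestly, I am feeling overwhelmed by the data.",
    "No feelings here.",
]
feelings = [f for post in posts for f in extract_feelings(post)]
print(feelings)  # ['hopeful', 'overwhelmed']
```

Each extracted word becomes a datum; the Processing side of such a project is then responsible for mapping those data to color, motion, and position on screen.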
Similarly, the method I turn to can also function as a bridge between the sciences and the humanities. My current research, the Large Scale Video Analytics (LSVA) project, brings the power of supercomputing to bear on massive video databases (this includes digitized films as well as natively digital video). Not only are most image-based repositories incompletely tagged, there is a loss inherent in any transfer between one semiotic register (image) and another (word). As such, my team is attempting to enhance machine-read image queries by deploying several of them in a single search across thousands of videos, while also allowing crowd-sourced tagging.
One member of the team, Dave Bock, is a visualization expert at the NCSA (National Center for Supercomputing Applications), and he normally works with chemists and physicists visualizing their data. In talking about the LSVA, I often use the mantra “Video is the big data issue of our time,” though it’s remained more of an abstraction to me: a concept that seemed as though it would help the supercomputing people understand the importance of this work. But Dave began treating the moving images as actual data, visualizing them using the same methods he employs to visualize scientific data.
The early results are amazing in that they have helped us to clearly see similarities and differences in things like color timing, shot angle, and edit lengths across videos and across archives without the nagging impulse to focus exclusively on the content of the video. The difficulty with filmic media is that it always seems to be capturing something real, some aspect of the material world. It seems objective, a mechanical rendering of the world. Its constructed nature is often difficult to stay aware of, and yet to film something is to frame it, and to frame is to exclude all else. Moreover, editing footage is also a rhetorically sophisticated act, one that is not ideologically neutral. But treating footage like data de-emphasizes this level of the film, and we can begin to speculate about the cumulative impact of these screens that bombard us daily. Also, by establishing a “barcode” of sorts for one film, we can compare it across thousands in a way that viewing simply would not allow: when there is more video produced each day than a human can view in a lifetime, we simply must find different research methodologies.
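The “barcode” idea can be sketched concretely. In this toy Python version (the frames are synthetic stand-ins for decoded video, and the distance measure is my own crude choice, not the LSVA’s), each frame is reduced to its average color, so an entire film becomes one row of color stripes that can be compared numerically against thousands of others:

```python
# Toy sketch of a film "barcode": reduce each frame to its average color,
# then compare whole films as sequences of color stripes.
# Frames here are synthetic lists of (r, g, b) pixels, not real video.

def mean_color(frame):
    """Average the (r, g, b) pixels of one frame into a single stripe color."""
    n = len(frame)
    return (sum(p[0] for p in frame) / n,
            sum(p[1] for p in frame) / n,
            sum(p[2] for p in frame) / n)

def barcode(frames):
    """One stripe per frame: the film collapsed into a row of colors."""
    return [mean_color(f) for f in frames]

def distance(bc1, bc2):
    """Crude per-stripe distance between two equal-length barcodes."""
    return sum(abs(a - b)
               for s1, s2 in zip(bc1, bc2)
               for a, b in zip(s1, s2)) / len(bc1)

# Two tiny "films": one warm-toned, one cool-toned.
warm = [[(200, 80, 60), (220, 100, 80)], [(210, 90, 70), (230, 110, 90)]]
cool = [[(60, 80, 200), (80, 100, 220)], [(70, 90, 210), (90, 110, 230)]]

print(distance(barcode(warm), barcode(cool)))  # 280.0
```

The payoff is scale: once a film is a barcode, questions about color timing across an entire archive become arithmetic rather than viewing.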
As with Processing, the benefits of this approach are neither immediately nor straightforwardly apparent, though the benefits are many. To visualize image-based media, and to spatialize time-based media, is a bit trippy, but I am confident it will be the source of new insights and help us form more sophisticated research questions about all data, image-based or not.
This short piece was made for a panel at the 2013 Computers and Writing Conference. The panel was organized by Naomi Silver (of U-Mich) and included a wonderful group of colleagues: Cheryl Ball, Kristine Blair, James Purdy, Joyce Walker.
C+W2013: Roundtable on the Futures of Composition
MOOCs as filmic texts
Hello. I’m Virginia Kuhn. I wish I could be physically present but I thank my colleagues for allowing me to participate in this way. I am an Associate Professor of Cinema in the newly formed Media Arts + Practice Division of the USC School of Cinematic Arts.
One of the key questions around which this panel was organized concerned MOOCs—massively open online courses—and their impact on FYC. And it’s no wonder: Even the most cursory search of the Chronicle of Higher Education uncovers copious articles that reference MOOCs. But what is even more revealing is when these references occur: 3 years ago there were 12, but in the last year, there have been 290. A search of Inside Higher Ed reveals quite similar results.
Whether it’s Coursera, Udacity, or Khan Academy, MOOCs have captured the pedagogical imaginary across all disciplines in a way that is fairly unprecedented. MOOC mania does, however, bear some resemblance to the hype surrounding initial computer integration into schools. Like those early sound bites, MOOCs will either revolutionize education or ruin it. They will democratize access or stratify university education, widening the chasm between elite institutions and all others. Besides this hype, though, there is little evidence that these private companies are applying the principles of connectedness and distributed learning that George Siemens and Stephen Downes pioneered in 2008.
These issues are complex and compelling but for me, the more interesting area of inquiry centers on the actual ‘texts’ that comprise a MOOC. The architecture of computer networks [CLOUD] has only recently been able to accommodate the advances in graphics processing units [GPUs] which allow online hosting and streaming of big fat video files. But the impact is enormous. And I don’t think that is hyperbole. Video is everywhere. Not only do we view it [reading], we produce it [writing].
Indeed, much of my work centers on the premise that digital technologies endow filmic texts with book-like qualities. Unlike the ephemerality and broadcast nature of early television and cinema, video texts can be analyzed in a sustained way—they can be stopped, started, and studied with relative ease. They can also be created by anyone with a cell phone. In this light, we might view MOOCs as libraries that house these video-texts; flipping the classroom—a term that has been bandied about for assigning a video as homework and reserving class time for discussion. But this flipping is based on a lecture model: if we instead view these videos as the textbook that students are assigned to read in preparation for class work and discussion, then this flipped classroom is pretty much business as usual.
And this seems to be what Coursera has recently realized: on May 29, they announced a shift in their approach: they will partner with 10 public institutions to explore “campus-based MOOCs” or blended learning. As one astute article notes [screen shot of Jump the shark article], rather than competing with state universities as originally planned by inserting their rock star faculty’s lectures, they will now compete with Blackboard and other course management systems.
It is the composition of these video-texts that should concern us, just as the composition of alphabetic texts has been our focus to date.
A number of companies are in the video-making business and many see educational videos as a huge market. Indeed Kaltura, the video platform that offers both hosting and online editing, is holding a summit next week. The promotional materials explicitly link commercial video with educational video. It’s not that there are no overlaps; however, pedagogy and advertising are (or should be) rhetorically discrete. Moreover, rhetoricians ought to be actively weighing in on the ideology inherent in cinematic language.
And some are. Khan Academy videos have been called out by subject-matter experts as flawed, and have given rise to Mystery Theatre-style parody videos.
Or take the case of Common Craft educational videos. The Common Craft goal is explained in a book called The Art of Explanation, in which CC founder Lee Lefever makes a case for creating videos that explain complex ideas in "Plain English." Lefever notes that Common Craft has been hired by educational institutions, and a "Common Craft style" has emerged. It is worth mentioning that nowhere in this art of explanation is the word rhetoric mentioned.
But what happens when complexity cannot be represented in "Plain English" and/or when such companies, who work with little if any scholarly input, simply get it wrong? There are numerous examples of flawed logic in the down-home narrative style of Common Craft's educational videos, which cover topics such as plagiarism, RSS, and Twitter.
Perhaps the most egregious example comes in the advice for constructing a video explanation. The recommended 3-minute length includes
If we consider writing with video as an extension and evolution of the academic essay, then the implications of this provocation for writing studies are numerous.
Common Craft educational videos:
Coursera jumps the shark, Higher Education Strategy Associates: http://higheredstrategy.com/coursera-jumps-the-shark/
Coursera Blog. http://blog.coursera.org/post/51696469860/10-us-state-university-systems-and-public-institutions
Inside Higher Ed: http://www.insidehighered.com/search/site/MOOCs
Kaltura’s video summit with recommendations for educational video based on business video: http://site.kaltura.com/KEVS-2013.html
Lefever, Lee. The Art of Explanation: Making Your Ideas, Products, and Services Easier to Understand. New Jersey: Wiley and Sons, 2013.
The Trouble with Khan Academy: http://chronicle.com/blognetwork/castingoutnines/2012/07/03/the-trouble-with-khan-academy/
This post appeared in Media Commons Front Page section.
The site experienced some difficulties at the time, and since it was the end of the semester, I lost track of the thread and didn’t engage in the conversation as I would have liked. Given the ongoing discussion of these overlaps and differences, it seemed like a good idea to reproduce it here:
At the 2011 Computers and Writing conference, I participated in a plenary session themed around the question, Are You a Digital Humanist? Even though I have not taught a writing course in years, and my faculty appointment is in Cinematic Arts, I’ve stayed connected to the C+W community, which includes a vibrant discussion forum, “techrhet” (technical rhetoricians), and the peer-reviewed journal, Kairos: A Journal of Rhetoric, Technology, and Pedagogy. And viewing the term digital humanities from the C+W perspective proved illuminating.
It reminded me that one of the divergences between media studies and the digital humanities may have less to do with the “digital” and more with the “humanities” side of that term, especially in departments of English. In tracing the emergence of the freshman English class (aka first year composition), Sharon Crowley finds rampant evidence of the “humanist contempt for mass media and popular culture” running through the professional literature. For instance, in 1950, one scholar lamented the “visual minded illiteracy of a generation of television watchers,” just as in 1890, Adams Sherman Hill worried that novels and newspapers would ruin people’s language use as well as their morals (105).
The humanistic tendency to view literature as the height of human expression has traditionally been at odds with the study of mass media, and this privileging of literature, in turn, has implications for the type of critical response considered appropriate. Poets and dramatists are artists and their tool is creativity; the academic essay is about literature but, using the tool of criticism, it takes the form of prose. Given these roots, it’s not surprising that many digital humanities projects build tools to help study literature. And, insofar as many film studies programs grew out of literature departments, the privileging of cinema is similar: that is, cinema is the art we write about using academic prose. The creative and the critical are separate entities and the form of the critical does not shift much, nor is it questioned.
The field of rhetoric and composition takes as its subject the shifting nature of communication and expression, and what that means for academic argument as well as for teaching the academic essay. Indeed, the Computers and Writing conference has been problematizing the digital for almost thirty years, and so its members express skepticism about something that might seem like a trendy term: digital humanities.
Personally, I identify more as a digital rhetorician than a digital humanist, mainly because of the rhetorical focus on both the production and the consumption of texts; I encourage the use of all of the available semiotic registers, which no longer include only words, but also images, sound, and interactivity. I make remix videos, I publish pieces that could not have been done on paper, and my research centers on tools for indexing massive video archives. Still, I use terms like digital humanist strategically and contingently, and in this respect, I follow the sentiments of the crowd-sourced Digital Humanities Manifesto 2.0, which argues that the term is not perfect, but it is a placeholder for what comes next. Given that current academic disciplines coalesced during the ascendancy of print literacy, they need rethinking and will likely shift. Our active participation in that process will no doubt begin with conversations like this.
Crowley, Sharon. Composition in the University: Historical and Polemical Essays. U of Pittsburgh P, 1998.
Digital Humanities Manifesto: http://hastac.org/node/2182
“Are You a Digital Humanist?,” Town Hall session, Computers and Writing, 2011. Katherine Hayles, Jentery Sayers, Julie Klein, Alex Reid, Cheryl Ball, Doug Eyman
Friday, April 26, 2013
8:00pm – 12:00am
University Park Campus
USC School of Cinematic Arts (SCA)
Admission is free.
Rhythms + Visions: Expanded + Live 2 will light up the School of Cinematic Arts Complex in an evening of large exterior projections and animated sonic performances. Innovative artists Quayola, Miwa Matreyek and Charles Lindsay will perform an eclectic program of contemporary visual music and audio-visual art.
Quayola’s time-based digital sculptures and immersive audio-visual work have been presented worldwide. Using huge projections and sound, he will perform Partitura and other animated sound visualizations.
Miwa Matreyek creates magical animated illusions in layered, multi-projection performances. Matreyek performs as a live actor within her animations, which are akin to George Méliès or an animated storybook. She will perform a new animated audio-visual work.
Charles Lindsay, the artist in residence at the SETI Institute, combines science and astronomical visions with accompanying live vocal and electronic musicians. Their piece Trout Fishing in Space envisions a future when humans will leave Earth for good. The work features extraordinary images from the Cassini space mission and earthbound images of nature.
In addition to these performances, the exterior spaces will come alive with multimedia works by faculty and students from USC’s Digital Arts and Animation and Interactive Media divisions.
Organized by Michael Patterson (Cinematic Arts).
Photo: Scott Groller