Listen to the sound of data

(Written by Wibke Weber. Illustration: Schematic oscillogram of a sound. Author: Klaus-Dieter Keller, CC BY-SA 3.0)

90 rounds in 10 seconds. More than 1,100 rounds in all. These are the figures from the shooting in Las Vegas in October 2017.

“How did the gunman fire so quickly?” the New York Times asked, and analyzed the firing rate: it roughly matches that of a fully automatic weapon, which fires 98 shots in 7 seconds.

By way of comparison, 24 rounds were fired in 9 seconds in the Orlando mass shooting of 2016. In a graphic, the New York Times visualized the shots per second fired in the shootings alongside those fired by a fully automatic weapon: each dot representing one shot.

The graphics themselves don't look spectacular; nevertheless, they won the Malofiej Award 2018 in the category Breaking News. You can see the graphics here.

The striking thing about the graphics is not what you see, but what you hear: the fast shot sequence and the massive number of shots. Every shot is a dull sound of violence – it feels like standing in a hail of bullets.

Hearing is different from seeing

Journalistic data visualizations for listening are rare so far. As the term suggests, data visualization is about visualizing data; only the visualization makes the data accessible. But in the case of the New York Times graphic, a second level of data transformation is added to the visual representation: the auditory transformation, or sonification. Sonification means making data accessible and tangible by transforming them, or aspects of them, into sound, thus presenting the underlying information in a different way. Because hearing is different from seeing.

We listen to a melody and feel touched. We hear a noise and turn our heads. We hear a signal and are alerted. We hear a voice and feel calmed. While the visualized data in the form of bar charts and graphs often appear objective and sober, sonified data evoke emotions.

Multimodal data visualizations using sonification follow a triple logic:

(i) the numerical logic based on mathematical calculation, usually stored in tables and source code.

(ii) the logic of the image: this means the transfer of data into the visual mode. The logic of the image superimposes a second subjective level of meaning on the graphic since visualizing data is always an act of interpretation.

(iii) sonification: making the data audible. This third layer, which is a further act of interpretation, can either enrich the logic of the visual or add another layer of meaning to the visualization. The latter is the case with the New York Times graphics, where the shots immediately evoke images of the mass shooting in the user’s mind. The mute animated graph does not have the same effect. Why? Because hearing is always emotional. 

The ear perceives things in a more differentiated and sensitive way than the eye. If we could see as well as hear, we would recognize a 10-watt bulb from a distance of 1000 km. We hear over a spectrum of 10 octaves; the eye perceives only 1 octave of light. 

‘Octaves’ is the cue for another audible data visualization. The data visualization of the German newspaper Berliner Morgenpost spans several octaves. (You will find the visualization here.) The data team translated the steep decline of the Social Democratic Party of Germany (SPD) into music, based on 3,838 polls from January 1998 to the end of February 2018. The deep final tone sounds dramatic and seems to mark the end of the party – which leads us to the next point: the aesthetic devices of sonification.
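The basic idea behind such a sonification – mapping each data point to a pitch across a span of octaves – can be sketched in a few lines. The following is a minimal, hypothetical illustration in Python using only the standard library; the function name, parameters, and sample values are all illustrative assumptions, not the Morgenpost team's actual method or data.

```python
import math
import struct
import wave

def sonify(values, out_path="sonified.wav", base_freq=220.0, octaves=2,
           note_dur=0.25, rate=44100):
    """Map each data point to a sine-tone pitch spanning `octaves` octaves
    above `base_freq`, and write the resulting note sequence to a WAV file."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in values:
        # Normalise the value to [0, 1], then map it exponentially to a
        # frequency, so equal data steps become equal musical intervals.
        norm = (v - lo) / span
        freq = base_freq * (2.0 ** (norm * octaves))
        n = int(note_dur * rate)
        for i in range(n):
            # Short linear fade in/out to avoid clicks between notes.
            env = min(1.0, i / 500, (n - i) / 500)
            sample = int(32767 * 0.5 * env *
                         math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)  # 16-bit little-endian PCM
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return out_path

# Illustrative (invented) declining poll percentages: the falling values
# produce a descending melody, ending on the lowest, "dramatic" tone.
sonify([41, 38, 34, 29, 23, 20, 17])
```

Because the mapping is exponential rather than linear, a halving of the poll numbers sounds like a drop of roughly one octave – the design choice that gives the final tone its depth.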

Points and lines, rhythm and timbre

Aesthetic stylistic devices in data visualizations, their multimodal interplay and their effects on how we perceive a graphic are aspects investigated by the research project INDVIL. The visual and auditory range of aesthetic elements is infinite: points, lines and areas as the basic units of data visualizations can be designed and varied in size, pattern, direction, shape, color, value and position. With advancing technology, these basic units have been extended by further variables, e.g. movement, speed, direction, flicker, rhythm, and synchronization. For instance, an animated graph looks more impressive than a static graph, thus conveying a different message. Data can also be made visible in the form of visual metaphors.

We know from film music and sound design that auditory stylistic devices are manifold too. The data sounds of the two journalistic visualizations, and thus their messages, are shaped by different parameters: volume, pitch, duration, timbre, tempo, rhythm, harmony, and musical style. Among the research questions: How does the message of a visualization change when the data are presented in Vivaldi style? Should the shots fired by a gunman sound realistic, or should they be sonified symbolically? How trustworthy is a sonification? Which methods are used to transform the data into sounds? Which program or algorithm are the sonified data built upon? How do such data visualizations relate to the objectivity norm in journalism? And does the audible version provide a new and surprising perspective on the data?

Questions that call for answers, especially as new technologies such as augmented and virtual reality (AR/VR) or artificial intelligence enter the newsrooms. AR and VR allow us to dive directly into the data, interact with bar and pie charts, or zoom into timelines and maps. It's no longer about seeing data. It’s about feeling it.

New York Times (Oct 5, 2017): Nine Rounds a Second: How the Las Vegas Gunman Outfitted a Rifle to Fire Faster.

Berliner Morgenpost (Mar 3, 2018): Der Sound zum tiefen Fall der SPD (The sound of the deep fall of the SPD). Those interested in remixing the political sound can download the raw data, including the sheet music.

INDVIL Project: Innovative Data Visualization and Visual-Numeric Literacy.
