Discriminability Tests for Visualization Effectiveness and Scalability

Contributors:

Rafael Veras and Christopher Collins

The scalability of a particular visualization approach is limited by the ability of people to discern differences between plots made with different datasets. Ideally, when the data changes, the visualization changes in perceptible ways. This relation breaks down when there is a mismatch between the encoding and the character of the dataset being viewed. Unfortunately, visualizations are often designed and evaluated without fully exploring how they will respond to a wide variety of datasets. We explore the use of an image similarity measure, the Multi-Scale Structural Similarity Index (MS-SSIM), for testing the discriminability of a data visualization across a variety of datasets. MS-SSIM is able to capture the similarity of two visualizations across multiple scales, including low-level granular changes and high-level patterns. Significant data changes that are not captured by MS-SSIM indicate visualizations with low discriminability and effectiveness. The measure’s utility is demonstrated with two empirical studies. In the first, we compare human similarity judgments and MS-SSIM scores for a collection of scatterplots. In the second, we compute the discriminability values for a set of basic visualizations and compare them with empirical measurements of effectiveness. In both cases, the analyses show that the computational measure is able to approximate empirical results. Our approach can be used to rank competing encodings on their discriminability and to aid in selecting visualizations for a particular type of data distribution.
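
As a rough sketch of how such a test could be computed, the snippet below approximates MS-SSIM between two rendered visualization images using scikit-image. This is not the authors’ implementation: full MS-SSIM applies the luminance term only at the coarsest scale, whereas this simplified version applies plain SSIM at every scale, combined with the standard scale weights from Wang et al. (2003).

    import numpy as np
    from skimage.metrics import structural_similarity
    from skimage.transform import downscale_local_mean

    # Standard MS-SSIM scale weights from Wang et al. (2003).
    WEIGHTS = np.array([0.0448, 0.2856, 0.3001, 0.2363, 0.1333])

    def ms_ssim(img_a, img_b, weights=WEIGHTS):
        """Approximate MS-SSIM between two grayscale images in [0, 1].

        Images should be at least ~112x112 pixels so the coarsest of
        the five scales still fits SSIM's default 7x7 window.
        """
        scores = []
        for _ in weights:
            scores.append(structural_similarity(img_a, img_b, data_range=1.0))
            # Low-pass filter and downsample by 2 to move to the next scale.
            img_a = downscale_local_mean(img_a, (2, 2))
            img_b = downscale_local_mean(img_b, (2, 2))
        scores = np.clip(scores, 1e-6, 1.0)
        # Weighted geometric mean of the per-scale similarities.
        return float(np.prod(scores ** weights))

In a discriminability test, the same visualization design would be rendered over many sampled datasets and the pairwise scores examined: an encoding whose images stay near-identical (scores close to 1) while the underlying data varies substantially has low discriminability.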

Materials related to this research are available for download here.

Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and Fundação CAPES (9078-13-4/Ciência sem Fronteiras).

Saliency Deficit and Motion Outlier Detection in Animated Scatterplots

Contributors:

Rafael Veras and Christopher Collins

We report the results of a crowdsourced experiment that measured the accuracy of motion outlier detection in multivariate, animated scatterplots. The targets were outliers either in speed or direction of motion and were presented with varying levels of saliency in dimensions that are irrelevant to the task of motion outlier detection (e.g., colour, size, position). We found that participants had trouble finding the outlier when it lacked irrelevant salient features and that visual channels contribute unevenly to the odds of an outlier being correctly detected. Direction of motion contributes the most to the accurate detection of speed outliers, and position contributes the most to accurate detection of direction outliers. We introduce the concept of saliency deficit in which item importance in the data space is not reflected in the visualization due to a lack of saliency. We conclude that motion outlier detection is not well supported in multivariate animated scatterplots.
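
The odds framing suggests a logistic-regression style analysis of trial outcomes. The sketch below is illustrative only, not the study’s analysis code: it fits a logistic model with statsmodels on a synthetic placeholder table (all column names are hypothetical) and reports per-channel odds ratios.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic placeholder trials; the real data came from the
    # crowdsourced experiment, one row per detection attempt.
    rng = np.random.default_rng(0)
    n = 500
    trials = pd.DataFrame({
        "detected": rng.integers(0, 2, n),        # 1 = outlier found
        "colour_salient": rng.integers(0, 2, n),  # task-irrelevant channels
        "size_salient": rng.integers(0, 2, n),
        "position_salient": rng.integers(0, 2, n),
    })

    # Log-odds of detection as a function of which irrelevant channels
    # made the outlier salient.
    model = smf.logit(
        "detected ~ colour_salient + size_salient + position_salient",
        data=trials,
    ).fit(disp=False)

    # Exponentiated coefficients are the odds ratios per channel.
    print(np.exp(model.params))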

This research was given an honourable mention at CHI 2019.

Materials used to conduct this research are available for download here.

ActiveInk: (Th)Inking with Data

During sensemaking, people annotate insights: underlining sentences in a document or circling regions on a map. They jot down their hypotheses: drawing correlation lines on scatterplots or creating personal legends to track patterns. We present ActiveInk, a system enabling people to seamlessly transition between exploring data and externalizing their thoughts using pen and touch. ActiveInk enables the natural use of a pen for active reading behaviours while supporting analytic actions by activating any of the resulting ink strokes. Through a qualitative study with eight participants, we contribute observations of active reading behaviours during data exploration and design principles to support sensemaking.
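
One way to picture the activation step: a closed ink stroke drawn over a scatterplot can be treated as a lasso that selects the data points it encloses. The sketch below is a hypothetical reconstruction of that idea using matplotlib’s path hit-testing, not code from the ActiveInk system itself.

    import numpy as np
    from matplotlib.path import Path

    def activate_lasso(stroke_xy, points_xy):
        """Treat a closed ink stroke as a lasso selection.

        stroke_xy: (n, 2) stroke vertices in data coordinates.
        points_xy: (m, 2) scatterplot point positions.
        Returns a boolean mask of the points inside the stroke.
        """
        lasso = Path(stroke_xy, closed=True)
        return lasso.contains_points(points_xy)

    # Hypothetical usage: a roughly circular stroke around the origin.
    theta = np.linspace(0, 2 * np.pi, 50)
    stroke = np.column_stack([np.cos(theta), np.sin(theta)])
    points = np.random.default_rng(1).normal(size=(200, 2))
    selected = activate_lasso(stroke, points)
    print(f"{selected.sum()} of {len(points)} points selected")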

This research was given an honourable mention at CHI 2019.

Learn more about ActiveInk by visiting the project’s website and be sure to check out Microsoft’s blog post on ActiveInk.

EduApps: Helping Non-Native English Speakers with Language Structure

Errors influenced by a learner’s first language (L1) are very frequent among learners of English as a second language (L2), even more so at higher proficiency levels (upper-intermediate and advanced). Our project aims to analyze errors made by learners from specific L1s using learner corpora. Based on this analysis, we want to focus on a specific type of error and research a way to identify it automatically in learners’ essays depending on their L1. This would allow us to implement an application that helps English as a Second Language (ESL) students identify and analyze their errors and better understand the reasoning behind them, consequently improving their English level.
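
As a sketch of what automatic identification could look like, the snippet below trains a character n-gram classifier with scikit-learn to flag sentences containing one target error type (here, missing articles, a frequent L1-influence error for learners whose first language lacks articles). The sentences, labels, and model choice are illustrative stand-ins for the learner-corpus data and methods the project would use.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled examples: 1 = contains the target error.
    sentences = [
        "I went to supermarket yesterday.",       # missing article
        "I went to the supermarket yesterday.",
        "She is best student in class.",          # missing articles
        "She is the best student in the class.",
    ]
    labels = [1, 0, 1, 0]

    # Character n-grams capture local structure without hand-built rules.
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    clf.fit(sentences, labels)
    print(clf.predict(["He bought new car last week."]))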

About the EduApps initiative

EduApps is a suite of apps housed in an online environment that focuses on the health, well-being, and development of one’s mind, body, and community. Our research project, titled “There’s an App for That,” investigates the design process, development, implementation, and evaluation of this suite of educational apps. Specifically, we are interested in helping students build confidence and competence in the cognitive, socio-emotional, and physical domains. We are also interested in the impact a learning portal can have on students’ learning, on teachers, and on the surrounding community. We hope that our research can build capacity for investigating and effecting innovation in the use of digital technology in formal and informal education settings. We have partnered with school boards and community organizations to develop and research the apps. More about each domain, including its purpose, apps, and related research, can be found at http://eduapps.ca/.

Metatation: Annotation for Interaction to Bridge Close and Distant Reading

In the domain of literary criticism, many critics practice close reading, annotating by hand while performing a detailed analysis of a single text. Often this process employs the use of external resources to aid analysis. In this article, we present a study and subsequent tool design focused on leveraging a critic’s annotations as implicit interactions for initiating context-specific computational support that automatically searches external resources. We observed 14 poetry critics performing a close reading, revealing a set of cognitive practices supported through free-form annotation that have not previously been discussed in this context. We used guidelines derived from our study to design a tool, Metatation, which uses a pen-and-paper system with a peripheral display to utilize reader annotations as underspecified interactions to augment close reading. By turning paper-based annotations into implicit queries, Metatation provides relevant supplemental information in a just-in-time manner and acts as a bridge between close and distant reading.
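
The key mechanism is turning a free-form annotation into an implicit query against an external resource. As a hypothetical illustration (not Metatation’s actual implementation or resource set), the sketch below takes the word under an underline annotation and fetches rhymes and related words from the public Datamuse API, the kind of supplemental material a poetry critic might want on a peripheral display.

    import requests

    def implicit_query(annotated_word):
        """Turn an underlined word into queries against an external resource.

        Uses the public Datamuse API (https://www.datamuse.com/api/) as a
        stand-in for the references a critic might consult.
        """
        base = "https://api.datamuse.com/words"
        rhymes = requests.get(base, params={"rel_rhy": annotated_word}).json()
        related = requests.get(base, params={"ml": annotated_word}).json()
        return {
            "rhymes": [w["word"] for w in rhymes[:5]],
            "related": [w["word"] for w in related[:5]],
        }

    # Hypothetical usage: the critic underlines "nightingale" in a poem.
    print(implicit_query("nightingale"))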

Perceptual Biases in Font Size as a Data Encoding

Many visualizations, including word clouds, cartographic labels, and word trees, encode data within the sizes of fonts. While font size can be an intuitive dimension for the viewer, using it as an encoding can introduce factors that may bias the perception of the underlying values. Viewers might conflate the size of a word’s font with the word’s length, the number of letters it contains, or the larger or smaller heights of particular characters (‘o’ vs. ‘p’ vs. ‘b’). We present a collection of empirical studies showing that such factors, which are irrelevant to the encoded values, can indeed influence comparative judgements of font size, though less than conventional wisdom might suggest. We highlight the largest potential biases and describe a strategy to mitigate them.
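
To see why such biases are plausible, it helps to measure how much a word’s rendered footprint varies at one fixed font size. The sketch below uses Pillow to compare bounding boxes of words set at the same nominal size; the font file path is an assumption (substitute any TrueType font available on your system).

    from PIL import ImageFont

    # Assumption: a TrueType font file findable on your system.
    FONT_PATH = "DejaVuSans.ttf"
    font = ImageFont.truetype(FONT_PATH, size=48)  # one nominal font size

    # Same font size, very different footprints: word length, ascenders
    # ('b'), and descenders ('p') all change the rendered bounding box.
    for word in ["moon", "pop", "bib", "hippopotamus"]:
        left, top, right, bottom = font.getbbox(word)
        print(f"{word:>12}: width={right - left:4d}px, height={bottom - top:3d}px")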
