An Invitation to Ukrainian Students

The research funding agencies in Canada have established a temporary emergency fund to support Ukrainian students in continuing disrupted graduate studies at a university in Canada. I know that for most Ukrainians there are more pressing issues than continuing studies, and that even getting to Canada is a challenge. That said, if you are interested in HCI and/or InfoVis research and you are affected by this crisis, I would be happy to talk to you about joining my lab under this emergency funding. I can likely supplement the funding to provide for degree completion. Email me directly.


Vialab Launches New Blog on Medium!

Early this summer we began posting blog content on Medium. Below is a brief description of our blog posts as of this writing.

Case study: Visualizing sexual harassment in academia

Our first blog post is about our Tied in Knots project and the research that went into it. In this article, we explore the lessons learned and describe the unique design journey our researchers undertook in visualizing data of such an intimate and unstructured nature in a manner that preserves the emotions and harrowing impact behind each survivor’s story.

Covid Connect

Our second blog post was published alongside the initial release of our Covid Connect web app that uses machine learning to guide users through a chatbot conversation about their experiences during the COVID-19 pandemic and display related stories from other people. This article discusses the design journey of our researchers, the technology behind the application, and how this app seeks to address the mental health crisis associated with the pandemic.

Visualizing Language Transfer Effects in Large Learner Corpora

This article describes in detail the result of a collaboration between our lab and researchers from the University of Konstanz in Germany: the H-matrix visualization. It covers the utility of the H-matrix and its many features, as well as the research that informed our design decisions.

Effects of Multiple Visual Channels on Outlier Detection

This article describes the results of a research project in which we investigated how outlier-ness (or lack thereof) in one visual dimension of a multivariate data visualization affects the perception of a data point as an outlier in another visual dimension. Our findings suggest that designers should exercise caution when designing visualizations of multivariate data, especially when those visualizations are to be used for data discovery tasks.
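The study concerns perception rather than algorithms, but the underlying notion of per-dimension outlier-ness can be made concrete. The following is an illustrative sketch only (not code from the study): the same data point can be a clear statistical outlier in one dimension while blending in along another.

```python
import statistics

def zscore_outliers(values, threshold=2.5):
    """Flag the indices whose z-score exceeds the threshold in one dimension."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return {i for i, v in enumerate(values) if sd > 0 and abs(v - mean) / sd > threshold}

# Point at index 10 is extreme in x but unremarkable in y.
xs = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 50]
ys = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1]

print(zscore_outliers(xs))  # the extreme x value is flagged: {10}
print(zscore_outliers(ys))  # nothing stands out in y: set()
```

A viewer scanning the y channel alone would have no statistical reason to notice this point, which is exactly the kind of cross-channel interaction the study examined perceptually.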

Parallel Tag Clouds: A look back at one of our most influential papers

In our most recent blog post, we reflect on one of our most influential and cited research papers, which describes a unique text corpus visualization tool we call Parallel Tag Clouds. In this article, we describe the PTC visualization, the text corpora we applied it to, the insights we obtained from using it, and finally the surprising reasons the Parallel Tag Clouds paper was so influential.

We plan to continue writing for this blog, and we hope that in doing so we will continue to make our research more accessible and disseminate our findings to a wider audience.

Vialab members to have largest ever presence at the IEEE VIS Conference

This year at the IEEE VIS Conference (Oct 19-25), members of the Ontario Tech University visualization research group Vialab will be at the Vancouver Convention Centre to present research results across the entire spectrum of conference events. Starting on Saturday, visiting researcher Tommaso Elli (Politecnico di Milano) will present his digital humanities dissertation plans at the doctoral colloquium. On Sunday, Christopher Collins (conference posters co-chair) will present a paper co-authored with lab members Adam Bradley and Victor Sawal, Approaching Humanities Questions Using Slow Visual Search Interfaces, at the VIS4DH workshop. After lunch, Christopher will give the keynote Mixed-Initiative Visual Analytics: Model-Driven Views and Analytic Guidance at the MLUI workshop.

Christopher will participate on day one in the panel discussion at the EVIVA-ML Workshop. On Tuesday in the opening plenary, Christopher will accept the VAST Test of Time Award with Martin Wattenberg and Fernanda Viégas for their 2009 paper Parallel Tag Clouds to Explore Faceted Text Corpora.

In the first papers session (aptly titled Provocations) on Tuesday (2:50, Ballroom B), Kyle Hall will present a paper co-authored with lab members Adam Bradley and Christopher Collins, Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations. On Tuesday afternoon in Room 1, Mariana Shimabukuro will present the short paper H-Matrix: Hierarchical Matrix for Visual Analysis of Cross-Linguistic Features in Large Learner Corpora, a language education visualization created with collaborators from the University of Konstanz.

On Wednesday (10:50am, Room 2+3) Brandon Laughlin will present A Visual Analytics Framework for Adversarial Text Generation at the VizSec Symposium on Visualization for Cyber Security. This work, in conjunction with Ontario Tech researchers from the Faculty of Business and IT, proposes a method to leverage machine learning and human linguistic expertise together to create adversarial examples which can convince both human readers and machine learning classifiers. Later that morning (11:50am, Room 8+15), Menna El-Assady will present our CG&A paper Visualization and the Digital Humanities: Moving Toward Stronger Collaborations written in collaboration with Adam Bradley, Christopher Collins, and collaborators from several institutions. This paper presents the experiences of interdisciplinary collaboration from both the humanities and computer science points of view. The poster reception starting at 5:10pm in Ballroom ABC will include Mariana Shimabukuro’s poster Cross-Linguistic Word Frequency Visualization for PT and EN (displayed in poster position 87 starting Monday for the entire week).

On Thursday afternoon (4:40, Ballroom B) Menna El-Assady will present Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections. This paper presents a new method for manipulating linguistic models such as topic models through expressing semantic knowledge using a visual interface, allowing for the training and adjustment of complex black-box models without adjusting obscure model parameters.

On Thursday (11:35, Ballroom A), alumnus Rafael Veras will present his work Discriminability Tests for Visualization Effectiveness and Scalability. When changes in the data are not visible to viewers, a visualization is not effective. This project introduces a new low-cost method to model human perception and determine the limits of discriminability for visualizations.

At this year’s IEEE VIS, Vialab will present 3 full papers, 1 short paper, 1 symposium paper, 1 workshop paper, 1 poster, 1 CG&A paper, and 1 doctoral colloquium talk! We gratefully acknowledge the funding support of the Canada Research Chairs program and NSERC, which has made these projects possible. Open access preprints of papers are available on this website as well as on arXiv.

Vialab members presenting award-winning work at the ACM CHI Conference on Human Factors in Computing Systems

From Saturday, May 4th to Thursday, May 9th, 2019, Dr. Christopher Collins and Dr. Rafael Veras from vialab will attend the ACM CHI Conference on Human Factors in Computing Systems in Glasgow to promote our research activities. CHI is the premier international conference on Human-Computer Interaction, a place where researchers and practitioners from across the world gather to discuss the latest in interactive technology. The conference is highly selective, accepting only about 23% of submissions to the full papers track. Both papers co-authored by vialab members received honorable mention awards, which are given to the top 4% of submissions in any given year.

Saliency Deficit and Motion Outlier Detection in Animated Scatterplots (Honorable mention)

Rafael Veras and Christopher Collins

Monday May 6th at 14:00 – Room: Lomond Auditorium

When a data visualization is animated, some points naturally pop out and others are less visible among the clutter. Through a large scale perceptual experiment, we determined which factors are most likely to cause important data elements to be seen or missed. The resulting model can be used to guide the design of visualizations to ensure important data points will be visible.

ActiveInk: (Th)Inking with Data (Honorable mention)

Hugo Romat, Nathalie Henry Riche, Ken Hinckley, Bongshin Lee, Caroline Appert, Emmanuel Pietriga, Christopher Collins.

Tuesday May 7th at 14:00 – Room: Hall1

Interacting with a pen to write thoughts and sketch ideas is a natural way to think through an analysis. In ActiveInk, we merge note-taking with novel ink-driven actions on data visualizations, such as circling a data item to highlight it, or crossing it out to remove it from view.
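As a rough illustration of the idea (this is a hypothetical sketch, not the ActiveInk implementation), an ink stroke can be classified by its geometry and mapped to a data action: a stroke that closes on itself reads as a circling gesture, while a long open stroke reads as a strike-through.

```python
import math

def classify_stroke(points):
    """Crude ink-gesture classifier (illustrative only, not ActiveInk's method):
    a near-closed stroke maps to 'highlight', an open stroke to 'remove'."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    gap = math.hypot(xn - x0, yn - y0)  # distance between stroke start and end
    length = sum(math.hypot(b[0] - a[0], b[1] - a[1])
                 for a, b in zip(points, points[1:]))
    if length == 0:
        return "tap"
    # If the endpoints nearly meet relative to the stroke length, it is a loop.
    return "highlight" if gap < 0.2 * length else "remove"

circle = [(math.cos(t / 10), math.sin(t / 10)) for t in range(63)]  # near-closed loop
strike = [(x / 10, 0.0) for x in range(11)]                         # straight line

print(classify_stroke(circle))  # highlight
print(classify_stroke(strike))  # remove
```

A real system would also have to disambiguate gestures from ordinary handwriting, which is part of what makes this design space interesting.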

Other Contributions from Ontario Tech University

Members of Ontario Tech’s Games User Research Group also have a strong showing at CHI 2019!

Full papers:

Let’s Play Together: Adaptation Guidelines of Board Games for Players with Visual Impairment

Frederico da Rocha Tomé Filho, Pejman Mirza-Babaei, Bill Kapralos, Glaudiney Moreira Mendonça Junior.

Wednesday May 8th at 14:00 – Room: Gala

While board games have been rising in popularity in the past decade, they have been largely inaccessible for those with visual impairment. We investigated and evaluated various accessibility strategies to make these games playable to all users, regardless of visual ability, and propose a series of guidelines for the design and evaluation of accessible games.

Aggregated Visualization of Playtesting Data

Günter Wallner, Nour Halabi, Pejman Mirza-Babaei

Wednesday May 8th at 16:00 – Room: Hall 2

Visualization techniques are currently being employed to help integrate quantitative and qualitative data. This paper proposes an aggregated visualization technique to simultaneously display mixed playtesting data. We evaluate the usefulness of the technique through interviews with professional game developers and compare it to a non-aggregated visualization.

Late-breaking works:

Artificial Playfulness: A Tool for Automated Agent-Based Playtesting

Samantha Stahlke, Atiya Nova, Pejman Mirza-Babaei

Tuesday May 7th at 10:20 am – Room: Hall 4

Playtesting is a crucial part of the game production process, but testing with human users can be incredibly expensive and time-consuming. Our research aims to address these challenges with PathOS – a prototype framework for simulating player navigation in games through the use of AI. PathOS gives developers a cost-effective option to coarsely predict player behaviour, allowing them to pursue informed iteration on their work earlier in the design process.

FRVRIT – A Tool for Full Body Virtual Reality Game Evaluation

Daniel MacCormick, Alain Sangalang, Jackson Rushing, Ravnik Singh Jagpal, Pejman Mirza-Babaei, Loutfouz Zaman

Tuesday May 7th at 10:20 am – Room: Hall 4

Testing and evaluating how players interact with VR games often requires watching back hours of footage and manually noting down observations. FRVRIT provides developers a way of recording entire VR sessions and visualizing them at a glance, in their entirety.


User Experience (UX) Research in Games

Instructors: Lennart Nacke, Pejman Mirza-Babaei, Anders Drachen

Thursday May 9th, 9:00 to 16:00 – Room: Castle 1 Crown

Vialab member Menna El-Assady to present ‘ThreadReconstructor: Modeling Reply-Chains to Untangle Conversational Text through Visual Analytics’ at EuroVis 2018 in Brno.

On Wednesday, June 6th, 2018, a Vialab member will be presenting a new paper.

“ThreadReconstructor: Modeling Reply-Chains to Untangle Conversational Text through Visual Analytics” is led by Ph.D. student Mennatallah El-Assady in collaboration with the University of Konstanz, and presents a visual analytics approach for detecting and analyzing the implicit conversational structure of discussions. Motivated by the need to reveal and understand single threads in online conversations and text transcripts, ThreadReconstructor combines supervised and unsupervised machine learning models to enable the exploration of the generated threaded structures and the analysis of the untangled reply-chains, comparing different models and their agreement.

ThreadReconstructor will be published in Computer Graphics Forum, volume 37, issue 3, and presented at EuroVis 2018 in Brno.

2016 Course Materials

I have made my slides, assignments, and in-class examples available for students and other instructors who may be interested. These slides are inspired by many others, in particular those of Dr. Mark Green, who often teaches this course at UOIT. In addition, the in-class examples are adapted from a set of examples by Daniel Vogel at Waterloo, and I am grateful to him for making these available. The ray tracing assignment is based on skeleton code and an assignment handout by Dr. Tobias Isenberg. Finally, I am grateful to Dr. Aaron Hertzmann, who was my graphics instructor at the University of Toronto and who set a high bar for the technical content of a course like this. My notes from his course were valuable in preparing to teach this topic.

Course topics overview (2016):

  1. Graphics Pipeline
    1. From model to pixels, overview of the basic process
  2. Introduction to Graphics Programming
    1. GLUT, GLEW, and GLM
    2. Vertex and fragment shaders
    3. Transformations, lighting
    4. Geometrical Data
  3. Modeling
    1. Polygons, face and vertex tables, normal vectors
    2. Transformations, matrices, composition of transformations
    3. Homogeneous coordinates
    4. Implicit representations
    5. Parametric representations, piecewise representation, continuity
    6. Cubic curves, canonical form, blending functions
    7. Hermite, natural spline, Cardinal spline, Bezier curve
    8. Hierarchical modeling, OpenGL examples, display lists
    9. Subdivision algorithms
  4. Rendering
    1. Viewing transformations, projections
    2. Hidden surface, z-buffer, BSP trees
    3. Basic lighting, ambient, diffuse and specular reflection
    4. Texture mapping, Mipmaps, texture mapping in OpenGL
  5. Ray Tracing
    1. Local and global illumination
    2. Basic ray tracing technique, reflection, refraction, shadows
    3. Intersection calculations, sphere, plane, polygons
    4. Performance, bounding volumes, grids
    5. Distributed ray tracing, sampling patterns, path tracing
  6. Graphics Hardware
    1. Video, sync, frame buffers, bandwidth issues
    2. 3D acceleration, path to fixed function pipeline
    3. GPU architecture
  7. Introduction to Visualization
    1. Scientific and information visualization
    2. Visual variables and perception
    3. Colour perception & theory
    4. Colour spaces
    5. Scalar and vector visualization techniques
    6. Marching squares and marching cubes algorithms
    7. Volume rendering, transfer functions, volume traversal
  8. Advanced OpenGL programming
    1. Tessellation and geometry shaders
    2. Procedural textures
    3. GPGPU
    4. OpenGL versions
  9. Graphics Application Development
    1. Data file formats
    2. Interaction
    3. Case studies
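As a small taste of the curves unit above, a cubic Bézier curve can be evaluated directly from its Bernstein blending functions. This is the standard textbook construction, not code from the course materials:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]
    using the Bernstein blending functions."""
    b0 = (1 - t) ** 3
    b1 = 3 * t * (1 - t) ** 2
    b2 = 3 * t ** 2 * (1 - t)
    b3 = t ** 3
    # Blend the four control points component-wise.
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Endpoints are interpolated; the interior control points shape the curve.
print(cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.0))  # (0.0, 0.0)
print(cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5))  # (0.5, 0.75)
print(cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 1.0))  # (1.0, 0.0)
```

The blending functions sum to 1 for every t, so the curve always lies inside the convex hull of its control points, a property the course exploits when discussing piecewise curves and continuity.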

Vialab contributions to IEEE VIS 2017

Vialab members had several contributions to the IEEE VIS conference in Phoenix this month. Our contributions also reflected the breadth of the lab’s collaborations, with partners in France, Scotland, Germany, Canada, and the USA.

Menna El-Assady (also affiliated with University of Konstanz) presented our paper on progressive learning of topic model parameters, for which we received an honourable mention for best paper! Her framework allows people who do not know about the inner workings of topic models to guide the settings of the parameters by examining the outputs of competing models and “voting” on their preference. Through an evolutionary approach, the topic models are refined without ever having to play with complex settings.

Hrim Mehta presented her work on Data Tours in collaboration with Dr. Fanny Chevalier and colleagues at Inria, in France. Hrim’s poster presented our idea of how to author semi-automated tours of large datasets, which can be used as a narrative overview of datasets for which a static overview would be too cluttered or overwhelming.

Mariana Shimabukuro presented her poster on automatically abbreviating text labels for visualizations. She used a crowd-sourcing platform to gather abbreviation strategies from many participants and simultaneously measured the success of these abbreviations by asking other participants to decode them. The resulting abbreviation algorithm is available as an API to abbreviate your own labels on visualizations made with d3 or other web-based platforms.
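The crowd-sourced strategies and resulting algorithm are described in the poster; as a loose illustration only, a toy abbreviator might keep the first letter, drop later vowels, and truncate. This is a generic heuristic, not the published algorithm or its API:

```python
def abbreviate(label, max_len=6):
    """Toy label abbreviation: keep the first letter, drop subsequent vowels,
    then truncate. A common heuristic, NOT the vialab algorithm or API."""
    first, rest = label[0], label[1:]
    rest = "".join(ch for ch in rest if ch.lower() not in "aeiou")
    return (first + rest)[:max_len]

print(abbreviate("Temperature"))    # Tmprtr
print(abbreviate("Visualization"))  # Vslztn
```

The interesting part of the research was not generating such abbreviations but measuring whether other people could decode them, which is what the crowd-sourced evaluation tested.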

Mariana also is a co-author on an IEEE TVCG paper on font size as a data encoding, first-authored by Dr. Eric Alexander of Carleton College and colleagues at the University of Wisconsin. Eric’s talk highlighted the surprising finding that people are much better at judging differences in font size than expected, even when doing so in the presence of biasing factors such as varying length of words. This work lends credibility to the use of font size as a visual encoding, at least for tasks where “which is bigger” is the main question.

Dr. Christopher Collins was a co-organizer of the 2nd workshop on Immersive Analytics, a full-day event at VIS which attracted a number of papers and a whole lot of open research questions.

Dr. Collins, Menna El-Assady, and Dr. Adam Bradley were co-authors on “Risk the Drift! Stretching Disciplinary Boundaries through Critical Collaborations between the Humanities and Visualization”, a position paper advocating for flexibility in interdisciplinary research, presented at the 2nd Visualization for Digital Humanities Workshop (VIS4DH), of which Dr. Collins and Menna El-Assady were also co-organizers.

Vialab member Menna El-Assady presented ‘NEREx: Named-Entity Relationship Exploration in Multi-Party Conversations’ at EuroVis 2017 in Barcelona.

We are pleased to announce that this month, a Vialab member has presented a new paper.

“NEREx: Named-Entity Relationship Exploration in Multi-Party Conversations” was led by PhD student Mennatallah El-Assady, and presents a visualization used to explore political debates and multi-party conversations. By revealing different perspectives on multi-party conversations, NEREx provides an entry point for analysis through high-level overviews, and supplies mechanisms to form and verify hypotheses through linked detail views.

NEREx will be published in Computer Graphics Forum, volume 36, number 3, and presented at EuroVis 2017 in Barcelona.


Funded PhD Position in Explainable Artificial Intelligence

Funded PhD Position in Interfaces for Explainable Artificial Intelligence

NOTE: This position is not currently available.

When an artificial intelligence system makes a decision or draws a conclusion, the reasons behind that decision are often obscure and difficult to interpret. In order to trust the outcomes of AI systems, they need to be able to present the rationale behind decisions in understandable, transparent ways. We are seeking a highly-motivated PhD candidate for an interdisciplinary research project across the fields of deep learning, visual analytics, and human-computer interaction. Specifically, the research program will be in Explainable Artificial Intelligence with a focus on creating systems to help people interpret the reasoning behind decisions made by deep learning systems. The selected candidate will join an international collaborative project and will be responsible for the design, implementation, and testing of visualization interfaces connecting to explainable machine learning systems designed by partners on the larger project.

This funded position will be established in the Visualization for Information Analysis Lab (vialab) in the Computer Science PhD program at the University of Ontario Institute of Technology in Oshawa, Ontario, Canada, under the supervision of Dr. Christopher Collins. The candidate will collaborate with team members Dr. Graham Taylor at the University of Guelph and Dr. Mohamed Amer at SRI International.

Depending on performance there is a strong likelihood of one or more paid internships at SRI International during the period of study.


  • Master’s degree in Computer Science, Software Engineering, Informatics/Data Science, or an equivalent university-level degree and relevant experience
  • Strong and demonstrated programming skills
  • Research, work, or significant course experience in human-computer interaction, visual analytics, or interface design
  • Preference for candidates who also have experience/interest in artificial intelligence or machine learning
  • Able to work as an independent and flexible researcher in interdisciplinary teams
  • Strong English writing and speaking skills


Send the following to

  • Detailed CV
  • Motivation letter explaining your interest in and relevant experience for this project
  • Summary of your Master’s thesis
  • Transcripts (unofficial are acceptable at this stage; translations are not required at this stage)

Note: The selected candidate will be invited to apply through the official university application process and offers will be conditional on meeting application criteria for the UOIT CS program.


  • Expressions of interest as soon as possible; formal application process to follow for the invited candidate
  • Start date: September 2017 or negotiable
  • Duration: 4 years


The vialab at UOIT, led by Dr. Christopher Collins, Canada Research Chair in Linguistic Information Visualization, conducts research in information analysis, visual analytics, text and document analytics, and human-computer interaction. The University of Ontario Institute of Technology (UOIT), located in Oshawa, Ontario, advances the discovery and application of knowledge through a technology-enriched learning environment and innovative programs responsive to the needs of students and the evolving 21st-century workplace. UOIT promotes social engagement, fosters critical thinking, and integrates outcomes-based learning experiences inside and outside the classroom. Oshawa, Ontario is located near the city of Toronto, Canada, where many lab members live.

Summer and Graduate Positions at Vialab

In the summer of 2017 Dr. Christopher Collins at the Visualization for Information Analysis lab (vialab) at UOIT is seeking to hire 1 or more strong undergraduate students for research internships in a broad range of exciting topics. These internships may be eligible for credit under the UOIT co-op program.

Also, I am seeking new graduate students at the M.Sc. level; graduate studies at UOIT are funded for accepted students.

Topics include:

  • Continuing work with the UOIT Registrar’s Office on the creation of a visual analytics dashboard to analyze patterns of student grades related to a variety of factors, to improve student success and university planning.
  • Re-implementing a gesture system created to work with off-screen elements using the Leap Motion device as a polished app for deployment.
  • Developing visualizations to understand the propagation of ideas through citation networks of research papers.
  • Developing mobile applications for literacy education.
  • Investigating alternative authentication systems using subtle hand gestures / hand shakes to identify oneself to a computer system.
  • (M.Sc./Ph.D. only) Developing explanation interfaces for artificial intelligence systems (this opportunity will be linked to guaranteed industrial internships at SRI International)

Each of these positions requires students who are strong programmers. Languages used include Java, C++, Python, and JavaScript. All projects will also include an aspect of literature research and report writing. Selected candidates will be expected to attend and participate in lab meetings and research discussions, and to assist on other projects as needed from time to time. Except in exceptional cases, summer internship applicants should have completed second year in the UOIT CS program. Graduate applicants should have an Honours degree in Computer Science or a related subject, or be in the final year of studies toward the degree.

Interested candidates should send an email statement of interest, a copy of a recent unofficial transcript (MyCampus printout is ok), and any other relevant materials (CV/portfolio of example projects) to Dr. Collins before February 28.

| © Copyright vialab | Dr. Christopher Collins, Canada Research Chair in Linguistic Information Visualization |