Supporting Serendipitous Discovery and Balanced Analysis of Online Product Reviews with Interaction-Driven Metrics and Bias-Mitigating Suggestions

Contributors

Mahmood Jasim, Ali Sarvghad, Christopher Collins, Narges Mahyar

Abstract

In this study, we investigate how supporting serendipitous discovery and analysis of online product reviews can encourage readers to explore reviews more comprehensively prior to making purchase decisions. We propose two interventions: Exploration Metrics, which help readers understand and track their exploration patterns through visual indicators, and a Bias Mitigation Model, which aims to maximize knowledge discovery by suggesting sentiment- and semantically-diverse reviews. We designed, developed, and evaluated a text analytics system called Serendyze, in which we integrated these interventions. We asked 100 crowd workers to use Serendyze to make purchase decisions based on product reviews. Our evaluation suggests that the exploration metrics enabled readers to efficiently cover more reviews in a balanced way, and that suggestions from the bias mitigation model influenced readers to make confident, data-driven decisions. We discuss the role of user agency and trust in text-level analysis systems and their applicability in domains beyond review exploration.
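The abstract does not specify how the Bias Mitigation Model selects its suggestions, so the following is a minimal illustrative sketch only, assuming each review has an embedding vector and a sentiment score in [-1, 1]: it greedily scores candidates by semantic distance from what the reader has already covered, plus a term that counteracts the sentiment skew of the reviews read so far.

    import numpy as np

    def suggest_review(read_embs, read_sents, cand_embs, cand_sents):
        """Pick the next review to suggest (hypothetical scoring rule)."""
        read_embs = np.asarray(read_embs, dtype=float)   # (n_read, dim)
        cand_embs = np.asarray(cand_embs, dtype=float)   # (n_cand, dim)

        def unit(m):
            return m / np.linalg.norm(m, axis=1, keepdims=True)

        # Semantic novelty: mean cosine distance from everything already read.
        novelty = 1.0 - (unit(cand_embs) @ unit(read_embs).T).mean(axis=1)

        # Sentiment balancing: if reading has skewed positive (mean > 0),
        # negative-sentiment candidates get a boost, and vice versa.
        bias = float(np.mean(read_sents))
        balance = -bias * np.asarray(cand_sents, dtype=float)

        return int(np.argmax(novelty + balance))  # equal weights, for illustration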

Website

serendyze.cs.umass.edu


Publication

  • M. Jasim, C. Collins, A. Sarvghad, and N. Mahyar, “Supporting Serendipitous Discovery and Balanced Analysis of Online Product Reviews with Interaction-Driven Metrics and Bias-Mitigating Suggestions,” in CHI Conference on Human Factors in Computing Systems, 2022.

    @InProceedings{jas2022a,
      title={Supporting Serendipitous Discovery and Balanced Analysis of Online Product Reviews with Interaction-Driven Metrics and Bias-Mitigating Suggestions},
      author={Jasim, Mahmood and Collins, Christopher and Sarvghad, Ali and Mahyar, Narges},
      booktitle = {CHI Conference on Human Factors in Computing Systems},
      month = mar, 
      year = 2022,
  doi = {10.1145/3491102.3517649}
    }

Video

Covid Connect: Chat-Driven Anonymous Story-Sharing for Peer Support

Contributors

Christopher Collins, Simone Arbour, Nathan Beals, Shawn Yama, Jennifer Laffier, Zixin Zhao

Abstract

The mental-health impact of the Covid-19 pandemic and the related restrictions and isolation has been immense. In this paper, we present a system designed to break down loneliness and isolation and to allow people to share their stories, complaints, emotions, and gratitude anonymously with one another. Using a chatbot interface to collect visitor stories, and a custom visualization to reveal related past comments from others, Covid Connect links people together through shared pandemic experiences. The collected data also serves to reflect the experiences of the community of participants during the third through fifth waves of the pandemic in the local region. We describe the Covid Connect system and analyze the collected data for themes and patterns arising from stories shared with the chatbot. Finally, we reflect on the experience through an autobiographical lens, as users of our own system, and posit ideas for the application of similar approaches in other mental health domains.
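The abstract does not state how Covid Connect decides which past comments are related, so the sketch below shows one plausible approach under that caveat: TF-IDF cosine similarity over the story text, with the function name and parameters being our own.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def related_stories(new_story, past_stories, k=3):
        """Return indices of the k past stories most similar to the new one."""
        vec = TfidfVectorizer(stop_words="english")
        matrix = vec.fit_transform(past_stories + [new_story])
        # Last row is the new story; compare it against all earlier stories.
        sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
        return sims.argsort()[::-1][:k].tolist()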

Try Covid Connect

https://covidconnect.me/

Blog and News

Learn more about the design decisions and process behind Covid Connect at our blog.

Ontario Tech News: You’re not on mute: Ontario Tech research exploring how AI technology offers mental health support during pandemic

Publication

  • C. Collins, S. Arbour, N. Beals, S. Yama, J. Laffier, and Z. Zhao, “Covid Connect: Chat-Driven Anonymous Story-Sharing for Peer Support,” in ACM Designing Interactive Systems Conference, 2022.

    @InProceedings{col2022a,
      title={Covid Connect: Chat-Driven Anonymous Story-Sharing for Peer Support},
      author={Collins, Christopher and Arbour, Simone and Beals, Nathan and Yama, Shawn and Laffier, Jennifer and Zhao, Zixin},
      booktitle = {ACM Designing Interactive Systems Conference},
      month = jun, 
      year = 2022,
  doi = {10.1145/3532106.3533545}
    }


Project Video


Data Resources

Categories and Explanation

Covid Connect Chatbot Script

Dataset and Evaluation


Acknowledgments

This work is a collaboration of Ontario Tech University and Ontario Shores Centre for Mental Health Sciences. It was supported by NSERC and the City of Oshawa TeachingCity Initiative.


Learn, Generate, Rank, Explain: A Case Study of Visual Explanation by Generative Machine Learning

Contributors

Chris Kim, Xiao Lin, Christopher Collins, Graham W. Taylor, and Mohamed R. Amer

Abstract

While the computer vision problem of searching for activities in videos is usually addressed by using discriminative models, their decisions tend to be opaque and difficult for people to understand. We propose a case study of a novel machine learning approach for generative searching and ranking of motion capture activities with visual explanation. Instead of directly ranking videos in the database given a text query, our approach uses a variant of Generative Adversarial Networks (GANs) to generate exemplars based on the query and uses them to search for the activity of interest in a large database. Our model achieves results comparable to its discriminative counterpart, while being able to dynamically generate visual explanations. In addition to our searching and ranking method, we present an explanation interface that enables the user to explore the model’s explanations and its confidence by revealing query-based, model-generated motion capture clips that contributed to the model’s decision. Finally, we conducted a user study with 44 participants to show that by using our model and interface, participants benefit from a deeper understanding of the model’s conceptualization of the search query. We discovered that the XAI system yielded levels of efficiency, accuracy, and user-machine synchronization comparable to those of its black-box counterpart, provided the user exhibited a high level of trust in the AI explanations.
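The generate-then-rank loop can be summarized in a short schematic. This is a sketch under assumptions rather than the authors’ code: `generator.sample` stands in for the conditional GAN variant described above, and both exemplars and database clips are assumed to live in a shared feature space.

    import numpy as np

    def rank_by_generated_exemplars(query_emb, generator, database_feats, n_exemplars=8):
        """Rank database clips by similarity to query-conditioned exemplars."""
        noise = np.random.randn(n_exemplars, generator.noise_dim)
        exemplars = generator.sample(query_emb, noise)   # (n_exemplars, feat_dim), assumed API

        # Cosine similarity of every database clip to every generated exemplar.
        db = database_feats / np.linalg.norm(database_feats, axis=1, keepdims=True)
        ex = exemplars / np.linalg.norm(exemplars, axis=1, keepdims=True)
        scores = (db @ ex.T).mean(axis=1)                # one score per clip

        order = np.argsort(scores)[::-1]                 # best match first
        return order, exemplars                          # exemplars double as the visual explanation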

Publication

  • C. Kim, X. Lin, C. Collins, G. W. Taylor, and M. R. Amer, “Learn, Generate, Rank, Explain: A Case Study of Visual Explanation by Generative Machine Learning,” ACM Trans. Interact. Intell. Syst., vol. 11, iss. 3–4, 2021.

    @article{kim2021,
    author = {Kim, Chris and Lin, Xiao and Collins, Christopher and Taylor, Graham W. and Amer, Mohamed R.},
    title = {Learn, Generate, Rank, Explain: A Case Study of Visual Explanation by Generative Machine Learning},
    year = {2021},
    issue_date = {December 2021},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    volume = {11},
    number = {3–4},
    issn = {2160-6455},
url = {https://doi.org/10.1145/3465407},
    doi = {10.1145/3465407},
    journal = {ACM Trans. Interact. Intell. Syst.},
    month = aug,
    articleno = {23},
    numpages = {34},
    keywords = {model-generated explanation, trust and reliance, Explainable artificial intelligence, user study}
    }

Blog

Our featured Medium story for this research paper can be found here.

Video

Presentation from IUI 2022


Case Study Participant Interface

The web-based study interface is available for public access at http://gr.ckprototype.com/.


Professional Differences: A Comparative Study of Visualization Task Performance and Spatial Ability Across Disciplines

Contributors

Kyle Wm. Hall, Anthony Kouroupis, Anastasia Bezerianos, Danielle Albers Szafir, and Christopher Collins

Abstract

Problem-driven visualization work is rooted in deeply understanding the data, actors, processes, and workflows of a target domain. However, an individual’s personality traits and cognitive abilities may also influence visualization use. Diverse user needs and abilities raise natural questions for specificity in visualization design: Could individuals from different domains exhibit performance differences when using visualizations? Are any systematic variations related to their cognitive abilities? This study bridges domain-specific perspectives on visualization design with those provided by cognition and perception. We measure variations in visualization task performance across chemistry, computer science, and education, and relate these differences to variations in spatial ability. We conducted an online study with over 60 domain experts, consisting of tasks related to pie charts, isocontour plots, and 3D scatterplots and grounded by a well-documented spatial ability test. Task performance (correctness) varied with profession across the more complex visualizations (isocontour plots and scatterplots), but not pie charts, a comparatively common visualization. We found that correctness correlates with spatial ability and that the professions differ in terms of spatial ability. These results indicate that domains differ not only in the specifics of their data and tasks, but also in how effectively their constituent members engage with visualizations and in their cognitive traits. Analyzing participants’ confidence and strategy comments suggests that focusing on performance alone neglects important nuances, such as differing approaches to engaging with even common visualizations and potential skill transference. Our findings offer a fresh perspective on discipline-specific visualization, with specific recommendations to help guide visualization design that celebrates the uniqueness of the disciplines and individuals we seek to serve.

Publications

  • K. W. Hall, A. Kouroupis, A. Bezerianos, D. A. Szafir, and C. Collins, “Professional Differences: A Comparative Study of Visualization Task Performance and Spatial Ability Across Disciplines,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, iss. 1, pp. 654–664, 2022.

    @article{hal2022,
      author={Hall, Kyle Wm. and Kouroupis, Anthony and Bezerianos, Anastasia and Szafir, Danielle Albers and Collins, Christopher},
      journal={IEEE Transactions on Visualization and Computer Graphics}, 
      title={Professional Differences: A Comparative Study of Visualization Task Performance and Spatial Ability Across Disciplines}, 
      year={2022},
      volume={28},
      number={1},
      pages={654-664},
      doi={10.1109/TVCG.2021.3114805}
    }

Our featured blog post on this research paper can be found here.

Card-IT: A Dynamic FSM-based Flashcard Generator for Learning Italian Verb Morphology

Contributors

Jessica Zipf, Mariana Shimabukuro, Shawn Yama, and Christopher Collins

Abstract

We report on a novel approach to training and testing Italian verb morphology by developing a flashcard application. Instead of manually curated content, this application integrates a large-scale finite-state morphological (FSM) analyzer which both analyzes a user’s input and dynamically generates specific verb forms (flashcards). FSMs are widely used in natural language processing as part of a system’s text preprocessing pipeline. Our main contribution is to leverage the FSM as the core component of a dynamic verb generator that produces forms based on defined morphological features or returns a form’s morphological analysis. We therefore developed Card-IT, a web-based application powered by the FSM that uses flashcards as a way for learners to use the analyzer in a user-friendly manner. The two-sided cards represent both functions of the FSM: analysis and generation.

Card-IT can be used to quickly analyze a form or to look up entire verb paradigms where the users (teachers or learners) can freely define morphological features, such as tense, mood, etc. Optionally, they can choose to leave any feature unspecified. Depending on the user’s selection, the application returns the corresponding flashcards, which can be saved and organized into a new or existing deck for testing and training. The organization and sorting of decks and cards allows learners to study verbs based on their individual study interests/needs e.g., one might choose to focus on subjunctive forms or past tense only. Additionally, teachers can create decks to provide their students with specific learning content and exercises.

As studies have shown, knowledge of the underlying linguistic concepts benefits the acquisition of a new language (e.g., Heift, 2004). Therefore, Card-IT embeds explanations of linguistic terms (e.g., mood, conditional) using visual components to allow learners to identify linguistic patterns and raise their metalinguistic awareness over time. Moreover, in Card-IT all linguistic terms are provided in the target language.

We plan to evaluate Card-IT with experts, namely Italian teachers, and to implement their feedback before evaluating it with students. In its current version, Card-IT offers three functions: (1) form analysis and look-up, as mentioned above; (2) training; and (3) testing. In training, using the self- or teacher-curated decks generated with the help of the FSM, learners can study and learn verbs along with their inflectional forms. Testing mode consists of two different exercises: a conjugation quiz that prompts the user to type a form based on a provided linguistic specification, and a tense quiz that presents a form and asks the user to pick the corresponding tense out of three options. Optionally, the learner may also select a mixed mode which combines both testing exercises.

Feedback plays a crucial role in learning in that it must be both informative and motivating, yet not discouraging (Livingstone, 2012). Whenever the learner enters an incorrect verb form, the system uses the FSM to check whether the form corresponds to any other tense/mood configuration. If so, the system reports this to the user, providing targeted feedback on errors with indications of how to improve rather than just a message that the answer is (in)correct.
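A minimal sketch of this feedback check, assuming a hypothetical analyze(form) wrapper around the FSM that returns each possible analysis as a dict of morphological features:

    def feedback(user_form, target, analyze):
        """Targeted feedback for a typed verb form (illustrative only)."""
        analyses = analyze(user_form)   # e.g., [{"lemma": "andare", "tense": "pres", "mood": "ind"}]
        if not analyses:
            return "Not a recognized verb form."
        if target in analyses:
            return "Correct!"
        # The form exists but realizes another configuration: name the
        # mismatched features instead of returning a bare 'incorrect'.
        alt = analyses[0]
        diffs = [f"{feat}: {alt.get(feat)} (expected {want})"
                 for feat, want in target.items() if alt.get(feat) != want]
        return "This is a valid form of another configuration. " + "; ".join(diffs)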

Presentation

Demo

Although Card-IT is still in the latter stages of development, you can try out the demo at cardit.vialab.ca by logging in using demo@email.com with the password livecardit.

Visual Analytics Tools for Academic Advising

Contributors

Riley Weagant, Christopher Collins, Taylor Smith, Michael Lombardo

Abstract

Post-secondary institutions have a wealth of student data at their disposal. This data has recently been used to explore a problem that has been prevalent in the education domain for decades: student retention is a complex issue that researchers are attempting to address using machine learning. This thesis describes our attempt to use academic data from Ontario Tech University to predict the likelihood of a student withdrawing from the university after their upcoming semester. We used academic data collected between 2007 and 2011 to train a random forest model that predicts whether or not a student will drop out. Finally, we used the confidence level of the model’s prediction to represent a student’s “likelihood of success”, which is displayed on a beeswarm plot as part of an application intended for use by academic advisors.
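A minimal sketch of that pipeline with scikit-learn follows. The file name and feature columns are placeholders we invented for illustration; the abstract only states that 2007-2011 academic data trained a random forest whose prediction confidence becomes the “likelihood of success” shown on the beeswarm plot.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("student_records.csv")                  # hypothetical file
    X = df[["gpa", "credits_attempted", "credits_earned"]]   # assumed features
    y = df["withdrew"]                                       # 1 = withdrew, 0 = persisted

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Invert the predicted withdrawal probability to get the "likelihood
    # of success" value that the beeswarm plot displays to advisors.
    likelihood_of_success = 1.0 - model.predict_proba(X_te)[:, 1]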

Publications

  • M. Lombardo, R. Weagant, and C. Collins, “Exploratory Data Analysis on Student Retention,” UOIT Student Research Showcase, 2017.

    @poster{lom2017,
    author = {Michael Lombardo and Riley Weagant and Christopher Collins},
    title = {Exploratory Data Analysis on Student Retention},
    booktitle = {UOIT Student Research Showcase},
    year = 2017
    }
  • R. Weagant, T. Smith, and C. Collins, “Student Retention: A Data Driven Approach,” UOIT Student Research Showcase, 2015.

    @poster{wea2015,
    author = {Riley Weagant and Taylor Smith and Christopher Collins},
    title = {Student Retention: A Data Driven Approach},
    booktitle = {UOIT Student Research Showcase},
    year = 2015
    }
  • R. Weagant, “Supporting Student Success with Machine Learning and Visual Analytics,” Master’s Thesis, University of Ontario Institute of Technology, 2019.

    @MastersThesis{wea2019a,
        author = {Riley Weagant},
        title = {Supporting Student Success with Machine Learning and Visual Analytics},
        school =    {University of Ontario Institute of Technology},
        year = 2019
    }

Érudit and Vialab Collaboration Projects

Project Page

erudit.vialab.ca

Textension

Digitize analog text into manipulable web objects. Digitized objects can be annotated, and analytics can be performed on the text data.

GitHub Repository

Citation Context Surfer

A tool aimed at giving users an in-depth look at citations and their surrounding context. Using customizable rulesets, new trends can be found and explored.

Synonymic Search

Perform OCR, topic, and text analysis using a visual interface. Documents for analysis are sourced from the Érudit database.

Érudit Collaboration Maps

Mapping the transfer of knowledge in the Érudit corpus. Allows for filtering by co-authorship in journals or by institution, date, author, or paper title.

GitHub Repository

Érudit Journal Matcher

Perform OCR, topic, and text analysis using a visual interface. Newspapers for analysis are sourced from the Érudit database.

Demos

Explore Textension: textension.vialab.ca

Explore Citation Context Surfer: citation.vialab.ca

Explore Synonymic Search: synonym.vialab.ca

Explore Érudit Collaboration Maps: collabmap.vialab.ca

Explore Érudit Journal Matcher: synonym.vialab.ca/journal

 

Academia is Tied in Knots

Contributors

Tommaso Elli, Adam Bradley, Christopher Collins, Uta Hinrichs, Zachary Hills, Karen Kelsky

Abstract

A data visualization project aimed at giving visibility to the issue of sexual harassment in the academy.

As researchers and members of the academic community, we felt that this issue too often goes under-reported, and we decided to give it visibility using data visualization as a communicative medium.

The data you are about to see comes from an anonymous online survey aimed at collecting personal experiences. The survey was issued in late 2017 and, through it, more than 2000 testimonies were collected.

This data is highly personal and sensitive. We spent significant effort identifying suitable ways to handle and represent it, conveying the scale of the dataset while also honoring the individual experiences.

Visualization

Explore the visualization at tiedinknots.io.

Publications

  • T. Elli, A. Bradley, C. Collins, U. Hinrichs, Z. Hills, and K. Kelsky, “Tied In Knots: A Case Study on Anthropographic Data Visualization About Sexual Harassment in the Academy,” in VISAP ’20, 2020.

    @InProceedings{ell2020a,
      author = {Tommaso Elli and Adam Bradley and Christopher Collins and Uta Hinrichs and Zachary Hills and Karen Kelsky},
      title = {Tied In Knots: A Case Study on Anthropographic Data Visualization About Sexual Harassment in the Academy},
      booktitle = {VISAP '20},
      month = {October},
      year = {2020},
      publisher = {IEEE},
    }

Video

Acknowledgments

This work was supported by the NSERC Canada Research Chairs program and DensityDesign.


Tilt-Responsive Techniques for Digital Drawing Boards

Contributors

Hugo Romat, Christopher Collins, Nathalie Riche, Michel Pahud, Christian Holz, Adam Riddle, Bill Buxton, Ken Hinckley

Abstract

Drawing boards offer a self-stable work surface that is continuously adjustable. On digital displays, such as the Microsoft Surface Studio, these properties open up a class of techniques that sense and respond to tilt adjustments. Each display posture, whether angled high, low, or somewhere in between, affords some activities but not others. Because what is appropriate also depends on the application and task, we explore a range of app-specific transitions between reading vs. writing (annotation), public vs. personal, shared person-space vs. task-space, and other nuances of input and feedback, contingent on display angle. Continuous responses provide interactive transitions tailored to each use case. We show how a variety of knowledge work scenarios can use sensed display adjustments to drive context-appropriate transitions, as well as technical software details of how best to realize these concepts. A preliminary remote user study suggests that techniques must balance the effort required to adjust tilt against the potential benefits of a sensed transition.
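One concrete way to realize such sensed transitions is a posture classifier with hysteresis, so that small tilt adjustments near a boundary do not flip the interface back and forth. The thresholds and posture names below are illustrative assumptions, not values from the paper.

    LOW_MAX, HIGH_MIN = 30.0, 60.0   # degrees from horizontal (assumed)
    HYSTERESIS = 5.0                 # extra margin that damps jitter at a boundary

    def update_posture(angle, current):
        """Map a sensed tilt angle to 'low' / 'mid' / 'high', stickily."""
        # Stay in the current posture until the angle clearly leaves it.
        if current == "low" and angle < LOW_MAX + HYSTERESIS:
            return "low"
        if current == "high" and angle > HIGH_MIN - HYSTERESIS:
            return "high"
        if angle < LOW_MAX:
            return "low"
        if angle > HIGH_MIN:
            return "high"
        return "mid"

An application would then attach its own transitions, such as switching from an annotation layout to a reading layout, to posture changes reported by a function like this.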

Publications

  • H. Romat, C. Collins, N. Riche, M. Pahud, C. Holz, A. Riddle, B. Buxton, and K. Hinckley, “Tilt-Responsive Techniques for Digital Drawing Boards,” in UIST ’20: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 2020.

    @InProceedings{col2020a,
      author = {Hugo Romat and Christopher Collins and Nathalie Riche and Michel Pahud and Christian Holz and Adam Riddle and Bill Buxton and Ken Hinckley},
      title = {Tilt-Responsive Techniques for Digital Drawing Boards},
      booktitle = {UIST '20: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology},
      month = {October},
      year = {2020},
      publisher = {ACM},
    }

Videos

Textension: Digitally Augmenting Document Spaces in Analog Texts

Contributors

Adam James Bradley, Christopher Collins, Victor Sawal, Sheelagh Carpendale

Abstract

In this paper, we present a framework that allows people who work with analog texts to leverage the affordances of digital technology, such as data visualization, computational linguistics, and search, using any camera-equipped mobile device with a web browser. After taking a picture of a particular page or set of pages from a text, or uploading an existing image, our prototype system builds an interactive digital object that automatically inserts visualizations and interactive elements into the document. Leveraging the findings of previous studies, our framework augments the reading of analog texts with digital tools, making it possible to work with texts in both digital and analog environments.
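The capture-to-digital-object step can be sketched as an OCR pass followed by simple text analytics. The abstract does not name the OCR engine, so the sketch below assumes Tesseract via pytesseract, and the returned fields are toy stand-ins for the visualizations and interactive elements the prototype inserts.

    from PIL import Image
    import pytesseract

    def digitize(page_image_path):
        """OCR a photographed page and wrap it as a simple 'digital object'."""
        img = Image.open(page_image_path)
        text = pytesseract.image_to_string(img)
        words = text.split()
        return {
            "text": text,
            "word_count": len(words),
            # A crude vocabulary list, standing in for richer analytics
            # (visualization, linguistic annotation, search indexing).
            "vocabulary": sorted({w.strip(".,;:!?").lower() for w in words}),
        }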

Resources

Online Demo

GitHub Repository

Publications

  • A. J. Bradley, C. Collins, V. Sawal, and S. Carpendale, “Textension: Digitally Augmenting Analog Texts Using Mobile Devices,” MobileVis Workshop at ACM CHI, 2018.

    @poster{bra2018a,
    author = {Adam James Bradley and Christopher Collins and Victor Sawal and Sheelagh Carpendale},
    title = {Textension: Digitally Augmenting Analog Texts Using Mobile Devices},
    booktitle = {MobileVis Workshop at ACM CHI},
    year = 2018,
    note = {CHI Workshop Paper}
    }
  • A. J. Bradley, V. Sawal, S. Carpendale, and C. Collins, “Textension: Digitally Augmenting Document Spaces in Analog Texts,” DHQ: Digital Humanities Quarterly, vol. 13, iss. 3, 2019.

    @article{bradley2019textension,
      title={Textension: Digitally Augmenting Document Spaces in Analog Texts.},
      author={Bradley, Adam James and Sawal, Victor and Carpendale, Sheelagh and Collins, Christopher},
      journal={DHQ: Digital Humanities Quarterly},
      volume={13},
      number={3},
      year={2019}
    }

The paper is available here.

Acknowledgments

This work was supported by NSERC Canada Research Chairs, The Canada Foundation for Innovation – Cyberinfrastructure Fund, and the Province of Ontario – Ontario Research Fund.
