
GazeQ-GPT: Gaze-Driven Question Generation for Personalized Learning from Short Educational Videos

Contributors:

Benedict Leung, Mariana Shimabukuro, Matthew Chan, Christopher Collins

Abstract

Effective comprehension is essential for learning and understanding new material. However, human-generated questions often fail to cater to individual learners' needs and interests. We propose a novel approach that leverages a gaze-driven interest model and a Large Language Model (LLM) to automatically generate personalized comprehension questions for short (∼10 min) educational videos. Our interest model scores each word in a subtitle, and the top-scoring words are then used to generate questions with an LLM. Additionally, our system provides marginal help by offering phrase definitions (glosses) in subtitles, further facilitating learning. These methods are integrated into a prototype system, GazeQ-GPT, which automatically focuses learning material on the specific content that interests or challenges each learner, promoting more personalized learning. A user study (N = 40) shows that GazeQ-GPT prioritizes words from fixated glosses and rewatched subtitles, and that participants rated glossed videos more highly. Compared to ChatGPT, GazeQ-GPT achieves higher question diversity while maintaining quality, indicating its potential to improve personalized learning experiences through dynamic content adaptation.
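To make the pipeline concrete, here is a minimal sketch of how a gaze-driven interest model and prompt builder might fit together; the function names, weights, and prompt wording are illustrative assumptions, not GazeQ-GPT's actual implementation.

```python
from collections import defaultdict

def score_words(fixations, rewatched_words, glossed_words):
    """Toy interest model: accumulate fixation duration per subtitle word,
    then boost words from rewatched subtitles and fixated glosses.
    All weights here are illustrative, not the paper's values."""
    scores = defaultdict(float)
    for word, duration_ms in fixations:      # raw gaze evidence
        scores[word.lower()] += duration_ms
    for word in rewatched_words:             # rewatching signals interest or difficulty
        scores[word.lower()] *= 1.5
    for word in glossed_words:               # fixating a gloss signals interest
        scores[word.lower()] *= 2.0
    return scores

def build_question_prompt(scores, transcript, k=5):
    """Embed the top-k scoring words in a question-generation prompt."""
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return ("Write one comprehension question about the transcript below, "
            f"focusing on these concepts: {', '.join(top)}.\n\n{transcript}")
```

The returned prompt would then be sent to an LLM to produce a question tailored to what the learner actually attended to.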

GazeQ-GPT Video Figure


Eye Tracking for Target Acquisition in Sparse Visualizations

In this paper, we present a novel marker-free method for identifying screens of interest when using head-mounted eye tracking for visualization in cluttered and multi-screen environments. We address the challenge of discerning visualization entities against sparse backgrounds by incorporating edge detection into the existing pipeline. Our system achieves both more efficient screen identification and improved accuracy over the state-of-the-art ORB algorithm.
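As a rough illustration of the idea (not the project's actual pipeline), the sketch below uses OpenCV to compute ORB features on Canny edge maps rather than raw frames, so sparse visualizations on plain backgrounds still yield distinctive keypoints; all thresholds are assumptions.

```python
import cv2

def match_screen(scene_gray, template_gray, min_matches=10):
    # Edge maps emphasize chart structure that the raw pixels of a
    # sparse visualization would not; inputs are grayscale uint8 images.
    scene_edges = cv2.Canny(scene_gray, 50, 150)
    tmpl_edges = cv2.Canny(template_gray, 50, 150)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(tmpl_edges, None)
    kp_s, des_s = orb.detectAndCompute(scene_edges, None)
    if des_t is None or des_s is None:
        return False

    # Brute-force Hamming matching, as is standard for ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_t, des_s) if m.distance < 40]
    return len(good) >= min_matches
```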

The source code for this project is available on our GitHub.


Detecting Negative Emotion for Mixed Initiative Visual Analytics

Contributors:

Prateek Panwar and Christopher Collins

This work describes an efficient model for detecting negative mental states caused by visual analytics tasks. We developed a method for collecting data from multiple sensors, including GSR and eye tracking, and for quickly generating labelled training data for a machine learning model. Using this method, we created a dataset from 28 participants carrying out intentionally difficult visualization tasks. The paper concludes by discussing the best-performing model, Random Forest, and its future applications for providing just-in-time assistance in visual analytics.
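For readers who want a concrete starting point, the sketch below trains a Random Forest with scikit-learn on a placeholder feature matrix; the feature descriptions and synthetic data are assumptions standing in for real windowed GSR and eye-tracking features, not our study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features: one row per time window, with columns such as
# mean GSR, GSR peak count, mean fixation duration, and saccade rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # stand-in for real sensor features
y = rng.integers(0, 2, size=200)     # 1 = negative state, 0 = neutral

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # near chance on random data
```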
