Hrim Mehta

Graduate Student – PhD

I am a Ph.D. candidate in Computer Science at UOIT, co-supervised by Dr. Christopher Collins and Dr. Fanny Chevalier. My main interest lies in mixed-initiative visual analytics. I’m currently developing a framework to formalize and guide the design of semi-automated narratives or tours to prompt and sustain the process of data exploration.

I received my Master’s in Computer Science from UOIT in 2015 under the co-supervision of Dr. Christopher Collins and Dr. Mark Hancock. My thesis focused on leveraging free-form ink annotations, made when performing a close reading, as implicit interactions to augment a literary critic’s analysis process with real-time context-specific metadata.

Contact

If you’d like to know more about my work, send me an email (hrim.mehta@ontariotechu.ca) or connect with me on LinkedIn.

Publications

  • H. Mehta, A. Chalbi, F. Chevalier, and C. Collins, “DataTours: A Data Narratives Framework,” Proc. of IEEE Conf. on Information Visualization (InfoVis), Posters, 2017.

    PDF

    @poster{meh2017b,
    author = {Hrim Mehta and Amira Chalbi and Fanny Chevalier and Christopher Collins},
    title = {DataTours: A Data Narratives Framework},
    booktitle = {Proc. of IEEE Conf. on Information Visualization (InfoVis), Posters},
    series = {Poster},
    address = {Phoenix, USA},
    year = 2017
    }

  • H. Mehta, A. J. Bradley, M. Hancock, and C. Collins, “Metatation: Annotation as Implicit Interaction to Bridge Close and Distant Reading,” ACM Trans. on Computer-Human Interaction (TOCHI), pp. 35:1–35:41, 2017.

    PDF

    @article{meh2017a,
    author = {Hrim Mehta and Adam James Bradley and Mark Hancock and Christopher Collins},
    title = {Metatation: Annotation as Implicit Interaction to Bridge Close and Distant Reading},
    journal = {ACM Trans. on Computer-Human Interaction (TOCHI)},
    publisher = {ACM},
    pages = {35:1--35:41},
    articleno = {35},
    numpages = {41},
    year = 2017,
    doi = {10.1145/3131609}
    }

  • A. J. Bradley, H. Mehta, M. Hancock, and C. Collins, “Visualization, Digital Humanities, and the Problem of Instrumentalism,” in IEEE VIS Workshop on Visualization for the Digital Humanities (VIS4DH), 2016.

    PDF

    @InProceedings{bra2016,
    author = {Adam James Bradley and Hrim Mehta and Mark Hancock and Christopher Collins},
    title = {Visualization, Digital Humanities, and the Problem of Instrumentalism},
    booktitle = {IEEE VIS Workshop on Visualization for the Digital Humanities (VIS4DH)},
    year = 2016,
    venue = {Baltimore, USA},
    eventdate = {2016-10-24}
    }

  • H. Mehta, “Augmenting Free-Form Annotations with Digital Metadata for Close Reading of Poetry,” Master’s thesis, University of Ontario Institute of Technology, 2015.

    PDF

    @MastersThesis{meh2015a,
    author = {Hrim Mehta},
    title = {Augmenting Free-Form Annotations with Digital Metadata for Close Reading of Poetry},
    school = {University of Ontario Institute of Technology},
    year = 2015
    }

  • B. Kondo, H. Mehta, and C. Collins, “Glidgets: Interactive Glyphs for Exploring Dynamic Graphs,” Proc. of IEEE Conf. on Information Visualization (InfoVis), 2014.

    PDF

    @poster{kon2014c,
    author = {Brittany Kondo and Hrim Mehta and Christopher Collins},
    title = {Glidgets: Interactive Glyphs for Exploring Dynamic Graphs},
    booktitle = {Proc. of IEEE Conf. on Information Visualization (InfoVis)},
    note = {Best Poster Award},
    series = {Poster},
    address = {Paris, France},
    year = 2014
    }

  • S. Rajaram, H. B. Surale, C. McConkey, C. Rognon, H. Mehta, M. Glueck, and C. Collins, “Gesture and Audio-Haptic Guidance Techniques to Direct Conversations with Intelligent Voice Interfaces,” in Proc. of the 2025 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2025.

    PDF

    @inproceedings{10.1145/3706598.3714310,
    author = {Rajaram, Shwetha and Surale, Hemant Bhaskar and McConkey, Codie and Rognon, Carine and Mehta, Hrim and Glueck, Michael and Collins, Christopher},
    title = {Gesture and Audio-Haptic Guidance Techniques to Direct Conversations with Intelligent Voice Interfaces},
    year = {2025},
    isbn = {9798400713941},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3706598.3714310},
    doi = {10.1145/3706598.3714310},
    abstract = {Advances in large language models (LLMs) empower new interactive capabilities for wearable voice interfaces, yet traditional voice-and-audio I/O techniques limit users’ ability to flexibly navigate information and manage timing for complex conversational tasks. We developed a suite of gesture and audio-haptic guidance techniques that enable users to control conversation flows and maintain awareness of possible future actions, while simultaneously contributing and receiving conversation content through voice and audio. A 14-participant exploratory study compared our parallelized I/O techniques to a baseline of voice-only interaction. The results demonstrate the efficiency of gestures and haptics for information access, while allowing system speech to be redirected and interrupted in a socially acceptable manner. The techniques also raised user awareness of how to leverage intelligent capabilities. Our findings inform design recommendations to facilitate role-based collaboration between multimodal I/O techniques and reduce users’ perception of time pressure when interleaving interactions with system speech.},
    booktitle = {Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems},
    articleno = {1133},
    numpages = {20},
    keywords = {multimodal interaction, voice interfaces},
    series = {CHI ’25}
    }

