
Benedict Leung

Graduate Student - MSc

I am a graduate student (MSc) in Computer Science at Ontario Tech University, where I completed my honours thesis under the supervision of Dr. Christopher Collins. In the summer of 2021, I worked as a research assistant building a web-based breadboard circuit simulator for learning digital design, similar to Fritzing but running in the browser, intended to help students who don’t have a breadboard at home.

Contact

You can contact me at: benedict.leung1@ontariotechu.net
Portfolio site: https://benedict-leung.github.io/BenedictLeung.github.io/
GitHub profile: https://github.com/Benedict-Leung

Publications

  • Leung, B. (2022). Touch Free Camera Mental Commands and Hand Gestures (Bachelor’s thesis). University of Ontario Institute of Technology.

    PDF

    @mastersthesis{leu2022a,
author = {Leung, Benedict},
title = {Touch Free Camera Mental Commands and Hand Gestures},
school = {University of Ontario Institute of Technology},
year = {2022},
type = {Bachelor's Thesis}
}

  • Leung, B., Shimabukuro, M., & Collins, C. (2024). NeuroSight: Combining Eye-Tracking and Brain-Computer Interfaces for Context-Aware Hand-Free Camera Interaction. In Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery.

    PDF

    @inproceedings{10.1145/3672539.3686312,
    author = {Leung, Benedict and Shimabukuro, Mariana and Collins, Christopher},
    title = {NeuroSight: Combining Eye-Tracking and Brain-Computer Interfaces for Context-Aware Hand-Free Camera Interaction},
    year = {2024},
    isbn = {9798400707186},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3672539.3686312},
    doi = {10.1145/3672539.3686312},
    abstract = {Technology has blurred the boundaries of our work and private lives. Using touch-free technology can lessen the divide between technology and reality and bring us closer to the immersion we once had before. This work explores the combination of eye-tracking glasses and a brain-computer interface to enable hand-free interaction with the camera without holding or touching it. Different camera modes are difficult to implement without the use of eye-tracking. For example, visual search relies on an object, selecting a region in the scene by touching the touchscreen on your phone. Eye-tracking is used instead, and the fixation point is used to select the intended region. In addition, fixations can provide context for the mode the user wants to execute. For instance, fixations on foreign text could indicate translation mode. Ultimately, multiple touchless gestures create more fluent transitions between our life experiences and technology.},
    booktitle = {Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology},
    articleno = {73},
    numpages = {3},
    keywords = {brain-computer interface, camera, eye-tracking},
    location = {Pittsburgh, PA, USA},
    series = {UIST Adjunct ’24}
    }

  • Leung, B., Shimabukuro, M., Chan, M., & Collins, C. (2025). GazeQ-GPT: Gaze-Driven Question Generation for Personalized Learning from Short Educational Videos. In Proceedings of Graphics Interface (GI).

    PDF

    @inproceedings{leung2025gazeq,
title = {GazeQ-GPT: Gaze-Driven Question Generation for Personalized Learning from Short Educational Videos},
author = {Leung, Benedict and Shimabukuro, Mariana and Chan, Matthew and Collins, Christopher},
year = {2025},
booktitle = {Proc. Graphics Interface (GI)}
}

