2nd Annual M&M Spring Symposium



Location TBD

April 11, 2025



Register now!




About

The M&M Spring Symposium is a celebration for those students who take on a two-term research project with either the MIB or MuLab in the School of Computing at Queen's University. This primarily involves CISC 500 students.

The M&M Spring Symposium will take place on April 11th, 2025, with the location to be decided soon (based on registered interest). Admission is free and open to all who are interested.

Speakers


  • 10h00 Opening

    General introduction to the M&M Spring Symposium and CISC 500 with MIB & MuLab

  • 10h30 Matthew Sun

    Towards Learning Symbolic Representations from Video

    This research investigates the extraction of state representations from videos, addressing a key challenge in computer vision: learning representations that align with symbolic planning. Typically, representations derived from deep learning models are incompatible with the symbolic structures utilized by classical planning methods. To bridge this gap, this study introduces a novel approach to extract propositional representations directly from video data by leveraging perceptual autoencoders—pretrained representation-learning models employed within Latent Diffusion Models (LDM). The proposed method utilizes a pretrained encoder to generate semantically compressed embeddings for individual video frames to be used as inputs. Subsequently, to achieve representations compatible with symbolic planning, we train an autoencoder specifically designed to produce binary representations from these semantic embeddings, capturing visual features in a compact, discrete form. The architecture of this autoencoder supports the extraction of high-level symbolic information. Ultimately, this framework aims to explore the capabilities and limitations of vision-based neural networks in learning representations conducive to symbolic planning directly from video data.

    Bio:

    Matthew is a fourth-year undergraduate Computing student at Queen’s University with a focus in AI. He is working with Dr. Muise on extracting discrete state representations from video. His other interests include basketball analytics and exercise science.

  • 11h00 Brandon Cheng

    Continuous Visual Learning Curriculum: Unsupervised Visual Representation Learning Guided by Continuous Visual Streams

    In this work, we propose the Continuous Visual Learning Curriculum (CVLC), a novel approach to visual representation learning that departs from traditional task-driven supervised learning paradigms. We discuss how continuous streams of visual information experienced by biological organisms could play a role in the development of the visual cortex, resulting in a robust visual processing pipeline that learns task-general visual feature representations. We explain how CVLC attempts to replicate this development process by using continuous visual data streams and self-organizing learning rules to optimize simple MLPs for low-level visual processing. We hypothesize that representations produced by models trained using CVLC will extract features useful for general visual processing, allowing such representations to be used in a variety of visual tasks like pattern recognition and character classification. We test our hypothesis by training a model on synthetically generated continuous data streams using CVLC. We assess the resulting model both by directly inspecting the properties of representations generated for manually curated visual inputs, and by evaluating how effective the learned representations are for few-shot character classification in comparison to baseline techniques that learn from disjointed data. Our results demonstrate that the CVLC model performs above baseline control models, confirming that features learned through CVLC are useful for visual processing tasks despite being trained on task-free data. While we have not extensively compared CVLC to state-of-the-art unsupervised methods, our findings suggest that continuous learning approaches like CVLC offer promising insights into overcoming key limitations of current unsupervised techniques.

  • 11h30 Omar Ibrahim

    Image Mosaicing Using Calibrated Cameras

    This work examines the process of stitching multiple subimages to create an 'inverted mosaic'. The strategy relies on accurate camera calibration around the object to correct for the cameras’ positions and orientations. The object is centered in each image, after which the images may be warped, blended, and finally stitched onto a single seamless canvas.

    Bio:

    Omar is an undergraduate thesis student researching camera calibration and image mosaicing using computer vision.

  • 14h00 Fang Lei Wu

    TA Assignment using Agent Based Modelling

    Teaching assistants (TAs) are integral to the operation of a post-secondary institution, so it is in the institution's best interest to make optimal use of its TAs when assigning them to courses, ensuring a quality learning environment. However, as cohorts of undergraduate and graduate students change from year to year, the pool of available TAs constantly shifts, and the hunt for quality TAs never ends. We therefore propose TA Assignment using Agent Based Modelling. The model is based on the Queen's University Computing TA system (QUCTAS), which collects all TA applicants into a single pool, allowing us to streamline the TA assignment process. The model simulates the aspects of students, courses, professors, and the university relevant to TA assignment, providing a sufficient environment to demonstrate the relations between students and QUCTAS. Using this environment, we test various TA assignment algorithms and optimize the available hyperparameters. TA assignments are scored from 1 (worst) to 5 (best); the best algorithm in the final evaluation achieved an average score of 4.795/5, with an average of 6.413 TAs assigned a score of 2 or less.

    Bio:

    Fang is a fourth-year undergraduate Computing student at Queen’s University. His undergraduate project features TA assignment optimization via planning and an agent-based model, and he hopes this research may be of use to future TA assignments. Apart from artificial intelligence, Fang also has an interest in data science, which he believes can go hand in hand with artificial intelligence.

  • 14h30 Shrinidhi Thatahngudi Sampath Krishnan

    Interpretable Model Analysis For Oncogene Prediction And Contribution Using Gene Expression Data

    This research aims to enhance the interpretability of cancer classification models by integrating explainable AI (XAI) techniques with post-hoc analysis and biological validation. We apply SHAP, LIME, and DeepLIFT to analyze oncogene contributions across multiple models (DNN, SVM, and logistic regression) traditionally viewed as black boxes. Gene expression data from TCGA, TARGET, and GTEx were processed to train these models for differentiating cancerous from normal tissues. Feature importance was assessed across models, and the most influential genes were further analyzed using DAVID and g:Profiler to validate their biological significance.

    Bio:

    Shrinidhi is an undergraduate student specializing in Biomedical Computation at Queen’s University, currently conducting research under the guidance of Professor Hu. Focused on the intersection of artificial intelligence and healthcare, Shrinidhi investigates machine learning applications in drug discovery and disease prognosis, notably through projects like deep docking with Prof Hu and predictive models for surgical outcomes.

  • 15h00 Jasmine van Leeuwen

    Automated Planning Solutions for Assignment Problems in Aviation

    This research introduces automated planning as a novel approach to two critical operational challenges in aviation: the Gate Assignment Problem (GAP) and Fleet Assignment Problem (FAP). Traditional approaches to these NP-complete problems rely on complex mathematical programming or heuristic methods that are difficult to implement and adapt. This work demonstrates that temporal planning using PDDL 2.1 offers an accessible and flexible alternative that produces high-quality solutions. For GAP, the model successfully generates conflict-free gate assignments that accommodate all scheduled flights while respecting operational constraints at Billy Bishop Toronto City Airport. For FAP, the model efficiently matches KLM's aircraft fleet to routes based on aircraft capabilities and scheduling requirements. Results show that this approach creates feasible assignments for both problems and reveals potential operational inefficiencies and opportunities for resource optimization. This research takes a significant step toward making sophisticated optimization techniques more accessible to aviation stakeholders and provides a foundation for integrating automated planning into broader airline and airport operations.

    Bio:

    Jasmine is an undergraduate student at Queen's University. She is working with Dr. Muise on using automated planning to model & create optimized solutions for complex aviation challenges.

  • 15h30 David Courtis

    Mechanistic Personality Analysis of LLMs: Steering Personality via Latent Feature Interventions

    Large Language Models (LLMs) have demonstrated the ability to simulate human-like OCEAN personality traits in generated text. Previous efforts have focused on prompt engineering or fine-tuning to shape LLM personality. In this work, we propose a mechanistic interpretability approach that directly intervenes on the model’s latent features. Our method identifies latent directions in the residual stream corresponding to a target OCEAN trait using sparse autoencoders (SAEs) and contrastive activation analysis. We formalize an additive steering vector in activation space and demonstrate how applying a small additive shift to the hidden states enhances the target trait while preserving overall language modeling performance. To determine the optimal combination of feature shifts, we explore a linear weighting heuristic with grid search optimization that balances personality expression with task performance. Our approach shows promise in controllably tuning personality traits at the mechanistic level while maintaining high performance on standard benchmarks.

Schedule

Time  | Slot                                         | Description
10h00 | Opening (MIB/MuLab)                          | General introduction to the M&M Spring Symposium and CISC 500 with MIB & MuLab
10h30 | Matthew Sun (MuLab)                          | Towards Learning Symbolic Representations from Video
11h00 | Brandon Cheng (MIB)                          | Continuous Visual Learning Curriculum: Unsupervised Visual Representation Learning Guided by Continuous Visual Streams
11h30 | Omar Ibrahim (MuLab)                         | Image Mosaicing Using Calibrated Cameras
12h00 | Lunch                                        | -
14h00 | Fang Lei Wu (MuLab)                          | TA Assignment using Agent Based Modelling
14h30 | Shrinidhi Thatahngudi Sampath Krishnan (MIB) | Interpretable Model Analysis For Oncogene Prediction And Contribution Using Gene Expression Data
15h00 | Jasmine van Leeuwen (MuLab)                  | Automated Planning Solutions for Assignment Problems in Aviation
15h30 | David Courtis (MIB)                          | Mechanistic Personality Analysis of LLMs: Steering Personality via Latent Feature Interventions
16h00 | Conclude                                     | -