1st Annual M&M Spring Symposium



Goodwin 524

April 11, 2024

About

The M&M Spring Symposium is a celebration for students who take on a 2-term research project with either the MIB or MuLab in the School of Computing at Queen's University. This primarily involves CISC 500 students, but may also include independent research projects and students in the STEMInA program.

The inaugural Symposium will take place on April 11th, 2024, in Goodwin 524. Admittance is free and open to all those affiliated with either of the two labs (MuLab & MIB).

Lunch will be provided.

Speakers


  • Dylan Rietze

    10h00

    Comprehensive Antibiotic Resistance Database (CARD)

    Antibiotic resistance is a significant problem in healthcare worldwide. To address this challenge, I am exploring the Comprehensive Antibiotic Resistance Database (CARD) from a biotechnological perspective. My focus is on understanding antibiotic resistance mechanisms and their implications. By studying CARD, we can gain a better understanding of how to combat this global health issue and ultimately overcome it.

    Bio:

    Dylan is a second-year Undergraduate Life Sciences student interested in genetics and the use of computational methods to analyze the human genome. He joins M&MSS as a STEMInA candidate.

  • Matthew Vandergrift

    10h20

    Can we Agree to Disagree? Examining the disagreement problem in explainable artificial intelligence

    This work examines the disagreement problem: the phenomenon whereby differing explanations can be produced for the same model prediction. We focus on disagreement among local feature importance explanations, which return a numeric vector indicating the importance of each feature. We consider disagreement between LIME, Permutation SHAP, Kernel SHAP, DeepLIFT, Integrated Gradients, and Layer-wise Relevance Propagation, using a wide variety of metrics that capture distance, distributional, and geometric disagreement. We compare these metrics against the importance of each feature and against random noise, and conclude that disagreement decreases for more important features in our tested datasets.

    Bio:

    Matthew is an undergraduate computing and mathematics student at Queen’s University, currently working under the supervision of Professor Hu. He is investigating current challenges facing the field of explainable artificial intelligence, specifically the tendency of explanations to disagree. He has previously worked on explainable clustering with Professor Hu as an NSERC USRA research award recipient.

  • Sophie Ellwood

    10h40

    An analysis of methods for identifying community structure within a human disease network

    Previous work has demonstrated that many human networks, including those of drug-target and protein-protein interactions, contain biologically relevant subnetworks. In particular, the human disease network suggests that the genetic origins of many diseases are shared with others, underscoring the interconnected nature of gene-disease relationships. The intersection of computer science and network biology drives the discovery of methods that allow meaningful results to be extracted from these extremely large networks. Focusing on community detection algorithms, this research aims to uncover functional subnetworks based primarily on graph structure, while also comparing the effectiveness of these various methods.

    Bio:

    Sophie is an undergraduate student completing a computer science specialization at Queen's University. She is part of the MIB lab, working with Prof. Hu to investigate the application of community detection algorithms to the human disease network. In the past, she has completed several internships in finance and has been active in computing clubs including QTMA and QWIC.

  • Logan Cantin

    11h00

    StrategySearch: Interpretable AI Model for Playing Strategy Games

    Modern AI systems such as AlphaGo have demonstrated superhuman performance in a variety of tasks. However, these systems are black boxes that offer no insight into the strategy they use to accomplish the task. StrategySearch is a novel AI model that uses evolutionary algorithms and large language models to evolve source code for playing two-player games. The result is a Python source file that represents the strategy and is interpretable by humans. We discuss the implications of this method for automating knowledge discovery.

    Bio:

    Logan is an undergraduate student in Computer Science and Mathematics at Queen's University, working on AI interpretability and automated knowledge discovery with Prof. Hu. During the pandemic, he conducted COVID-19 modelling research with Queen's University and the Canadian government to help inform public health policy.

  • Greg Wang

    11h20

    Prompt Pattern Discovery by Evolution through Large Models

    With the recent rise in the use of Large Language Models (LLMs), prompt engineering has become an increasingly explored field. Prompt engineering plays a large role in the quality of results produced by an LLM and can make the difference between output that is complete nonsense and practical, useful solutions. This paper sets out to find prompting patterns that improve the quality of solutions provided by LLMs, specifically in the realm of mathematical problems. It achieves this by using MAP-Elites to generate prompts. These prompts then generate runnable Python code, which is used to find insightful patterns in the prompting methods. This paper contributes practical prompting methods and patterns for achieving higher-quality results from any LLM.

    Bio:

    Greg Wang is a fourth-year undergraduate student and multi-time hackathon award winner at Queen's University, working on prompt engineering optimization with Dr. Hu. He joins M&MSS as a CISC 500 student.

  • Olivia Xu

    11h40

    Enhancing Reliability of LLM Outputs: A Guardrail Framework Incorporating Iterative Prompting and Verification

    Large Language Models (LLMs) have become more and more ubiquitous, influencing a vast array of applications from automated content creation and language translation to assisting in complex decision-making processes. However, these models are still prone to producing "hallucinations", or ungrounded content, which compromises their reliability, especially in mission-critical applications. This study introduces a generic guardrail framework designed to enhance trust in LLM outputs by integrating sound verifiers and employing iterative prompting to encourage model self-correction. The framework is model-agnostic and training-free, operating independently of the model's internal architecture. We evaluate our approach through experiments on abstractive summarization tasks using two types of verifiers (one-step and multi-step) and find that iterative prompting with conversation history can significantly improve LLM outputs in terms of adherence to defined passing conditions. Our guardrail framework will be publicly available and can be adapted across a broad range of downstream tasks to enhance the trustworthiness and provability of LLM outputs.

    Bio:

    Olivia is an undergraduate student at Queen's University, working with Prof. Muise on using a neurosymbolic approach to build safe AI. She has completed internships at RBC and Uber as a software engineering intern, and at the Center for AI and Data Governance in Singapore as a visiting research assistant. She also leads QMIND and CUCAI.

  • Ting Hu

    14h00

    Evolvability and rate of evolution in evolutionary algorithms

    (from the archives) We transfer the method of measurement of the rate of genetic substitutions from molecular biology to evolutionary algorithms (EAs). We apply this measurement method to investigate the effects of main configuration parameters in EAs and show that some insights can be gained into the effectiveness of these parameters with respect to evolution acceleration. Further, we observe that population size plays an important role in determining the rate of evolution. We formulate a new indicator based on this rate of evolution measurement to adjust population size dynamically during evolution. Such a strategy can stabilize the rate of genetic substitutions and effectively improve the performance of a GP system over fixed-size populations. This rate of evolution measure also provides an avenue to study evolvability, since it captures how the two sides of evolvability, i.e., variability and neutrality, interact and cooperate with each other during evolution. We show that evolvability can be better understood in the light of this interplay and how this can be used to generate adaptive phenotypic variation via harnessing random genetic variation.

    Bio:

    Ting is an Associate Professor at Queen's School of Computing. She received her PhD in Computer Science from Memorial University in St. John’s, Canada and completed her postdoctoral training in bioinformatics at Dartmouth College in Hanover, New Hampshire, USA. Her research focuses on explainable AI, evolutionary computing, and the applications of machine learning in biomedicine.

  • Christian Muise

    14h15

    DSHARP: Using sharpSAT for faster d-DNNF compilation

    (from the archives) Knowledge compilation is a compelling technique for dealing with the intractability of propositional reasoning. One particularly effective target language is Deterministic Decomposable Negation Normal Form (d-DNNF). We exploit recent advances in #SAT solving in order to produce a new state-of-the-art CNF → d-DNNF compiler: DSHARP. Empirical results demonstrate that DSHARP is generally an order of magnitude faster than C2D, the de facto standard for compiling to d-DNNF, while yielding a representation of comparable size.

    Bio:

    Christian is an Assistant Professor at Queen's University in Kingston, Canada, where he directs the MuLab. He completed his PhD under the supervision of Professors Sheila McIlraith and J. Christopher Beck in the area of Automated Planning, with the Knowledge Representation and Reasoning Group at the University of Toronto. Following his PhD, he was a post-doc for two years with the University of Melbourne's Agentlab, studying techniques for multi-agent planning with a project on human-agent collaboration, and subsequently a Research Fellow with the MERS group at MIT's CSAIL. Just prior to joining Queen's, Christian was a Research Staff Member for two years at the MIT-IBM Watson AI Lab.

Schedule

Time   Speaker              Lab        Title
10h00  Dylan Rietze         MIB/MuLab  Comprehensive Antibiotic Resistance Database (CARD)
10h20  Matthew Vandergrift  MIB        Can we Agree to Disagree? Examining the disagreement problem in explainable artificial intelligence
10h40  Sophie Ellwood       MIB        An analysis of methods for identifying community structure within a human disease network
11h00  Logan Cantin         MIB        StrategySearch: Interpretable AI Model for Playing Strategy Games
11h20  Greg Wang            MIB        Prompt Pattern Discovery by Evolution through Large Models
11h40  Olivia Xu            MuLab      Enhancing Reliability of LLM Outputs: A Guardrail Framework Incorporating Iterative Prompting and Verification
12h00  Lunch
14h00  Ting Hu              MIB        Evolvability and rate of evolution in evolutionary algorithms
14h15  Christian Muise      MuLab      DSHARP: Using sharpSAT for faster d-DNNF compilation
14h30  Conclude