Brainhack Montreal Winter 2026 Projects



Submitted Projects


FPGA Hardware Acceleration of 3D reconstruction of EEG data

Leaders:

  • Hossen Alyazgi — hossen.alyazgi@mail.mcgill.ca — cristo07847
  • Luna Alkabra — luna.alkabra@mail.mcgill.ca


HyPyP Cookbook Sprint: a community-driven onboarding kit (docs, tutorials, examples, and small feature fixes)

Leaders:

  • Rémy Ramadour // remy.ramadour.hsj@ssss.gouv.qc.ca // Discord: @ramdam79
  • Patrice Fortin // patrice.fortin.hsj@ssss.gouv.qc.ca // Discord: @osokin


HyPyP is an open-source Python toolbox for hyperscanning and interpersonal brain/physiology synchrony analyses. The goal of this BrainHack project is to build a community-driven onboarding kit that makes HyPyP easier to learn, run, and extend—especially for newcomers.

Concretely, we will produce a Cookbook: a small set of well-documented, reproducible tutorials (notebooks), improved documentation, and contributor-friendly entry points (“good first issues”). This will help students, researchers, and engineers quickly go from “I installed HyPyP” to “I can run a complete synchrony workflow and understand what I’m doing,” while also making it easier for new contributors to participate.
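To give a concrete flavour of what a synchrony workflow computes, here is a minimal phase-locking value (PLV) sketch in plain NumPy/SciPy — PLV is one of the standard inter-brain synchrony metrics. This illustrates the underlying measure only; it is not HyPyP's API, and the toy signals and `plv` helper are invented for the example.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two narrow-band signals.

    Returns a value in [0, 1]: 1 = perfectly phase-locked, ~0 = no locking.
    """
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Two toy "participants": identical 10 Hz oscillations with a fixed phase lag
# should be strongly locked; independent noise should not.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
locked = plv(np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t + 0.5))
unlocked = plv(rng.standard_normal(t.size), rng.standard_normal(t.size))
print(f"locked: {locked:.2f}, unlocked: {unlocked:.2f}")
```

A real HyPyP workflow adds preprocessing, epoching, and statistics around a metric like this, which is exactly what the Cookbook tutorials will walk through.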

As part of the Cookbook effort, we will also identify the main bottlenecks and roadblocks that create friction for newcomers (installation issues, unclear steps, common pitfalls) and either remove them when feasible or document clear solutions (troubleshooting/FAQ).

This is a high-impact, low-friction community effort: in only a few days, we can significantly reduce the barrier to entry for hyperscanning analyses and improve reproducibility. The project is designed to be inclusive: contributors can help through writing, reviewing, coding, testing, visualization, or pedagogy, at beginner/intermediate/advanced levels.

Bonus: if time allows, we’ll implement a small “quality-of-life” improvement (e.g., clearer API wrapper, tests, example dataset, or minor feature fix) so the event yields both documentation and code improvements.

Participants will:

  • clone the repo, install dependencies, and run a “hello world” notebook,
  • pick a task (tutorial writing, doc improvement, tests/CI, visualization, small feature),
  • submit contributions via PRs using templates and “good first issues” labels.

We will provide a clear README, a beginner-friendly checklist, and a live coordination channel during the event.

Build a Python command line configuration wizard for Neurobagel

Leaders:

  • Sebastian Urchs (_surchs)
  • Alyssa Dai (daialyssa)
  • Arman Jahanpour (rmanaem)


Neurobagel is a tool ecosystem and network of global data nodes that lets you search for subject-level neuroscience cohorts across distributed datasets stored at other institutes or sites. To participate, institutes deploy local Neurobagel “nodes” that then connect to the global network to become discoverable and queryable.

We want to make it easier to deploy a Neurobagel node so that more sites can join Neurobagel, including those with more limited technical resources. Currently, configuring a Neurobagel node requires copying and editing local configuration files by hand. This configuration is important, because it controls options such as whether queries return aggregated results or full records, and other privacy-relevant settings.

The current approach is cumbersome and overwhelming for users for several reasons:

  • There are many config options, so the template file is long and hard to edit
  • We are actively developing Neurobagel, so new config options are regularly introduced that users must then manually add to their existing setups
  • Neurobagel has different deployment use cases, and many config options are only relevant to certain ones
  • Users can only find out if their config files are correct by launching the node and waiting for it to fail

Our goal: to create a command-line tool that simplifies generating a valid configuration for a Neurobagel node.
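A minimal sketch of what such a wizard could look like, assuming a profile-based flow so that only options relevant to the chosen deployment are asked. The option names, profiles, and defaults below are hypothetical and do not reflect Neurobagel's actual configuration schema:

```python
# Hypothetical wizard sketch: option names, profiles, and defaults are
# illustrative, NOT Neurobagel's real configuration schema.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    prompt: str
    default: str
    choices: tuple = field(default_factory=tuple)  # empty = free text

    def validate(self, value: str) -> str:
        """Fall back to the default and reject values outside `choices`."""
        value = value or self.default
        if self.choices and value not in self.choices:
            raise ValueError(f"{self.name}: {value!r} not in {self.choices}")
        return value

# Grouping by deployment profile addresses the "many options, most of them
# irrelevant to my use case" pain point.
PROFILES = {
    "local": [
        Option("NODE_NAME", "Human-readable node name", "my-node"),
        Option("RETURN_AGGREGATE", "Return aggregated results only? (yes/no)",
               "yes", ("yes", "no")),
    ],
}

def build_config(profile: str, answers: dict) -> str:
    """Validate answers and render a .env-style config file."""
    return "\n".join(
        f"{opt.name}={opt.validate(answers.get(opt.name, ''))}"
        for opt in PROFILES[profile]
    )

print(build_config("local", {"RETURN_AGGREGATE": "no"}))
```

Because every answer is validated before the file is written, users would find out about a bad value immediately, rather than by launching the node and waiting for it to fail.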

Come talk to us if you:

  • Want to (learn to) build the command line tool with us
  • and/or are interested in using Neurobagel and want to give us your view as a potential user

We are very excited for both kinds of contributions!

Analysing Dynamic Data for MoCA Solo

Leaders:

  • Murray Gilles: murray.gillies@mocacognition.com
  • Saber Naderi: saber.naderi@mocacognition.com / Discord handle: saber_moca
  • Yijae Kim: yijae.kim@mocacognition.com / Discord handle: yijae1_42148_49872


MoCA Cognition has developed a new digital tool called MoCA Solo to quantify a patient’s cognitive performance. It is based on the well-established paper test “MoCA 8.1”, used to assess people with Mild Cognitive Impairment and Alzheimer’s disease. It requires the patient to perform an array of tasks, such as drawing a clock, naming animals, or recalling 5 words. In the paper version, a point is awarded for each task, but how the task is completed can only be observed live by a human. In the MoCA Solo application, all data are recorded in raw format: every audio file and every tap on the iPad is collected for analysis after data collection.

MoCA Cognition has collected data with the MoCA Solo application from 500 English-speaking participants, including patients and healthy individuals. All the current outcome measures have been annotated by three human raters. MoCA Cognition has used these data to develop AI scoring algorithms that automatically score the current outcome measures without a human present. These scores characterize the patient, and the goal of this work was to reproduce the paper MoCA score.

While the current outcome measures are available for the 500 participants from both the algorithms and the human ground truths, we haven’t explored the dynamics of how the MoCA Solo tests were completed. We are interested in whether the dynamics of test completion carry information that correlates with the existing outcome measures. Examples include the time it takes a patient to name an animal shown in an image, or the ability to recall the 5 words in delayed recall. The goal of the project is to create algorithms that take the dynamic data and find features that correlate with the existing measures, such as the MoCA score.
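As a sketch of the kind of analysis we have in mind, the snippet below extracts a simple dynamic feature (mean and variability of per-trial naming latency) per participant and correlates it with an outcome score. The data are simulated — the real MoCA Solo data are not public — and the assumed link between score and latency is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
# Toy outcome scores out of 30 (stand-in for the MoCA score).
score = rng.integers(10, 31, size=n).astype(float)
# ASSUMPTION for the demo: lower scores come with slower naming responses.
# 20 toy "naming trials" per participant, latency in seconds.
latencies = [rng.normal(loc=6.0 - 0.1 * s, scale=0.5, size=20) for s in score]

# Dynamic features: mean and standard deviation of latency per participant.
features = np.array([[lat.mean(), lat.std()] for lat in latencies])

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

r = pearson_r(features[:, 0], score)
print(f"mean naming latency vs score: r = {r:.2f}")
```

The project work would replace the simulated latencies with timings extracted from the recorded audio and iPad tap streams, and explore a richer feature set than these two summary statistics.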

Demo video of MoCA Solo: https://youtu.be/sl_4nr-n8SM
MoCA Paper 8.1 & Instructions: https://captiva.neurosurgery.ufl.edu/resources/moca/

BIDSbook

Leaders:

Basile Pinsard


  • A community book, built with Jupyter Book (https://jupyterbook.org), useful for managing data across multiple data acquisition and sharing projects

Wonkyconn - quality metrics and insights into your fMRI connectomes!

Leaders:

  • Hao-Ting Wang @htwangtw
  • Clara El Khantour @claraElk
  • Pierre Bergeret @pbergeret12


This project evaluates residual motion in fMRI connectomes, extracts analytic insights, and visualises the results as reports. It is based on the code from SIMEXP/fmriprep-denoise-benchmark and the publication by Wang et al. (2024).
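One core metric in this benchmarking family is QC-FC: for each connectome edge, the across-subject correlation between edge strength and head motion (e.g. mean framewise displacement). The sketch below computes it on synthetic data — the contamination model, sizes, and thresholds are invented for illustration, not taken from the benchmark code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 40, 100
# Toy per-subject head motion (stand-in for mean framewise displacement).
motion = rng.gamma(shape=2.0, scale=0.1, size=n_subjects)
# Toy connectomes (subjects x edges) where the first 20 edges are
# artificially contaminated by motion.
edges = rng.standard_normal((n_subjects, n_edges))
edges[:, :20] += 5.0 * motion[:, None]

def qcfc(edges, motion):
    """Pearson r between motion and each edge, computed across subjects."""
    e = edges - edges.mean(axis=0)
    m = motion - motion.mean()
    return (m @ e) / (np.sqrt((e ** 2).sum(axis=0)) * np.sqrt(m @ m))

r = qcfc(edges, motion)
print(f"median |QC-FC| = {np.median(np.abs(r)):.2f}")
```

A denoising strategy that works well should push the QC-FC distribution toward zero; visualising that distribution per strategy is the kind of report this project aims to produce.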

QC-Studio workflow with Nipoppy

Leaders:

  • Nikhil Bhagwat (nikhil153_)
  • Michelle Wang (michelle__wang)
  • Brent McPherson


Nipoppy is a lightweight framework for standardized organization and processing of neuroimaging-clinical datasets. Its goal is to help users adopt the FAIR principles and improve the reproducibility of studies.

This BrainHack project aims to add a visual quality control (QC) interface to the Nipoppy framework. We will build a prototype using Streamlit and Niivue. The QC interface will show both 2D image reports (.svg) and 3D NIfTI files. We will test this prototype on raw and processed images from the fMRIPrep pipeline, using a sample Nipoppy dataset.
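A small sketch of the discovery step behind such an interface: pairing each subject's 2D reports (.svg) with its 3D images (.nii.gz) so a viewer (e.g. Streamlit embedding Niivue) can show them side by side. The directory layout and filenames below are illustrative, not the exact Nipoppy or fMRIPrep specification:

```python
from pathlib import Path
import tempfile

def collect_qc_items(deriv_root: Path) -> dict:
    """Map subject ID -> {'svg': [...], 'nii': [...]} for QC display."""
    items = {}
    for sub_dir in sorted(deriv_root.glob("sub-*")):
        items[sub_dir.name] = {
            "svg": sorted(p.name for p in sub_dir.rglob("*.svg")),
            "nii": sorted(p.name for p in sub_dir.rglob("*.nii.gz")),
        }
    return items

# Build a toy derivatives tree and collect it.
root = Path(tempfile.mkdtemp())
for sub in ("sub-01", "sub-02"):
    fig_dir = root / sub / "figures"
    fig_dir.mkdir(parents=True)
    (fig_dir / f"{sub}_desc-reconall_T1w.svg").touch()
    (root / sub / f"{sub}_desc-preproc_T1w.nii.gz").touch()

qc = collect_qc_items(root)
print(qc["sub-01"])
```

In the prototype, a front-end would iterate over this mapping, rendering each SVG inline and handing each NIfTI path to the 3D viewer.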

MRI quality checks are hard! We would like to make them a bit easier :)

Electrophysiology derivatives for BIDS (Brain Imaging Data Structure) -- tools, open platform, and standards

Leaders:

Christine Rogers, BIDS maintainer — at mcgill.ca — @christinerogers on both Discord and GitHub


  • BIDS derivatives for EEG BEP021 - wrap up, community feedback
  • EEGNet.loris.ca - User feedback on querying interface, data contribution process
  • EEG2BIDS platform-independent tool: user-facing issues and docs

Connect with me to get involved / learn more / access the open resources

A Model Context Protocol (MCP) for Nipoppy

Leaders:

  • Brent McPherson
  • Michelle Wang
  • Mathieu Dugre
  • Nikhil Bhagwat
  • Jean-Baptiste Poline


  • We are building a model context protocol (MCP) for Nipoppy.
  • This will allow agents to seamlessly access and query Nipoppy datasets.

Nilearn: Welcoming contributions!

Leaders:

Elizabeth DuPre (@emdupre on Discord)


  • Nilearn is an existing Python library for statistical and machine learning analysis of brain imaging data
  • The software is being actively developed and continues to welcome contributions!
  • We have a large issue backlog to work on, including a number of good first issues
  • Elizabeth is also available during Brainhack for any questions or discussions about using and extending Nilearn