BrainGuard: Privacy-Preserving Multisubject Image Reconstructions from Brain Activities

1Lanzhou University, 2Nanyang Technological University, 3Zhejiang University
AAAI 2025 (Oral)
[Figure: teaser]

(a) Early subject-specific methods require separate training for each individual using their respective fMRI, overlooking intersubject commonalities. (b) Recent multisubject methods that combine all subjects' fMRI for training pose substantial privacy concerns. (c) BrainGuard captures intersubject commonalities while preserving data privacy.

Abstract

Reconstructing perceived images from human brain activity forms a crucial link between human and machine learning through Brain-Computer Interfaces.

Early methods primarily focused on training separate models for each individual to account for individual variability in brain activity, overlooking valuable cross-subject commonalities. Recent advancements have explored multisubject methods, but these approaches face significant challenges, particularly in data privacy and effectively managing individual variability.

To overcome these challenges, we introduce \textsc{BrainGuard}, a privacy-preserving collaborative training framework designed to enhance image reconstruction from multisubject fMRI data while safeguarding individual privacy. \textsc{BrainGuard} employs a collaborative global-local architecture where individual models are trained on each subject's local data and operate in conjunction with a shared global model that captures and leverages cross-subject patterns. This architecture eliminates the need to aggregate fMRI data across subjects, thereby ensuring privacy preservation.

To tackle the complexity of fMRI data, \textsc{BrainGuard} integrates a hybrid synchronization strategy, enabling individual models to dynamically incorporate parameters from the global model. By establishing a secure and collaborative training environment, \textsc{BrainGuard} not only protects sensitive brain data but also improves image reconstruction accuracy. Extensive experiments demonstrate that \textsc{BrainGuard} sets a new benchmark in both high-level and low-level metrics, advancing the state-of-the-art in brain decoding through its innovative design.
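The core of the synchronization step is that each individual model absorbs a fraction of the shared global parameters. A minimal sketch of that idea, assuming a simple per-parameter convex mix (the function name `hybrid_sync` and the fixed mixing coefficient `alpha` are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def hybrid_sync(local, global_, alpha):
    """Blend shared global weights into an individual model, per parameter.

    `alpha` controls how much cross-subject knowledge is absorbed.
    This convex mix is an illustrative assumption; BrainGuard's actual
    hybrid synchronization strategy may weight parameters dynamically.
    """
    return {name: (1 - alpha) * local[name] + alpha * global_[name]
            for name in local}

# Toy parameter dictionaries standing in for model state.
local_params  = {"enc.w": np.ones(4),  "enc.b": np.zeros(2)}
global_params = {"enc.w": np.zeros(4), "enc.b": np.ones(2)}
mixed = hybrid_sync(local_params, global_params, alpha=0.25)
# "enc.w" entries become 0.75, "enc.b" entries become 0.25
```

Only model parameters cross this boundary; the subject's fMRI data never does, which is what makes the scheme privacy-preserving.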

Method

[Figure: method overview]

\textsc{BrainGuard} requires only a single training session over multisubject data. It introduces a collaborative training framework consisting of individual models and a global model. These models are trained collaboratively: individual and global models engage in bidirectional parameter fusion throughout training. Specifically, each individual model is optimized not only through its own subject's fMRI training objectives but also via integration with the global model. Conversely, the global model's parameters are updated by aggregating the parameters of the individual models. To realize this dynamic parameter fusion, we devise a hybrid synchronization strategy.
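The bidirectional fusion described above can be sketched as a training loop in which each subject's model first pulls in global parameters, then trains on its private data, after which the global model is rebuilt from the individual models. This is a toy sketch with linear decoders and plain parameter averaging; the mixing coefficient, the averaging rule, and the gradient step are all simplifying assumptions standing in for BrainGuard's actual networks and hybrid synchronization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one linear "decoder" weight vector per subject plus a shared
# global one. (Stand-ins for deep models; dimensions are arbitrary.)
n_subjects, dim = 3, 8
subject_data = [(rng.normal(size=(32, dim)), rng.normal(size=32))
                for _ in range(n_subjects)]  # each subject's private fMRI data
local_w = [np.zeros(dim) for _ in range(n_subjects)]
global_w = np.zeros(dim)

def local_step(w, X, y, lr=0.01):
    # One least-squares gradient step on the subject's private data;
    # the raw data (X, y) never leaves the subject's side.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

for _round in range(20):
    for s, (X, y) in enumerate(subject_data):
        # Pull: blend global parameters into the individual model
        # (fixed 0.3 mix is an assumption, not the paper's rule).
        local_w[s] = 0.7 * local_w[s] + 0.3 * global_w
        local_w[s] = local_step(local_w[s], X, y)
    # Push: the global model aggregates parameters only, never fMRI data.
    global_w = np.mean(local_w, axis=0)
```

The design point the sketch illustrates is that only parameters move between the individual and global models, so cross-subject commonalities are captured without ever pooling the subjects' fMRI recordings.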

Results

\textbf{Qualitative comparisons on the NSD \texttt{test} dataset.} \textsc{BrainGuard}, trained in a single session on multisubject fMRI data, demonstrates superior reconstruction accuracy compared to four recent state-of-the-art methods~\cite{quan2024psychometry,wang2024mindbridge,scotti2023reconstructing,ozcelik2023brain}, while effectively preserving data privacy.

[Figure: qualitative comparison results]

BibTeX

@inproceedings{tian2025brainguard,
  author    = {Zhibo Tian and Ruijie Quan and Fan Ma and Kun Zhan and Yi Yang},
  title     = {BrainGuard: Privacy-Preserving Multisubject Image Reconstructions from Brain Activities},
  booktitle = {AAAI},
  year      = {2025},
}