In image-guided neurosurgery, preoperative images are typically used for surgical planning and intraoperative guidance.
Their accuracy, however, can be significantly compromised by intraoperative brain deformation (brain shift). To
compensate for brain shift, biomechanical finite element models have been used to assimilate intraoperative data and
simulate brain deformation. The clinical feasibility of this approach depends strongly on its accuracy and efficiency. To
facilitate and streamline data flow, we have developed graphical user interfaces (GUIs) that provide efficient
image updates in the operating room (OR). The GUIs are organized in a top-down hierarchy, with a main control panel
that invokes and monitors a series of sub-GUIs, each dedicated to a task in the computation of whole-brain deformation.
These GUIs are used to segment the brain, generate case-specific brain meshes, and assign and visualize case-specific
boundary conditions (BCs). A dedicated GUI also facilitates registration between intraoperative ultrasound (iUS) images
acquired pre- and post-durotomy, extracting the sparse displacement data used to drive the biomechanical model.
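As a rough illustration of this top-down organization, the following Python sketch shows a main control panel that launches one sub-GUI per pipeline task and tracks its completion status. It is a minimal, hypothetical mock-up using Tkinter; the task names, widgets, and class names are illustrative assumptions, not the authors' actual implementation.

```python
import tkinter as tk

# Illustrative task list mirroring the sub-GUIs described above (assumed names).
TASKS = ["Segment brain", "Generate mesh", "Assign BCs", "Register iUS"]

class MainControlPanel(tk.Tk):
    """Top-level control panel that invokes one sub-GUI per task
    and monitors whether each task has been completed."""

    def __init__(self):
        super().__init__()
        self.title("Image-update pipeline")
        self.status = {}
        for task in TASKS:
            var = tk.StringVar(master=self, value="pending")
            self.status[task] = var
            row = tk.Frame(self)
            row.pack(fill="x", padx=8, pady=2)
            tk.Button(row, text=task,
                      command=lambda t=task: self.launch(t)).pack(side="left")
            tk.Label(row, textvariable=var).pack(side="right")

    def launch(self, task):
        """Open a sub-GUI window for one task."""
        sub = tk.Toplevel(self)
        sub.title(task)

        def finish():
            # In the real system the sub-GUI would run its computation;
            # here we only flag the task as done on the main panel.
            self.status[task].set("done")
            sub.destroy()

        tk.Button(sub, text="Mark task complete", command=finish).pack(padx=20, pady=20)

if __name__ == "__main__":
    MainControlPanel().mainloop()
```

Keeping each task in its own sub-GUI, with the main panel only dispatching and monitoring, is one simple way to obtain the fault tolerance described here: a failure in one task window need not take down the rest of the pipeline.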
The computed whole-brain deformation is then used to morph the preoperative MR images (pMR) into a model-updated
image set (uMR) for intraoperative guidance, with an accuracy of 1-2 mm. These task-driven GUIs are designed to be
fault-tolerant, user-friendly, and sufficiently automated. In this paper, we present the modular components of the GUIs
and demonstrate the typical workflow through a clinical patient case.
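To indicate how the final image-update step can be realized, the sketch below warps a pMR volume through a dense displacement field, assumed to have been interpolated from the finite element solution onto the image grid in voxel units, to produce a uMR volume. It is a minimal sketch using trilinear resampling via SciPy; the function and variable names are hypothetical and do not reflect the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def morph_to_umr(pmr, displacement):
    """Resample a preoperative MR volume through a dense displacement
    field to produce a model-updated volume (uMR).

    pmr          : 3-D ndarray of preoperative MR intensities
    displacement : ndarray of shape (3, *pmr.shape), voxel-space
                   displacements from the biomechanical model
    """
    grid = np.indices(pmr.shape).astype(float)   # identity sampling grid
    coords = grid + displacement                 # pull-back: uMR(x) = pMR(x + u(x))
    # Trilinear interpolation (order=1); edge voxels clamp to nearest value.
    return map_coordinates(pmr, coords, order=1, mode="nearest")
```

The pull-back convention (sampling pMR at the displaced coordinates) avoids the holes and overlaps that a forward scattering of voxels would produce, which is why dense resampling of this kind is the usual choice for generating an updated image set.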