NeuroMiner is free machine-learning software written in MATLAB by Prof. Nikolaos Koutsouleris and his team. Developed since 2009, it facilitates research into better tools for precision medicine. It provides a wealth of state-of-the-art supervised machine-learning techniques, such as linear and non-linear support vector machines, relevance vector machines, random forests, and gradient-boosting algorithms. It also comes with numerous dimensionality reduction methods and feature selection strategies that help users find optimal combinations of predictive features for their given prediction problem.
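As an illustration of the kind of pipeline NeuroMiner automates through its menus, the sketch below combines a dimensionality reduction step with a linear support vector machine under cross-validation in plain MATLAB (Statistics and Machine Learning Toolbox). It uses synthetic placeholder data and generic toolbox functions; it is not NeuroMiner code.

```matlab
% Minimal sketch (not NeuroMiner code): dimensionality reduction plus a
% linear SVM evaluated with 5-fold cross-validation. X and y are synthetic
% placeholders standing in for the user's features and labels.
rng(1);                                   % reproducibility
X = randn(100, 50);                       % placeholder feature matrix (subjects x features)
y = randi([0 1], 100, 1);                 % placeholder binary labels

cv  = cvpartition(y, 'KFold', 5);         % stratified 5-fold cross-validation
acc = zeros(cv.NumTestSets, 1);

for k = 1:cv.NumTestSets
    tr = training(cv, k);  te = test(cv, k);

    % Dimensionality reduction learned on the training fold only
    [coeff, scoreTr, ~, ~, explained, mu] = pca(X(tr, :));
    nComp   = find(cumsum(explained) >= 90, 1);          % keep ~90% of the variance
    scoreTe = (X(te, :) - mu) * coeff(:, 1:nComp);        % project the test fold

    % Linear SVM trained on the reduced training data
    mdl    = fitcsvm(scoreTr(:, 1:nComp), y(tr), 'KernelFunction', 'linear');
    acc(k) = mean(predict(mdl, scoreTe) == y(te));
end

fprintf('Mean cross-validated accuracy: %.2f\n', mean(acc));
```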
NeuroMiner requires little coding experience, as it is fully menu-driven and can be used in server or remote environments with limited or no graphical interface. Furthermore, the NeuroMiner interface facilitates standardized parameter setup, storage, and dissemination across research labs, an essential requirement for more robust and generalizable predictive models.
NeuroMiner is constantly updated by the Section for Precision Psychiatry at the Department of Psychiatry and Psychotherapy of Ludwig-Maximilian-University. It can analyze any tabular data stored in numeric MATLAB format or, alternatively, in CSV or Microsoft Excel spreadsheets, and it also supports the in-depth analysis of 3D voxel-based neuroimaging data. These different data sources can be analyzed separately or combined using a variety of data fusion approaches, ranging from data concatenation and bagging to more advanced stacking methods and sequential data integration techniques. New features include single-subject prediction interpretation using Shapley values, integration with the JuSpace toolbox for neurotransmitter maps derived from positron emission tomography, and out-of-sample cross-validation (OOCV) to evaluate models on previously unseen datasets.
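To illustrate the simplest of these fusion strategies, the following MATLAB sketch loads two hypothetical tabular modalities, concatenates their feature columns (early fusion by data concatenation), and stores the result in numeric MATLAB format. The file and variable names are illustrative assumptions, not part of NeuroMiner's interface.

```matlab
% Minimal sketch (not NeuroMiner's interface): assembling tabular inputs and
% fusing two modalities by simple feature concatenation. File names are
% illustrative assumptions.
clinical = readmatrix('clinical_data.csv');        % numeric CSV table
imaging  = readmatrix('roi_volumes.xlsx');         % Microsoft Excel spreadsheet
labels   = readmatrix('labels.csv');               % one outcome value per subject

assert(size(clinical, 1) == size(imaging, 1), ...
    'Both modalities must contain the same subjects in the same order.');

% Early fusion: concatenate the feature columns of both modalities so that a
% single model can be trained on the combined representation.
Xfused = [clinical, imaging];

% Store in numeric MATLAB format, one of the input formats NeuroMiner accepts.
save('fused_features.mat', 'Xfused', 'labels');
```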
In high-performance computing environments, researchers can exploit NeuroMiner's SGE/SLURM-based parallelization functionality to substantially speed up the model training, cross-validation, and visualization procedures.
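The general pattern behind such parallelization can be sketched as submitting one scheduler job per cross-validation fold. The MATLAB snippet below illustrates this idea by dispatching SLURM jobs via sbatch; it is only a conceptual illustration, not NeuroMiner's built-in mechanism, and train_fold is a hypothetical user-supplied function.

```matlab
% Conceptual sketch of fold-wise HPC parallelization (illustrative only;
% NeuroMiner's own SGE/SLURM integration is configured through its menus,
% and train_fold below is a hypothetical function).
nFolds = 10;
for k = 1:nFolds
    % Each cross-validation fold runs as an independent SLURM job that
    % starts a MATLAB worker processing only that fold.
    cmd = sprintf(['sbatch --job-name=cvfold%d --mem=8G --time=04:00:00 ' ...
                   '--wrap="matlab -batch \\"train_fold(%d)\\""'], k, k);
    system(cmd);
end
```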
The results can be inspected through a range of visualizations, including a model’s classification performance, the feature weights underlying the model, classification performance across the different parameter combinations and folds of the cross-validation structure, and the classification performance of out-of-sample cross-validation (OOCV) analyses.