Information

What tools are available for EEG analysis on the R platform?

I'm starting some EEG studies on attention, and would really like to use R for preprocessing (filtering/artifact rejection), visualization, and analysis, but I can find very little in the way of tools. If there isn't a standalone package, what packages might be useful?

Things I want to do:

  • Condition categorization according to events, and comparing all subsequent analyses by condition
  • Power spectral density in specific frequency bands (SMR, theta, beta, alpha, etc.)
  • Event-related potentials
  • LORETA (low-resolution electromagnetic tomography)

Antoine Tremblay has just released an advanced analysis toolbox: http://onlinelibrary.wiley.com/doi/10.1111/psyp.12299/abstract

It's missing about half the features on your list, although, fundamentally, spectral density is a simple task and LORETA is a stand-alone package anyway (although similar approaches, e.g. general CSD estimation, are implemented in many packages). Basically, once you've read the EEG data into R and cleaned it of artifacts, ERPs (simple averaging) and spectra are fairly basic tasks, and LORETA is an external toolbox agnostic of where its data comes from.
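To make that concrete, here's a minimal base-R sketch (no EEG package required) that averages epochs into an ERP and pulls band power out of a smoothed periodogram. The data and dimensions are made up for illustration:

    # Toy data: one channel, 250 samples per epoch, 40 trials, sampled at fs Hz
    fs <- 250
    epochs <- matrix(rnorm(250 * 40), nrow = 250)

    # ERP: simple averaging across trials at each time point
    erp <- rowMeans(epochs)

    # PSD via a smoothed periodogram; trials treated as one long signal (toy)
    x <- as.vector(epochs)
    psd <- spectrum(ts(x, frequency = fs), spans = c(9, 9), plot = FALSE)

    # Mean power in the alpha band (8-12 Hz)
    alpha <- psd$freq >= 8 & psd$freq <= 12
    mean(psd$spec[alpha])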

Alternatively, I would propose using either of the two standard MATLAB-based solutions (EEGLAB or FieldTrip), or MNE in one of its iterations (e.g., MNE-Python). All of these will handle the tasks you're describing.


For the sake of completeness:

  1. eegkit, see https://cran.r-project.org/web/packages/eegkit/index.html

  2. For "historical purposes" perhaps the following could also be of interest, although development seems somewhat stagnant lately: https://rdrr.io/cran/eegAnalysis/


I was searching for alternatives when I came across this post. Here are a few others:

eegUtils; the same author also has a blog that might be of interest to you for further reading

I also found eegAnalysis, but the last update was in 2014

Finally, for ERPs there is erpR
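One more note, since the question also mentions preprocessing: for basic filtering there is the signal package, an R port of the Octave signal toolbox. A quick sketch (filter order and band edges are only illustrative; adjust them to your sampling rate and data):

    library(signal)  # install.packages("signal")

    fs <- 250                                            # sampling rate in Hz
    bp <- butter(4, c(1, 40) / (fs / 2), type = "pass")  # 1-40 Hz Butterworth
    eeg <- rnorm(10 * fs)                                # stand-in for one channel
    eeg_filt <- filtfilt(bp, eeg)                        # zero-phase filtering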


Bigmelon: tools for analysing large DNA methylation datasets

Motivation: The datasets generated by DNA methylation analyses are getting bigger. With the release of the HumanMethylationEPIC micro-array and datasets containing thousands of samples, analyses of these large datasets using R are becoming impractical due to large memory requirements. As a result, there is an increasing need for computationally efficient methodologies to perform meaningful analysis on high-dimensional data.

Results: Here we introduce the bigmelon R package, which provides a memory-efficient workflow that enables users to perform the complex, large-scale analyses required in epigenome-wide association studies (EWAS) without the need for large amounts of RAM. Building on top of the CoreArray Genomic Data Structure file format and libraries packaged in the gdsfmt package, we provide a practical workflow that facilitates the reading-in, preprocessing, quality control and statistical analysis of DNA methylation data. We demonstrate the capabilities of the bigmelon package using a large dataset consisting of 1193 human blood samples from the Understanding Society: UK Household Longitudinal Study, assayed on the EPIC micro-array platform.
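The abstract does not show bigmelon's own API; purely as an illustration of the underlying idea, the sketch below uses the gdsfmt package directly to keep a matrix on disk and read back only a small slice of it. All object names are made up.

    library(gdsfmt)  # BiocManager::install("gdsfmt")

    # Write a stand-in "beta matrix" (probes x samples) to an on-disk GDS file
    f <- createfn.gds("betas.gds")
    beta <- matrix(runif(100 * 100), nrow = 100)
    node <- add.gdsn(f, "betas", val = beta)

    # Read 10 probes for 5 samples without loading the whole matrix into RAM
    chunk <- read.gdsn(node, start = c(1, 1), count = c(10, 5))
    closefn.gds(f)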

Availability and implementation: The bigmelon package is available on Bioconductor (http://bioconductor.org/packages/bigmelon/). The Understanding Society dataset is available at https://www.understandingsociety.ac.uk/about/health/data upon request.

Supplementary information: Supplementary data are available at Bioinformatics online.

© The Author(s) 2018. Published by Oxford University Press.

Figures

Example of bigmelon workflow. The workflow is broken up into three parts: Data…

Demonstration of outlyx on Understanding Society Dataset (n = 1193). (A…

Comparison of quantile normalization on 52 GB β matrix from Marmal-aid dataset (…

Median time spent randomly accessing different sized portions of data from the Marmal-Aid…


Brainstorm: A User-Friendly Application for MEG/EEG Analysis

Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI).

1. Introduction

Although MEG and EEG instrumentation is becoming more common in neuroscience research centers and hospitals, research software availability and standardization remain limited compared to the other functional brain imaging modalities. MEG/EEG source imaging poses a series of specific technical challenges that have, until recently, impeded academic software developments and their acceptance by users (e.g., the multidimensional nature of the data, the multitude of approaches to modeling head tissues and geometry, and the ambiguity of source modeling). Ideally, MEG/EEG imaging is multimodal: MEG and EEG recordings need to be registered to a source space that may be obtained from structural MRI data, which adds to the complexity of the analysis. Further, there is no widely accepted standard MEG/EEG data format, which has limited the distribution and sharing of data and created a major technical hurdle to academic software developers.

MEG/EEG data analysis and source imaging feature a multitude of possible approaches, which draw on a wide range of signal processing techniques. Forward head modeling, for example, which maps elemental neuronal current sources to scalp potentials and external magnetic fields, depends on the shape and conductivity of head tissues and can be performed using a number of methods, ranging from simple spherical head models [1] to overlapping spheres [2] and boundary or finite element methods [3]. Inverse source modeling, which resolves the cortical sources that gave rise to MEG/EEG recordings, has been approached through a multitude of methods, ranging from dipole fitting [4] to distributed source imaging using Bayesian inference [5–7]. This diversity of models and methods reflects the ill-posed nature of electrophysiological imaging, which requires restrictive models or regularization procedures to ensure a stable inverse solution.

The user’s needs for analysis and visualization of MEG and EEG data vary greatly depending on their application. In a clinical environment, raw recordings are often used to identify and characterize abnormal brain activity, such as seizure events in epileptic patients [8]. Alternatively, ordering data into trials and averaging of an evoked response [9] remains the typical approach to revealing event-related cortical activity. Time-frequency decompositions [10] provide insight into induced responses and extend the analysis of MEG/EEG time series at the sensor and source levels to the spatial, temporal, and spectral dimensions. Many of these techniques give rise to computational and storage-related challenges. More recently, an increasing number of methods have been proposed to address the detection of functional and effective connectivity among brain regions: coherence [11], phase locking value [12], Granger causality [13, 14] and its multivariate extensions [15], and canonical correlation [16], among others. Finally, the low spatial resolution and nonisotropic covariance structure of the measurements require adequate approaches to their statistical analysis [17].

Despite such daunting diversity and complexity in user needs and methodological approaches, an integrated software solution would be beneficial to the imaging community and provide progressive automation, standardization, and reproducibility of some of the most common analysis pathways. The Brainstorm project was initiated more than 10 years ago in collaboration between the University of Southern California in Los Angeles, the Salpêtrière Hospital in Paris, and the Los Alamos National Laboratory in New Mexico. The project has been supported by the National Institutes of Health (NIH) in the USA and the Centre National de la Recherche Scientifique (CNRS) in France. Its objective is to make a broad range of electromagnetic source imaging and visualization techniques accessible to nontechnical users, with an emphasis on the interaction of users with their data at multiple stages of the analysis. The first version of the software was released in 2000 [18], and a full graphic user interface (GUI) was added to Brainstorm 2 in 2004 [19]. As the number of users grew, the interface was completely redesigned and improved, as described in this paper. In response to the high demand from users, many other tools were integrated in Brainstorm to cover the whole processing and visualization pipeline of MEG/EEG recordings, from the importing of data files in a large selection of formats to the statistical analysis of source imaging maps. Brainstorm 3 was made available for download in June 2009 and was featured at the 15th Human Brain Mapping Conference in San Francisco. The software is now being improved and updated on a regular basis. There have been about 950 new registered users since June 2009, for a total of 4,000 since the beginning of the project.

Brainstorm is free and open source. Some recent publications using Brainstorm as a main analysis software tool are listed in [20–26]. This paper describes the Brainstorm project and the main features of the software, its connection to other projects, and some future developments that are planned for the next two years. This paper describes the software only; methodological background material is not presented here, but can be found in multiple review articles and books, for example [1, 27, 28].

2. Software Overview

Brainstorm is open-source software written almost entirely in Matlab scripts and distributed under the terms of the General Public License (GPL). Its interface is written in Java/Swing embedded in Matlab scripts, using Matlab’s ability to work as a Java console. The use of Matlab and Java makes Brainstorm a fully portable, cross-platform application.

The advantage of scripting languages in a research environment is the ease with which functions and libraries can be maintained, modified, exchanged, and reused. Although Python might be a better choice for a new project because of its noncommercial open source license, Brainstorm was built on a vast amount of pre-existing Matlab code that forms its methodological foundation for data analysis. The Matlab development environment is also a high-performance prototyping tool. One important feature for users who do not own a Matlab license is that a stand-alone version of Brainstorm, generated with the Matlab Compiler, is also available for download for the Windows and Linux operating systems.

All software functions are accessible through the GUI, without any direct interaction with the Matlab environment; hence, Brainstorm can be used without Matlab or programming experience. For more advanced users, it is also possible to run all processes and displays from Matlab scripts, and all data structures manipulated by Brainstorm can be easily accessed from the Matlab command window.

The source code is accessible for developers on an SVN server, and all related Brainstorm files are compressed daily into a zip file that is publicly available from the website, to facilitate download and updates for the end user. Brainstorm also features an automatic update system that checks at each startup whether a new version is available and should be downloaded.

User documentation is mainly organized in detailed online tutorials illustrated with numerous screen captures that guide the user step by step through all software features. The entire website is based on a MoinMoin wiki system [29]; hence, the community of users is able to edit the online documentation. Users can report bugs or ask questions through a vBulletin forum [30], also accessible from the main website.

3. Integrated Interface

Brainstorm is driven by its interface: it is not a library of functions on top of which a GUI has been added to simplify access, but rather a generic environment structured around one unique interface in which specific functions were implemented (Figure 1). From the user perspective, its organization is contextual rather than linear: the multiple features of the software are not listed in long menus; they are accessible only when needed and are typically suggested within contextual popup menus or specific interface windows. This structure provides faster and easier access to requested functions.


General overview of the Brainstorm interface. Considerable effort was made to make the design intuitive and easy to use. The interface includes: (a) a file database that provides direct access to all data (recordings, surfaces, etc.); (b) contextual menus, available throughout the interface with a right-button click; (c) a batch tool that launches processes (filtering, averaging, statistical tests, etc.) for all files drag-and-dropped from the database; (right) multiple displays of information from the database, organized as individual figures and automatically positioned on the screen; and (d) properties of the currently active display.

Data files are saved in the Matlab .mat format and are organized in a structured database with three levels of classification: protocols, subjects, and experimental conditions. User data is always directly accessible from the database explorer, regardless of the actual file organization on the hard drive. This ensures immediate access to all protocol information and allows simultaneous display and comparison of recordings or sources from multiple runs, conditions, or subjects.

4. Supported File Formats

Brainstorm requires three categories of input for MEG/EEG source analysis: the anatomy of the subject, the MEG/EEG recordings, and the 3D locations of the sensors. The anatomy input is usually a T1-weighted MRI of the full head, plus at least two tessellated surfaces representing the cerebral cortex and scalp. Supported MRI formats include Analyze, NIfTI, CTF, Neuromag, BrainVISA, and MGH. Brainstorm does not extract cortical and head surfaces from the MRI, but imports surfaces from external programs. Three popular and freely available surface formats are supported: BrainSuite [31], BrainVISA [32], and FreeSurfer [33].

The native file formats from three main MEG manufacturers are supported: Elekta-Neuromag, CTF, and BTi/4D-Neuroimaging. The generic file format developed at La Salpêtrière Hospital in Paris (LENA) is also supported. Supported EEG formats include: Neuroscan (cnt, eeg, avg), EGI (raw), BrainVision BrainAmp, EEGLab, and Cartool. Users can also import their data using generic ASCII text files.

Sensor locations are always included in MEG files; however, this is not the case for the majority of EEG file formats, and electrode locations need to be imported separately. Supported electrode definition files include: BESA, Polhemus Isotrak, Curry, EETrak, EGI, EMSE, Neuroscan, EEGLab, Cartool, and generic ASCII text files.

Support for additional formats will be available shortly: our strategy is to merge Brainstorm’s input/output functions for external file formats with the fileio module from the FieldTrip toolbox [34]. This independent library, also written in Matlab code, contains routines to read and write most of the file formats used in the MEG/EEG community and is already supported by the developers of multiple open-source software packages (EEGLab, SPM, and FieldTrip).

5. Data Preprocessing

Brainstorm features an extensive preprocessing pipeline for MEG/EEG data: visual or automatic detection of bad trials and bad channels, event marking and definition, baseline correction, frequency filtering, data resampling, averaging, and the estimation of noise statistics. Other preprocessing operations can be performed easily with other programs (EEGLab [35], FieldTrip, or MNE [36]) and results then imported into Brainstorm as described above.

Expanding preprocessing operations with the most popular techniques for noise reduction and automatic artifact detection is one of our priorities for the next few years of development.

6. Visualization of Sensor Data

Brainstorm provides a rich interface for displaying and interacting with MEG/EEG recordings (Figure 2), including various displays of time series (a)–(c), topographical mapping on 2D or 3D surfaces (d)–(e), generation of animations and series of snapshots of identical viewpoints at sequential time points (f), the selection of channels and time segments, and the manipulation of clusters of sensors.


These visualization tools can be used either on segments of recordings that are fully copied into the Brainstorm database and saved in the Matlab .mat file format, or on typically larger ongoing recordings, read directly from the original files, which remain stored in native formats. The interface for reviewing raw recordings (Figure 3) also features event marking in a fast and intuitive way, and the simultaneous display of the corresponding source model (see below).


7. Visualization of Anatomical Surfaces and Volumes from MRI

Analysis can be performed on the individual subject anatomy (this requires the importation of the MRI and surfaces, as described above) or using Brainstorm’s default anatomy (included in Brainstorm’s distribution), which is derived from the MNI/Colin27 brain [37]. A number of options for surface visualization are available, including transparency, smoothing, and downsampling of the tessellated surface. Figure 4 shows some of the possible options to visualize MRI volumes and surfaces.


8. Registration of MEG/EEG with MRI

Analysis in Brainstorm involves integration of data from multiple sources: MEG and/or EEG recordings, structural MRI scans, and cortical and scalp surface tessellations. Their geometrical registration in the same coordinate system is essential to the accuracy of source imaging. Brainstorm aligns all data in a subject coordinate system (SCS) defined by three fiducial markers: the nasion and the left and right preauricular points; more details regarding the definition of the SCS are available on the Brainstorm website.
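As an illustration only, a fiducial-based coordinate frame can be constructed as in the following R sketch, which uses a common CTF-like convention (origin midway between the preauricular points, x toward the nasion, z up); this is not necessarily Brainstorm's exact SCS definition, which is documented on its website.

    # Build a fiducial-based frame from nasion (nas), left (lpa), and right
    # (rpa) preauricular points, each a 3-vector. Illustrative only.
    cross3 <- function(a, b) c(a[2]*b[3] - a[3]*b[2],
                               a[3]*b[1] - a[1]*b[3],
                               a[1]*b[2] - a[2]*b[1])

    scs_axes <- function(nas, lpa, rpa) {
      origin <- (lpa + rpa) / 2
      xhat <- nas - origin; xhat <- xhat / sqrt(sum(xhat^2))  # toward nasion
      zhat <- cross3(xhat, lpa - origin)                      # points up
      zhat <- zhat / sqrt(sum(zhat^2))
      yhat <- cross3(zhat, xhat)                              # toward left ear
      list(R = rbind(xhat, yhat, zhat), origin = origin)      # p' = R %*% (p - origin)
    }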

MRI-Surfaces
Aligning the MRI data volume with the surface tessellations of the head tissues is straightforward and automatic, as both usually originate from the same volume of data. Nevertheless, Brainstorm features several options to manually align the surface tessellations with the MRI and to perform quality control of this critical step, including the definition of the reference points on the scalp surface (Figure 5(a)) and visual verification of the proper alignment of one of the surfaces in the 3D MRI (Figures 5(b) and 5(c)).


Registration of MRI with MEG/EEG
The fiducial reference points first need to be defined in the MRI volume (see above and Figure 4) and are then pair-matched with the coordinates of the same reference points as measured in the coordinate system of the MEG/EEG device during acquisition. Alignment based on three points only is relatively inaccurate; it can be advantageously complemented by an automatic refinement procedure when the locations of additional scalp points have been acquired during the MEG/EEG session using a 3D digitizer device. Brainstorm lets the user run this additional alignment, based on an iterative closest point algorithm, automatically.
It is common in EEG to run a study without collecting individual anatomical data (MRI volume data or individual electrode positions). Brainstorm has a tool that lets users define and edit the locations of the EEG electrodes at the surface of the individual or generic head (Figure 6). This tool can be used to manually adjust one of the standard EEG montages available in the software, including those already defined for the MNI/Colin27 template anatomy.


Volume and Surface Warping of the Template Anatomy
When the individual MRI data is not available for a subject, the MNI/Colin27 template can be warped to fit a set of head points digitized from the individual anatomy of the subject. This creates an approximation of the individual anatomy based on scalp morphology, as illustrated in Figure 7. Technical details are provided in [38]. This is particularly useful for EEG studies where MRI scans were not acquired and the locations of scalp points are available.


Warping of the MRI volume and corresponding tissue surface envelopes of the Colin27 template brain to fit a set of digitized head points (white dots in upper right corner): initial Colin27 anatomy (left) and warped to the scalp control points of another subject (right). Note how surfaces and MRI volumes are adjusted to the individual data.

9. Forward Modeling

Forward modeling refers to the correspondence between neural currents and MEG/EEG sensor measurements. This step depends on the shape and conductivity of the head and can be computed using a number of methods, ranging from simple spherical head models [1] to overlapping spheres [2] and boundary or finite element methods [39].
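In standard notation (not reproduced from this paper), the forward model is linear in the source amplitudes:

\[
\mathbf{x}(t) = \mathbf{G}\,\mathbf{s}(t) + \mathbf{n}(t),
\]

where x(t) collects the M sensor measurements, G is the M × N gain (lead-field) matrix determined by the head model, s(t) holds the N elemental source amplitudes, and n(t) is measurement noise.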

Over the past ten years, multiple approaches to forward modeling have been prototyped, implemented, and tested in Brainstorm. The ones featured in the software today offer the best compromise between robustness (adaptability to any specific case) and accuracy (precision of the results); other techniques will be added in the future. Current models include the single-sphere and overlapping-spheres methods for MEG [2] and Berg's three-layer sphere model for EEG [40]. For the sphere-based methods, an interactive interface helps the user refine, after automatic estimation, the parameters of the sphere(s) that best fit the subject’s head (Figure 8).


EEG is more sensitive to approximations in the geometry of the head as a volume conductor so that boundary element methods (BEMs) may improve model accuracy. A BEM approach for both MEG and EEG will soon be added to Brainstorm through a contribution from the OpenMEEG project [41], developed by the French National Institute for Research in Computer Science and Control (INRIA).

10. Inverse Modeling

Inverse modeling resolves the cortical sources that gave rise to a specific set of MEG or EEG recordings. In Brainstorm, the main method to estimate source activities is adapted from the depth-weighted minimum L2 norm estimator of cortical current density [42], which can subsequently be normalized using either the statistics of noise (dSPM [43]) or the data covariance (sLORETA [44]), as estimated from experimental recordings. For consistency and in an effort to promote standardization, the implementation of these estimators is similar to the ones available in the MNE software [36]. Two additional inverse models are available in Brainstorm: a linearly-constrained minimum variance (LCMV) beamformer [45] and the MUSIC signal classification technique [4, 46]. We also plan to add least squares multiple dipole fitting [4] to Brainstorm in the near future.
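Using the forward-model notation introduced above, the depth-weighted minimum-norm estimate takes the standard closed form

\[
\hat{\mathbf{s}}(t) = \mathbf{R}\,\mathbf{G}^{\top}\left(\mathbf{G}\mathbf{R}\mathbf{G}^{\top} + \lambda^{2}\mathbf{C}\right)^{-1}\mathbf{x}(t),
\]

where R is the source covariance prior (carrying the depth weighting), C is the noise covariance, and λ is a regularization parameter; dSPM and sLORETA then normalize the rows of this linear kernel by noise or data statistics, respectively. (Standard notation; see [36, 42–44] for the exact implementations.)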

The region of support for these inverse methods can be either the entire head volume or restricted to the cortical surface, with or without constraints on source orientations. In the latter case, elementary dipole sources are distributed over the nodes of the cortical surface mesh. The orientation of the elementary dipoles can be left unconstrained or constrained to be normal to the cortical surface. In all cases, the recommended number of dipoles to use for source estimation is about 15,000 (decimation of the original surface meshes can be performed within Brainstorm).

Brainstorm can manage the various types of sensors (EEG, MEG gradiometers, and MEG magnetometers) that may be available within a given dataset. When multiple sensor types are processed together in a joint source model, the empirical noise covariance matrix is used to estimate the weight of each individual sensor in the global reconstruction. The noise covariance statistics are typically obtained from an empty-room recording, which captures the typical instrumental and environmental fluctuations.
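As a generic illustration (not Brainstorm's code), the noise statistics and the corresponding whitening operator can be computed as follows; `noise` stands in for an empty-room recording:

    # Empirical noise covariance from an empty-room recording
    # (channels x samples); all names are illustrative.
    noise <- matrix(rnorm(32 * 5000), nrow = 32)
    C <- cov(t(noise))                       # channels x channels

    # Whitening operator: after W %*% data, every channel has unit noise
    # variance, so different sensor types can be combined in one model.
    e <- eigen(C, symmetric = TRUE)
    W <- diag(1 / sqrt(pmax(e$values, 1e-12))) %*% t(e$vectors)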

11. Source Visualization and Analysis

Brainstorm provides a large set of tools to display, visualize, and explore the spatio-temporal features of the estimated source maps (Figure 9), both on the cortical surface (a) and in the full head volume (b). The sources estimated on the cortical surface can be reprojected and displayed in the original volume of the MRI data (c) and on another mesh of the cortex at a higher or lower resolution. Reconstructed current values can be smoothed in space or in time before performing group analysis.


A variety of options for the visualization of estimated sources: (a) 3D rendering of the cortical surface, with control of surface smoothing; (b) 3D orthogonal planes of the MRI volume; and (c) conventional orthogonal views of the MRI volume with overlay of the MEG/EEG source density.

A dedicated interface lets the user define and analyze the time courses of specific regions of interest, named scouts in Brainstorm (Figure 10). The Brainstorm distribution includes two predefined segmentations of the default anatomy (MNI Colin27 [37]) into regions of interest, based on the anatomical atlases of Tzourio-Mazoyer et al. [47].


Selection of cortical regions of interest in Brainstorm and extraction of a representative time course of the elementary sources within.

The rich contextual popup menus available in all visualization windows suggest predefined selections of views for creating a large variety of plots. The resulting views can be saved as images, movies, or contact sheets (Figure 9). Note that it is also possible to import dipoles estimated with the FDA-approved software Xfit from Elekta-Neuromag (Figure 11).


Temporal evolution of elementary dipole sources estimated with the external Xfit software. Data from a right-temporal epileptic spike. This component was implemented in collaboration with Elizabeth Bock, MEG Program, Medical College of Wisconsin.

12. Time-Frequency Analysis of Sensor and Source Signals

Brainstorm features a dedicated user interface for performing the time-frequency decomposition of MEG/EEG sensor and source time series using Morlet wavelets [10]. The shape of the Morlet wavelets (scaled versions of complex-valued sinusoids weighted by a Gaussian kernel) can efficiently capture bursts of oscillatory brain activity; for this reason, they are one of the most popular tools for time-frequency decompositions of electrophysiological data [26, 48]. The temporal and spectral resolution of the decomposition can be adjusted by the user, depending on the experiment and the specific requirements of the data analysis to be performed.
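For illustration, a generic Morlet decomposition of a single channel takes only a few lines of R; this is not Brainstorm's implementation, and the frequencies and cycle count are merely typical choices:

    # Toy signal: 10 Hz oscillation plus noise, sampled at fs Hz
    fs <- 250
    tt <- seq(0, 2, by = 1/fs)
    x  <- sin(2*pi*10*tt) + rnorm(length(tt), sd = 0.5)

    # FFT-based convolution, trimmed to the length of x ("same" mode)
    conv_same <- function(x, k) {
      n <- length(x) + length(k) - 1
      y <- fft(fft(c(x, rep(0, n - length(x)))) *
               fft(c(k, rep(0, n - length(k)))), inverse = TRUE) / n
      y[(floor(length(k)/2) + 1):(floor(length(k)/2) + length(x))]
    }

    morlet_power <- function(x, fs, freqs, n_cycles = 7) {
      sapply(freqs, function(f) {
        sd_t <- n_cycles / (2*pi*f)                 # Gaussian width in seconds
        tw   <- seq(-4*sd_t, 4*sd_t, by = 1/fs)
        w    <- exp(2i*pi*f*tw - tw^2/(2*sd_t^2))   # complex Morlet kernel
        Mod(conv_same(x, w / sum(abs(w))))^2        # power over time at f
      })
    }

    tf <- morlet_power(x, fs, freqs = 2:40)         # time x frequency matrix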

Time-frequency decompositions tend to increase the volume of data dramatically, as the data are decomposed along the space, time, and frequency dimensions. Brainstorm is designed either to store the transformed data efficiently or to compute it on the fly.

Data can be analyzed as instantaneous measurements or grouped into temporal and spectral bands of interest, such as alpha (8–12 Hz) [26, 49], theta (5–7 Hz) [50–53], and so forth. Even though this reduces the resolution of the decomposition, it may benefit the analysis in multiple ways: reduced data storage requirements, improved signal-to-noise ratio, and better control over the issue of multiple comparisons by reducing the number of concurrent hypotheses being tested.
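Continuing the sketch above, grouping the decomposition into such bands is a simple averaging step:

    # Collapse the time-frequency matrix `tf` (time x frequency, 2-40 Hz)
    # into the bands mentioned above; boundaries are conventional choices.
    freqs <- 2:40
    bands <- list(theta = c(5, 7), alpha = c(8, 12))
    band_power <- sapply(bands, function(b)
      rowMeans(tf[, freqs >= b[1] & freqs <= b[2], drop = FALSE]))
    # `band_power` is time x band: coarser in frequency, but smaller to store
    # and with fewer hypotheses to test.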

Figure 12 illustrates some of the displays available to explore time-frequency decompositions: time-frequency maps of the times series from one sensor (a)-(b), one source (c) and one or more scouts (d), time courses of the power of the sensors for one frequency band (e), 2D/3D mappings (f), and cortical maps (g)-(h) of the power for one time and one frequency band.


A variety of display options to visualize time-frequency decompositions using Brainstorm (see text for details).

13. Graphical Batching Interface

The main window includes a graphical batching interface (Figure 13) that directly benefits from the database display: files are organized as a tree of subjects and conditions, and simple drag-and-drop operations readily select files for subsequent batch processing. Most Brainstorm features are available through this interface, including preprocessing of the recordings, averaging, estimation of the sources, time-frequency decompositions, and computing statistics. A full analysis pipeline can be created in a few minutes, saved in the user’s preferences, reloaded in one click, and executed directly or exported as a Matlab script.


Graphical interface of the batching tool: (a) selection of the input files by drag-and-drop; (b) creation of an analysis pipeline; (c) example of a Matlab script generated automatically.

The available processes are organized in a plug-in structure. Any Matlab script that is added to the plug-in folder and has the right format will be automatically detected and made available in the GUI. This mechanism makes it very easy for other developers to contribute to Brainstorm.

14. High-Level Scripting

For advanced users and visualization purposes, Brainstorm can be used as a high-level scripting environment. All Brainstorm operations have been designed to interact with the graphical interface and the database; therefore, they have very simple inputs: mouse clicks and keyboard presses. As a result, the interface can be manipulated through Matlab scripts, and each mouse click can be translated into a line of script. Similar to working through the graphical interface, all contextual information is gathered from the interface and the database, so that most functions may be called with a limited number of parameters and, for example, there is no need to keep track of file names. As a result, scripting with Brainstorm is intuitive and easy to use. Figure 14 shows an example of a Matlab script using Brainstorm.


15. Solutions for Performing Group Analyses with MEG/EEG Data and Source Models

Brainstorm’s “Process2” tab allows the comparison of two data samples. This corresponds to a single-factor, two-level analysis; supported tests include the simple difference, paired/unpaired Student t-tests with equal/unequal variance, and their nonparametric permutation alternatives [17]. The two groups can be assembled from any type of files, for example, two conditions within a subject, two conditions across subjects, or two subjects for the same conditions. These operations are generic in Brainstorm and can be applied to any type of data in the database: MEG/EEG recordings, source maps, and time-frequency decompositions. Furthermore, analysis of variance (ANOVA) tests are supported with up to four factors. Figure 15 displays the use of a Student t-test to compare two conditions, “GM” and “GMM,” across 16 subjects.


Student t-test between two conditions: (a) selection of the files; (b) selection of the test; (c) options tab for the visualization of statistical maps, including the selection of the thresholding method.

We specifically address here how to perform multisubject data analysis using Brainstorm. In multisubject studies, measurement variance has two sources: the within-subject variance and the between-subject variance. Using all trials from every subject collectively in a comparison is a fixed-effects analysis [54] and does not account for these multiple sources of variance. Random-effects analysis [54, 55], which properly takes all sources of variance into account, is available in Brainstorm in its simplest and most commonly used form, the summary-statistic approach [56, 57]. Based on this approach, analysis occurs at two levels: at the first level, trials from each subject are used to calculate the statistics of interest separately for each subject; at the second level, the per-subject statistics are combined into an overall statistic.

Consider the example of investigating experimental effects, where prestimulus data are compared against poststimulus data. The first-level analysis averages all trials from each subject to yield prestimulus and poststimulus responses. The second-level analysis can be a paired t-test between the resulting N prestimulus maps and the N poststimulus maps, where N is the number of subjects. Brainstorm processes and statistics include trial averaging and paired t-tests, making such an analysis possible. The procedure described above assumes equal within-subject variance, but subjects can be weighted accordingly if this is not the case.
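As a toy R illustration of this two-level scheme (made-up numbers, not Brainstorm code):

    # Level 1: average the trials of each subject (here, 100 fake trials each)
    N <- 16
    pre  <- replicate(N, mean(rnorm(100, mean = 0)))
    post <- replicate(N, mean(rnorm(100, mean = 0.3)))

    # Level 2: paired t-test across the N subject-level averages
    t.test(post, pre, paired = TRUE)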

Brainstorm also supports statistical thresholding of the resulting activation maps, which takes into account the multiple hypothesis testing problem. The available methods include Bonferroni correction, the false discovery rate [58], which controls the expected proportion of false positives among the rejected hypotheses, and the familywise error rate [59], which controls the probability of at least one false positive under the null hypothesis of no experimental effect. The latter is controlled with a permutation test and the maximum statistic approach, as detailed in [17].
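In generic terms (illustrative R, not Brainstorm's implementation), the first two corrections amount to adjusting a vector of p-values, one per sensor, source, or time point:

    p <- runif(5000)                               # stand-in p-values

    p_bonf <- p.adjust(p, method = "bonferroni")   # familywise error control
    p_fdr  <- p.adjust(p, method = "BH")           # false discovery rate [58]
    significant <- p_fdr < 0.05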

In order to compare multiple subjects at the source level, an intermediate step is required if the sources were originally mapped on the individual subject anatomies. The sources estimated on individual brains are first projected on the cortical surface of the MNI-Colin27 brain. In the current implementation, the surface-to-surface registration is performed hemisphere by hemisphere using the following procedure: (1) alignment along the anterior commissure/posterior commissure axis, (2) spatial smoothing to preserve only the main features of the surfaces onto which the registration will be performed, (3) deformation of the individual surface to match the MNI surface with an iterative closest point (ICP) algorithm [60], and (4) interpolation of the source amplitudes using Shepard’s method [61]. Figure 16 shows the sources on the individual anatomy (left) and their reprojection on the MNI brain (right). This simple approach will eventually be replaced by cortical surface registration and surface-constrained volume registration methods developed at the University of Southern California, as described in [62]. We will also add functionality to use the common coordinate system used in FreeSurfer for intersubject surface registration.
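Step (4) relies on Shepard's inverse-distance weighting [61]; a generic sketch of the method (not Brainstorm's code) is:

    # Interpolate values from source vertices onto target vertices using
    # inverse-distance (Shepard) weights; p = 2 is the classical choice.
    shepard <- function(src_xyz, src_val, dst_xyz, p = 2) {
      apply(dst_xyz, 1, function(q) {
        d <- sqrt(rowSums(sweep(src_xyz, 2, q)^2))
        if (any(d == 0)) return(src_val[which.min(d)])  # exact vertex match
        w <- 1 / d^p
        sum(w * src_val) / sum(w)
      })
    }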


Cortical activations 46 ms after the electric stimulation of the left median nerve on the subject’s brain (a) and their projection in the MNI brain (b).

16. Future Developments

Brainstorm is a project under constant development, and the current version provides an environment where new features are readily implemented and adapted to the interface. There are several recurrent requests from users for new features, as well as plans for future developments. Examples of forthcoming developments in the next two years include:

– expanding the preprocessing operations with the most popular techniques for noise reduction and automatic artifact detection,

– integration of methods for functional connectivity analysis and multivariate statistical analysis [16, 63],

– expanding forward and inverse calculations to include BEM and multiple dipole fitting methods,

– interface for simulating MEG/EEG recordings using simulated sources and realistic anatomy,

– segmentation of MEG/EEG recordings in functional micro-states, using optical flow models [64].

17. Brainstorm in the Software Development Landscape

Several commercial solutions to visualize and process MEG/EEG data are available. Most are developed for specific acquisition systems and are often designed by the manufacturers of these systems. They are typically unsuitable for research for several reasons: they are mainly driven by the requirements of clinical environments and FDA and CE certifications; their all-graphical interfaces seldom provide information about the underlying data analysis; file formats are sometimes proprietary and undocumented; source code and descriptions of the algorithms are not accessible to the user; and they are expensive. The research community needs solutions that are completely open, with the possibility of directly manipulating the code, data, and parameters.

As a result, many laboratories have developed their own tools for MEG and EEG data analysis. However, these tools are often not shared, either because of a lack of interest or because of the effort required to support the software, develop documentation, and create and maintain a distribution website. Moreover, the approach of developing individual tools is very limiting because of the limited human resources assigned to software development in most research groups and the breadth of expertise required (electrophysiology, electromagnetic modeling, signal processing, statistics, classification, software optimization, real-time processing, human-machine interface ergonomics, etc.).

In the past two decades, many projects have been developed to offer open and free alternatives to the wide range of commercial solutions. Common among these projects is the support by a large community of developers around the world, who produce free and reusable source code. For this purpose, the free software community equipped itself with tools to facilitate collaborative work, such as version managers, forums, wikis, and discussion lists. This approach to collaborative software development has not only reached a high level of maturity, but also proved its efficiency. The best example is probably the Linux operating system, whose stability matches or exceeds that of commercially produced operating systems.

In the realm of functional brain mapping, open-source tools such as SPM [65] and EEGLab [35] have been broadly adopted in many research labs throughout the world. Providing open access to source code, in combination with a willingness to accept additions and modifications from other sites, clearly appeals both to users in clinical and neuroscientific research and to others involved in methodology development. A variety of public licenses also allows developers to choose whether all or part of the code remains in the public domain. Importantly for software developed in academic and nonprofit labs, which are dependent on externally funded research support, recent experience indicates that open-source distribution is valued by the research community and that credit for this distribution is attributed to the original developers.

Free software packages with features similar to Brainstorm’s (general-purpose software for MEG/EEG) are EEGLab, FieldTrip, and MNE. The first two are written in the Matlab environment as noncompiled scripts and are supported by large communities of users connected through active forums and mailing lists. EEGLab offers a simple but functional interface, and its target application is oriented towards the preprocessing of recordings and ICA analysis. FieldTrip is a rich and powerful toolbox that offers the widest range of functionalities, but it has no graphical interface, so its usage requires good Matlab programming skills. MNE is also organized as a set of independent functions, easily scriptable and mostly oriented towards the preprocessing of recordings and source estimation using the minimum norm technique, but it is written in C++ and compiled for the Linux and MacOSX platforms.

Brainstorm, in contrast, is an integrated application rather than a toolbox. At the present time it offers fewer features than FieldTrip, but its intuitive interface, powerful visualization tools, and database structure allow the user to work at a higher level. It is possible to complete in a few minutes, and within a few mouse clicks, what would otherwise take hours: there is no need to write any scripts, and no need to think about where data files are stored on the hard drive; the data is directly accessible, and a simple mouse click is sufficient to open a wide variety of display windows. This enables researchers to concentrate on exploring their data. When visual exploration is complete and group analysis needs to be performed, Brainstorm offers a very high-level scripting system based on the interface and the database. The resulting code is easy to understand and has few arguments: all the contextual information is gathered automatically from the database when needed, in contrast to FieldTrip, for example, where this information has to be passed explicitly as arguments to each function.

To conclude, Brainstorm now represents a potentially highly productive option for researchers using MEG or EEG; however, it is a work in progress and some key features are still missing. In the spirit of other open-source developments, and to the extent possible, we will reuse functions developed by other groups, which we will then jointly maintain. Similarly, other developers are welcome to use code from Brainstorm in their software.

Acknowledgment

This software was generated primarily with support from the National Institutes of Health under Grants nos. R01-EB002010, R01-EB009048, and R01-EB000473. Primary support also includes permanent site support from the Centre National de la Recherche Scientifique (CNRS, France) for the Cognitive Neuroscience and Brain Imaging Laboratory (La Salpêtrière Hospital and Pierre and Marie Curie University, Paris, France). Additional support was provided by two grants from the French National Research Agency (ANR) to the Cognitive Neuroscience Unit (Inserm/CEA, Neurospin, France) and to the ViMAGINE project (ANR-08-BLAN-0250), and by the Epilepsy Center in the Cleveland Clinic Neurological Institute. The authors are grateful to all the people who contributed to the conception, the development, or the validation of specific Brainstorm functionalities. In alphabetical order: Charles Aissani, Syed Ashrafulla, Elizabeth Bock, Lucie Charles, Felix Darvas, Ghislaine Dehaene-Lambertz, Claude Delpuech, Belma Dogdas, Antoine Ducorps, Guillaume Dumas, John Ermer, Line Garnero, Alexandre Gramfort, Matti Hämäläinen, Louis Hovasse, Esen Kucukaltun-Yildirim, Etienne Labyt, Karim N'Diaye, Alexei Ossadtchi, Rey Ramirez, Denis Schwartz, Darren Weber, and Lydia Yahia-Cherif. The software, extensive documentation, tutorial data, user forum, and reference publications are available at http://neuroimage.usc.edu/brainstorm.

References

  1. S. Baillet, J. C. Mosher, and R. M. Leahy, “Electromagnetic brain mapping,” IEEE Signal Processing Magazine, vol. 18, no. 6, pp. 14–30, 2001.
  2. M. X. Huang, J. C. Mosher, and R. M. Leahy, “A sensor-weighted overlapping-sphere head model and exhaustive head model comparison for MEG,” Physics in Medicine and Biology, vol. 44, no. 2, pp. 423–440, 1999.
  3. F. Darvas, J. J. Ermer, J. C. Mosher, and R. M. Leahy, “Generic head models for atlas-based EEG source analysis,” Human Brain Mapping, vol. 27, no. 2, pp. 129–143, 2006.
  4. J. C. Mosher, P. S. Lewis, and R. M. Leahy, “Multiple dipole modeling and localization from spatio-temporal MEG data,” IEEE Transactions on Biomedical Engineering, vol. 39, no. 5, pp. 541–557, 1992.
  5. J. W. Phillips, R. M. Leahy, and J. C. Mosher, “MEG-based imaging of focal neuronal current sources,” IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 338–348, 1997.
  6. S. Baillet and L. Garnero, “A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem,” IEEE Transactions on Biomedical Engineering, vol. 44, no. 5, pp. 374–385, 1997.
  7. D. M. Schmidt, J. S. George, and C. C. Wood, “Bayesian inference applied to the electromagnetic inverse problem,” Human Brain Mapping, vol. 7, no. 3, pp. 195–212, 1999.
  8. G. L. Barkley and C. Baumgartner, “MEG and EEG in epilepsy,” Journal of Clinical Neurophysiology, vol. 20, no. 3, pp. 163–178, 2003.
  9. A. Arieli, A. Sterkin, A. Grinvald, and A. Aertsen, “Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses,” Science, vol. 273, no. 5283, pp. 1868–1871, 1996.
  10. C. Tallon-Baudry and O. Bertrand, “Oscillatory gamma activity in humans and its role in object representation,” Trends in Cognitive Sciences, vol. 3, no. 4, pp. 151–162, 1999.
  11. G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842–1857, 1999.
  12. P. Tass, M. G. Rosenblum, J. Weule et al., “Detection of n:m phase locking from noisy data: application to magnetoencephalography,” Physical Review Letters, vol. 81, no. 15, pp. 3291–3294, 1998.
  13. C. W. J. Granger, B. N. Huang, and C. W. Yang, “A bivariate causality between stock prices and exchange rates: evidence from recent Asian flu,” Quarterly Review of Economics and Finance, vol. 40, no. 3, pp. 337–354, 2000.
  14. W. Hesse, E. Möller, M. Arnold, and B. Schack, “The use of time-variant EEG Granger causality for inspecting directed interdependencies of neural assemblies,” Journal of Neuroscience Methods, vol. 124, no. 1, pp. 27–44, 2003.
  15. H. B. Hui, D. Pantazis, S. L. Bressler, and R. M. Leahy, “Identifying true cortical interactions in MEG using the nulling beamformer,” NeuroImage, vol. 49, no. 4, pp. 3161–3174, 2010.
  16. J. L. P. Soto, D. Pantazis, K. Jerbi, S. Baillet, and R. M. Leahy, “Canonical correlation analysis applied to functional connectivity in MEG,” in Proceedings of the 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '10), pp. 113–116, April 2010.
  17. D. Pantazis, T. E. Nichols, S. Baillet, and R. M. Leahy, “A comparison of random field theory and permutation methods for the statistical analysis of MEG data,” NeuroImage, vol. 25, no. 2, pp. 383–394, 2005.
  18. S. Baillet, J. C. Mosher, and R. M. Leahy, “BrainStorm beta release: a Matlab software package for MEG signal processing and source localization and visualization,” NeuroImage, vol. 11, no. 5, p. S915, 2000.
  19. J. C. Mosher, S. Baillet, F. Darvas et al., “Electromagnetic brain imaging using BrainStorm,” vol. 7, no. 2, pp. 189–190.
  20. A. Tzelepi, N. Laskaris, A. Amditis, and Z. Kapoula, “Cortical activity preceding vertical saccades: a MEG study,” Brain Research, vol. 1321, pp. 105–116, 2010.
  21. F. Amor, S. Baillet, V. Navarro, C. Adam, J. Martinerie, and M. Le Van Quyen, “Cortical local and long-range synchronization interplay in human absence seizure initiation,” NeuroImage, vol. 45, no. 3, pp. 950–962, 2009.
  22. T. A. Bekinschtein, S. Dehaene, B. Rohaut, F. Tadel, L. Cohen, and L. Naccache, “Neural signature of the conscious processing of auditory regularities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 106, no. 5, pp. 1672–1677, 2009.
  23. F. Carota, A. Posada, S. Harquel, C. Delpuech, O. Bertrand, and A. Sirigu, “Neural dynamics of the intention to speak,” Cerebral Cortex, vol. 20, no. 8, pp. 1891–1897, 2010.
  24. M. Chaumon, D. Hasboun, M. Baulac, C. Adam, and C. Tallon-Baudry, “Unconscious contextual memory affects early responses in the anterior temporal lobe,” Brain Research, vol. 1285, pp. 77–87, 2009.
  25. S. Moratti and A. Keil, “Not what you expect: experience but not expectancy predicts conditioned responses in human visual and supplementary cortex,” Cerebral Cortex, vol. 19, no. 12, pp. 2803–2809, 2009.
  26. D. Pantazis, G. V. Simpson, D. L. Weber, C. L. Dale, T. E. Nichols, and R. M. Leahy, “A novel ANCOVA design for analysis of MEG data with application to a visual attention study,” NeuroImage, vol. 44, no. 1, pp. 164–174, 2009.
  27. P. Hansen, M. Kringelbach, and R. Salmelin, Eds., MEG: An Introduction to Methods, Oxford University Press, Oxford, UK, 2010.
  28. R. Salmelin and S. Baillet, “Electromagnetic brain imaging,” Human Brain Mapping, vol. 30, no. 6, pp. 1753–1757, 2009.
  29. The MoinMoin Wiki Engine, http://moinmo.in/.
  30. vBulletin, http://www.vbulletin.com/.
  31. D. W. Shattuck and R. M. Leahy, “BrainSuite: an automated cortical surface identification tool,” Medical Image Analysis, vol. 8, no. 2, pp. 129–142, 2002.
  32. Y. Cointepas, J.-F. Mangin, L. Garnero, J.-B. Poline, and H. Benali, “BrainVISA: software platform for visualization and analysis of multi-modality brain data,” NeuroImage, vol. 13, no. 6, p. S98, 2001.
  33. FreeSurfer, http://surfer.nmr.mgh.harvard.edu/.
  34. FieldTrip, Donders Institute for Brain, Cognition and Behaviour, http://fieldtrip.fcdonders.nl/.
  35. A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.
  36. M. S. Hämäläinen, MNE software, http://www.nmr.mgh.harvard.edu/martinos/userInfo/data/sofMNE.php.
  37. D. L. Collins, A. P. Zijdenbos, V. Kollokian et al., “Design and construction of a realistic digital brain phantom,” IEEE Transactions on Medical Imaging, vol. 17, no. 3, pp. 463–468, 1998.
  38. R. M. Leahy, J. C. Mosher, M. E. Spencer, M. X. Huang, and J. D. Lewine, “A study of dipole localization accuracy for MEG and EEG using a human skull phantom,” Electroencephalography and Clinical Neurophysiology, vol. 107, no. 2, pp. 159–173, 1998.
  39. F. Darvas, D. Pantazis, E. Kucukaltun-Yildirim, and R. M. Leahy, “Mapping human brain function with MEG and EEG: methods and validation,” NeuroImage, vol. 23, no. 1, pp. S289–S299, 2004.
  40. P. Berg and M. Scherg, “A fast method for forward computation of multiple-shell spherical head models,” Electroencephalography and Clinical Neurophysiology, vol. 90, no. 1, pp. 58–64, 1994.
  41. A. Gramfort, T. Papadopoulo, E. Olivi, and M. Clerc, “OpenMEEG: opensource software for quasistatic bioelectromagnetics,” BioMedical Engineering OnLine, vol. 9, article 45, 2010.
  42. M. S. Hämäläinen and R. J. Ilmoniemi, “Interpreting magnetic fields of the brain: minimum norm estimates,” Medical and Biological Engineering and Computing, vol. 32, no. 1, pp. 35–42, 1994.
  43. A. M. Dale, A. K. Liu, B. R. Fischl et al., “Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity,” Neuron, vol. 26, no. 1, pp. 55–67, 2000.
  44. R. D. Pascual-Marqui, “Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details,” Methods and Findings in Experimental and Clinical Pharmacology, vol. 24, supplement D, pp. 5–12, 2002.
  45. B. D. Van Veen and K. M. Buckley, “Beamforming: a versatile approach to spatial filtering,” IEEE ASSP Magazine, vol. 5, no. 2, pp. 4–24, 1988.
  46. R. O. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, 1986 (reprint of the original 1979 paper from the RADC Spectrum Estimation Workshop).
  47. N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou et al., “Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain,” NeuroImage, vol. 15, no. 1, pp. 273–289, 2002.
  48. C. Tallon-Baudry, O. Bertrand, C. Wienbruch, B. Ross, and C. Pantev, “Combined EEG and MEG recordings of visual 40 Hz responses to illusory triangles in human,” NeuroReport, vol. 8, no. 5, pp. 1103–1107, 1997.
  49. M. S. Worden, J. J. Foxe, N. Wang, and G. V. Simpson, “Anticipatory biasing of visuospatial attention indexed by retinotopically specific alpha-band electroencephalography increases over occipital cortex,” The Journal of Neuroscience, vol. 20, no. 6, p. RC63, 2000.
  50. W. Klimesch, M. Doppelmayr, T. Pachinger, and B. Ripper, “Brain oscillations and human memory: EEG correlates in the upper alpha and theta band,” Neuroscience Letters, vol. 238, no. 1-2, pp. 9–12, 1997.
  51. O. Jensen and C. D. Tesche, “Frontal theta activity in humans increases with memory load in a working memory task,” European Journal of Neuroscience, vol. 15, no. 8, pp. 1395–1399, 2002.
  52. W. Klimesch, M. Doppelmayr, H. Russegger, T. Pachinger, and J. Schwaiger, “Induced alpha band power changes in the human EEG and attention,” Neuroscience Letters, vol. 244, no. 2, pp. 73–76, 1998.
  53. C. S. Herrmann, M. Grigutsch, and N. A. Busch, “EEG oscillations and wavelet analysis,” in Event-Related Potentials: A Methods Handbook, pp. 229–259, MIT Press, Cambridge, Mass, USA, 2005.
  54. C. E. McCulloch and S. R. Searle, Generalized, Linear, and Mixed Models, John Wiley & Sons, New York, NY, USA, 2001.
  55. W. D. Penny, A. P. Holmes, and K. J. Friston, “Random effects analysis,” Human Brain Function, vol. 2, pp. 843–850, 2004.
  56. D. Pantazis and R. M. Leahy, “Statistical inference in MEG distributed source imaging,” in MEG: An Introduction to Methods, chapter 10, Oxford University Press, Oxford, UK, 2010.
  57. J. A. Mumford and T. Nichols, “Modeling and inference of multisubject fMRI data,” IEEE Engineering in Medicine and Biology Magazine, vol. 25, no. 2, pp. 42–51, 2006.
  58. Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: a practical and powerful approach to multiple testing,” Journal of the Royal Statistical Society Series B, vol. 57, no. 1, pp. 289–300, 1995.
  59. T. Nichols and S. Hayasaka, “Controlling the familywise error rate in functional neuroimaging: a comparative review,” Statistical Methods in Medical Research, vol. 12, no. 5, pp. 419–446, 2003.
  60. D. J. Kroon, “Iterative Closest Point using finite difference optimization to register 3D point clouds affine,” http://www.mathworks.com/matlabcentral/fileexchange/24301-finite-iterative-closest-point.
  61. D. Shepard, “A two-dimensional interpolation function for irregularly-spaced data,” in Proceedings of the ACM National Conference, pp. 517–524, 1968.
  62. A. A. Joshi, D. W. Shattuck, P. M. Thompson, and R. M. Leahy, “Surface-constrained volumetric brain registration using harmonic mappings,” IEEE Transactions on Medical Imaging, vol. 26, no. 12, pp. 1657–1668, 2007.
  63. J. L. P. Soto, D. Pantazis, K. Jerbi, J. P. Lachaux, L. Garnero, and R. M. Leahy, “Detection of event-related modulations of oscillatory brain activity with multivariate statistical analysis of MEG data,” Human Brain Mapping, vol. 30, no. 6, pp. 1922–1934, 2009.
  64. J. Lefèvre and S. Baillet, “Optical flow approaches to the identification of brain dynamics,” Human Brain Mapping, vol. 30, no. 6, pp. 1887–1897, 2009.
  65. SPM, http://www.fil.ion.ucl.ac.uk/spm/.

Copyright

Copyright © 2011 François Tadel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Predict iQ Section

Predict iQ analyses your respondents’ survey responses and embedded data in order to predict when a customer will eventually churn (abandon the company). Once a churn prediction model is configured in Predict iQ, newly collected responses will be evaluated for how likely the respondent is to churn, allowing you to be proactive in your company’s customer retention.

For in-depth information about using Predict iQ, check out our dedicated support page!


Top Research Tools and Software for Academics and Research Students

If you are conducting research, it is important to have appropriate methods and tools to carry out your work. If you are a non-native English speaker, you need a research tool to help you with your written language. If your research involves data analysis, you need a good statistical tool. It is also important to keep tabs on what others in your research arena are doing, so you need tools such as Google Scholar and ResearchGate to collaborate with your peers. You also need good plagiarism-checking software to avoid academic misconduct. Finally, you need research project management software to stay on top of deadlines. In this blog, we review some useful tools that can make researchers more productive.

1. REF-N-WRITE Academic Writing Tool

Ref-N-Write is a fantastic research tool for beginner writers and non-native English speakers. It is a Microsoft Word add-in that allows users to import research papers into MS Word and then search those documents while writing a research paper or academic essay. In essence, the tool is similar to the Google search engine; the difference is that instead of searching the internet you are searching research papers and academic documents stored on your computer. REF-N-WRITE functions within MS Word, and the search results are displayed in a panel that pops up from the bottom. You can expand the search results and jump to the exact location in the source document in a few clicks. This makes it easy to look up writing ideas from related research papers or documents from your colleagues. The REF-N-WRITE tool also comes with a database of academic and scientific phrases, which you can use to polish your writing by substituting colloquial terms and informal statements with academically acceptable words and phrases. REF-N-WRITE also features a text-to-voice option that helps you pick up grammatical errors and sentence structure issues.

Useful Links:

2. Free Online Statistical Testing Tools

One of the most important requirements while writing up your research is the use of appropriate statistical methods and analyses to back up your claims. Whether you are doing quantitative or qualitative research, statistical analysis will be an indispensable part of your workflow. Plenty of research tools allow you to perform a wide variety of statistical analyses. Most of the time, however, you will find yourself performing basic calculations such as the mean, standard deviation, confidence intervals and standard error. You will also often need a statistical test to assess the significance of the difference between two groups or cohorts and compute a p-value. Some of the widely used tests for this purpose are the t-test, F-test, chi-square test, Pearson correlation coefficient and ANOVA. Below is a list of popular free statistical tools available online. These tools allow you to copy your data directly from a spreadsheet and perform the required analysis.
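
If you prefer to run these tests yourself, base R covers all of them out of the box. A minimal sketch with made-up example data (no extra packages required):

    # Two made-up groups of measurements
    groupA <- c(5.1, 4.9, 6.2, 5.8, 5.5)
    groupB <- c(4.2, 4.8, 5.0, 4.6, 4.4)

    mean(groupA); sd(groupA)        # mean and standard deviation
    t.test(groupA, groupB)          # two-sample t-test with p-value and confidence interval
    var.test(groupA, groupB)        # F-test for equality of variances
    cor.test(groupA, groupB)        # Pearson correlation coefficient
    chisq.test(matrix(c(12, 8, 5, 15), nrow = 2))  # chi-square test on a 2x2 contingency table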

Useful Links:

3. Microsoft Excel

One of the most widely used tools for research is Microsoft Excel. MS Excel has plenty of features that come in handy during a research project, and it is a must-have tool if your study involves a lot of quantitative analysis. Excel offers a wide range of statistical functions, such as AVERAGE, MIN, MAX and SUM, that you can apply to cells in a few clicks. You can visualize your data using a wide variety of chart types, for example bar plots and scatter plots. You can use pivot tables to organize and summarize your data easily. For more complex statistical analysis, you can use the Data Analysis ToolPak add-in, which provides tools such as descriptive statistics, histograms, F-tests, random number generation and Fourier analysis.

Useful Links:

4. Google Scholar

Google Scholar is a free online research tool offered by Google. It allows users to search for academic literature, scientific articles, journals, white papers and patents across the web. It not only searches well-known databases, it also looks for articles in university repositories, so your chances of finding the full-text PDF of the article you are after are very high. You can set up keyword alerts so that Google Scholar notifies you when there is a new article in your field or from your co-authors. You can manage multiple libraries of papers: label papers and articles, and Google Scholar will organize them for you. Google Scholar displays vital information about each article, such as the citation count, versions and other articles citing it, and alerts you if somebody cites your paper. You can download citations in a wide variety of formats (MLA, APA, Chicago, Harvard, Vancouver) and easily export them to EndNote and other bibliography managers. On the whole, Google Scholar is an indispensable tool for researchers.

Useful Links:

5. ResearchGate

ResearchGate is a social networking site for people doing research. The site has more than 11 million members, including scientists, academics, Ph.D. students and researchers. Users can create an account using a valid institutional email address; once registered, they can create a profile, upload pictures, list publications and upload full-text papers. ResearchGate is a perfect tool for researchers and academics looking for collaborations. You can follow updates from colleagues or peers with similar interests; you will be notified if somebody reads or cites your paper, or if the people you follow publish new research. You can email other members and request the full text of their listed publications. ResearchGate also computes an RG score based on your profile and publications; this is different from the h-index computed by Google Scholar or the citation metrics given by journals. On the whole, ResearchGate is an excellent tool if you want to keep tabs on your colleagues’ research and collaborate across institutions.

Useful Links:

6. Plagiarism detection software tools

Plagiarism is academic misconduct; academic and research institutions do not take it lightly and punish it severely. Plagiarism occurs when you copy a large chunk of text from a document written by someone else without giving credit to the author; it amounts to taking credit for somebody else’s work. Even if you paraphrase the text, it can still count as plagiarism. One common form is self-plagiarism: the reuse of one’s own previous work in another context without citing that it was used previously. Once you publish your work, the publisher usually holds the copyright for the text, so you need to either get permission from the publisher to reuse it or cite the source. Plenty of plagiarism detection software and online checking tools let you check how much of your text overlaps with previously published material, so you can fix these problems before submitting your academic essay or research paper. Some of the tools for checking plagiarism are listed below.

Useful Links:

7. Project management tools

It is easy for a research project to get out of hand when you are multitasking and dealing with multiple deadlines. It is good practice to choose a project management tool to stay on top of your research project; it can minimize the time you spend on managing the project so you can concentrate on the research itself. Find a tool that lets you lay out what is to be done, by whom and by when. It also helps if you can visualize your tasks and their timeline using simple diagrams such as a Gantt chart. There are plenty of project management tools available; simply pick the one that suits your research project. Here are some popular tools used in the academic community.


Top Video Interviewing Tools

With a recent survey finding that 63% of HR managers have conducted an online interview, video interviewing software is becoming an important part of the recruiting tech stack.

    , receiving an eye-opening $93 million in funding to date, one of its differentiators is incorporating Industrial-Organizational Psychology in its pre-hire assessments and interview analyses. , specializing in video interviewing software, also offers other recruitment solutions such as digital structured interviews, automated reference checking and more; major clients include the United Nations, Samsung, Canon, and the Atlanta Hawks. , a Chicago-based company, has a variety of solutions such as interview building, interview scheduling, interview prep and more; it works towards the success of over 900 clients, including emerging businesses, colleges and universities, as well as large and midsize companies.

What tools are available for EEG analysis on the R platform?

Collection of R packages to demonstrate the use of R Project software for basic analysis tasks.

The packages are intended to be used as part of R, so the basic requirement is to have R installed beforehand. R is available for download from any of the CRAN mirrors, together with detailed installation instructions and support. For a more user-friendly experience with R we recommend RStudio, a free, open-source IDE for R.

Once R is up and running (with or without RStudio), we are almost ready to use the packages from this repository. R enables downloading and installing packages directly from GitHub using the install_github function of the devtools package, in just two lines of code:
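
    library(devtools)
    install_github("<username>/<package>")   # placeholder path: replace with the actual GitHub user and package name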

An example of the second line for one of the packages in this repository would be
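
    install_github("<username>/openEHRapi")   # the GitHub user name is a placeholder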

openEHRapi - using openEHR REST API from R

The package is designed to collect data from an openEHR server with AQL queries via a REST web service architecture. Healthcare data is obtained by calling a REST API, and the returned result set is then formatted in R to prepare the data for further analysis. Saving data records (compositions) to an openEHR server is also supported.

The package openEHRapi includes functions:

  • get_query : for a given AQL query returns the result set in a data frame.
  • post_composition : stores new data records (compositions).
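
A minimal usage sketch (the server URL and argument order are assumptions for illustration; the vignette documents the actual interface):

    library(openEHRapi)
    aql <- "SELECT e/ehr_id/value FROM EHR e CONTAINS COMPOSITION c LIMIT 10"
    # baseURL below is an illustrative placeholder for an openEHR REST endpoint
    res <- get_query("https://my-openehr-server/rest/v1/", aql)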

For more details on the usage of the package please refer to the included vignette.

EhrscapeR - using EhrScape REST API from R

The package is designed to collect data from the open health data platform EhrScape with AQL queries via a REST web service architecture. Healthcare data is obtained by calling a REST API, and the returned result set is then formatted in R to prepare the data for further analysis. Saving data records (compositions) to EhrScape is also supported.

The package EhrscapeR includes functions:

  • get_query : for a given AQL query returns the result set in a data frame.
  • get_query_csv : for a given AQL query returns the CSV-formatted result set in a data frame.
  • post_composition : stores new data records (compositions) in EhrScape.
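
As with openEHRapi, a minimal sketch (the endpoint and argument order are assumptions):

    library(EhrscapeR)
    baseURL <- "https://rest.ehrscape.com/rest/v1/"   # illustrative EhrScape endpoint
    aql <- "SELECT c/name/value FROM EHR e CONTAINS COMPOSITION c LIMIT 10"
    res     <- get_query(baseURL, aql)       # result set as a data frame
    res_csv <- get_query_csv(baseURL, aql)   # CSV-formatted result set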

For more details on the usage of the package please refer to the included vignette.

ParseGPX - parsing GPX data to R data frame

The R package parseGPX was designed for reading and parsing GPX files containing GPS data. GPS data has become broadly available thanks to low-cost GPS chips integrated into portable consumer devices, and consequently there is an abundance of online and offline tools for GPS data visualization and analysis, with R in focus in this example. The data itself can be stored in several different file formats, such as txt, csv, xml, kml and gpx. Among these, GPX is meant to be the most universal: it is intended for exchanging GPS data between programs and for sharing GPS data with other users. Unlike many other data files, which can only be understood by the programs that created them, GPX files contain a description of what's inside them, allowing anyone to create a program that can read the data. Several R packages already provide functions for reading and parsing GPX files, e.g. plotKML , maptools and rgdal with the corresponding functions readGPX , readGPS and readOGR .

The presented package parseGPX contains the function parse_gpx to read, parse and optionally save GPS data.
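
A minimal sketch (the file name is hypothetical):

    library(parseGPX)
    gps <- parse_gpx("track.gpx")   # read and parse a GPX file into R
    head(gps)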

For more details on the usage of the package please refer to the included vignette.

AnalyzeGPS - analyze GPS data

The R package analyzeGPS offers functions for basic preparation and analysis of the GPS data:

  • readGPS : imports the GPS data in csv format into R data frame,
  • distanceGPS : calculation of distance between two data points or vectors of data points,
  • speedGPS : calculation of velocity between GPS data points,
  • accGPS : calculation of acceleration between GPS data points,
  • gradeGPS : calculation of grade or inclination between GPS data points.

Additionally, the package includes an example GPS dataset, myGPSData.csv, acquired during a person's cycling session. It is a data frame with 7771 rows and 5 variables:

  • lon : longitude data,
  • lat : latitude data,
  • ele : elevation data,
  • time : GPS time stamp - GMT time zone,
  • tz_CEST : time stamp converted to CEST time zone.
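
A minimal sketch combining the example dataset with the functions above (argument order is an assumption; see the vignette for the exact interface):

    library(analyzeGPS)
    gps <- readGPS("myGPSData.csv")   # the example dataset, assumed to be in the working directory
    # Distance between the first two data points (argument order assumed)
    d <- distanceGPS(gps$lat[1], gps$lon[1], gps$lat[2], gps$lon[2])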

For more details on the usage of the package please refer to the included vignette.

ZephyrECG - parse and read Zephyr BH3 ECG data

The R package zephyrECG was designed to parse ECG data acquired with the Zephyr BioHarness 3 (BH3) monitor and to import it into R. The package includes the functions

  • separate_bh3 : parses and separates multiple sessions of ECG data recorded with the Zephyr BH3 monitor into separate csv files,
  • read_ecg : imports the ECG data stored in a csv file to a data frame in R.
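
A minimal sketch (file names are hypothetical):

    library(zephyrECG)
    separate_bh3("bh3_export.csv")     # split a multi-session BH3 recording into per-session csv files
    ecg <- read_ecg("session_1.csv")   # import one session into a data frame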

For more details on the usage of the package please refer to the included vignette.

HeartBeat - heart beat detection from single-lead ECG signal

The R package heartBeat was designed for heart beat detection from single-lead ECG signals. The ECG data is expected to be already loaded into a data frame and ready to use (for importing data recorded with Zephyr BioHarness 3 monitor, please see the package zephyrECG). The package includes functions

  • heart_beat : detection of heart beats,
  • HRdistribution : reads the signal and the output of the heart_beat function and determines instantaneous heart rates, their distribution and a basic histogram,
  • annotateHR : adds factorized code to ECG data points according to heart rate determined previously with functions heart_beat and HRdistribution .
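
A minimal sketch of the pipeline (function signatures are assumptions; see the vignette):

    library(heartBeat)
    # ecg: data frame of single-lead ECG samples, e.g. from zephyrECG::read_ecg
    beats <- heart_beat(ecg)              # detect heart beats
    hr    <- HRdistribution(ecg, beats)   # instantaneous heart rates and histogram
    ecg   <- annotateHR(ecg, beats, hr)   # annotate data points with heart-rate codes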

For more details on the usage of the package please refer to the included vignette.

StressHR - heart rate variability analysis for stress assessment

The R package stressHR assesses mental stress based on heart rate. Heart beats and heart rate are previously detected from single-lead ECG signal by using the heartBeat package. The package includes functions

  • hrv_analyze : executes heart rate variability (HRV) analysis on heart beat positions written in an ASCII file ( Rsec_data.txt ),
  • merge_hrv : merges the HRV data with the initial ECG data frame.
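
A minimal sketch (function signatures are assumptions):

    library(stressHR)
    hrv <- hrv_analyze("Rsec_data.txt")   # HRV analysis of beat positions exported by heartBeat
    ecg <- merge_hrv(ecg, hrv)            # merge HRV data back into the initial ECG data frame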

For more details on the usage of the package please refer to the included vignette.

MuseEEG - parse and read EEG data from InterAxon Muse device

The R package museEEG was designed to parse and read the EEG data collected with the InterAxon Muse device. The device stores the acquired data directly in .muse format, and the manufacturer offers a tool, MusePlayer, that converts the .muse data to .csv format. The package comprises the read_eeg function, which reads a csv file acquired by Muse hardware and sorts the EEG data into a data frame.
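
A minimal sketch (the file name is hypothetical):

    library(museEEG)
    eeg <- read_eeg("muse_recording.csv")   # csv file exported via MusePlayer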

For more details on the usage of the package please refer to the included vignette.

EmotionEEG - emotional valence and arousal assessment based on EEG recordings

The R package emotionEEG uses the EEG data collected with the InterAxon Muse device and assesses emotional valence and arousal based on asymmetry analysis. The EEG data prepared by the museEEG package contains EEG signal values in microvolts as well as alpha and beta absolute band powers and ratios in decibels. Emotional valence is calculated from the ratio of alpha band power between the right and left EEG channels FP2 and FP1. Emotional arousal is calculated from the mean of the beta-to-alpha ratios of the left and right EEG channels FP1 and FP2. The package includes functions

  • read_eeg : reads raw EEG data in csv format acquired by MUSE hardware and sorts it into a data frame,
  • emotion_analysis : uses the EEG data collected with Muse and assesses emotional valence and arousal based on asymmetry analysis.
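
A minimal sketch (the file name is hypothetical and the emotion_analysis signature is assumed):

    library(emotionEEG)
    eeg    <- read_eeg("muse_recording.csv")   # raw Muse csv into a data frame
    affect <- emotion_analysis(eeg)            # valence and arousal estimates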

For more details on the usage of the package please refer to the included vignette.

CycleR - cycling analysis through GPS data

The R package cycleR was designed to calculate advanced cycling parameters from GPS data of a cycling route, for example power output, climb categorization and route-time segmentation. Calculations used in the package are based on the Strava glossary & calculations. The package analyzeGPS is required. The package is comprised of the functions

  • segment_time : segments the cycling route according to activity,
  • categorize : detects and categorizes the climbs on the route,
  • cycling_power : assesses the total power produced by a cyclist on a bike ride given the GPS data and additional physical parameters.
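
A minimal sketch (function signatures are assumptions; additional physical parameters are omitted):

    library(cycleR)
    gps <- read.csv("myGPSData.csv")   # GPS route, e.g. the analyzeGPS example dataset
    seg    <- segment_time(gps)        # segment the route by activity
    climbs <- categorize(gps)          # detect and categorize climbs
    pwr    <- cycling_power(gps)       # assess power output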

For more details on the usage of the package please refer to the included vignette.

RunneR - running analysis with GPS data

The R package runneR was designed to calculate advanced running parameters from GPS data of a running route, for example pace, calories burned, route segmentation and moving time. Calculations used in the package are based on the Strava glossary & calculations. The package analyzeGPS is required. The package is comprised of the functions

  • analyze_run : determines the total time, moving time, resting time, time spent ascending, descending and on the flat and also pace on a running route described with GPS data,
  • cb_activity : estimates the number of calories you burn during a running activity, based on MET (metabolic equivalent) data for physical activities,
  • cb_running : estimates the calories that you burn while running any given distance,
  • cb_hr : estimates the calories that you burn during aerobic (i.e. cardiorespiratory) exercise, based on your average heart rate while performing the exercise.
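
A minimal sketch (the file name and function signatures are assumptions):

    library(runneR)
    run   <- read.csv("myRunData.csv")   # hypothetical GPS recording of a run
    stats <- analyze_run(run)            # total/moving/resting time, ascent, descent, pace
    kcal  <- cb_running(run)             # calories burned over the recorded distance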

For more details on the usage of the package please refer to the included vignette.


Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm

Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.

Keywords: Brainstorm; EEG; EEGLAB; auditory N100; auditory processing; source localization.

Figures

  • Schematic illustration of the processing pipeline. The dashed line indicates that alternative processing…
  • ICA based artifact attenuation. Left) Original EEG time course, shown for a subset…
  • Sensor level analysis. Shown is the grand average (Red line) of all subjects…
  • Grand average source level activity for the N100 component. Shown is the activation…


Affiliations

School of Social Sciences, Nanyang Technological University, HSS 04-19, 48 Nanyang Avenue, Singapore, Singapore

Dominique Makowski, Tam Pham, Zen J. Lau & S. H. Annabel Chen

Behavioural Science Institute, Radboud University, Nijmegen, Netherlands

Département de psychologie, Université de Montréal, Montréal, Canada

Centre de Recherche de l’Institut Universitaire Geriatrique de Montréal, Montréal, Canada

Eureka Robotics, Singapore, Singapore

Life Science Informatics, THM University of Applied Sciences, Giessen, Germany

Centre for Research and Development in Learning, Nanyang Technological University, Singapore, Singapore

Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore



Statistical Package for Social Science (SPSS)

SPSS is the most popular quantitative analysis software program used by social scientists.

Made and sold by IBM, it is comprehensive, flexible, and can be used with almost any type of data file. However, it is especially useful for analyzing large-scale survey data.

It can be used to generate tabulated reports, charts, and plots of distributions and trends, as well as generate descriptive statistics such as means, medians, modes and frequencies in addition to more complex statistical analyses like regression models.

SPSS provides a user interface that makes it easy and intuitive for users at all levels. With menus and dialogue boxes, you can perform analyses without having to write command syntax, as in other programs.

It is also simple and easy to enter and edit data directly into the program.

There are a few drawbacks, however, which might not make it the best program for some researchers. For example, there is a limit on the number of cases you can analyze. It is also difficult to account for weights, strata and group effects with SPSS.


Brainstorm: A User-Friendly Application for MEG/EEG Analysis

Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI).

1. Introduction

Although MEG and EEG instrumentation is becoming more common in neuroscience research centers and hospitals, research software availability and standardization remain limited compared to the other functional brain imaging modalities. MEG/EEG source imaging poses a series of specific technical challenges that have, until recently, impeded academic software developments and their acceptance by users (e.g., the multidimensional nature of the data, the multitude of approaches to modeling head tissues and geometry, and the ambiguity of source modeling). Ideally, MEG/EEG imaging is multimodal: MEG and EEG recordings need to be registered to a source space that may be obtained from structural MRI data, which adds to the complexity of the analysis. Further, there is no widely accepted standard MEG/EEG data format, which has limited the distribution and sharing of data and created a major technical hurdle to academic software developers.

MEG/EEG data analysis and source imaging feature a multitude of possible approaches, which draw on a wide range of signal processing techniques. Forward head modeling for example, which maps elemental neuronal current sources to scalp potentials and external magnetic fields, is dependent on the shape and conductivity of head tissues and can be performed using a number of methods, ranging from simple spherical head models [1] to overlapping spheres [2] and boundary or finite element methods [3]. Inverse source modeling, which resolves the cortical sources that gave rise to MEG/EEG recordings, has been approached through a multitude of methods, ranging from dipole fitting [4] to distributed source imaging using Bayesian inference [5–7]. This diversity of models and methods reflects the ill-posed nature of electrophysiological imaging which requires restrictive models or regularization procedures to ensure a stable inverse solution.

The user’s needs for analysis and visualization of MEG and EEG data vary greatly depending on their application. In a clinical environment, raw recordings are often used to identify and characterize abnormal brain activity, such as seizure events in epileptic patients [8]. Alternatively, ordering data into trials and averaging of an evoked response [9] remains the typical approach to revealing event-related cortical activity. Time-frequency decompositions [10] provide insight into induced responses and extend the analysis of MEG/EEG time series at the sensor and source levels to the spatial, temporal, and spectral dimensions. Many of these techniques give rise to computational and storage related challenges. More recently, an increasing number of methods have been proposed to address the detection of functional and effective connectivity among brain regions: coherence [11], phase locking value [12], Granger causality [13, 14] and its multivariate extensions [15], and canonical correlation [16] among others. Finally, the low spatial resolution and nonisotropic covariance structure of measurements requires adequate approaches to their statistical analysis [17].

Despite such daunting diversity and complexity in user needs and methodological approaches, an integrated software solution would be beneficial to the imaging community and provide progressive automation, standardization and reproducibility of some of the most common analysis pathways. The Brainstorm project was initiated more than 10 years ago in collaboration between the University of Southern California in Los Angeles, the Salpêtrière Hospital in Paris, and the Los Alamos National Laboratory in New Mexico. The project has been supported by the National Institutes of Health (NIH) in the USA and the Centre National de la Recherche Scientifique (CNRS) in France. Its objective is to make a broad range of electromagnetic source imaging and visualization techniques accessible to nontechnical users, with an emphasis on the interaction of users with their data at multiple stages of the analysis. The first version of the software was released in 2000 [18], and a full graphical user interface (GUI) was added to Brainstorm 2 in 2004 [19]. As the number of users grew, the interface was completely redesigned and improved, as described in this paper. In response to the high demand from users, many other tools were integrated in Brainstorm to cover the whole processing and visualization pipeline of MEG/EEG recordings, from the importing of data files, from a large selection of formats, to the statistical analysis of source imaging maps. Brainstorm 3 was made available for download in June 2009 and was featured at the 15th Human Brain Mapping Conference in San Francisco. The software is now being improved and updated on a regular basis. There have been about 950 new registered users since June 2009, for a total of 4,000 since the beginning of the project.

Brainstorm is free and open source. Some recent publications using Brainstorm as a main analysis software tool are listed in [20–26]. This paper describes the Brainstorm project and the main features of the software, its connection to other projects, and some future developments that are planned for the next two years. This paper describes the software only; methodological background material is not presented here but can be found in multiple review articles and books, for example [1, 27, 28].

2. Software Overview

Brainstorm is open-source software written almost entirely in Matlab scripts and distributed under the terms of the General Public License (GPL). Its interface is written in Java/Swing embedded in Matlab scripts, using Matlab’s ability to work as a Java console. The use of Matlab and Java makes Brainstorm a fully portable, cross-platform application.

The advantage of scripting languages in a research environment is the simplicity of maintaining, modifying, exchanging, and reusing functions and libraries. Although Python might be a better choice for a new project because of its noncommercial open-source license, Brainstorm was built on a vast amount of pre-existing Matlab code that forms its methodological foundation for data analysis. The Matlab development environment is also a high-performance prototyping tool. One important feature for users who do not own a Matlab license is that a stand-alone version of Brainstorm, generated with the Matlab Compiler, is also available for download for the Windows and Linux operating systems.

All software functions are accessible through the GUI, without any direct interaction with the Matlab environment; hence, Brainstorm can be used without Matlab or programming experience. For more advanced users, it is also possible to run all processes and displays from Matlab scripts, and all data structures manipulated by Brainstorm can be easily accessed from the Matlab command window.

The source code is accessible for developers on an SVN server, and all related Brainstorm files are compressed daily into a zip file that is publicly available from the website, to facilitate download and updates for the end user. Brainstorm also features an automatic update system that checks at each startup if the software should be updated and whether downloading a new version is necessary.

User documentation is mainly organized in detailed online tutorials illustrated with numerous screen captures that guide the user step by step through all software features. The entire website is based on a MoinMoin wiki system [29]; hence, the community of users is able to edit the online documentation. Users can report bugs or ask questions through a vBulletin forum [30], also accessible from the main website.

3. Integrated Interface

Brainstorm is driven by its interface: it is not a library of functions on top of which a GUI has been added to simplify access, but rather a generic environment structured around one unique interface in which specific functions are implemented (Figure 1). From the user's perspective, its organization is contextual rather than linear: the multiple features of the software are not listed in long menus; they are accessible only when needed and are typically suggested within contextual popup menus or specific interface windows. This structure provides faster and easier access to requested functions.


General overview of the Brainstorm interface. Considerable effort was made to make the design intuitive and easy to use. The interface includes: (a) a file database that provides direct access to all data (recordings, surfaces, etc.), (b) contextual menus that are available throughout the interface with a right-button click, (c) a batch tool that launches processes (filtering, averaging, statistical tests, etc.) for all files that were drag-and-dropped from the database, (d) multiple displays of information from the database (right), organized as individual figures and automatically positioned on the screen, and (e) properties of the currently active display.

Data files are saved in the Matlab .mat format and are organized in a structured database with three levels of classification: protocols, subjects, and experimental conditions. User data is always directly accessible from the database explorer, regardless of the actual file organization on the hard drive. This ensures immediate access to all protocol information and allows simultaneous display and comparison of recordings or sources from multiple runs, conditions, or subjects.

4. Supported File Formats

Brainstorm requires three categories of inputs to proceed to MEG/EEG source analysis: the anatomy of the subject, the MEG/EEG recordings, and the 3D locations of the sensors. The anatomy input is usually a T1-weighted MRI of the full head, plus at least two tessellated surfaces representing the cerebral cortex and scalp. Supported MRI formats include Analyze, NIfTI, CTF, Neuromag, BrainVISA, and MGH. Brainstorm does not extract cortical and head surfaces from the MRI, but imports surfaces from external programs. Three popular and freely available surface formats are supported: BrainSuite [31], BrainVISA [32], and FreeSurfer [33].

The native file formats from three main MEG manufacturers are supported: Elekta-Neuromag, CTF, and BTi/4D-Neuroimaging. The generic file format developed at La Salpêtrière Hospital in Paris (LENA) is also supported. Supported EEG formats include: Neuroscan (cnt, eeg, avg), EGI (raw), BrainVision BrainAmp, EEGLab, and Cartool. Users can also import their data using generic ASCII text files.

Sensor locations are always included in MEG files; however, this is not the case for the majority of EEG file formats, so electrode locations need to be imported separately. Supported electrode definition files include: BESA, Polhemus Isotrak, Curry, EETrak, EGI, EMSE, Neuroscan, EEGLab, Cartool, and generic ASCII text files.

Other formats not yet supported by Brainstorm will be available shortly. Our strategy will merge Brainstorm’s functions for the input and output from and to external file formats with the fileio module from the FieldTrip toolbox [34]. This independent library, also written in Matlab code, contains routines to read and write most of the file formats used in the MEG/EEG community and is already supported by the developers of multiple open-source software packages (EEGLab, SPM, and FieldTrip).

5. Data Preprocessing

Brainstorm features an extensive preprocessing pipeline for MEG/EEG data: visual or automatic detection of bad trials and bad channels, event marking and definition, baseline correction, frequency filtering, data resampling, averaging, and the estimation of noise statistics. Other preprocessing operations can be performed easily with other programs (EEGLab [35], FieldTrip, or MNE [36]) and results then imported into Brainstorm as described above.

Expanding preprocessing operations with the most popular techniques for noise reduction and automatic artifact detection is one of our priorities for the next few years of development.

6. Visualization of Sensor Data

Brainstorm provides a rich interface for displaying and interacting with MEG/EEG recordings (Figure 2) including various displays of time series (a)–(c), topographical mapping on 2D or 3D surfaces (d)-(e), generation of animations and series of snapshots of identical viewpoints at sequential time points (f), the selection of channels and time segments, and the manipulation of clusters of sensors.


These visualization tools can be used either on segments of recordings that are fully copied into the Brainstorm database and saved in the Matlab .mat file format, or on typically larger, ongoing recordings that are read directly from the original files and remain stored in native file formats. The interface for reviewing raw recordings (Figure 3) also features fast and intuitive event marking, and the simultaneous display of the corresponding source model (see below).


7. Visualization of Anatomical Surfaces and Volumes from MRI

Analysis can be performed on the individual subject anatomy (this requires the importation of the MRI and surfaces as described above) or using the Brainstorm’s default anatomy (included in Brainstorm’s distribution), which is derived from the MNI/Colin27 brain [37]. A number of options for surface visualization are available, including transparency, smoothing, and downsampling of the tessellated surface. Figure 4 shows some of the possible options to visualize MRI volumes and surfaces.


8. Registration of MEG/EEG with MRI

Analysis in Brainstorm involves integration of data from multiple sources: MEG and/or EEG recordings, structural MRI scans, and cortical and scalp surface tessellations. Their geometrical registration in the same coordinate system is essential to the accuracy of source imaging. Brainstorm aligns all data in a subject coordinate system (SCS), whose definition is based on three fiducial markers: the nasion and the left and right preauricular points. More details regarding the definition of the SCS are available at Brainstorm's website.

MRI-Surfaces
Aligning the MRI data volume with the surface tessellations of the head tissues is straightforward and automatic, as both usually originate from the same volume of data. Nevertheless, Brainstorm features several options to manually align the surface tessellations with the MRI and to perform quality control of this critical step, including definition of the reference points on the scalp surface (Figure 5(a)) and visual verification of the proper alignment of one of the surfaces in the 3D MRI (Figures 5(b), 5(c)).


Registration of MRI with MEG/EEG
The fiducial reference points need to be first defined in the MRI volume (see above and Figure 4) and are then pair matched with the coordinates of the same reference points as measured in the coordinate system of the MEG/EEG during acquisition. Alignment based on three points only is relatively inaccurate and can be advantageously complemented by an automatic refinement procedure when the locations of additional scalp points were acquired during the MEG/EEG session, using a 3D digitizer device. Brainstorm lets the user run this additional alignment, which is based on an iterated closest point algorithm, automatically.
It is common in EEG to run a study without collecting individual anatomical data (MRI volume data or individual electrode positions). Brainstorm has a tool that lets users define and edit the locations of the EEG electrodes at the surface of the individual or generic head (Figure 6). This tool can be used to manually adjust one of the standard EEG montages available in the software, including those already defined for the MNI/Colin27 template anatomy.


Volume and Surface Warping of the Template Anatomy
When the individual MRI data is not available for a subject, the MNI/Colin27 template can be warped to fit a set of head points digitized from the individual anatomy of the subject. This creates an approximation of the individual anatomy based on scalp morphology, as illustrated in Figure 7. Technical details are provided in [38]. This is particularly useful for EEG studies where MRI scans were not acquired and the locations of scalp points are available.


Warping of the MRI volume and corresponding tissue surface envelopes of the Colin27 template brain to fit a set of digitized head points (white dots in upper right corner): initial Colin27 anatomy (left) and warped to the scalp control points of another subject (right). Note how surfaces and MRI volumes are adjusted to the individual data.

9. Forward Modeling

Forward modeling refers to the correspondence between neural currents and MEG/EEG sensor measurements. This step depends on the shape and conductivity of the head and can be computed using a number of methods, ranging from simple spherical head models [1] to overlapping spheres [2] and boundary or finite element methods [39].

Over the past ten years, multiple approaches to forward modeling have been prototyped, implemented, and tested in Brainstorm. The ones featured in the software today offer the best compromise between robustness (adaptability to any specific case) and accuracy (precision of the results); other techniques will be added in the future. Current models include the single-sphere and overlapping-spheres methods for MEG [2] and Berg's three-layer sphere model for EEG [40]. For the spherical models, an interactive interface helps the user refine, after automatic estimation, the parameters of the sphere(s) that best fit the subject's head (Figure 8).


EEG is more sensitive to approximations in the geometry of the head as a volume conductor so that boundary element methods (BEMs) may improve model accuracy. A BEM approach for both MEG and EEG will soon be added to Brainstorm through a contribution from the OpenMEEG project [41], developed by the French National Institute for Research in Computer Science and Control (INRIA).

10. Inverse Modeling

Inverse modeling resolves the cortical sources that gave rise to a specific set of MEG or EEG recordings. In Brainstorm, the main method to estimate source activities is adapted from the depth-weighted minimum L2 norm estimator of cortical current density [42], which can subsequently be normalized using either the statistics of noise (dSPM [43]) or the data covariance (sLORETA [44]), as estimated from experimental recordings. For consistency and in an effort to promote standardization, the implementation of these estimators is similar to the ones available in the MNE software [36]. Two additional inverse models are available in Brainstorm: a linearly-constrained minimum variance (LCMV) beamformer [45] and the MUSIC signal classification technique [4, 46]. We also plan to add least squares multiple dipole fitting [4] to Brainstorm in the near future.
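
For reference, these estimators share a standard mathematical form (generic background, not a statement about Brainstorm's exact implementation). The depth-weighted minimum-norm estimate solves the regularized least-squares problem

    \hat{\mathbf{J}} = \arg\min_{\mathbf{J}} \; \| \mathbf{M} - \mathbf{G}\mathbf{J} \|_{\mathbf{C}^{-1}}^2 + \lambda \| \mathbf{W}\mathbf{J} \|^2

where M denotes the sensor measurements, G the lead-field (gain) matrix produced by the forward model, C the noise covariance, W a depth-weighting matrix, and \lambda the regularization parameter. The closed-form solution is the linear operator \hat{\mathbf{J}} = (\mathbf{W}^\top\mathbf{W})^{-1}\mathbf{G}^\top \big( \mathbf{G}(\mathbf{W}^\top\mathbf{W})^{-1}\mathbf{G}^\top + \lambda\mathbf{C} \big)^{-1}\mathbf{M}; dSPM and sLORETA then rescale its rows by a noise- or data-dependent normalization, respectively.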

The region of support for these inverse methods can be either the entire head volume or restricted to the cortical surface, with or without constraints on source orientations. In the latter case, elementary dipole sources are distributed over the nodes of the surface mesh of the cortical surface. The orientation of the elementary dipoles can be left either unconstrained or constrained normally to the cortical surface. In all cases, the recommended number of dipoles to use for source estimation is about 15,000 (decimation of the original surface meshes can be performed within Brainstorm).

Brainstorm can manage the various types of sensors (EEG, MEG gradiometers, and MEG magnetometers) that may be available within a given dataset. When multiple sensor types are processed together in a joint source model, the empirical noise covariance matrix is used to estimate the weight of each individual sensor in the global reconstruction. The noise covariance statistics are typically obtained from an empty-room recording, which captures the typical instrumental and environmental fluctuations.

11. Source Visualization and Analysis

Brainstorm provides a large set of tools to display, visualize, and explore the spatio-temporal features of the estimated source maps (Figure 9), both on the cortical surface (a) and in the full head volume (b). The sources estimated on the cortical surface can be reprojected and displayed in the original volume of the MRI data (c) and on another mesh of the cortex at a higher or lower resolution. Reconstructed current values can be smoothed in space or in time before performing group analysis.


A variety of options for the visualization of estimated sources: (a) 3D rendering of the cortical surface, with control of surface smoothing; (b) conventional orthogonal views of the MRI volume with overlay of the MEG/EEG source density; (c) 3D orthogonal planes of the MRI volumes.

A dedicated interface lets the user define and analyze the time courses of specific regions of interest, named scouts in Brainstorm (Figure 10). The Brainstorm distribution includes two predefined segmentations of the default anatomy (MNI Colin27 [37]) into regions of interest, based on the anatomical atlases of Tzourio-Mazoyer et al. [47].


Selection of cortical regions of interest in Brainstorm and extraction of a representative time course of the elementary sources within.

The rich contextual popup menus available in all visualization windows suggest predefined selections of views for creating a large variety of plots. The resulting views can be saved as images, movies, or contact sheets (Figure 9). Note that it is also possible to import dipoles estimated with the FDA-approved software Xfit from Elekta-Neuromag (Figure 11).


Temporal evolution of elementary dipole sources estimated with the external Xfit software. Data from a right-temporal epileptic spike. This component was implemented in collaboration with Elizabeth Bock, MEG Program, Medical College of Wisconsin.

12. Time-Frequency Analysis of Sensor and Source Signals

Brainstorm features a dedicated user interface for performing the time-frequency decomposition of MEG/EEG sensor and source time series using Morlet wavelets [10]. Morlet wavelets, scaled versions of complex-valued sinusoids weighted by a Gaussian kernel, can efficiently capture bursts of oscillatory brain activity; for this reason, they are one of the most popular tools for time-frequency decompositions of electrophysiological data [26, 48]. The temporal and spectral resolution of the decomposition can be adjusted by the user, depending on the experiment and the specific requirements of the data analysis to be performed.
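
For reference, a standard definition (normalization constant omitted; not necessarily Brainstorm's exact parameterization): a Morlet wavelet centered at frequency f can be written as

    \psi_f(t) = \exp\!\left( -\frac{t^2}{2\sigma_t^2} \right) \exp(2\pi i f t), \qquad \sigma_t = \frac{n_c}{2\pi f}

where the number of cycles n_c sets the trade-off between temporal resolution (small n_c) and spectral resolution (large n_c).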

Time-frequency decompositions tend to increase the volume of data dramatically, as it is decomposed in the space, time, and frequency dimensions. Brainstorm has been efficiently designed to either store the transformed data or compute it on the fly.

Data can be analyzed as instantaneous measurements, or grouped into temporal and spectral bands of interest such as alpha (8–12 Hz) [26, 49], theta (5–7 Hz) [50–53], and so forth. Even though this reduces the resolution of the decomposition, it may benefit the analysis in multiple ways: reduced data storage requirements, improved signal-to-noise ratio, and a better control over the issue of multiple comparisons by reducing the number of concurrent hypothesis being tested.

Figure 12 illustrates some of the displays available to explore time-frequency decompositions: time-frequency maps of the times series from one sensor (a)-(b), one source (c) and one or more scouts (d), time courses of the power of the sensors for one frequency band (e), 2D/3D mappings (f), and cortical maps (g)-(h) of the power for one time and one frequency band.


(a)–(h) A variety of display options to visualize time-frequency decompositions using Brainstorm (see text for details).

13. Graphical Batching Interface

The main window includes a graphical batching interface (Figure 13) that directly benefits from the database display: files are organized as a tree of subjects and conditions, and simple drag-and-drop operations readily select files for subsequent batch processing. Most of the Brainstorm features are available through this interface, including preprocessing of the recordings, averaging, estimation of the sources, time-frequency decompositions, and computing statistics. A full analysis pipeline can be created in a few minutes, saved in the user’s preferences and reloaded in one click, executed directly or exported as a Matlab script.


Graphical interface of the batching tool. (a) selection of the input files by drag-and-drop. (b) creation of an analysis pipeline. (c) example of Matlab script generated automatically.

The available processes are organized in a plug-in structure. Any Matlab script that is added to the plug-in folder and has the right format will be automatically detected and made available in the GUI. This mechanism makes the contribution from other developers to Brainstorm very easy.

14. High-Level Scripting

For advanced users and visualization purposes, Brainstorm can be used as a high-level scripting environment. All Brainstorm operations have been designed to interact with the graphical interface and the database; therefore, they have very simple inputs: mouse clicks and keyboard presses. As a result, the interface can be manipulated through Matlab scripts, and each mouse click can be translated into a line of script. As when working through the graphical interface, all contextual information is gathered from the interface and the database, so that most functions can be called with a limited number of parameters and, for example, there is no need to keep track of file names. Consequently, scripting with Brainstorm is intuitive and easy to use. Figure 14 shows an example of a Matlab script using Brainstorm.


15. Solutions for Performing Group Analyses with MEG/EEG Data and Source Models

Brainstorm’s “Process2” tab allows the comparison of two data samples. This corresponds to a single factor 2-level analysis and supported tests include simple difference, paired/unpaired Student t-tests of equal/unequal variance, and their nonparametric permutation alternatives [17]. The two groups can be assembled from any type of files, for example, two conditions within a subject, two conditions across subjects or two subjects for the same conditions, and so forth. These operations are generic in Brainstorm and can be applied to any type of data in the database: MEG/EEG recordings, source maps, and time-frequency decompositions. Furthermore, analysis of variance (ANOVA) tests are also supported up to 4 factors. Figure 15 displays the use of a Student t-test to compare two conditions, “GM” and “GMM,” across 16 subjects.


Student t-test between two conditions. (a) selection of the files. (b) selection of the test. (c) options tab for the visualization of statistical maps, including the selection of the thresholding method.

We specifically address here how to perform multisubject data analysis using Brainstorm. In multisubject studies, measurement variance has two sources: within-subject variance and between-subject variance. Pooling all trials from every subject for a single comparison is fixed-effects analysis [54] and does not account for these multiple sources of variance. Random-effects analysis [54, 55], which properly takes all sources of variance into account, is available in Brainstorm in its simplest and most commonly used form, the summary statistic approach [56, 57]. In this approach, analysis occurs at two levels: at the first level, trials from each subject are used to calculate statistics of interest separately for each subject; at the second level, the different subjects are combined into an overall statistic.

Consider the example of investigating experimental effects, where prestimulus data are compared against post-stimulus data. The first level analysis averages all trials from each subject to yield prestimulus and post-stimulus responses. The second-level analysis can be a paired t-test between the resulting N prestimulus maps versus the N post-stimulus maps, where N is the number of subjects. Brainstorm processes and statistics include averaging trials and paired t-tests, making such analysis possible. Also, the procedure described above assumes equal within-subject variance, but the subjects can be weighted accordingly if this is not the case.

Brainstorm also supports statistical thresholding of the resulting activation maps, which takes into account the multiple hypotheses testing problem. The available methods include Bonferroni, false discovery rate [58], which controls the expected portion of false positives among the rejected hypotheses, and familywise error rate [59], which controls the probability of at least one false positive under the null hypothesis of no experimental effect. The latter is controlled with a permutation test and the maximum statistic approach, as detailed in [17].

In order to compare multiple subjects at the source level, an intermediate step is required if the sources were originally mapped on the individual subject anatomies. The sources estimated on individual brains are first projected on the cortical surface of the MNI-Colin27 brain. In the current implementation, the surface-to-surface registration is performed hemisphere by hemisphere using the following procedure: (1) alignment along the anterior commissure/posterior commissure axis, (2) spatial smoothing to preserve only the main features of the surfaces onto which the registration will be performed, (3) deformation of the individual surface to match the MNI surface with an iterative closest point algorithm (ICP) [60], and (4) interpolation of the source amplitudes using Shepard’s method [61]. Figure 16 shows the sources on the individual anatomy (left), and its reprojection on the MNI brain (right). This simple approach will eventually be replaced by cortical surface registration and surface-constrained volume registration methods developed at the University of Southern California as described in [62]. We will also add functionality to use the common coordinate system used in FreeSurfer for intersubject surface registration.


(a), (b) Cortical activations 46 ms after the electric stimulation of the left median nerve on the subject’s brain (a) and their projection in the MNI brain (b).

16. Future Developments

Brainstorm is a project under constant development, and the current version provides an environment where new features are readily implemented and adapted to the interface. There are several recurrent requests from users for new features, as well as plans for future developments. Examples of forthcoming developments in the next two years include:

– expanding the preprocessing operations with the most popular techniques for noise reduction and automatic artifact detection,

– integration of methods for functional connectivity analysis and multivariate statistical analysis [16, 63],

– expanding forward and inverse calculations to include BEM and multiple dipole fitting methods,

– interface for simulating MEG/EEG recordings using simulated sources and realistic anatomy,

– segmentation of MEG/EEG recordings in functional micro-states, using optical flow models [64].

17. Brainstorm in the Software Development Landscape

Several commercial solutions for visualizing and processing MEG/EEG data are available. Most are developed for specific acquisition systems, often by the manufacturers of those systems. They are typically unsuitable for research for several reasons: they are mainly driven by the requirements of clinical environments and FDA and CE certifications; their all-graphical interfaces seldom provide information about the underlying data analysis; file formats are sometimes proprietary and undocumented; source code and descriptions of the algorithms are not accessible to the user; and they are expensive. The research community needs solutions that are completely open, with the possibility of directly manipulating the code, data, and parameters.

As a result, many laboratories have developed their own tools for MEG and EEG data analysis. These tools are often not shared, however, either because of a lack of interest or because of the effort required to support the software, develop documentation, and create and maintain a distribution website. Developing individual tools is also very limiting, given the limited human resources assigned to software development in most research groups and the breadth of expertise required (electrophysiology, electromagnetic modeling, signal processing, statistics, classification, software optimization, real-time processing, human-machine interface ergonomics, etc.).

In the past two decades, many projects have been developed to offer open and free alternatives to the wide range of commercial solutions. Common among these projects is the support by a large community of developers around the world, who produce free and reusable source code. For this purpose, the free software community equipped itself with tools to facilitate collaborative work, such as version managers, forums, wikis, and discussion lists. This approach to collaborative software development has not only reached a high level of maturity, but also proved its efficiency. The best example is probably the Linux operating system, whose stability matches or exceeds that of commercially produced operating systems.

In the realm of functional brain mapping, open-source tools such as SPM [65] and EEGLab [35] have been broadly adopted in research labs throughout the world. Providing open access to source code, combined with a willingness to accept additions and modifications from other sites, clearly appeals both to users in clinical and neuroscientific research and to others involved in methodology development. A variety of public licenses also allows developers to choose whether all or part of the code remains in the public domain. Importantly for software developed in academic and nonprofit labs, which are dependent on externally funded research support, recent experience indicates that open-source distribution is valued by the research community and that credit for this distribution is attributed to the original developers.

Free software packages with features similar to Brainstorm’s (general-purpose software for MEG/EEG) are EEGLab, FieldTrip, and MNE. The first two are written in the Matlab environment as noncompiled scripts and are supported by large communities of users connected through active forums and mailing lists. EEGLab offers a simple but functional interface, and its target application is the preprocessing of recordings and ICA analysis. FieldTrip is a rich and powerful toolbox that offers the widest range of functionalities, but since it has no graphical interface, its usage requires good Matlab programming skills. MNE is also organized as a set of independent, easily scriptable functions, mostly oriented towards the preprocessing of recordings and source estimation using the minimum norm technique, but it is written in C++ and compiled for Linux and MacOSX platforms.

Brainstorm, in contrast, is an integrated application rather than a toolbox. At present it offers fewer features than FieldTrip, but its intuitive interface, powerful visualization tools, and database structure allow the user to work at a higher level. It is possible to complete in a few minutes, within a few mouse clicks, what would otherwise take hours: there is no need to write any scripts or to think about where data files are stored on the hard drive; the data is directly accessible, and a simple mouse click is sufficient to open a wide variety of display windows. This lets researchers concentrate on exploring their data. When visual exploration is complete and group analysis needs to be performed, Brainstorm offers a very high-level scripting system based on the interface and the database. The resulting code is easy to understand and requires few arguments: all the contextual information is gathered automatically from the database when needed, in contrast to FieldTrip, for example, where this information has to be explicitly passed as arguments to each function.

To conclude, Brainstorm now represents a potentially highly productive option for researchers using MEG or EEG; however, it is a work in progress and some key features are still missing. In the spirit of other open-source developments, we will, to the extent possible, reuse functions developed by other groups, which we will then jointly maintain. Similarly, other developers are welcome to use code from Brainstorm in their own software.

Acknowledgment

This software was generated primarily with support from the National Institutes of Health under Grants nos. R01-EB002010, R01-EB009048, and R01-EB000473. Primary support also includes permanent site support from the Centre National de la Recherche Scientifique (CNRS, France) for the Cognitive Neuroscience and Brain Imaging Laboratory (La Salpêtrière Hospital and Pierre and Marie Curie University, Paris, France). Additional support was provided by two grants from the French National Research Agency (ANR) to the Cognitive Neuroscience Unit (Inserm/CEA, Neurospin, France) and to the ViMAGINE project (ANR-08-BLAN-0250), and by the Epilepsy Center in the Cleveland Clinic Neurological Institute. The authors are grateful to all the people who contributed to the conception, the development, or the validation of specific Brainstorm functionalities. In alphabetical order: Charles Aissani, Syed Ashrafulla, Elizabeth Bock, Lucie Charles, Felix Darvas, Ghislaine Dehaene-Lambertz, Claude Delpuech, Belma Dogdas, Antoine Ducorps, Guillaume Dumas, John Ermer, Line Garnero, Alexandre Gramfort, Matti Hämäläinen, Louis Hovasse, Esen Kucukaltun-Yildirim, Etienne Labyt, Karim N'Diaye, Alexei Ossadtchi, Rey Ramirez, Denis Schwartz, Darren Weber, and Lydia Yahia-Cherif. The software, extensive documentation, tutorial data, user forum, and reference publications are available at http://neuroimage.usc.edu/brainstorm.

References

  1. S. Baillet, J. C. Mosher, and R. M. Leahy, “Electromagnetic brain mapping,” IEEE Signal Processing Magazine, vol. 18, no. 6, pp. 14–30, 2001.
  2. M. X. Huang, J. C. Mosher, and R. M. Leahy, “A sensor-weighted overlapping-sphere head model and exhaustive head model comparison for MEG,” Physics in Medicine and Biology, vol. 44, no. 2, pp. 423–440, 1999.
  3. F. Darvas, J. J. Ermer, J. C. Mosher, and R. M. Leahy, “Generic head models for atlas-based EEG source analysis,” Human Brain Mapping, vol. 27, no. 2, pp. 129–143, 2006.
  4. J. C. Mosher, P. S. Lewis, and R. M. Leahy, “Multiple dipole modeling and localization from spatio-temporal MEG data,” IEEE Transactions on Biomedical Engineering, vol. 39, no. 5, pp. 541–557, 1992.
  5. J. W. Phillips, R. M. Leahy, and J. C. Mosher, “MEG-based imaging of focal neuronal current sources,” IEEE Transactions on Medical Imaging, vol. 16, no. 3, pp. 338–348, 1997.
  6. S. Baillet and L. Garnero, “A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem,” IEEE Transactions on Biomedical Engineering, vol. 44, no. 5, pp. 374–385, 1997.
  7. D. M. Schmidt, J. S. George, and C. C. Wood, “Bayesian inference applied to the electromagnetic inverse problem,” Human Brain Mapping, vol. 7, no. 3, pp. 195–212, 1999.
  8. G. L. Barkley and C. Baumgartner, “MEG and EEG in epilepsy,” Journal of Clinical Neurophysiology, vol. 20, no. 3, pp. 163–178, 2003.
  9. A. Arieli, A. Sterkin, A. Grinvald, and A. Aertsen, “Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses,” Science, vol. 273, no. 5283, pp. 1868–1871, 1996.
  10. C. Tallon-Baudry and O. Bertrand, “Oscillatory gamma activity in humans and its role in object representation,” Trends in Cognitive Sciences, vol. 3, no. 4, pp. 151–162, 1999.
  11. G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842–1857, 1999.
  12. P. Tass, M. G. Rosenblum, J. Weule et al., “Detection of n:m phase locking from noisy data: application to magnetoencephalography,” Physical Review Letters, vol. 81, no. 15, pp. 3291–3294, 1998.
  13. C. W. J. Granger, B. N. Huang, and C. W. Yang, “A bivariate causality between stock prices and exchange rates: evidence from recent Asian flu,” Quarterly Review of Economics and Finance, vol. 40, no. 3, pp. 337–354, 2000.
  14. W. Hesse, E. Möller, M. Arnold, and B. Schack, “The use of time-variant EEG Granger causality for inspecting directed interdependencies of neural assemblies,” Journal of Neuroscience Methods, vol. 124, no. 1, pp. 27–44, 2003.
  15. H. B. Hui, D. Pantazis, S. L. Bressler, and R. M. Leahy, “Identifying true cortical interactions in MEG using the nulling beamformer,” NeuroImage, vol. 49, no. 4, pp. 3161–3174, 2010.
  16. J. L. P. Soto, D. Pantazis, K. Jerbi, S. Baillet, and R. M. Leahy, “Canonical correlation analysis applied to functional connectivity in MEG,” in Proceedings of the 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '10), pp. 113–116, April 2010.
  17. D. Pantazis, T. E. Nichols, S. Baillet, and R. M. Leahy, “A comparison of random field theory and permutation methods for the statistical analysis of MEG data,” NeuroImage, vol. 25, no. 2, pp. 383–394, 2005.
  18. S. Baillet, J. C. Mosher, and R. M. Leahy, “BrainStorm beta release: a Matlab software package for MEG signal processing and source localization and visualization,” NeuroImage, vol. 11, no. 5, p. S915, 2000.
  19. J. C. Mosher, S. Baillet, F. Darvas et al., “Electromagnetic brain imaging using brainstorm,” vol. 7, no. 2, pp. 189–190.
  20. A. Tzelepi, N. Laskaris, A. Amditis, and Z. Kapoula, “Cortical activity preceding vertical saccades: a MEG study,” Brain Research, vol. 1321, pp. 105–116, 2010.
  21. F. Amor, S. Baillet, V. Navarro, C. Adam, J. Martinerie, and M. Le Van Quyen, “Cortical local and long-range synchronization interplay in human absence seizure initiation,” NeuroImage, vol. 45, no. 3, pp. 950–962, 2009.
  22. T. A. Bekinschtein, S. Dehaene, B. Rohaut, F. Tadel, L. Cohen, and L. Naccache, “Neural signature of the conscious processing of auditory regularities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 106, no. 5, pp. 1672–1677, 2009.
  23. F. Carota, A. Posada, S. Harquel, C. Delpuech, O. Bertrand, and A. Sirigu, “Neural dynamics of the intention to speak,” Cerebral Cortex, vol. 20, no. 8, pp. 1891–1897, 2010.
  24. M. Chaumon, D. Hasboun, M. Baulac, C. Adam, and C. Tallon-Baudry, “Unconscious contextual memory affects early responses in the anterior temporal lobe,” Brain Research, vol. 1285, pp. 77–87, 2009.
  25. S. Moratti and A. Keil, “Not what you expect: experience but not expectancy predicts conditioned responses in human visual and supplementary cortex,” Cerebral Cortex, vol. 19, no. 12, pp. 2803–2809, 2009.
  26. D. Pantazis, G. V. Simpson, D. L. Weber, C. L. Dale, T. E. Nichols, and R. M. Leahy, “A novel ANCOVA design for analysis of MEG data with application to a visual attention study,” NeuroImage, vol. 44, no. 1, pp. 164–174, 2009.
  27. P. Hansen, M. Kringelbach, and R. Salmelin, Eds., MEG: An Introduction to Methods, Oxford University Press, Oxford, UK, 2010.
  28. R. Salmelin and S. Baillet, “Electromagnetic brain imaging,” Human Brain Mapping, vol. 30, no. 6, pp. 1753–1757, 2009.
  29. The MoinMoin Wiki Engine, http://moinmo.in/.
  30. vBulletin, http://www.vbulletin.com/.
  31. D. W. Shattuck and R. M. Leahy, “BrainSuite: an automated cortical surface identification tool,” Medical Image Analysis, vol. 8, no. 2, pp. 129–142, 2002.
  32. Y. Cointepas, J.-F. Mangin, L. Garnero, J.-B. Poline, and H. Benali, “BrainVISA: software platform for visualization and analysis of multi-modality brain data,” NeuroImage, vol. 13, no. 6, p. S98, 2001.
  33. FreeSurfer, http://surfer.nmr.mgh.harvard.edu/.
  34. FieldTrip, Donders Institute for Brain, Cognition and Behaviour, http://fieldtrip.fcdonders.nl/.
  35. A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.
  36. M. S. Hämäläinen, MNE software, http://www.nmr.mgh.harvard.edu/martinos/userInfo/data/sofMNE.php.
  37. D. L. Collins, A. P. Zijdenbos, V. Kollokian et al., “Design and construction of a realistic digital brain phantom,” IEEE Transactions on Medical Imaging, vol. 17, no. 3, pp. 463–468, 1998.
  38. R. M. Leahy, J. C. Mosher, M. E. Spencer, M. X. Huang, and J. D. Lewine, “A study of dipole localization accuracy for MEG and EEG using a human skull phantom,” Electroencephalography and Clinical Neurophysiology, vol. 107, no. 2, pp. 159–173, 1998.
  39. F. Darvas, D. Pantazis, E. Kucukaltun-Yildirim, and R. M. Leahy, “Mapping human brain function with MEG and EEG: methods and validation,” NeuroImage, vol. 23, no. 1, pp. S289–S299, 2004.
  40. P. Berg and M. Scherg, “A fast method for forward computation of multiple-shell spherical head models,” Electroencephalography and Clinical Neurophysiology, vol. 90, no. 1, pp. 58–64, 1994.
  41. A. Gramfort, T. Papadopoulo, E. Olivi, and M. Clerc, “OpenMEEG: opensource software for quasistatic bioelectromagnetics,” BioMedical Engineering Online, vol. 9, article 45, 2010.
  42. M. S. Hämäläinen and R. J. Ilmoniemi, “Interpreting magnetic fields of the brain: minimum norm estimates,” Medical and Biological Engineering and Computing, vol. 32, no. 1, pp. 35–42, 1994.
  43. A. M. Dale, A. K. Liu, B. R. Fischl et al., “Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity,” Neuron, vol. 26, no. 1, pp. 55–67, 2000.
  44. R. D. Pascual-Marqui, “Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details,” Methods and Findings in Experimental and Clinical Pharmacology, vol. 24, suppl. D, pp. 5–12, 2002.
  45. B. D. Van Veen and K. M. Buckley, “Beamforming: a versatile approach to spatial filtering,” IEEE ASSP Magazine, vol. 5, no. 2, pp. 4–24, 1988.
  46. R. O. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, 1986 (reprint of the original 1979 paper from the RADC Spectrum Estimation Workshop).
  47. N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou et al., “Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain,” NeuroImage, vol. 15, no. 1, pp. 273–289, 2002.
  48. C. Tallon-Baudry, O. Bertrand, C. Wienbruch, B. Ross, and C. Pantev, “Combined EEG and MEG recordings of visual 40 Hz responses to illusory triangles in human,” NeuroReport, vol. 8, no. 5, pp. 1103–1107, 1997.
  49. M. S. Worden, J. J. Foxe, N. Wang, and G. V. Simpson, “Anticipatory biasing of visuospatial attention indexed by retinotopically specific alpha-band electroencephalography increases over occipital cortex,” The Journal of Neuroscience, vol. 20, no. 6, p. RC63, 2000.
  50. W. Klimesch, M. Doppelmayr, T. Pachinger, and B. Ripper, “Brain oscillations and human memory: EEG correlates in the upper alpha and theta band,” Neuroscience Letters, vol. 238, no. 1-2, pp. 9–12, 1997.
  51. O. Jensen and C. D. Tesche, “Frontal theta activity in humans increases with memory load in a working memory task,” European Journal of Neuroscience, vol. 15, no. 8, pp. 1395–1399, 2002.
  52. W. Klimesch, M. Doppelmayr, H. Russegger, T. Pachinger, and J. Schwaiger, “Induced alpha band power changes in the human EEG and attention,” Neuroscience Letters, vol. 244, no. 2, pp. 73–76, 1998.
  53. C. S. Herrmann, M. Grigutsch, and N. A. Busch, “EEG oscillations and wavelet analysis,” in Event-Related Potentials: A Methods Handbook, pp. 229–259, MIT Press, Cambridge, Mass, USA, 2005.
  54. C. E. McCulloch and S. R. Searle, Generalized, Linear, and Mixed Models, John Wiley & Sons, New York, NY, USA, 2001.
  55. W. D. Penny, A. P. Holmes, and K. J. Friston, “Random effects analysis,” Human Brain Function, vol. 2, pp. 843–850, 2004.
  56. D. Pantazis and R. M. Leahy, “Statistical inference in MEG distributed source imaging,” in MEG: An Introduction to Methods, chapter 10, Oxford University Press, Oxford, UK, 2010.
  57. J. A. Mumford and T. Nichols, “Modeling and inference of multisubject fMRI data,” IEEE Engineering in Medicine and Biology Magazine, vol. 25, no. 2, pp. 42–51, 2006.
  58. Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: a practical and powerful approach to multiple testing,” Journal of the Royal Statistical Society Series B, vol. 57, no. 1, pp. 289–300, 1995.
  59. T. Nichols and S. Hayasaka, “Controlling the familywise error rate in functional neuroimaging: a comparative review,” Statistical Methods in Medical Research, vol. 12, no. 5, pp. 419–446, 2003.
  60. D. J. Kroon, “Iterative Closest Point using finite difference optimization to register 3D point clouds affine,” http://www.mathworks.com/matlabcentral/fileexchange/24301-finite-iterative-closest-point.
  61. D. Shepard, “Two-dimensional interpolation function for irregularly-spaced data,” in Proceedings of the ACM National Conference, pp. 517–524, 1968.
  62. A. A. Joshi, D. W. Shattuck, P. M. Thompson, and R. M. Leahy, “Surface-constrained volumetric brain registration using harmonic mappings,” IEEE Transactions on Medical Imaging, vol. 26, no. 12, pp. 1657–1668, 2007.
  63. J. L. P. Soto, D. Pantazis, K. Jerbi, J. P. Lachaux, L. Garnero, and R. M. Leahy, “Detection of event-related modulations of oscillatory brain activity with multivariate statistical analysis of MEG data,” Human Brain Mapping, vol. 30, no. 6, pp. 1922–1934, 2009.
  64. J. Lefèvre and S. Baillet, “Optical flow approaches to the identification of brain dynamics,” Human Brain Mapping, vol. 30, no. 6, pp. 1887–1897, 2009.
  65. SPM, http://www.fil.ion.ucl.ac.uk/spm/.

Copyright

Copyright © 2011 François Tadel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Top Video Interviewing Tools

With a recent survey finding that 63% of HR managers have conducted an online interview, video interviewing software is becoming an important part of the recruiting tech stack.

One vendor, which has received an eye-opening $93 million in funding to date, differentiates itself by incorporating Industrial-Organizational Psychology into its pre-hire assessments and interview analyses. Another, specializing in video interviewing software, also offers other recruitment solutions such as digital structured interviews, automated reference checking, and more; major clients include the United Nations, Samsung, Canon, and the Atlanta Hawks. A third, a Chicago-based company, offers solutions such as interview building, interview scheduling, and interview prep, and serves over 900 clients ranging from emerging businesses to colleges and universities as well as large and midsize companies.

Statistical Package for Social Science (SPSS)

SPSS is the most popular quantitative analysis software program used by social scientists.

Made and sold by IBM, it is comprehensive, flexible, and can be used with almost any type of data file; it is especially useful for analyzing large-scale survey data.

It can be used to generate tabulated reports, charts, and plots of distributions and trends, as well as descriptive statistics such as means, medians, modes, and frequencies, in addition to more complex statistical analyses such as regression models.

SPSS provides a user interface that makes it easy and intuitive for users of all levels. With menus and dialogue boxes, you can perform analyses without having to write command syntax, as many other programs require.

It is also simple and easy to enter and edit data directly into the program.

There are a few drawbacks, however, which may make it a poor fit for some researchers. For example, there is a limit on the number of cases you can analyze, and it is difficult to account for weights, strata, and group effects in SPSS.


Top Research Tools and Software for Academics and Research Students

If you are conducting research, it is very important that you have appropriate methods and tools to carry out your work. If you are a non-native English speaker, you need a research tool to help you with your written language. If your research involves data analysis, you need a good statistical research tool. It is also important to keep tabs on what other people in your research area are doing, so you need tools such as Google Scholar and ResearchGate to collaborate with your peers. You also need good plagiarism-checking software to avoid academic misconduct. Finally, you need research project management software to stay on top of deadlines. In this blog, we review some useful tools that researchers can use to be more productive.

1. REF-N-WRITE Academic Writing Tool

Ref-N-Write is a fantastic research tool for beginner writers and non-native English speakers. It is a Microsoft Word add-in that allows users to import research papers into MS Word and then search those documents while writing a research paper or academic essay. In essence, the tool is similar to the Google search engine; the difference is that instead of searching the internet, you are searching research papers and academic documents stored on your computer. REF-N-WRITE functions within MS Word, and the search results are displayed in a panel that pops up from the bottom. You can expand the search results and jump to the exact location in the source document in a few clicks. This research tool is fantastic for looking up writing ideas in related research papers or in documents from your colleagues. REF-N-WRITE also comes with a database of academic and scientific phrases, which you can use to polish your writing by substituting colloquial terms and informal statements with academically acceptable words and phrases. REF-N-WRITE also features a text-to-speech option that helps you pick up grammatical errors and sentence-structure issues.


2. Free Online Statistical Testing Tools

One of the most important requirements while writing up your research is the use of appropriate statistical methods and analyses to back up your claims. Whether you are doing quantitative or qualitative research, statistical analysis will be an indispensable part of your workflow. There are plenty of research tools available that allow you to perform a wide variety of statistical analyses. Most of the time, however, you will find yourself performing basic calculations such as the mean, standard deviation, confidence intervals, and standard error. You will also often need a statistical test of the significance of the difference between two groups or cohorts, together with its p-value. Some of the widely used statistical tests for this purpose include the t-test, F-test, chi-square test, Pearson correlation coefficient, and ANOVA. The following is a list of free, popular statistical research tools available online; these tools allow you to copy your data directly from your spreadsheet and perform the required statistical analysis.
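If you prefer to stay offline, all of the tests mentioned above are also built into base R; a minimal sketch with made-up data:

    # Two small cohorts of measurements (illustrative values).
    a <- c(5.1, 4.9, 5.6, 5.8, 5.2)   # cohort A
    b <- c(4.2, 4.5, 4.1, 4.8, 4.4)   # cohort B
    mean(a); sd(a)                    # basic descriptive statistics
    t.test(a, b)                      # two-sample t-test, reports the p-value
    var.test(a, b)                    # F-test for equality of variances
    cor.test(a, b)                    # Pearson correlation coefficient
    chisq.test(matrix(c(20, 30, 25, 25), nrow = 2))  # chi-square test on a 2x2 table
    g <- factor(rep(c("A", "B"), each = 5))          # group labels
    summary(aov(c(a, b) ~ g))                        # one-way ANOVA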


3. Microsoft Excel

One of the most widely used tools for research is Microsoft Excel. MS Excel has plenty of features that come in handy in a research project, and it is a must-have research tool if your study involves a lot of quantitative analysis. Excel offers a wide range of statistical functions, such as AVERAGE, MIN, MAX, and SUM, that you can apply to cells in a few clicks. You can visualize your data using a wide variety of chart types, for example bar plots and scatter plots, and you can use pivot tables to organize and summarize your data easily. For more complex statistical analysis, you can use the Data Analysis ToolPak Excel add-in, which offers a wide variety of tools such as descriptive statistics, histograms, F-tests, random number generation, and Fourier analysis.


4. Google Scholar

Google Scholar is a free online research tool offered by Google. It allows users to search for academic literature, scientific articles, journals, white papers, and patents across the web. It is an excellent tool for research: it not only searches well-known databases but also looks for articles in university repositories, so your chances of finding the full-text PDF of the research article you are after are very high. You can set up keyword alerts so that Google Scholar notifies you when there is a new article in your field or from your co-authors. You can manage multiple libraries of papers; you can label papers and articles, and Google Scholar will organize them for you. Google Scholar displays vital information about each article, such as its citation count, versions, and other articles citing it, and it also alerts you if somebody cites your paper. You can download citations in a wide variety of formats (MLA, APA, Chicago, Harvard, Vancouver) and easily export them to EndNote and other bibliography managers. On the whole, Google Scholar is an indispensable tool for researchers.


5. ResearchGate

ResearchGate is a social networking site for people doing research. The site has more than 11 million members, including scientists, academics, Ph.D. students, and researchers. Users can create an account using a valid institutional email address; once registered, they can create a profile, upload pictures, list publications, and upload full-text papers. ResearchGate is a perfect research tool for researchers and academics looking for collaborations. You can follow updates from colleagues or peers with similar interests; you will be notified if somebody reads or cites your paper, and you will know when the people you follow publish new research. You can message other members and request the full text of their listed publications. ResearchGate also computes an RG score based on your profile and publications; this is different from the h-index computed by Google Scholar or the citation scores given by journals. On the whole, ResearchGate is an excellent research tool if you want to keep tabs on your colleagues' research and collaborate across institutions.


6. Plagiarism detection software tools

Plagiarism is seen as academic misconduct: it is not taken lightly by academic and research institutions and is penalized severely. Plagiarism occurs when you copy a chunk of text from a document written by someone else without giving credit to the author; this is seen as copying and taking credit for somebody else's work. Even if you paraphrase the text, it can still count as plagiarism. One common form is self-plagiarism, the reuse of one's own previous work in another context without citing that it was used previously. Once you publish your work, the publisher usually holds the copyright for the text, so you need either to get permission from the publisher to reuse it or to cite the source. There are plenty of plagiarism detection programs and online checking tools that you can use to check how much of your text overlaps with previously published material, so that you can fix these problems before submitting your academic essay or research paper. Some tools for checking plagiarism are listed below.


7. Project management tools

It is easy for a research project to get out of hand when you are multitasking and dealing with multiple deadlines. It is good practice to choose a project management tool to stay on top of your research project. These tools help you minimize the time you spend managing the project so that you can concentrate on the research itself. Find a tool that lets you lay out what is to be done, by whom, and by when. It also helps if you can visualize your tasks and the timeline for execution using simple diagrams such as a Gantt chart. There are plenty of research project management tools available; simply pick the one that suits your research project. Here are some popular research management tools used in the academic community.


Predict iQ Section

Predict iQ analyzes your respondents' survey responses and embedded data in order to predict when a customer will eventually churn (abandon the company). Once a churn prediction model is configured in Predict iQ, newly collected responses are evaluated for how likely the respondent is to churn, allowing you to be proactive about your company's customer retention.
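Under the hood this is a supervised classification problem; a toy sketch of the general idea in R, using logistic regression with hypothetical predictor columns (not Qualtrics' actual model):

    # Historical respondents with a known churn outcome (illustrative data).
    df <- data.frame(
      satisfaction  = c(7, 3, 9, 2, 5, 8, 4, 6),
      tenure_months = c(24, 3, 36, 2, 12, 30, 1, 18),
      churned       = c(0, 1, 0, 1, 0, 0, 1, 1)
    )
    fit <- glm(churned ~ satisfaction + tenure_months, data = df, family = binomial)
    # Score a newly collected response by its predicted churn probability.
    predict(fit, newdata = data.frame(satisfaction = 4, tenure_months = 6),
            type = "response")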

For in-depth information about using Predict iQ, check out our dedicated support page!


Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm

Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is also a powerful tool for attenuating stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach to component identification, the semi-automatic CORRMAP algorithm is applied to identify components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing at the single-subject level. The presented approach assumes that no individual anatomy is available and therefore uses the default anatomy ICBM152, as implemented in Brainstorm, for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model, based on the symmetric Boundary Element Method (BEM). We then apply dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group-level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.

Keywords: Brainstorm EEG EEGLAB auditory N100 auditory processing source localization.

Figures

– Schematic illustration of the processing pipeline; the dashed line indicates alternative processing.
– ICA-based artifact attenuation: original EEG time course, shown for a subset of channels.
– Sensor-level analysis: grand average (red line) of all subjects.
– Grand average source-level activity for the N100 component.




What tools are available for EEG analysis on the R platform?

A collection of R packages demonstrating the use of the R Project software for basic analysis tasks.

The packages are intended to be used as part of R, so the basic requirement is to have R installed beforehand. R is available for download from any of the CRAN mirrors; detailed installation instructions and support are available here. For a more user-friendly experience with R, we recommend RStudio, a free, open-source IDE for R.

Once R is up and running (with or without RStudio), we are almost ready to use the packages from this repository. R enables downloading and installing packages directly from GitHub by using the function install_github of the devtools package in just two lines of code:
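    # The two lines; "<username>/<package>" is a placeholder, since the
    # actual GitHub repository path is not given in this text.
    library(devtools)
    install_github("<username>/<package>")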

An example of the second line for one of the packages in this repository would be
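    # Hypothetical: the hosting GitHub account is not named in the text.
    install_github("<username>/openEHRapi")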

openEHRapi - using openEHR REST API from R

The package is designed to collect data from an openEHR server with AQL queries via a REST web service architecture. Healthcare data is obtained by calling a REST API, and the returned result set is then formatted in R to prepare the data for further analysis. Saving data records (compositions) to an openEHR server is also supported.

The package openEHRapi includes functions:

  • get_query : for a given AQL query returns the result set in a data frame.
  • post_composition : stores new data records (compositions).
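A hypothetical usage sketch; the argument names below are assumptions for illustration, not the package's documented signature (the vignette documents the real interface):

    library(openEHRapi)
    # Assumed arguments: server base URL, credentials, and an AQL query string.
    aql <- "SELECT e/ehr_id/value AS ehr_id FROM EHR e"   # example AQL query
    result <- get_query(baseURL = "https://rest.example.org/openehr/v1/",
                        credentials = c("user", "password"),
                        aql_query = aql)   # result set returned as a data frame
    head(result)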

For more details on the usage of the package please refer to the included vignette.

EhrscapeR - using EhrScape REST API from R

The package is designed to collect data from the open health data platform EhrScape with AQL queries via a REST web service architecture. Healthcare data is obtained by calling a REST API, and the returned result set is then formatted in R to prepare the data for further analysis. Saving data records (compositions) to EhrScape is also supported.

The package EhrscapeR includes functions:

  • get_query : for a given AQL query returns the result set in a data frame.
  • get_query_csv : for a given AQL query returns the CSV-formatted result set in a data frame.
  • post_composition : stores new data records (compositions) in EhrScape.

For more details on the usage of the package please refer to the included vignette.

ParseGPX - parsing GPX data to R data frame

The R package parseGPX was designed for reading and parsing GPX files containing GPS data. GPS data has become broadly available thanks to low-cost GPS chips integrated into portable consumer devices, and there is an abundance of online and offline tools for GPS data visualization and analysis, with R being the focus of this example. The data itself can be stored in several different file formats, such as txt, csv, xml, kml, and gpx. Among these, GPX is meant to be the most universal format, intended for exchanging GPS data between programs and for sharing GPS data with other users. Unlike many other data files, which can only be understood by the programs that created them, GPX files contain a description of what's inside them, allowing anyone to create a program that can read the data. Several R packages already provide functions for reading and parsing GPX files, e.g. plotKML, maptools, and rgdal with the corresponding functions readGPX, readGPS, and readOGR.

The presented package parseGPX contains the function parse_gpx to read, parse and optionally save GPS data.
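A sketch of the intended use; the exact argument list is an assumption (the vignette documents the real interface):

    library(parseGPX)
    # Read and parse a GPX file into a data frame of track points.
    gps <- parse_gpx("track.gpx")   # "track.gpx" is a placeholder file name
    head(gps)                       # typically longitude, latitude, elevation, time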

For more details on the usage of the package please refer to the included vignette.

AnalyzeGPS - analyze GPS data

The R package analyzeGPS offers functions for basic preparation and analysis of the GPS data:

  • readGPS : imports GPS data in csv format into an R data frame,
  • distanceGPS : calculation of distance between two data points or vectors of data points,
  • speedGPS : calculation of velocity between GPS data points,
  • accGPS : calculation of acceleration between GPS data points,
  • gradeGPS : calculation of grade or inclination between GPS data points.

Additionally, the package includes an example GPS dataset, myGPSData.csv, acquired during a person's cycling ride. It is a data frame with 7771 rows and 5 variables:

  • lon : longitude data,
  • lat : latitude data,
  • ele : elevation data,
  • time : GPS time stamp - GMT time zone,
  • tz_CEST : time stamp converted to CEST time zone.
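As an aside, the distance between two GPS points is normally computed with the haversine great-circle formula; the sketch below illustrates the kind of calculation distanceGPS performs and is not the package's actual source code:

    # Haversine distance in metres between two (lon, lat) points given in degrees.
    haversine <- function(lon1, lat1, lon2, lat2, R = 6371000) {
      to_rad <- pi / 180
      dlat <- (lat2 - lat1) * to_rad
      dlon <- (lon2 - lon1) * to_rad
      a <- sin(dlat / 2)^2 +
        cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
      2 * R * asin(sqrt(a))   # distance along the Earth's surface
    }
    haversine(14.5058, 46.0569, 14.5125, 46.0500)   # two nearby points, roughly 0.9 km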

For more details on the usage of the package please refer to the included vignette.

ZephyrECG - parse and read Zephyr BH3 ECG data

The R package zephyrECG was designed to parse ECG data acquired with the Zephyr BioHarness 3 (BH3) monitor and to import it into R. The package includes the functions

  • separate_bh3 : parses and separates multiple sessions of ECG data recorded with the Zephyr BH3 monitor into separate csv files,
  • read_ecg : imports the ECG data stored in a csv file to a data frame in R.

For more details on the usage of the package please refer to the included vignette.

HeartBeat - heart beat detection from single-lead ECG signal

The R package heartBeat was designed for heart beat detection from single-lead ECG signals. The ECG data is expected to be already loaded into a data frame and ready to use (for importing data recorded with the Zephyr BioHarness 3 monitor, please see the package zephyrECG). The package includes the functions

  • heart_beat : detection of heart beats,
  • HRdistribution : reads the signal and the output of the heart_beat function and determines instantaneous heart rates, their distribution, and a basic histogram,
  • annotateHR : adds factorized code to ECG data points according to heart rate determined previously with functions heart_beat and HRdistribution .

For more details on the usage of the package please refer to the included vignette.

StressHR - heart rate variability analysis for stress assessment

The R package stressHR assesses mental stress based on heart rate. Heart beats and heart rate are first detected from a single-lead ECG signal using the heartBeat package. The package includes the functions

  • hrv_analyze : executes heart rate variability (HRV) analysis on heart beat positions written in an ASCII file ( Rsec_data.txt ),
  • merge_hrv : merges the HRV data with the initial ECG data frame.

For more details on the usage of the package please refer to the included vignette.

MuseEEG - parse and read EEG data from InterAxon Muse device

The R package museEEG was designed to parse and read EEG data collected with the InteraXon Muse device. The device stores the acquired data directly in .muse format, and the manufacturer offers a tool, MusePlayer, that converts the .muse data to .csv format. The package comprises the read_eeg function, which reads a csv file acquired by Muse hardware and sorts the EEG data into a data frame.

For more details on the usage of the package please refer to the included vignette.

EmotionEEG - emotional valence and arousal assessment based on EEG recordings

The R package emotionEEG uses the EEG data collected with the InteraXon Muse device and assesses emotional valence and arousal based on asymmetry analysis. The EEG data prepared by the museEEG package contain EEG signal values in microvolts as well as alpha and beta absolute band powers and ratios in decibels. Emotional valence is calculated from the ratio of alpha band power between the right and left EEG channels FP2 and FP1. Emotional arousal is calculated from the mean of the beta-to-alpha ratios of the left and right EEG channels FP1 and FP2. The package includes the functions

  • read_eeg : reads raw EEG data in csv format acquired by MUSE hardware and sorts it into a data frame,
  • emotion_analysis : uses the EEG data collected with Muse and assesses emotional valence and arousal based on asymmetry analysis.
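The calculation described above reduces to simple ratios of band powers; a minimal sketch with made-up values (not the package's actual code):

    # Band powers for the two frontal channels (illustrative values, in dB).
    alpha_fp1 <- 10.2; beta_fp1 <- 8.4   # left channel (FP1)
    alpha_fp2 <- 11.1; beta_fp2 <- 9.0   # right channel (FP2)
    valence <- alpha_fp2 / alpha_fp1                 # right-to-left alpha ratio
    arousal <- mean(c(beta_fp1 / alpha_fp1,
                      beta_fp2 / alpha_fp2))         # mean beta/alpha ratio
    c(valence = valence, arousal = arousal)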

For more details on the usage of the package please refer to the included vignette.

CycleR - cycling analysis through GPS data

The R package cycleR was designed to calculate advanced cycling parameters from GPS data of a cycling route, for example power output, climb categorization, and route-time segmentation. Calculations used in the package are based on the Strava glossary & calculations. The package analyzeGPS is required. The package comprises the functions

  • segment_time : segments the cycling route according to activity,
  • categorize : detects and categorizes the climbs on the route,
  • cycling_power : assesses the total power produced by a cyclist on a bike ride given the GPS data and additional physical parameters.
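As background, power estimates of this kind typically sum the work done against gravity, rolling resistance, and aerodynamic drag; a rough physics sketch with illustrative coefficients (not the package's actual implementation):

    # Steady-state cycling power (watts) at speed v (m/s) on a given grade (rise/run).
    power_sketch <- function(v, grade, mass = 85, crr = 0.005,
                             cda = 0.4, rho = 1.226, g = 9.81) {
      theta <- atan(grade)                          # slope angle
      p_gravity <- mass * g * sin(theta) * v        # power spent climbing
      p_rolling <- mass * g * cos(theta) * crr * v  # rolling resistance
      p_drag    <- 0.5 * rho * cda * v^3            # aerodynamic drag
      p_gravity + p_rolling + p_drag
    }
    power_sketch(v = 8, grade = 0.04)   # ~8 m/s up a 4% grade, roughly 430 W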

For more details on the usage of the package please refer to the included vignette.

RunneR - running analysis with GPS data

The R package runneR was designed to calculate advanced running parameters from GPS data of a running route, for example pace, calories burned, route segmentation, and moving time. Calculations used in the package are based on the Strava glossary & calculations. The package analyzeGPS is required. The package comprises the functions (see also the sketch after this list)

  • analyze_run : determines the total time, moving time, resting time, time spent ascending, descending and on the flat and also pace on a running route described with GPS data,
  • cb_activity : estimates the number of calories you burn during a running activity, based on MET (Metabolic Equivalent) data for physical activities,
  • cb_running : estimates the calories that you burn while running any given distance,
  • cb_hr : estimates the calories that you burn during aerobic (i.e. cardiorespiratory) exercise, based on your average heart rate while performing the exercise.
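The MET-based estimate used by calorie calculators of this kind reduces to a one-line formula; a sketch with illustrative numbers (not the package's actual code):

    # Calories burned = MET value * body weight (kg) * duration (hours).
    # A MET of about 10 is commonly tabulated for moderate-pace running.
    calories_met <- function(met, weight_kg, duration_h) {
      met * weight_kg * duration_h   # kilocalories burned
    }
    calories_met(met = 10, weight_kg = 70, duration_h = 0.5)   # ~350 kcal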

For more details on the usage of the package please refer to the included vignette.


Affiliations

School of Social Sciences, Nanyang Technological University, HSS 04-19, 48 Nanyang Avenue, Singapore, Singapore

Dominique Makowski, Tam Pham, Zen J. Lau & S. H. Annabel Chen

Behavioural Science Institute, Radboud University, Nijmegen, Netherlands

Département de psychologie, Université de Montréal, Montréal, Canada

Centre de Recherche de l’Institut Universitaire Geriatrique de Montréal, Montréal, Canada

Eureka Robotics, Singapore, Singapore

Life Science Informatics, THM University of Applied Sciences, Giessen, Germany

Centre for Research and Development in Learning, Nanyang Technological University, Singapore, Singapore

Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore




