Gruss Magnetic Resonance Research Center

Computing Resources and Facilities

MRRC Hive

The MRRC's computing infrastructure is designed around a "cloud" or "grid computing" paradigm: users' desktop computers, laptops, and workstations act as interfaces to a centralized MRRC server, which in turn provides access to users' data space and to the powerful Einstein HPSC Cluster (see below).  This "cloud" of geographically distributed data drives, the Einstein Cluster master node, and end-user terminals, desktops, laptops, and workstations, together with the "hub" MRRC server, Laconic, is known as the Hive.

There are three major benefits to this design. 

First, the Einstein Cluster itself, rather than the user's own computer, performs most computationally intensive jobs, vastly expediting analyses and shortening the research life cycle. 

Second, it permits users to access their documents, research data, user settings, and other on-campus Einstein facilities (including the HPSC Cluster) from anywhere, on campus or off-site, through any computer, kiosk, or smartphone with internet access, simply by logging into the MRRC Hive Portal.

Finally, the centralized configuration of the Hive simplifies technical support and troubleshooting: the MRRC computer administrator can recognize and resolve technical issues quickly, and end users never have to worry about installing programs, complex Linux system configuration, or missing dependencies for their applications.


MRRC and Einstein HPSC Cluster Assets

The MRRC owns 16 compute nodes of the Einstein high-performance supercomputing (HPSC) cluster.  Each of these 16 data-processing nodes consists of two quad-core, 64-bit 2.5 GHz Intel Xeon 5420 processors, 32 gigabytes (GB) of 800 MHz RAM, and 500 GB of "scratch space," giving MRRC users access to 100% of the duty cycle of a combined 512 GB of RAM, 8 terabytes (TB) of scratch space, and 128 processing cores interlinked through a high-speed, low-latency InfiniBand fabric with a theoretical throughput of 40 Gbps. Additionally, MRRC users currently have immediate access to the unused duty cycle of another 384 cores (48 nodes), which are shared across several Einstein core facilities.
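For reference, the aggregate figures above follow directly from the per-node specifications (a simple tally, with no assumptions beyond the numbers already stated):

    \[
    \begin{aligned}
    \text{cores}   &= 16~\text{nodes} \times 2~\text{CPUs} \times 4~\text{cores/CPU} = 128,\\
    \text{RAM}     &= 16~\text{nodes} \times 32~\text{GB}  = 512~\text{GB},\\
    \text{scratch} &= 16~\text{nodes} \times 500~\text{GB} = 8~\text{TB}.
    \end{aligned}
    \]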

The 512 Einstein HPSC cluster cores, including those owned by the MRRC, currently have access to a total of 1,920 GB (1.875 TB) of RAM and have a theoretical throughput of 3×10¹² floating-point operations per second (3 TFLOPS).  Another phase of expansion of the Einstein Cluster is planned throughout the 2010 calendar year.  This expansion will add 12 AMD 6000-series systems, each with four 12-core CPUs and 128 GB of RAM, effectively doubling the Cluster's capacity to 1,024 cores and 3,840 GB of RAM; it will also incorporate massively symmetric multiprocessing (SMP) systems and uniformly implement InfiniBand fabric across the HPSC system.

Einstein's long-term HPSC growth plan is to scale the Cluster to 500-1000 total nodes.

Multiple MPI (parallel) environments are configured on the cluster, including OpenMPI, MPICH2, and the MATLAB Parallel Computing Toolbox.
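To illustrate how these environments are typically used, below is a minimal, self-contained MPI program, written here in C++ against the standard MPI C API (which both OpenMPI and MPICH2 provide). This is only a sketch; the compile and launch commands in the comments are typical examples, and actual job submission details depend on the cluster's scheduler configuration.

    // hello_mpi.cpp -- minimal MPI sketch.
    // Typical build:  mpicxx hello_mpi.cpp -o hello_mpi
    // Typical launch: mpirun -np 8 ./hello_mpi   (flags vary by site setup)
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);               // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); // this process's ID (0..size-1)
        MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes

        // Each rank contributes a partial value; rank 0 receives the sum.
        double local = static_cast<double>(rank);
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("sum of ranks 0..%d = %g\n", size - 1, total);

        MPI_Finalize();                       // shut down the MPI runtime
        return 0;
    }

The same pattern scales from a single node's 8 cores to jobs spanning many nodes over the InfiniBand fabric; only the launch parameters change.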

MRRC processing nodes and data space are linked via 4 Gbps Fibre Channel to a 12 TB file-system array and a high-capacity (31,200 GB / 30.5 TB total) LTO-4 tape backup system.


Other MRRC Computer Infrastructure and Facilities

The MRRC also provides three Linux data-analysis workstations for research assistants and six higher-end workstations for postdoctoral associates and faculty, as well as several terminals (Sun Microsystems Sun Ray® thin clients) connected to the cluster in a "virtual desktop" infrastructure (VDI). There is also a guest kiosk where visiting users can check and send email, exchange data with the MRRC file system, and print documents, along with a development workstation for editing E-Prime scripts, IDL code, and other programs, and for preparing audio, video, pictures, and animations for experiments.

Color laser printers are also available on both floors of the center and can be accessed by all users.

The Hive itself also hosts several MRRC technical facilities, including a web server, an MRRC wiki, a centralized MRI atlas repository, an SQL database server, and facilities for FTP and WebDAV file transfers.  There are also plans to expand the Hive-hosted facilities further, adding neuroinformatics and telepresence facilities and a "sibling" Beowulf-type cluster running Kerrighed for load balancing, high-availability "cloud/grid" access, and specialized functions with real-time demands, such as on-the-fly functional MRI reconstruction and analysis.
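As a sketch of how a user might transfer a file to the Hive over WebDAV, the following minimal C++ program uses libcurl to issue the HTTP PUT request that a WebDAV upload consists of. The server URL, path, filename, and credentials below are hypothetical placeholders, not the MRRC's actual endpoint.

    // webdav_put.cpp -- minimal WebDAV upload sketch using libcurl.
    // Typical build: g++ webdav_put.cpp -lcurl -o webdav_put
    #include <curl/curl.h>
    #include <cstdio>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        FILE* in = std::fopen("scan001.nii", "rb");      // local file to upload
        if (!in) { curl_easy_cleanup(curl); return 1; }

        // A WebDAV upload is a plain HTTP PUT to the target URL.
        // The URL and credentials here are placeholders.
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://hive.example.edu/dav/data/scan001.nii");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);      // switch to PUT
        curl_easy_setopt(curl, CURLOPT_READDATA, in);    // default callback fread()s this FILE*

        CURLcode rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            std::fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(rc));

        std::fclose(in);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }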


Software

General Image Analysis/Scientific Software

Managed software access is provided to all users and includes the following software: MEDx, AFNI, Analyze, DTIStudio, IDL, the LCModel spectroscopy software, and both FSL and SPM. Other computing and statistical software is available through the University, including SPSS, Reference Manager, and EndNote.


Specialized Image Analysis Software

Image analysis software is also developed in-house for a variety of purposes; the code is written in C++, MATLAB, or IDL. The packages include algorithms for image registration using a variety of similarity measures, MR image segmentation, diffusion tensor image analysis, perfusion analysis (for arterial spin labeling methods), non-parametric randomization-based fMRI statistical analysis, half-Fourier image reconstruction, image format conversion, principal component analysis, independent component analysis, wavelet-transform analysis of fMRI data, various image interpolation techniques, automatic histogram analysis and threshold selection, high-level MR image feature detection, clustering algorithms, and relaxographic imaging.
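To give a concrete sense of one building block these registration tools rely on, here is a minimal C++ sketch (not the MRRC's in-house code) of a common similarity measure, normalized cross-correlation, computed over two equally sized images stored as flat arrays:

    // ncc.cpp -- illustrative normalized cross-correlation (NCC), a common
    // similarity measure for scoring candidate image alignments.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Returns NCC in [-1, 1]; +1 means the images match perfectly up to an
    // affine change in intensity (brightness/contrast).
    double ncc(const std::vector<double>& a, const std::vector<double>& b) {
        const std::size_t n = a.size();   // assumes a.size() == b.size(), n > 0
        double ma = 0.0, mb = 0.0;
        for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;                 // per-image mean intensities

        double num = 0.0, va = 0.0, vb = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            num += (a[i] - ma) * (b[i] - mb);
            va  += (a[i] - ma) * (a[i] - ma);
            vb  += (b[i] - mb) * (b[i] - mb);
        }
        return num / std::sqrt(va * vb);  // undefined if either image is constant
    }

    int main() {
        // Toy 2x2 "images": b is a uniformly brightened copy of a, so NCC ~= 1.
        std::vector<double> a = {1, 2, 3, 4};
        std::vector<double> b = {11, 12, 13, 14};
        std::printf("NCC = %f\n", ncc(a, b));
        return 0;
    }

A registration algorithm evaluates such a measure repeatedly, searching over candidate spatial transformations for the one that maximizes image similarity.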


General Image Analysis Software

  • MEDx
  • MATLAB
  • MRUI
  • MEDinria
  • 3D Slicer
  • FSL
  • CocoMAC-3D
  • DTI Gradient Table creator (via MATLAB)
  • AFNI
  • MRI Convert
  • DTI Studio
  • MRI Studio
  • HIRES
  • Landmarker
  • 3DiCSI
  • ROI Editor
  • AMIDE
  • CATNAP (via MATLAB)
  • TrackVis
  • LCModel
  • FreeSurfer
  • SPM
  • MRIcro/MRIcron
  • PARtoNRRD

Specialized/In-House Software & IDEs

  • IDL
  • ART Tools
  • DCM Toolkit
  • GNU Scientific Library