High-End Computing Program


COMPUTING SYSTEMS OVERVIEW

This overview summarizes the systems and related resources at the NASA Advanced Supercomputing (NAS) Facility and the NASA Center for Climate Simulation (NCCS).


Information about HEC Systems and Related Resources

Systems

NAS:

Pleiades

SGI ICE cluster
162 racks (11,232 nodes)
198,432 cores
4.49 petaflops, peak
1.54 petaflops LINPACK rating (November 2013)
616 terabytes of memory
Intel Xeon Westmere X5670 processors (2.93 GHz); Intel Xeon Westmere X5675 processors (3.06 GHz); Intel Xeon Sandy Bridge E5-2670 processors (2.6 GHz); Intel Xeon Ivy Bridge E5-2680v2 processors (2.8 GHz); and Intel Xeon Haswell E5-2680v3 processors (2.5 GHz)

GPU/Westmere nodes:
2 racks (64 nodes; 1 GPU per node)
768 total cores (general-purpose Intel Xeon Westmere cores)
32,768 total cores (GPU streaming cores)
43 teraflops, peak
3.148 terabytes of memory

Endeavour

2-node SGI UV 2000 system
1,536 cores
32 teraflops, peak
6 terabytes of memory
Intel Xeon Sandy Bridge E5-4650L processors (2.6 GHz)

Merope

36 racks (1,152 nodes)
12,032 cores
141 teraflops, peak
27 terabytes of memory
Intel Xeon Westmere X5670 processors (2.93 GHz); Intel Xeon Nehalem X5570 processors (2.93 GHz)

NCCS:

Discover

Aggregate system:
67 racks (3,354 nodes)
43,048 cores
1.121 petaflops, peak
102.088 terabytes of memory

Scalable Units 1+ to 4+ = IBM iDataPlex Cluster System
12,384 cores
Intel Xeon Westmere (2.8 GHz)

Scalable Unit 7 = Dell PowerEdge C6100 System
14,400 cores
Intel Xeon Westmere (2.8 GHz)

Scalable Unit 8 = IBM iDataPlex Cluster System
7,680 cores
Intel Xeon Sandy Bridge (2.6 GHz)
28,800 Intel Xeon Phi coprocessor cores (Many Integrated Core, or MIC)

Scalable Unit 9 = IBM iDataPlex Cluster System
7,680 cores
Intel Xeon Sandy Bridge (2.6 GHz)

GPU

IBM iDataPlex
32 nodes (2 GPUs per node)
384 total cores (general-purpose Intel Xeon Westmere cores)
28,672 total cores (GPU streaming cores)
36.4 teraflops, peak
1.92 terabytes of memory
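
The peak figures quoted above can be sanity-checked from core count, clock rate, and per-core FLOPs per cycle. The sketch below (Python) assumes 8 double-precision FLOPs per core per cycle for Sandy Bridge-class Xeons with AVX, a figure not stated in the listing itself; other processor generations differ.

    # Theoretical peak = cores x clock (cycles/s) x FLOPs per core per cycle.
    def peak_teraflops(cores, clock_ghz, flops_per_cycle_per_core):
        # cores * GHz * FLOPs/cycle gives gigaflops; divide by 1e3 for teraflops.
        return cores * clock_ghz * flops_per_cycle_per_core / 1e3

    # Endeavour: 1,536 Sandy Bridge cores at 2.6 GHz (figures from the listing above).
    print(peak_teraflops(1536, 2.6, 8))  # ~31.9, consistent with the quoted "32 teraflops, peak"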

Storage

NAS:
Online: 25 petabytes of RAID disk capacity (combined total for all systems)
Archive capacity: 126 petabytes

NCCS:
Online: 12.2 petabytes of RAID disk capacity
Archive capacity: 45 petabytes

Networking

NAS:
SGI NUMAlink
Voltaire InfiniBand
10-Gigabit Ethernet
1-Gigabit Ethernet

NCCS:
SGI NUMAlink
Mellanox Technologies and SilverStorm Technologies InfiniBand
10-Gigabit Ethernet
1-Gigabit Ethernet

Visualization and Analysis

NAS:

Hyperwall-2
128-screen tiled LCD wall arranged in an 8x16 configuration
Measures 23 ft. wide by 10 ft. high
128 graphics processing units (NVIDIA GeForce GTX 780 Ti)
128 teraflops, peak processing power (GPUs)
2,560 Intel Xeon E5-2680v2 (Ivy Bridge) cores (10-core processors)
57 teraflops, peak processing power (CPUs)
393 gigabytes of GDDR5 graphics memory
1.5 petabytes of storage

NCCS:

Data Exploration Theater

Visualization Wall/Hyperwall
15 Samsung UD55C 55-inch displays in a 5x3 configuration
Measures 20 ft. wide by 6 ft. 10 in. high
DVI connection
1920 x 1080 resolution per display (1080p)
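
Since each panel is 1920 x 1080 and the wall is a 5x3 grid, the aggregate resolution follows by simple multiplication. A small sketch (Python), ignoring the bezels between panels:

    # Aggregate pixel dimensions of a tiled wall of identical panels.
    def wall_resolution(cols, rows, panel_w, panel_h):
        return cols * panel_w, rows * panel_h

    # 5x3 grid of 1920x1080 displays, as listed above.
    w, h = wall_resolution(5, 3, 1920, 1080)
    print(w, h, round(w * h / 1e6, 1))  # 9600 x 3240 pixels, roughly 31.1 megapixels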

Visualization Wall/Hyperwall Cluster
16 Dell Precision WorkStation R5400s
2 dual-core Intel Xeon Harpertown processors per node
4 GB of memory per node
NVIDIA Quadro FX 1700 graphics
1-Gigabit Ethernet network connectivity

Control Station
One Dell FX100 Thin Client

Dali Data Analysis Nodes
IBM System x3950
272 Intel Xeon cores
24 NVIDIA Tesla M2070 GPUs with 10,752 “streaming GPU” CUDA cores
4.3 terabytes of memory
10-Gigabit Ethernet network connectivity
Fibre Channel access to the IBM GPFS file systems (~1 gigabyte/sec for large single-stream file access)
NFS access to Dirac (archive) and data portal file systems
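
The 10,752 "streaming GPU" CUDA cores quoted for Dali are consistent with the published per-board count for the Tesla M2070 (448 CUDA cores, a figure not stated in this overview). A quick check in Python:

    # 24 Tesla M2070 boards (from the list above) x 448 CUDA cores per board.
    M2070_CUDA_CORES = 448  # per-board figure assumed from NVIDIA's M2070 specifications
    print(24 * M2070_CUDA_CORES)  # 10752, matching the total quoted above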

Data Portal

HP BladeSystem c7000
16 nodes, each containing:

  • 2 quad-core Intel Xeon 2.83 GHz processors
  • 8 gigabytes of memory

371 terabytes of network storage (GPFS-managed)
NFS served to compute hosts

 
