National Aeronautics and Space Administration
High-End Computing Program



This table shows the systems and related resources at the NASA Advanced Supercomputing (NAS) Facility and the NASA Center for Climate Simulation (NCCS).

Information about HEC Systems and Related Resources


SGI ICE cluster
  • 163 racks (11,176 nodes)
  • 184,800 cores
  • 3.59 petaflops, peak
  • 1.54 petaflops LINPACK rating (November 2013)
  • 502 terabytes of memory
  • Processors: Intel Xeon Westmere 5670 (2.93 GHz); Intel Xeon Westmere X5675 (3.06 GHz); Intel Xeon Sandy Bridge E5-2670 (2.6 GHz); and Intel Xeon Ivy Bridge E5-2680v2 (2.8 GHz)
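The peak-performance figures above follow from the standard theoretical-peak formula: cores × clock rate × floating-point operations issued per cycle. A minimal sketch, assuming 8 double-precision FLOPs/cycle for the AVX-capable Sandy Bridge/Ivy Bridge parts (the Westmere-era SSE parts issue 4):

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in gigaflops: cores x clock (GHz) x FLOPs per cycle."""
    return cores * clock_ghz * flops_per_cycle

# One 10-core Ivy Bridge E5-2680v2 at 2.8 GHz with AVX (8 DP FLOPs/cycle):
print(peak_gflops(10, 2.8, 8))  # ~224 GFLOPS per processor
```

Summing this over every node type (with the per-microarchitecture FLOPs/cycle) reproduces the system-level peak numbers quoted in this table.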

GPU/Westmere nodes:
  • 2 racks (64 nodes; 1 GPU per node)
  • 768 total cores (general-purpose Intel Xeon Westmere cores)
  • 32,768 total cores (GPU streaming cores)
  • 43 teraflops, peak
  • 3.148 terabytes of memory
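The per-GPU core count is implicit in the totals above; a quick check, assuming the listed counts are exact:

```python
streaming_cores = 32_768   # total GPU streaming cores across the partition
gpus = 64                  # 64 nodes, 1 GPU per node
cores_per_gpu = streaming_cores // gpus
print(cores_per_gpu)       # 512 streaming cores per GPU
```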



2-node SGI UV 2000 system
  • 1,536 cores
  • 32 teraflops, peak
  • 6 terabytes of memory
  • Intel Xeon E5-4650L Sandy Bridge processors (2.6 GHz)



8 racks (512 nodes)
  • 6,144 cores
  • 72 teraflops, peak
  • 12 terabytes of memory
  • Intel Xeon X5670 Westmere processors (2.93 GHz)

Aggregate System:
  • 67 racks (3,354 nodes)
  • 43,048 cores
  • 1.121 petaflops, peak
  • 102.088 terabytes of memory

Scalable Units 1+ to 4+ = IBM iDataPlex Cluster System
  • 12,384 cores
  • Intel Xeon Westmere (2.8 GHz)

Scalable Unit 7 = Dell PowerEdge C6100 System
  • 14,400 cores
  • Intel Xeon Westmere (2.8 GHz)

Scalable Unit 8 = IBM iDataPlex Cluster System
  • 7,680 cores
  • Intel Xeon Sandy Bridge (2.6 GHz)
  • 28,800 Intel Xeon Phi coprocessor cores (Many Integrated Core, MIC)

Scalable Unit 9 = IBM iDataPlex Cluster System
  • 7,680 cores
  • Intel Xeon Sandy Bridge (2.6 GHz)



IBM iDataPlex
  • 32 nodes (2 GPUs per node)
  • 384 total cores (general-purpose Intel Xeon Westmere cores)
  • 28,672 total cores (GPU streaming cores)
  • 36.4 teraflops, peak
  • 1.92 terabytes of memory


Storage
  • 25 petabytes of RAID disk capacity (combined total for all systems)
  • Archive capacity: 126 petabytes
  • 12.2 petabytes of RAID disk capacity
  • Archive capacity: 45 petabytes

Networking
  • SGI NUMAlink
  • Voltaire InfiniBand
  • 10-Gigabit Ethernet
  • 1-Gigabit Ethernet
  • Mellanox Technologies and SilverStorm Technologies InfiniBand
  • 10-Gigabit Ethernet
  • 1-Gigabit Ethernet
Visualization and Analysis
  • 128-screen tiled LCD wall arranged in 8x16 configuration
  • Measures 23 ft. wide by 10 ft. high
  • 128 graphics processing units (Nvidia 8800 GTX cards)
  • 128 teraflops, peak processing power
  • 1,024 AMD Opteron cores (quad-core)
  • 9 teraflops, peak processing power
  • 208 gigabytes of GDDR5 graphics memory
  • 1.5 petabytes of storage

Data Exploration Theater
Visualization Wall/Hyperwall
  • 15 Samsung UD55C 55-inch displays in 5x3 configuration
  • Measures 20 ft. wide by 6 ft. 10 in. high
  • DVI connection
  • 1920 x 1080 screen resolution @ 1080p
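The wall's aggregate resolution follows directly from the 5x3 layout of 1080p panels; a quick sketch:

```python
cols, rows = 5, 3           # 5x3 configuration
w, h = 1920, 1080           # per-display resolution (1080p)
wall = (cols * w, rows * h)
print(wall)                 # (9600, 3240) effective wall resolution
print(cols * rows * w * h)  # 31104000 pixels (~31 megapixels)
```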

Visualization Wall/Hyperwall Cluster
  • 16 Dell Precision WorkStation R5400s
  • 2 dual-core Intel Xeon Harpertown processors per node
  • 4 GB of memory per node
  • NVIDIA Quadro FX 1700 graphics
  • 1-Gigabit Ethernet network connectivity

Control Station
  • One Dell FX100 Thin Client

Dali Data Analysis Nodes
  • IBM System x3950
  • 272 Intel Xeon cores
  • 24 NVIDIA Tesla M2070 GPUs with 10,752 "streaming GPU" CUDA cores
  • 4.3 terabytes of memory
  • 10-Gigabit Ethernet network connectivity
  • Fibre Channel access to the IBM GPFS file systems (~1 gigabyte/sec for large single-stream file access)
  • NFS access to Dirac (archive) and data portal file systems

Data Portal

HP BladeSystem C7000
  • 16 nodes, each containing:
      • 2 quad-core Intel Xeon 2.83 GHz processors
      • 8 gigabytes of memory
  • 371 terabytes of network storage (GPFS-managed)
  • NFS served to compute hosts
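The Data Portal's aggregate core and memory counts follow from the per-node figures above; a quick check:

```python
nodes = 16
cores_per_node = 2 * 4      # 2 quad-core Xeon processors per node
mem_per_node_gb = 8
print(nodes * cores_per_node)   # 128 cores total
print(nodes * mem_per_node_gb)  # 128 GB of memory total
```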

