Slim 1U server for production AI workloads
The Axon 4 is an efficient server for accelerating AI workloads at the edge, whether on-site in an office or on a factory floor. By combining the powerful SX-Aurora TSUBASA Vector Engine hardware with the Xpress AI platform, it lets you deploy AI solutions in a turn-key manner. The server automatically synchronizes with your Xpress.AI account and deploys applications as soon as you purchase a solution here. The Vector Engine accelerator's large memory capacity ensures that you can deploy multiple AI models without worrying about running out of memory.
|Component|Specification|
|---|---|
|CPU|1 x Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz|
|Memory|DDR4-2400 128 GB (configurable up to 512 GB)|
|OS Drive|1 TB NVMe SSD|
|Data Drive|2 TB SATA HDD (configurable up to 18 TB)|
|VE Card|2 x 48 GB Vector Engine 2.0 Type 20A or 20B|
The SX-Aurora TSUBASA (a.k.a. Vector Engine) is the latest iteration of NEC's supercomputer processor. It has a higher memory capacity than most GPUs and a bandwidth of up to 1.53 TB/s, which reduces the memory bottleneck when training large models. Each core of the Vector Engine delivers 614 GFLOPS, for a total system performance of up to 12.28 TFLOPS at FP32 across the two cards.
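The headline figure is straightforward to verify; a quick sketch, assuming the 10-core Type 20A card configuration (the lower-core-count Type 20B variant would total less):

```python
# Peak FP32 throughput of the Axon 4's Vector Engine cards.
# Assumption: 10 cores per VE 2.0 Type 20A card; the Axon 4 ships with two.
GFLOPS_PER_CORE_FP32 = 614
CORES_PER_CARD = 10   # assumed Type 20A configuration
CARDS = 2

total_gflops = GFLOPS_PER_CORE_FP32 * CORES_PER_CARD * CARDS
print(f"Peak FP32: {total_gflops / 1000:.2f} TFLOPS")  # → Peak FP32: 12.28 TFLOPS
```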
The NEC Vector Engine Processor was developed using 16 nm FinFET process technology for extremely high performance and low power consumption. It was the world's first processor to integrate six HBM2 memory modules using Chip-on-Wafer-on-Substrate technology, achieving a then world-record memory bandwidth of 1.35 TB/s, which the latest version raises to 1.53 TB/s.
This balance of memory and compute makes the system a powerful solution for both memory-intensive Big Data workloads and compute-intensive Deep Learning workloads.