The SX-Aurora TSUBASA AI Platform is designed for big data (BD) analytics and machine learning (ML). This versatile and scalable solution brings supercomputer vector processing performance, supported by a large memory subsystem, to many diverse markets requiring large-scale, real-time data analytics. Its compact and cost-effective PCIe card form factor fits in everything from workstation towers to rack-mounted servers.
NEC X launched the Vector Engine Data Acceleration Center (VEDAC) in Silicon Valley to allow users to experience vector processing benefits for BD analytics. The VEDAC hosts 18 SX-Aurora TSUBASA data accelerator cards. Developers can access these advanced data accelerators with the NEC Frovedis framework and a compatible version of TensorFlow.
NEC X offers this advanced data accelerator through the VEDAC learning center. Contact the NEC X sales team for more information about this exciting product.
The SX-Aurora TSUBASA AI Platform:
- Processes information up to 10 times faster than other available solutions
- Provides a reliable and robust data acceleration solution for large-volume, real-time computations
- Delivers a proven solution for next-generation BD, AI, and ML applications
- Is compatible with open environment platforms such as Linux OS for ease of programming
- Supports several types of deployments, including container-based, tower, and rack-mounted systems
The NEC Vector Engine Processor was developed for high performance and low power consumption. It is the world’s first implementation of a single processor surrounded by six HBM2 memory modules, interconnected using chip-on-wafer-on-substrate technology.
The second-generation VE, the Vector Engine Type 20 (VE20) data accelerator card, powers the latest model of the SX-Aurora TSUBASA supercomputer. The VE20 generation data accelerator has an improved memory bandwidth of up to 1.53 TB/s, as well as an increased core count per processor.
NEC continuously pursues the highest compute performance for HPC and AI/ML applications, combining industry-leading memory bandwidth per processor with high peak performance.
The VE20 generation has 10 vector cores, reaching an astounding 3.07 TF peak performance. The vector cores and a 16 MB shared cache are connected by a two-dimensional mesh network, achieving a maximum bandwidth of 400 GB/s per vector core.
The vector processor is based on a 16nm FinFET process technology for extremely high performance and low power consumption. It achieves an industry-leading performance of 307GF per core, and an average memory bandwidth of 150 GB/s per core.
Each core consists of 32 vector pipelines, each containing three fused multiply-add (FMA) units. Each core also contains 64 fully functional vector registers, each with 256 entries of 8 bytes. These vector registers can feed the functional units with data and receive results, thereby handling double-precision data at full speed.
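The per-core figures quoted above are mutually consistent, and a quick back-of-the-envelope check makes the relationships explicit. Note that the core clock frequency is not stated in this document; the ~1.6 GHz value below is an assumption implied by the 307 GF/core figure:

```python
# Back-of-the-envelope check of the VE20 figures quoted above.
# The 1.6 GHz clock is an assumption (implied by the stated 307 GF/core).

pipelines = 32       # vector pipelines per core
fma_units = 3        # fused multiply-add (FMA) units per pipeline
flops_per_fma = 2    # one FMA = one multiply + one add
clock_ghz = 1.6      # assumed core clock (not stated in the document)
cores = 10           # vector cores per VE20 processor

flops_per_cycle = pipelines * fma_units * flops_per_fma   # 192 flops/cycle
gf_per_core = flops_per_cycle * clock_ghz                 # ~307 GF per core
tf_per_processor = gf_per_core * cores / 1000             # ~3.07 TF per processor

# Vector register file per core: 64 registers x 256 entries x 8 bytes
vreg_bytes = 64 * 256 * 8                                 # 131072 B = 128 KiB

print(flops_per_cycle, round(gf_per_core, 1), round(tf_per_processor, 2))
```

Under that assumed clock, 192 flops/cycle × 1.6 GHz reproduces the stated 307 GF per core, and 10 such cores give the 3.07 TF processor peak.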
Open Source! github.com/frovedis
The combination of single instruction, multiple data (SIMD), pipelining and NEC’s Frovedis middleware is unique to the SX-Aurora TSUBASA AI platform. It accelerates data analysis operations on top of the Apache Spark MLlib and DataFrame framework, while maintaining compatibility with standard programming languages such as C, C++ and Fortran.
The unique architecture significantly increases speed and reduces power consumption for memory-intensive statistical machine learning and data frame applications. It is well-suited for large-scale e-commerce recommendation engines, high-throughput financial and document transactions, fraud detection, enterprise governance, and authentication.
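To illustrate the programming model Frovedis targets: its Python ML API mirrors the scikit-learn fit/predict interface, so an existing pipeline can be retargeted to the Vector Engine largely by changing imports. The sketch below shows the pattern with a toy pure-Python stand-in estimator; the commented Frovedis module paths and server call are taken from the Frovedis GitHub documentation and may differ by version:

```python
# Sketch of the "drop-in" pattern Frovedis follows. With Frovedis installed,
# the swap would look roughly like this (paths per the Frovedis docs; verify
# against your installed version):
#
#   from frovedis.exrpc.server import FrovedisServer
#   from frovedis.mllib.linear_model import LogisticRegression
#   FrovedisServer.initialize("mpirun -np 8 {path_to}/frovedis_server")
#
# The toy estimator below stands in for any fit/predict-compatible model.

class MajorityClassifier:
    """Stand-in estimator exposing the scikit-learn-style interface."""
    def fit(self, X, y):
        self.label_ = max(set(y), key=y.count)  # most frequent training label
        return self
    def predict(self, X):
        return [self.label_] * len(X)

def run_pipeline(model, X, y):
    # Works unchanged whether `model` is a scikit-learn estimator,
    # a Frovedis estimator, or any other fit/predict-compatible object.
    return model.fit(X, y).predict(X)

preds = run_pipeline(MajorityClassifier(), [[0], [1], [2]], [1, 1, 0])
print(preds)  # [1, 1, 1]
```

Because the pipeline depends only on the fit/predict contract, the decision of where the computation runs (CPU or Vector Engine) is isolated to the import and server-initialization lines.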
|Semi-Custom Solution|Information|Partner|
|---|---|---|
|PII Data Redaction Framework|When paired with the Consulting and Cognitive Computing Framework® from OppLane, NEC’s SX-Aurora TSUBASA™ artificial intelligence (AI) platform makes it easy to comply with governance regulations requiring personally identifiable information (PII) data redaction.|OppLane|