TYAN Exhibits Artificial Intelligence and Deep Learning Optimized Server Platforms at GTC Japan 2018

TYAN GPU-optimized server platforms powered by NVIDIA® Tesla® V100 32GB and P4 GPU accelerators are designed for Machine Learning, Artificial Intelligence, and Deep Neural Networks

Tokyo, Japan, September 13, 2018 --(PR.com)-- TYAN®, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing AI-optimized server platforms featuring NVIDIA® Tesla® V100 32GB PCIe, P4 PCIe, and V100 SXM2 GPU accelerators at the GPU Technology Conference (GTC) Japan, held at the Grand Prince Hotel New Takanawa from September 13-14.

"AI is transforming every industry by enabling more accurate decisions to be made based on the massive amounts of data being collected. TYAN's leading portfolio of GPU server platforms is based on the latest NVIDIA Tesla technology and is optimized to provide customers with faster overall performance, greater efficiency, and lower energy and per-unit computing costs for the AI revolution," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit.

Featuring maximum performance and system density, the TYAN Thunder HX TA88-B7107 takes full advantage of NVIDIA NVLink™ technology and supports eight NVIDIA Tesla V100 SXM2 32GB GPUs packed within a 2U server enclosure. With four additional PCIe x16 slots available for high-speed networking and 24 DIMM slots supporting up to 3TB of system memory, the Thunder HX TA88-B7107 is TYAN's highest-performance GPU server platform for popular Artificial Intelligence and Machine Learning applications.
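
As an illustration of the single-node, multi-GPU training such a system targets, the minimal sketch below runs a data-parallel training loop across all visible GPUs. It assumes PyTorch; the toy model, random data, and hyperparameters are illustrative placeholders, not TYAN or NVIDIA software.

```python
# Minimal sketch: single-node data-parallel training across all GPUs
# visible in a multi-GPU system such as the TA88-B7107 (assumes PyTorch;
# the model, data, and hyperparameters are illustrative placeholders).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Visible GPUs: {torch.cuda.device_count()}")

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
model = nn.DataParallel(model).to(device)       # replicate across all GPUs
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                          # dummy training loop
    x = torch.randn(512, 1024, device=device)   # each batch is split across GPUs
    y = torch.randint(0, 10, (512,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```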

TYAN also offers GPU servers with support for NVIDIA Tesla V100 32GB and Tesla P4 GPU accelerators in the standard PCIe form factor. These include a pair of 4U server systems: the Thunder HX FT77D-B7109, which supports up to eight Tesla V100 32GB or sixteen Tesla P4 GPUs for massively parallel workloads such as scientific computing and large-scale facial recognition, and the Thunder HX FA77-B7119, which supports up to ten Tesla V100 32GB or twenty Tesla P4 GPUs within a single server enclosure and is ideal for AI training and inferencing applications.
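
For the inferencing side of these systems, the short sketch below shows the kind of batched GPU inference loop that a many-accelerator deployment runs per device. It again assumes PyTorch; the stand-in network and random image batches are placeholders for a trained model and real data.

```python
# Minimal sketch: batched inference on a single GPU, of the kind an
# inference deployment built on V100 or P4 accelerators runs per device
# (assumes PyTorch; the network and inputs are illustrative placeholders).
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(                      # stand-in for a trained network
    nn.Conv2d(3, 64, 7, stride=2, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),
).to(device).eval()

with torch.no_grad():                       # inference only, no gradients
    for _ in range(5):
        batch = torch.randn(64, 3, 224, 224, device=device)
        scores = model(batch)
        preds = scores.argmax(dim=1)        # top-1 class per image
```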

The Thunder HX FT48T-B7105 is a pedestal workstation platform that supports up to five Tesla V100 32GB or ten Tesla P4 GPU cards. This high-end workstation gives professional power users maximum I/O and is a great platform for 3D rendering and image processing.

The Intel® Xeon® Scalable Processor-based Thunder HX GA88-B5631 and AMD EPYC™ 7000 processor-based Transport HX GA88-B8021 both support up to four NVIDIA Tesla V100 32GB GPU cards within a 1U server and are the highest-density AI training GPU servers available on the market. Both platforms offer an additional PCIe x16 slot next to the GPU cards to accommodate high-speed networking adapters of up to 100Gb/s, such as EDR InfiniBand or 100 Gigabit Ethernet. These platforms are ideal for Artificial Intelligence, Machine Learning, and Deep Neural Network workloads. In addition, the Transport HX GA88-B8021 can optionally deploy up to six Tesla P4 GPU accelerators for AI inferencing applications.
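
To illustrate how the adjacent networking slot comes into play when several such 1U nodes train together, the sketch below sets up distributed data-parallel training with PyTorch's NCCL backend; gradient all-reduce traffic then travels over whatever high-speed adapter is installed. PyTorch, the launcher-provided environment variables, and the toy model are all assumptions for illustration, not part of the TYAN platforms themselves.

```python
# Minimal sketch: multi-node distributed data-parallel training
# (assumes PyTorch with the NCCL backend; RANK/WORLD_SIZE/MASTER_ADDR
# environment variables come from a launcher such as torchrun, and the
# model and data are illustrative placeholders).
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # reads rank/addr from env vars
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = nn.Linear(1024, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients sync over the fabric
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):                       # dummy training loop
        x = torch.randn(256, 1024, device=local_rank)
        y = torch.randint(0, 10, (256,), device=local_rank)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with a tool such as torchrun (one process per GPU), each node's four GPUs compute locally while the all-reduce exchanges flow through the InfiniBand or Ethernet adapter installed in the extra PCIe x16 slot.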

Furthermore, the TYAN Thunder SX TN76-B7102 is a multi-purpose server system that supports up to two NVIDIA Tesla V100 32GB or two Tesla P4 GPU accelerators in a 2U enclosure. The system provides up to 12 3.5" hot-swap drives for data storage and is designed to support Intel's Omni-Path Fabric node interconnects with a bandwidth of 100Gb/s.

TYAN GPU Server Platforms Supporting NVIDIA Tesla V100 32GB and P4 GPUs
- 2U/8-GPU Thunder HX TA88-B7107: 2U dual-socket Intel Xeon Scalable Processor-based platform with support for up to eight NVIDIA Tesla V100 SXM2 GPUs, 24 DDR4 DIMM slots, four additional PCIe x16 slots for high-speed networking, and 2 hot-swap 2.5" NVMe U.2 drives
- 4U/10-GPU Thunder HX FA77-B7119: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to ten NVIDIA Tesla V100 32GB or twenty Tesla P4 GPU accelerators, 24 DDR4 DIMM slots, one additional PCIe x16 slot for high-speed networking, and 14 2.5" hot-swap SATA 6Gb/s devices (4 of the bays optionally support NVMe U.2 drives)
- 4U/8-GPU Thunder HX FT77D-B7109: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to eight NVIDIA Tesla V100 32GB or sixteen Tesla P4 GPU accelerators, 24 DDR4 DIMM slots, one additional PCIe x16 slot for high-speed networking, and 14 2.5" hot-swap SATA 6Gb/s devices (4 of the bays optionally support NVMe U.2 drives)
- Pedestal/5-GPU Thunder HX FT48T-B7105: Pedestal dual-socket Intel Xeon Scalable Processor-based platform with support for up to five NVIDIA Tesla V100 32GB or ten Tesla P4 GPU accelerators, 12 DDR4 DIMM slots, one additional PCIe x16 slot for high-speed networking, and 4 or 8 3.5" hot-swap SAS 6Gb/s devices
- 1U/4-GPU Thunder HX GA88-B5631: 1U single-socket Intel Xeon Scalable Processor-based platform with support for up to four NVIDIA Tesla V100 32GB or four Tesla P4 GPU accelerators, 12 DDR4 DIMM slots, one additional PCIe x16 slot for high-speed networking, and two 2.5" hot-swap SATA 6Gb/s devices
- 1U/4-GPU Transport HX GA88-B8021: 1U single-socket AMD EPYC 7000 processor-based platform with support for up to four NVIDIA Tesla V100 32GB or six Tesla P4 GPU accelerators, 16 DDR4 DIMM slots, one additional PCIe x16 slot for high-speed networking, and two 2.5" hot-swap SATA 6Gb/s devices
- 2U/2-GPU Thunder SX TN76-B7102: 2U dual-socket Intel Xeon Scalable Processor-based platform with support for up to two NVIDIA Tesla V100 32GB or two Tesla P4 GPU accelerators, 24 DDR4 DIMM slots, one additional PCIe x16 slot for high-speed networking, and 12 3.5" hot-swap SATA 6Gb/s devices (4 of the bays optionally support NVMe U.2 drives)