Tencent Cloud will integrate NVIDIA’s GPU computing and deep learning platform into its public cloud platform to help advance artificial intelligence for enterprise customers.
This will provide users with access to a set of new cloud services powered by Tesla GPU accelerators, including the latest Pascal architecture-based Tesla P100 and P40 GPU accelerators, NVIDIA NVLink technology for connecting multiple GPUs, and NVIDIA deep learning software.
NVIDIA’s AI computing technology is used worldwide by cloud service providers, enterprises, startups and research organizations for a wide range of applications.
“Companies around the world are harnessing their data with our AI computing technology to create breakthrough products and services,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Through Tencent Cloud, more companies will have access to NVIDIA’s deep learning platform, the world’s most broadly adopted AI platform.”
Tencent Cloud VP Sam Xie said GPU offerings with NVIDIA’s deep learning platform will help companies in China rapidly integrate AI capabilities into their products and services.
Organizations across many industries are seeking greater access to the core AI technologies required to develop advanced applications, such as facial recognition, natural language processing, traffic analysis, intelligent customer service, and machine learning.
The parallel processing capabilities of GPUs make the NVIDIA computing platform highly effective at accelerating a host of other data-intensive workloads, including advanced analytics and high performance computing.
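The data-parallel principle behind that acceleration can be sketched in miniature with NumPy: a vectorized operation applies one instruction across an entire array at once, loosely analogous to how a GPU kernel runs across thousands of threads. This is a simplified CPU-side illustration only, not NVIDIA’s actual stack; the function names are invented for the example.

```python
import numpy as np

# Data-parallel idea in miniature: apply one operation to many
# elements at once instead of looping element by element.
# (Illustrative only -- a real GPU workload would use CUDA or a
# deep learning framework running on Tesla-class hardware.)

def scale_and_shift_loop(xs, a, b):
    # Scalar version: processes one element at a time.
    return [a * x + b for x in xs]

def scale_and_shift_vectorized(xs, a, b):
    # Vectorized version: the whole array in one operation,
    # the same pattern a GPU kernel applies across its threads.
    return a * np.asarray(xs) + b

data = [1.0, 2.0, 3.0, 4.0]
assert scale_and_shift_loop(data, 2.0, 1.0) == [3.0, 5.0, 7.0, 9.0]
assert np.allclose(scale_and_shift_vectorized(data, 2.0, 1.0),
                   [3.0, 5.0, 7.0, 9.0])
```

The two functions compute identical results; the difference is that the vectorized form expresses the whole computation as a single bulk operation, which is the shape of workload GPUs accelerate well.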
As part of the two companies’ collaboration, Tencent Cloud intends to offer customers a wide range of cloud products based on NVIDIA’s AI computing platforms. This will include GPU cloud servers incorporating NVIDIA Tesla P100, P40 and M40 GPU accelerators and NVIDIA deep learning software.
Tencent Cloud launched GPU servers based on NVIDIA Tesla M40 GPUs and NVIDIA deep learning software in December.
During the first half of this year, these cloud servers will integrate up to eight GPU accelerators, providing users with superior performance for deep learning and other algorithms that involve ultra-high data volumes and ultra-large-scale systems.