If Nvidia’s Volta GPU is 132% faster than Pascal, imagine what the new Titan’s like…

Imagine what the next-gen Titan could be capable of...

The lucky people sporting shiny new Nvidia Volta machines have started trolling everyone with the performance of their $150,000 graphics arrays. We still don't know if they can play Crysis, however…

Can’t wait for Volta? Here’s our pick of the best graphics cards to buy today.

The only people who were really satisfied with the performance of AMD's RX Vega GPU were the green-tinged few at Nvidia. It meant they could stick with their original plans for a consumer Volta graphics card release sometime in 2018. Y'see, there were some noises ahead of the launch of the RX Vega 64 that they might do a 'new Intel' and pull in the release if AMD gave them serious competition at the high end of the GPU stack.

Instead they’ve done an ‘old Intel’ and decided what they’ve got out right now – the full Pascal 10-series range – is more than enough to cope with the vagaries of Vega’s gaming prowess. Though they may still drop a GTX 1070 Ti to really stick the boot in.

It’s for that reason we still know next to nothing about how Nvidia’s Volta GPUs will perform when we finally see consumer versions distilling the graphical essence of the GV100 silicon down into more manageable, and more affordable (to purchase and produce), forms next year.  What we do now know, however, is what the tippy-top of Nvidia’s Volta range can do against the last generation of the green team’s GPU tech in the professional world. 

And, holy crap, are they fast.

Nvidia Volta supercomputer

The Nvidia DGX-1 is their six-figure workstation with eight of the new Tesla V100 cards installed in it. The first benchmark scores we’ve seen, versus a similarly specced Tesla P100 rig, show the new architecture running 132% faster. To be fair that is with both systems running Geekbench tests via the CUDA API in a Linux environment, and not Deus Ex: Mankind Divided at 4K in Windows.  

So, the Geekbench performance of the Nvidia DGX-1 versus the equivalent last-gen machine may not have a whole lot of direct relevance when it comes to what Volta will look like in our machines, but that doesn’t stop me from wishing Nvidia could somehow manage to deliver the same frame rate boost in gaming as they have with the compute performance.

Just imagine Hitman running at 176fps at 4K… those 165Hz 4K G-Sync HDR monitors would actually become relevant. Okay, so we can pretty much guarantee the consumer Volta GPUs aren't going to deliver such gaming highs, but hey, a guy can dream, can't he?
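For what it's worth, the arithmetic behind that daydream is simple. A quick sketch, assuming a hypothetical ~76fps Pascal baseline for Hitman at 4K (an illustrative figure, not a measurement) and naively applying the Geekbench compute uplift to frame rates:

```python
# Back-of-the-envelope: what "132% faster" would mean for frame rates.
# Assumes a hypothetical ~76 fps baseline for Hitman at 4K on Pascal;
# the 2.32x multiplier comes from the 132% Geekbench uplift.
PASCAL_FPS = 76          # assumed baseline, not a measured figure
SPEEDUP = 1.0 + 1.32     # "132% faster" => 2.32x overall

volta_fps = PASCAL_FPS * SPEEDUP
print(f"Hypothetical Volta frame rate: {volta_fps:.0f} fps")  # prints 176 fps
```

Of course, gaming workloads never scale with raw compute like this, which is exactly why the daydream stays a daydream.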

NVIDIA Rumored To Launch Pascal GeForce GTX 1070 Ti Graphics Card

So NVIDIA GeForce has been a silent bunch since the launch of the highly successful GeForce GTX 1080 Ti, but rumor has it that a new card may possibly be in the works. Spotted in Chinese sources and picked up by Videocardz, this new card is rumored to be known as the GeForce GTX 1070 Ti.

NVIDIA Rumored To Launch a Pascal Based GeForce GTX 1070 Ti Graphics Card With 8 GB G5 Memory

First of all, I would like to state that there's no official confirmation of any sort regarding this SKU, so all of the details are rumors at best. The details allege that NVIDIA is working on a brand new Pascal graphics card. The card will be known as the GeForce GTX 1070 Ti and will feature the Pascal GP104 silicon.

Technically, this card will be similar to the GP104 based GTX 1080 and GTX 1070. The differences will lie in the configuration of the chip itself. It is stated that the GTX 1070 Ti will come with 2304 CUDA Cores and 8 GB of GDDR5 memory along with a 256-bit bus interface. This looks to be an interesting graphics card as it will be sandwiched between the GTX 1070 and GTX 1080.

To be honest, that gap isn't too huge to begin with. Also worth noting is that the GeForce GTX 1080 is retailing for $499 US while the GTX 1070 has an official MSRP of $349 US. The only price points I can think of in between them are $399-$449 US. The former is too close to the GTX 1070 while the latter is too close to the GTX 1080. And let's not even talk about the custom GTX 1070 models which already fall in the same price segment.

So maybe we are looking at a price drop on the GTX 1070 to around $299 US and a sudden introduction of the GTX 1070 Ti after that. I know it sounds really weird, but the only reason this rumor was worth a post was a picture someone took with their phone showing what seems to be ASUS's GTX 1070 Ti STRIX OC (8 GB) model. Whether that's true or not is yet to be confirmed, but we will check with our sources to see if they have more details on the card. And no, Volta isn't coming this year.


NVIDIA Allegedly Readying Headless Pascal Crypto-Mining GPUs For Ethereum, Bitcoin And Others

The so-called cryptocurrency market is booming, or at least parts of it are. One of the most popular digital currencies is Bitcoin. It recently topped the $3,000 mark for the first time since its debut in 2009, and had you owned just $27 worth at that time, you'd be sitting on $15,000,000. While volatile, the price just keeps going up over time. This has kept the mining market alive, and NVIDIA reportedly plans to capitalize on it by releasing specialized Pascal cards tuned specifically for digging up cryptocurrencies.

News of NVIDIA’s plans can be found all over the web, though most reports trace back to a couple of sources. One is a tech site called Goldfries that claims it received word that a dedicated mining GPU is in the works and that it will arrive around the middle of next month. These specialized graphics cards will not have any display outputs because they’re not meant for gaming, graphics work, or anything that requires a monitor.

NVIDIA GeForce GTX 1060
The mining version of this card will not have any display outputs.

At least one of the GPUs is said to be based on NVIDIA's GeForce GTX 1060 graphics card with 6GB of GDDR5 memory. It will stick to reference clocks with base and boost clockspeeds set at 1,506MHz and 1,708MHz, respectively, though add-in board partners may opt to overclock. The card will take up two expansion slots, just like a regular GeForce GTX 1060, and will be backed by a three-month warranty. Pricing is expected to be around $200, which is around $50 less than the regular variant.

Here is a look at that card:

As you can see, it doesn’t look like the gaming variant. It features a custom PCB and has a passive cooling solution that consists of a finned aluminum block. There is no active blower on this card; keeping it cool will require adequate airflow in a chassis and/or an open-air configuration.

It's also said that NVIDIA is working on a mining card based on its more powerful GeForce GTX 1080 GPU. NVIDIA is also sticking to reference clocks with this one—1,607MHz base and 1,733MHz boost—though it remains to be seen if its AIB partners offer overclocked models. Either way, these cards will be optimized for cryptocurrency mining. Pricing will start at $350, versus $499 for gaming models.

Whether or not some variants come with active cooling is not yet known.  Either way, expect board partners such as MSI, Inno3D, and others to get in on the action with specialized cards for miners.

GPU Mining System

One of the other main sources of leaks on this subject is Videocardz, a rumor site that is hit or miss when it comes to this sort of thing. The site claims that dedicated mining systems will be offered with specialized graphics cards inside. One of them will feature a mobile Celeron processor paired with 4GB of DDR3 RAM and a 64GB mSATA solid state drive. Everything will be housed in a sturdy enclosure that resists dust and has plenty of airflow to keep temperatures in check.

These cards and rigs will be used to mine Bitcoin, Ethereum, and other similar cryptocurrencies. If you haven't heard of it before, Ethereum is a newer type of digital currency that is attracting considerable attention among miners as of late. Ethereum's value has skyrocketed since being introduced two years ago and it now has a market cap of more than $34.5 billion, compared to Bitcoin's market cap of $42.4 billion.

NVIDIA Pascal GPUs For Cryptocurrency Mining Price, Specs Detailed

Earlier this week, it was reported that NVIDIA could be preparing cryptocurrency-specific GPUs that will be used for mining purposes. The NVIDIA Pascal architecture based graphics cards have now been further detailed.

NVIDIA Pascal Based GP106 and GP104 GPUs For Cryptocurrency Mining Detailed – Specifications, Prices and Mining Performance Listed

NVIDIA has prepared two GPUs that are specifically aimed at cryptocurrency mining. Both designs are based on existing Pascal GPUs but are tuned towards mining, so they deliver better efficiency at crypto mining compared to regular variants. The GPUs include a GP104-100 and a GP106-100 card that will be offered by NVIDIA's AIB partners.

It should be noted that all cards will ship with a 3-month (90-day) warranty. The cards are planned for launch on 19th June, although we can expect different dates from different partners. It is stated that the P104-100 delivers 30% better performance per watt compared to the GTX 1060 3 GB model, while the P106-100 delivers 10% better performance per watt than that same card. All cards will ship without any display connectivity.

The specifications of these models can be found below:

NVIDIA P104-100 “GTX 1080” With 8 GB GDDR5X – Starting at $350 US

The NVIDIA P104-100 model utilizes the same design as the NVIDIA GeForce GTX 1080. It delivers much higher efficiency and is fine-tuned for crypto mining purposes. While the cards will ship with different clock speeds, the base model is tuned to operate at 1607 MHz base, 1733 MHz boost, and 10 Gbps GDDR5X speeds across a 256-bit bus.

Power will be provided by a single 8-pin connector and TDP is rated at 180W. The base models will be shipping with a price of just $350 US while the one that is listed by Inno3D (P6D-N104-1SDN P104-100 Twin X2 8GB GDDR5X) has a price of $370 US which is much lower than the $499 US price of regular gaming variants.

It is stated that NVIDIA P104-100 cards will be able to hit mining rates of up to 60 MH/s with some modified VBIOS and further chip tuning but that remains to be seen.

NVIDIA P106-100 “GTX 1060” With 6 GB GDDR5 – Starting at $200 US

The NVIDIA P106-100 model utilizes the same design as the NVIDIA GeForce GTX 1060. It delivers much higher efficiency and is fine-tuned for crypto mining purposes. While the cards will ship with different clock speeds, the base model is tuned to operate at 1506 MHz base, 1708 MHz boost, and 8.0 Gbps GDDR5 speeds across a 192-bit bus.

Power will be provided by a single 6-pin connector and TDP is rated at 120W. The base models will ship with a price of just $200 US which is lower than the $249 US price of the GeForce GTX 1060 6 GB gaming models. The one listed by Inno3D (N5G-N106-4SDN P106-100 Twin X2 6GB GDDR5) has a price of $235 US.

Colorful GP106-100 (Image Credits: Videocardz):

NVIDIA GeForce GTX 1070 Memory Overclock Works Wonders in Crypto Mining

It looks like NVIDIA is really getting ready to take a bite of the cryptocurrency mining craze. We have also detailed a full mining rig which will be available later this month, featuring multiple Pascal GPU based cards. Furthermore, you can check out a very detailed article by LegitReviews, who managed to get a great hashrate and much lower power draw on the GeForce GTX 1070, compared to other crypto-tuned cards, after small tweaks. The card delivered around 27 MH/s in stock configuration.

The price of 1 Ether (ETH) is currently $383.43, so with a hashing power of 27 MH/s you are looking at a profit of around $185 a month ($2,255 a year) in stock form. If you pay $369.99 for a GeForce GTX 1070 and ETH pricing stays at this level, you'll have the card paid off right at 60 days. via LegitReviews

A simple tuning of the GeForce GTX 1070 by overclocking the memory boosted the hash rate to 32.1 MH/s which is very impressive.

In stock form, the NVIDIA GeForce GTX 1070 Founders Edition had a hashing power of 27 MH/s. When you plug that into a profit calculator you are looking at a profit of around $185 a month or $2,255 a year. By overclocking the memory and lowering the power target we were able to improve the hashrate, lower power consumption, reduce the GPU temperature and have the fans running at a lower speed for a much quieter system. This improved hashrate means that we have the potential to make around $218 a month or $2,666 per year. That is an extra $411 a year and it means you'll have your card paid off in about 51 days instead of 60 days! Buy two of these cards and place them in this system and you are looking at making over $5,000 a year in extra income! via LegitReviews
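That payback arithmetic is easy to sanity-check with a short script. This is an illustrative sketch only: it assumes profit scales linearly with hash rate and that the ETH price and network difficulty stay flat (they won't), using the card price and stock figures from the quote above.

```python
# Rough ETH mining payback estimate, mirroring LegitReviews' numbers.
# Assumptions (illustrative only): profit scales linearly with hash
# rate; ETH price and network difficulty stay constant.
CARD_PRICE = 369.99         # GTX 1070 street price (USD)
STOCK_HASHRATE = 27.0       # MH/s, stock Founders Edition
STOCK_PROFIT_MONTH = 185.0  # USD/month at 27 MH/s (LegitReviews' figure)

def payback_days(hashrate_mhs: float) -> float:
    """Days to recoup the card price at a given hash rate."""
    profit_per_day = (STOCK_PROFIT_MONTH / 30.0) * (hashrate_mhs / STOCK_HASHRATE)
    return CARD_PRICE / profit_per_day

print(f"Stock  (27.0 MH/s): {payback_days(27.0):.0f} days")  # ~60 days
print(f"Mem OC (32.1 MH/s): {payback_days(32.1):.0f} days")
```

The linear-scaling estimate lands within about a day of LegitReviews' 51-day figure for the overclocked card; the small gap comes from their calculator's rounding of the monthly profit.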

We should expect even more details being shared by NVIDIA and their partners in the weeks ahead.


Entry-Level Pascal for Laptops, Just in Time for Computex

This morning NVIDIA has taken the wraps off of a new video card for laptops, the GeForce MX150. Aimed at the entry-level market for discrete GPUs – that is, laptops that need performance only a bit above an integrated GPU – the MX150 is NVIDIA’s Pascal-based successor to the previous 930M/940M series of laptop adapters that have been in computers over the last couple of years. Today’s reveal is undoubtedly tied to next week’s Computex trade show, so we should expect to see a number of laptops using the new adapter announced in the coming days.

From a technical perspective, details on the GeForce MX150 are very limited. Traditionally NVIDIA does not publish much in the way of details on their low-end laptop parts, and unfortunately the MX150’s launch isn’t any different. We’re still in the process of shaking down NVIDIA for more information, but what usually happens in these cases is that these low-end products don’t have strictly defined specifications. At a minimum, OEMs are allowed to dial in clockspeeds to meet their TDP and performance needs. However in prior generations we’ve also seen NVIDIA and OEMs use multiple GPUs under the same product name – mixing in GM107 and GM108, for example – so there’s also a strong possibility that will happen here as well.

Officially, all NVIDIA says about the new video card is that it uses GDDR5 and that it offers around 33% better performance than the GeForce 940MX, a (typically) GM108-based product. Based on the market segment and NVIDIA's recent activities in the desktop space, the "baseline" MX150 is without a doubt GP108, NVIDIA's entry-level GPU that was just recently launched in the GeForce GT 1030 for desktops. Information about this chip is limited, but here's my best guess for baseline MX150 specifications.

Best Guess: NVIDIA Laptop Video Card Specification Comparison

                        Typical MX150    Typical 940MX
CUDA Cores              384?             384
ROPs                    16               8
Boost Clock             Variable         Variable
Memory Type             GDDR5            GDDR5/DDR3
Memory Bus Width        64-bit?          64-bit
VRAM                    <=2GB            <=2GB
GPU                     GP108?           GM108
Manufacturing Process   TSMC 16nm        TSMC 28nm
Launch Date             05/26/2017       03/2016

The limited 33% performance improvement over the existing 940MX comes as a bit of a surprise, but it makes sense within the context of the specifications. Relative to a GDDR5-equipped 940MX, the MX150 does not have a significant specification advantage, with the same number of CUDA cores and similar memory bandwidth. The one stand-out here is ROP throughput, which doubles thanks to GP108's higher ROP count.

Ultimately what this means is that most of MX150’s performance advantage over the 940MX comes from clockspeed improvements, with a smaller uptick from architectural gains. The counterpoint to that is that these are entry-level laptop parts that are frequently going to be paired with 15W Intel U-series CPUs, so vendors are going to play it safe on clockspeeds in order to maximize energy efficiency. NVIDIA does advertise these GPUs as offering multiple times the performance of Intel’s HD 620 iGPU, however given the higher power consumption of the GPU, I’m more curious how things would compare against Intel’s 28W Iris Plus 650 configurations.

Owing to OEM configurability and general NVIDIA secrecy, NVIDIA does not publish official TDPs for these parts. But it’s interesting to note that while performance has only gone up 33%, NVIDIA is claiming that power efficiency/perf-per-watt has tripled. This strongly implies that NVIDIA’s baseline specifications for the product are favoring TDP over significant clockspeed gains, so I’m very interested to see what the real-world TDPs are going to be like. 940MX was a 20-30W part (depending on who you asked and what GPU they used), so with the jump from 28nm to 16nm, NVIDIA should have a good bit of room for drawing down TDPs. Though ultimately what this may mean is that MX150 is closer to a 930M(X) replacement than a 940M(X) replacement if we’re framing things in terms of power consumption.
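NVIDIA's two claims can be combined to estimate that implied TDP. A rough sketch, assuming the marketing figures (1.33x performance, 3x performance-per-watt) hold and the 940MX's reported 20-30W range is accurate:

```python
# Implied MX150 power draw from NVIDIA's own marketing claims.
# power_ratio = (perf gain) / (perf-per-watt gain); purely inferred,
# not a published TDP.
PERF_GAIN = 1.33           # "33% better performance" than the 940MX
PERF_PER_WATT_GAIN = 3.0   # "tripled" power efficiency claim

power_ratio = PERF_GAIN / PERF_PER_WATT_GAIN  # ~0.44x the 940MX's power
for tdp_940mx in (20, 30):  # reported 940MX TDP range, in watts
    implied = tdp_940mx * power_ratio
    print(f"940MX at {tdp_940mx}W -> implied MX150 TDP ~{implied:.0f}W")
```

A ~9-13W result would indeed put the MX150 closer to 930M-class power budgets, which is consistent with the 930M(X)-replacement framing above.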

Otherwise, as a GP108 part this is the Pascal architecture we've all come to know and love. Relative to NVIDIA's desktop parts, this is actually a more substantial upgrade, as the previous 930M/940M parts were based on NVIDIA's Maxwell 1-generation GM108 GPU, and not the newer Maxwell 2 GM2xx series. The difference being that these earlier parts lacked the DirectX feature level 12_1 support, HDMI 2.0, and low-level performance optimizations (e.g. newer color compression) that we better know the Maxwell family for. So while the MX150 isn't meant for serious gaming laptops, it has a much richer feature set to draw from for both rendering and media tasks. CUDA coders will likely also appreciate the fact that the newer part will offer CUDA compute capabilities much closer to NVIDIA's current-generation server hardware, such as fine-grained preemption.

Finally, like its predecessor, expect to see the GeForce MX150 frequently paired up with Intel’s U-series CPUs in ultrabooks. While this SKU isn’t strictly limited to slim form factors – and someone will probably put it into a larger device for good measure – it’s definitely how NVIDIA is positioning the part, as the GTX 1050 series is for larger devices. Also expect to see most (if not all) MX150 parts running in Optimus mode, which continues to be a strong selling point for encouraging OEMs to include a dGPU.

With Computex kicking off next week, we should see a flurry of laptop announcements. Though not all of the relevant laptop announcements have gone out yet, NVIDIA's announcement names Acer, Asus, Clevo, MSI, and HP as laptop vendors who will all be shipping MX150-equipped laptops. NVIDIA and their various partners will in turn hit the ground running here, as NVIDIA's announcement notes that MX150-equipped laptops will begin shipping next month.

Hands-on: eGPU enclosure + GTX 1080 Ti w/ MacBook Pro – Pascal works w/ macOS, but truly shines on Windows [Video]

After last week's exciting release of Nvidia's beta Pascal drivers for Mac, I was looking forward to trying the top-of-the-line consumer GPU with my MacBook Pro. That GPU is none other than the formidable GeForce GTX 1080 Ti — a $700 card with 11 Gbps GDDR5X memory and an 11 GB frame buffer. Needless to say, this card is one that will interest those looking to push their games to the next level.

I took the time to install my EVGA GTX 1080 Ti FE inside of my Akitio Node external GPU enclosure. After connecting the unit to my 2016 MacBook Pro via a Thunderbolt 3 cable, all it took was a simple Terminal script and a reboot to get the unit working with a (required in macOS) external display.

Nvidia’s drivers are still in beta, and from my hands-on time, the experience is far from perfect in macOS. That said, you can most certainly see the potential and performance differences between the 13-inch MacBook Pro’s integrated Intel Iris Graphics 550 GPU, and the beastly 1080 Ti. As expected, it’s also a lot faster than the GTX 1050 Ti that I tested last week.

If you’re looking to truly experience the power of such a card with the MacBook Pro, however, you’ll need to step into the Windows world, and run a Boot Camp installation. The GTX 1080 Ti + Windows turns the MacBook Pro into an insanely powerful and flexible gaming machine with just a single Thunderbolt 3 cable. Watch our hands-on video walkthrough as we explain.

Pascal in macOS

Here’s how I was able to get the GTX 1080 Ti working with my 2016 MacBook Pro. This same technique will work with any Nvidia GPU with Pascal architecture, such as the GTX 1080, 1070, 1060, 1050, etc.

Step 1: Install the GTX 1080 Ti or other Pascal graphics card inside of the Akitio Node, and connect the 6+8 pin power connectors.

Step 2: Boot into macOS.

Step 3: Connect the Thunderbolt 3 cable from the Node to one of the Thunderbolt 3 ports on the 2016 MacBook Pro. If you’re using a 13-inch model with four Thunderbolt 3 ports, Apple recommends connecting to the ports on the left side of the machine.

Video walkthrough

Subscribe to 9to5Mac on YouTube for more hands-on videos

Step 4: Connect to an external display from your graphics card via HDMI or DisplayPort.

Step 5: Connect the power to the Akitio Node, and turn it on.

Step 6: Open a Terminal window and paste in the following to download, install and configure the beta Nvidia drivers:

curl -o ~/Desktop/automate-eGPU.sh https://raw.githubusercontent.com/goalque/automate-eGPU/master/automate-eGPU.sh && chmod +x ~/Desktop/automate-eGPU.sh && cd ~/Desktop && sudo ./automate-eGPU.sh

Enter your password when requested and press Return.

Follow the instructions presented in Terminal, and reboot your Mac when requested. After rebooting, you will see macOS on the externally connected display. You should also see Nvidia’s Driver Manager app running in the menu bar, and available in System Preferences. Special thanks to goalque for providing such a simple-to-use script.

Here is what you will see when you run the Automate-eGPU script

macOS Benchmarks

We ran a few benchmarks using Unigine Heaven, Unigine Valley, and Cinebench, and you can clearly see the performance difference between the integrated Intel GPU and the discrete Nvidia GPU. Benches were performed at 1080p resolution on an external display.

Even if you have a relatively weak MacBook such as the entry-level model without the Touch Bar, you can get big performance gains with this setup when it comes to gaming. Of course, a more powerful quad-core MacBook will perform even better, but all eligible models stand to reap serious gaming benefits.

But those performance gains are largely stymied by the bugginess and general unreliability of such a setup on macOS. Remember, these are just beta drivers, and it’s still very much early in the game.

Many of the games that I tried, via the Mac App Store or via a service like Steam, experienced issues with bugs or crashing. It's clear that the potential is there waiting to be tapped into, but it's not quite ready for prime time. Therefore, it would be difficult for me to recommend investing in such a setup if you're only interested in running macOS…

Windows is where it’s at

And that's where Windows comes in. Yes, Pascal is now somewhat compatible with macOS, but in reality, you're going to want to run Windows on your MacBook Pro if you want to be able to justify such an investment. By going this route, you're essentially turning your MacBook Pro into a bona fide high-powered Windows gaming machine. And let me tell you folks, it flies.

The great thing about running Windows with an eGPU setup is that, unlike in macOS, the GPU can drive the MacBook Pro’s internal display, and you’ll find stable drivers that work well for most games. That means that you can utilize an extra long Thunderbolt 3 cable for added flexibility while enjoying the latest games.

To learn how to install Windows 10 on your MacBook Pro, be sure to watch and follow our full hands-on Boot Camp guide for Windows 10. The video has been viewed over 200,000 times now, and lots of people have been surprised by how easy it is to run Windows on a MacBook Pro.

After Windows is installed on your MacBook Pro, simply connect the eGPU via a Thunderbolt 3 cable, and install the GPU drivers.

Please note that you’ll need to plug in the Thunderbolt 3 cable after the Windows logo appears as it starts the boot process.

Windows gaming and benchmarks

We saw how much the MacBook Pro improved in benchmarks when using the GTX 1080 Ti, but how about on Windows? As you’ll see, the Windows environment is where such an eGPU setup thrives.

Since Windows allows me to use the MacBook Pro’s internal display, I included benches for that as well. Notice the huge performance gap between Windows and macOS.

But benchmarks only tell a portion of the story. The games heavily benefit from the added power of the eGPU.

Forza Horizon 3 running on the MacBook Pro's internal display

I was able to run graphics-intensive titles like Forza Horizon 3 at maximum settings on my external display at 4K resolution. The game looks absolutely incredible, and stays locked in at 30 FPS for a smooth experience, but can easily fluctuate around 40-45 FPS if you disable the FPS lock. Trying to pull off a feat like that using the MacBook Pro's Intel Iris 550 Graphics would be utterly laughable. In fact, the game won't even run at all with the integrated GPU.

Don’t even try to let the Intel Iris 550 do the heavy lifting

Other games, like Rocket League, play at maximum settings on my 4K external display while locked at a silky smooth 60 FPS. My experience while using the eGPU in macOS was nothing like that; in fact, Rocket League could barely hold a consistent 30 FPS while running at 4K.


eGPUs are here to stay. They afford a great deal of on-demand power while allowing you to keep a light and nimble machine like the 13-inch 2016 MacBook Pro. One of the biggest remaining hurdles is the fact that eGPU enclosures are still very hard to come by. The Akitio Node is just now becoming more readily available, while other eGPU solutions are backordered, or still in the works.

As mentioned, you can use any Pascal-based card in the Akitio Node and have it function with macOS. That means that you don’t necessarily have to spend $700 on the 1080 Ti. More price-conscious cards like the GTX 1060, 1070, and 1080 can still provide ample gaming performance gains.

Having Pascal support on macOS is nice, but the real fun and potential is realized by running Windows via Boot Camp. If you’re a gamer with a 2016 MacBook Pro, then you are in possession of a very flexible machine that can run macOS for day-to-day work, and Windows for intense gaming sessions when connected to any Pascal-powered eGPU.

Would you consider using an eGPU with your MacBook Pro given the obvious gaming benefits? Sound off with your opinions down below.

More eGPU-related coverage

Baidu Advances AI in the Cloud with Latest NVIDIA Pascal GPUs

SANTA CLARA, CA–(Marketwired – Apr 17, 2017) – NVIDIA (NASDAQ: NVDA) today announced that its deep learning platform is now available as part of Baidu Cloud's deep learning service, giving enterprise customers instant access to the world's most adopted AI tools.

The new Baidu Cloud offers the latest GPU computing technology, including Pascal™ architecture-based NVIDIA® Tesla® P40 GPUs and NVIDIA deep learning software. It provides both training and inference acceleration for open-source deep learning frameworks, such as TensorFlow and PaddlePaddle.

“Baidu and NVIDIA are long-time partners in advancing the state of the art in AI,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Baidu understands that enterprises need GPU computing to process the massive volumes of data needed for deep learning. Through Baidu Cloud, companies can quickly convert data into insights that lead to breakthrough products and services.”

“Our partnership with NVIDIA has long provided Baidu with a competitive advantage,” said Shiming Yin, vice president and general manager of Baidu Cloud Computing. “Baidu Cloud Service powered by NVIDIA’s deep learning software and Pascal GPUs will help our customers accelerate their deep learning training and inference, resulting in faster time to market for a new generation of intelligent products and applications.”

World's Most Adopted AI Platform
NVIDIA's deep learning platform is the world's most adopted platform for building AI services. All key deep learning frameworks are accelerated on NVIDIA's platform, which is available from leading cloud service providers worldwide, including Alibaba, Amazon, Google, IBM and Microsoft. Organizations ranging from startups to leading multinationals are taking advantage of GPUs in the cloud to achieve faster results without massive capital expenditures or the complexity of managing the infrastructure.

Organizations are increasingly turning to GPU computing to develop advanced applications in areas such as natural language processing, traffic analysis, intelligent customer service, personalized recommendations and understanding video.

The massively efficient parallel processing capabilities of GPUs make NVIDIA’s platform highly effective at accelerating a host of other data-intensive workloads, from AI and deep learning to advanced analytics to high performance computing.

Baidu Cloud’s deep learning service is available today.

Keep Current on NVIDIA
Subscribe to the NVIDIA blog, follow us on Facebook, Google+, Twitter, LinkedIn and Instagram, and view NVIDIA videos on YouTube and images on Flickr.

About NVIDIA
NVIDIA's (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Certain statements in this press release including, but not limited to, statements as to: the impact, benefits and availability of NVIDIA's deep learning platform and Baidu Cloud's deep learning service; and organizations taking advantage of GPUs in the cloud and increasingly turning to GPU computing are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing products and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission, or SEC, including its Form 10-K for the fiscal period ended January 29, 2017. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2017 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, Pascal, and Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.