TL;DR In the second post in the PyTorch for Computer Vision series, we try to understand the role a GPU plays in the deep learning pipeline, whether we need one in ours, and which graphics card to buy if you don’t have one already (note: you don’t have to buy one). Not having a GPU is ok for the first few experiments, and we also discuss a few cheap/free cloud alternatives.

If you don’t have a GPU (and don’t plan to buy one), you can skip to the next post (coming soon), where we discuss setting up the Python environment and necessary libraries.

Here is what we’ll be covering in this post:

  • Part 1: Why are GPUs useful for Deep Learning? (and the tools involved)
  • Part 2: Do you need to buy a GPU for Deep Learning?
  • Part 3: What GPUs are good for Deep Learning?

Part 1: Why are GPUs useful for Deep Learning?

GPUs (Graphics Processing Units) were originally designed to render game visuals faster. Any frame inside a game consists of hundreds, if not thousands, of elements that need to interact with each other and with their graphical environment, which might itself be changing. All this requires a lot of computation, and GPUs were built to keep up with the increasing amount of computation games demanded with each passing year.

GPUs turned out to be really good at matrix operations (like matrix multiplication), since most of the geometric transformations needed to render a game screen are matrix operations. Eventually, people started using GPUs for scientific computing as well.

We haven’t talked about Neural Networks or Convolutional Neural Networks yet, but as you’ll see, training those models involves a lot of matrix operations. Technically, we need to do a lot of Tensor operations (roughly, Tensors are the n-dimensional generalisation of matrices), but those can be expressed in terms of matrix operations.
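To make this concrete, here’s a minimal sketch (assuming you have PyTorch installed) of the kind of operation we’re talking about: multiplying two large matrices, first on the CPU and then, if one is available, on the GPU. The exact numbers will depend on your hardware, but the gap is usually striking.

```python
import time

import torch

# Two large random matrices; rendering and deep learning alike boil down
# to lots of operations like multiplying these together.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b                              # matrix multiplication on the CPU
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # move the data to the GPU
    torch.cuda.synchronize()           # wait for the copy to finish
    start = time.time()
    _ = a_gpu @ b_gpu                  # same operation, now on the GPU
    torch.cuda.synchronize()           # CUDA calls are asynchronous, so wait
    print(f"GPU: {time.time() - start:.3f}s")
```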


What are CUDA and cuDNN?

If you have used a graphics card for gaming, you will have installed the necessary drivers. These drivers implement the functions and primitives through which we exchange data and computations with the GPU, and they are geared towards graphics rendering.

We need a similar layer for general purpose computing on graphics cards, and that is what CUDA provides. It defines and implements the API surface necessary to talk to GPUs for general purpose (scientific) computing. In this context, GPUs are rebranded as GPGPUs (General Purpose Graphics Processing Units).

In the words of Nvidia, “CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.”

cuDNN (CUDA Deep Neural Network library) is a Deep Learning library written on top of CUDA that provides very fast, optimised implementations of the building blocks of a Convolutional Neural Network (the kind of network heavily used in Computer Vision). Most deep learning libraries, including PyTorch, can be accelerated using cuDNN, so we’ll be installing it on our workstation.
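We’ll go through the actual installation later in the series, but once everything is in place, a quick sanity check from PyTorch looks something like this (a small sketch; the exact versions printed will depend on your setup):

```python
import torch

print("CUDA available:", torch.cuda.is_available())
print("cuDNN enabled: ", torch.backends.cudnn.enabled)

if torch.cuda.is_available():
    print("cuDNN version:", torch.backends.cudnn.version())
    print("GPU:          ", torch.cuda.get_device_name(0))
```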

Part 2: Do you need to buy a GPU for Deep Learning?

Do you need a GPU for Deep Learning?

Typically, most models being built for production will require some kind of GPGPU computing power for model training. It is easy to get started with toy datasets (like MNIST) without a GPU, but as soon as you move to a larger dataset or a sophisticated network (of your own or pre-trained), a GPU can decide whether your model will be ready in a few days or a few months! That’s not an exaggeration. Depending on your hardware, a 10x to 20x speed-up (or even more) is entirely possible.

We are going to need a GPU

But, do you need to buy a GPU for Deep Learning?

It’s great if you have a GPU for training Deep Learning models. If not, you should still be able to keep up with the first half of this series.

You don’t have to buy a GPU to go through the more compute-heavy experiments though. It can be very cheap (and at times, free) to use a cloud GPU instance to run short experiments. Cloud instances also tend to come with more powerful GPUs than what you would typically buy for yourself.

Later on, we’ll see how to set up and use cheap cloud instances for training the models. Google Colab provides free GPU instances for the time being, and if that’s still the case by the time we get to more intensive experiments with our models, then it shouldn’t cost you a dime! We’ll also be covering a few other cloud solutions apart from Google Colab later on. Here are a few you can consider:

  1. AWS p3 instances are good for deep learning, but a bit pricey. They are great if you can put together a workflow using spot instances (we’ll cover it).
  2. FloydHub used to be good. Prices were half those of similar AWS instances and it provided great tooling for a Deep Learning workflow. But the prices have gone up recently and I no longer use it. Keep an eye out for the pricing.
  3. PaperSpace Gradient is also great (it provides tooling around the workflow) and has good pricing at the moment.
  4. Google TPUv2 is an interesting development. Remember how I mentioned earlier that GPUs are good at matrix operations, and that the tensor operations we need for deep learning can be expressed as matrix operations? Well, TPUs (Tensor Processing Units) can do tensor operations directly. PyTorch doesn’t support TPUs yet, but the effort is ongoing and we’ll keep an eye out for it.
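Whichever option you pick, it’s worth checking what card (and how much memory) the instance actually gives you before launching a long experiment. Here’s a rough sketch, assuming PyTorch is available on the instance:

```python
import torch

if torch.cuda.is_available():
    # Report the card the cloud provider attached to this instance.
    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    print(f"GPU: {props.name} with {props.total_memory / 1024**3:.1f} GB of memory")
else:
    print("No GPU attached to this instance; check the runtime/instance settings.")
```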

Part 3: What GPUs are good for Deep Learning?

In case you are considering buying a card on your own, here is a quick overview of a few popular cards and my recommendations.

Choose your graphics card carefully

  1. Nvidia (CUDA) or AMD (OpenCL)?: Most deep learning libraries support only CUDA for the time being. Go with Nvidia cards.
  2. GTX 980ti or GTX 1080ti?: The 10xx series is better than the 9xx series in terms of performance, power consumption and value for money.
  3. Titan XP?: This card provides marginally better performance than the next best card (1080ti), but for a much larger price. Skip this.
  4. GTX 1080ti?: This is one of the best value-for-money cards at the top end. If you have money to spare, go for this. Also, it has 11 GB of memory, which will be really useful when you want to load really large networks into memory for transfer learning.
  5. GTX 1080, 1070ti, 1070?: These are all decent choices and provide value for money. They all come with 8 GB of memory and are suitable for most computer vision tasks. They are good enough for competing on Kaggle.
  6. GTX 1060: This is a value-for-money card that comes with 6 GB of memory, which is not going to be enough to load a lot of the larger networks into memory for further training. VGG16 and VGG19 will still work on this card though (we’ll cover this later; see the sketch right after this list).
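To get a feel for why memory matters, here’s a rough sketch (assuming torchvision is installed) of loading a pretrained VGG16 and moving it onto the GPU. The weights alone take up a sizeable chunk of memory before you even add batches of images and gradients.

```python
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True)  # downloads the weights on first use
if torch.cuda.is_available():
    model = model.cuda()               # the parameters now live in GPU memory

n_params = sum(p.numel() for p in model.parameters())
print(f"VGG16 has {n_params / 1e6:.0f}M parameters "
      f"(~{n_params * 4 / 1024**3:.1f} GB as 32-bit floats)")
```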

Anything below the GTX 1060 is ok for playing around, but not for serious work. Also, if you can get a bargain on one, a card from the GTX 9xx series should also be fine. A 4 GB card is also fine if you are just toying with the idea of Deep Learning. Just make sure you don’t buy a GTX 970. A 6 GB/4 GB card will be ok for NLP and other kinds of deep learning modelling tasks, but it’ll seriously limit the kind of experiments you can do in Computer Vision.

If you have more money, you can also consider a Tesla K80 or a Tesla V100, but I consider them out of reach of most individual deep learning practitioners and enthusiasts.

I don't spend my money, but when I do, I dump it all on GPUs

Me in my first year of Deep Learning

What next?

In the next post, we’ll learn how to set up a graphics card with CUDA and cuDNN for deep learning. It used to be a painful process, but it’s pretty smooth now. I went through those steps on a brand new rig myself yesterday, and they worked without a hitch.