Tuesday, April 26, 2022

Laptop GPU vs Google Colaboratory

 Laptop RTX 3060 GPU vs Google Colaboratory comparison

When you are starting out on the path of machine learning and deep learning, several questions come to mind. One of them is: do I need to buy a PC or laptop with an expensive GPU, or can I use a free option such as the free version of Google Colaboratory? Will there be any performance impact?

Let's look at some stats of Google Colaboratory vs a laptop GPU. In this test a laptop with an NVIDIA GeForce RTX 3060 was used. Its GPU RAM is 6GB, which is half of the desktop counterpart. The laptop also contains an AMD Ryzen 7 5800H CPU with 16GB RAM. The test environment ran TensorFlow 2.0 with GPU support.
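
As a quick sanity check of such an environment, you can print the TensorFlow version and the GPUs it can see. This is just an illustrative snippet, not part of the original benchmark:

import tensorflow as tf

# Confirm the TensorFlow version and that at least one GPU is visible
print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices('GPU'))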

The following results were obtained from the test on the laptop GPU.

Epoch 1/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.5152 - accuracy: 0.8146
Epoch 2/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.3927 - accuracy: 0.8587
Epoch 3/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.3527 - accuracy: 0.8731
Epoch 4/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.3291 - accuracy: 0.8811
Epoch 5/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.3081 - accuracy: 0.8878
Epoch 6/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.2937 - accuracy: 0.8923
Epoch 7/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.2799 - accuracy: 0.8972
Epoch 8/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.2736 - accuracy: 0.9001
Epoch 9/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.2591 - accuracy: 0.9044
Epoch 10/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.2542 - accuracy: 0.9069
1min 27s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

So the test took a total of 1 min and 27 s.

Now let's check the Google Colaboratory results.

Epoch 1/10
1875/1875 [==============================] - 13s 5ms/step - loss: 0.5143 - accuracy: 0.8161
Epoch 2/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.3917 - accuracy: 0.8591
Epoch 3/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.3520 - accuracy: 0.8729
Epoch 4/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.3272 - accuracy: 0.8805
Epoch 5/10
1875/1875 [==============================] - 11s 6ms/step - loss: 0.3084 - accuracy: 0.8878
Epoch 6/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2950 - accuracy: 0.8922
Epoch 7/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2791 - accuracy: 0.8979
Epoch 8/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2695 - accuracy: 0.9012
Epoch 9/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2621 - accuracy: 0.9046
Epoch 10/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2538 - accuracy: 0.9067
1 loop, best of 1: 2min 26s per loop

So the test took a total of 2 min and 26 s.
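
To put the two timings side by side, a quick back-of-the-envelope calculation (not part of the original notebook) gives the speedup:

# Rough speedup from the two %%timeit totals above
laptop_seconds = 1 * 60 + 27   # 1 min 27 s on the RTX 3060 laptop GPU
colab_seconds = 2 * 60 + 26    # 2 min 26 s on the Colab Tesla K80
print(f"Speedup: {colab_seconds / laptop_seconds:.2f}x")  # roughly 1.7x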

As you can see, the laptop GPU is significantly faster. Keep in mind that the GPU instance provided by Google Colaboratory is a Tesla K80, as the device listing below shows.

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 10148524266383294271
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 11320098816
locality {
  bus_id: 1
  links {
  }
}
incarnation: 10352065308459422991
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7"
xla_global_id: 416903419
]

So you may come to the conclusion that spending on a PC or laptop with a GPU is beneficial, but there are some drawbacks. As you advance further in the field, model sizes may grow and 6GB or 12GB of VRAM may not be enough. Using a cloud instance is useful in such scenarios, as it is easily scalable, but you may need to pay for it based on usage.
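
If you want to keep an eye on how much of that VRAM a model actually consumes, newer TensorFlow releases (2.5 and later) expose a simple memory query. The snippet below is a small illustrative sketch, not part of the original test:

import tensorflow as tf

# Current and peak GPU memory usage in bytes (TensorFlow 2.5+),
# handy for judging whether a model will fit in 6GB or 12GB of VRAM
info = tf.config.experimental.get_memory_info('GPU:0')
print(f"current: {info['current'] / 1024**3:.2f} GB, peak: {info['peak'] / 1024**3:.2f} GB")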

A free cloud instance may get disconnected sometimes, and training won't keep running when you close the tab.

So if you have some money to spend, it is handy to have a machine with a GPU: you can prototype on a small training set locally and finalize everything in a larger cloud environment.

The following code was used to conduct the test, using the fashion_mnist dataset from Keras.

import tensorflow as tf
from tensorflow import keras
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
print(train_images.shape)
print(train_labels[0])
# checking images
import matplotlib.pyplot as plt
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
plt.imshow(train_images[0])
class_names[train_labels[0]]
# scaling
train_images_scaled = train_images / 255.0
test_images_scaled = test_images / 255.0
def get_model(hidden_layers=1):
    # Flatten layer for input
    layers = [keras.layers.Flatten(input_shape=(28, 28))]
    # hidden layers
    for i in range(hidden_layers):
        layers.append(keras.layers.Dense(500, activation='relu'))
    # output layer
    layers.append(keras.layers.Dense(10, activation='sigmoid'))
    model = keras.Sequential(layers)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

%%timeit -n1 -r1
# Time a single run: build a model with 5 hidden layers and train for 10 epochs on the GPU
with tf.device('/GPU:0'):
    gpu_model = get_model(hidden_layers=5)
    gpu_model.fit(train_images_scaled, train_labels, epochs=10)
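
For reference, the same timing cell can be pointed at the CPU to see how much the GPU actually helps. This is an optional variation on the cell above, not something run in the original test:

%%timeit -n1 -r1
# Same benchmark forced onto the CPU for comparison (optional extra cell)
with tf.device('/CPU:0'):
    cpu_model = get_model(hidden_layers=5)
    cpu_model.fit(train_images_scaled, train_labels, epochs=10)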

How to check whether TensorFlow is working with your GPU

If you have installed TensorFlow with GPU support on your laptop or PC, you may be wondering how to check whether it is actually using the GPU. There are several ways to do this.

In your Jupyter notebook, you can run the following commands.

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

This will output the number of available GPUs.

Num GPUs Available:  1

If TensorFlow is not installed properly with GPU support, the number of GPUs will be shown as 0.
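
If the count comes back as 0, a couple of additional checks can help narrow down whether the problem is the TensorFlow build or the CUDA/driver setup. This is a small sketch using standard TensorFlow calls:

import tensorflow as tf

# Was this TensorFlow binary compiled with CUDA support at all?
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Log where each operation is placed (CPU vs GPU) when it runs
tf.debugging.set_log_device_placement(True)
print(tf.reduce_sum(tf.random.normal([1000, 1000])))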

To view the details of the GPU, you can use the following command.

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

This will output details similar to the following.

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 13418863311310513005
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 3667263488
locality {
  bus_id: 1
  links {
  }
}
incarnation: 8234622026470474717
physical_device_desc: "device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6"
xla_global_id: 416903419
]

The physical device description shows the details of the GPU in your PC or laptop. Here it is shown as an NVIDIA GeForce RTX 3060 Laptop GPU, which is the GPU in my laptop.

If you are using the free version of Google Colab, details similar to the following will be shown.

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 5996649638546807441
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 11320098816
locality {
  bus_id: 1
  links {
  }
}
incarnation: 8194472726421902671
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7"
xla_global_id: 416903419
]

The free version of Google Colaboratory provides you with a Tesla K80 GPU.