Using Google Colab for Deep Learning: A Comprehensive Guide
Introduction to Google Colab
Google Colab (Colaboratory) is a free cloud-based Jupyter notebook environment provided by Google. It enables users to write and execute Python code in their browsers with free GPU and TPU support. It is widely used for deep learning, machine learning, and data science experiments.
Why Use Google Colab for Deep Learning?
✅ Free access to GPUs and TPUs – No need to invest in expensive hardware.
✅ Pre-installed libraries – TensorFlow, PyTorch, Keras, NumPy, Pandas, and more.
✅ Cloud storage integration – Seamless connection with Google Drive.
✅ Easy collaboration – Share notebooks just like Google Docs.
✅ No setup required – Runs entirely on the cloud.
1. Getting Started with Google Colab
Step 1: Access Google Colab
- Open your browser and go to Google Colab (https://colab.research.google.com).
- Sign in with your Google account.
- Click on “New Notebook” to create a fresh notebook.
2. Understanding the Colab Interface
When you open a Colab notebook, you’ll see:
🔹 Code Cells – Execute Python code just like Jupyter Notebooks.
🔹 Text Cells – Add formatted text using Markdown.
🔹 Toolbar – Options for runtime, inserting code/text, file management, etc.
🔹 Sidebar – Access files, terminals, and session information.
3. Setting Up the Environment
Google Colab already has popular libraries pre-installed, including TensorFlow, PyTorch, NumPy, Pandas, OpenCV, Matplotlib, etc.
3.1 Checking Installed Libraries
import tensorflow as tf
import torch
import numpy as np
import pandas as pd
print("TensorFlow Version:", tf.__version__)
print("PyTorch Version:", torch.__version__)
print("NumPy Version:", np.__version__)
3.2 Installing Additional Libraries
If you need a package that isn’t pre-installed, use:
!pip install keras
For system-level dependencies, use:
!apt-get install ffmpeg
4. Connecting Google Colab to Google Drive
Google Colab can read and write files from Google Drive.
4.1 Mounting Google Drive
from google.colab import drive
drive.mount('/content/drive')
Running this cell prompts you to authorize access to your Google account; complete the authorization flow and your Drive will be mounted at /content/drive.
4.2 Accessing Files in Google Drive
import os
os.listdir('/content/drive/My Drive/')
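Once mounted, files in Drive behave like ordinary files. As a minimal sketch, you can read and write a CSV with pandas; the file names below are hypothetical, so replace them with files that actually exist in your Drive.
import pandas as pd
# Hypothetical paths inside your mounted Drive; adjust to your own files
csv_path = '/content/drive/My Drive/data.csv'
# Read a CSV stored in Drive
df = pd.read_csv(csv_path)
print(df.head())
# Write results back to Drive
df.to_csv('/content/drive/My Drive/data_copy.csv', index=False)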
5. Using Free GPU/TPU in Google Colab
Google provides free GPU and TPU access, making it ideal for deep learning.
5.1 Enabling GPU/TPU
- Click on Runtime > Change Runtime Type.
- Select GPU or TPU from the hardware accelerator dropdown.
- Click Save.
5.2 Checking GPU Availability
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using device:", device)
For TensorFlow:
import tensorflow as tf
print("GPU Available:", tf.config.list_physical_devices('GPU'))
5.3 Using TPU
import tensorflow as tf
print("TPU Available:", 'TPU' in tf.__version__)
6. Running Deep Learning Models in Google Colab
6.1 Training a Neural Network Using TensorFlow
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Load dataset
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Normalize data
x_train, x_test = x_train / 255.0, x_test / 255.0
# Define a simple model
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(10, activation='softmax')
])
# Compile model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train model
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
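After training, it is worth checking performance on the held-out test set; this short snippet reuses the x_test and y_test arrays loaded above.
# Evaluate accuracy on the test set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print("Test accuracy:", test_acc)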
6.2 Training a Neural Network Using PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
# Data transformations
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
# Load dataset
train_dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
# Define a simple neural network
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28*28, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 28*28)   # flatten 28x28 images into vectors
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return x
# Initialize model (device comes from the GPU check in section 5.2; redefined here so this cell is self-contained)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SimpleNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training loop
epochs = 5
for epoch in range(epochs):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}")
7. Exporting and Saving Models
7.1 Saving TensorFlow Models
model.save('/content/drive/My Drive/mnist_model.h5')
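The saved model can be loaded back later, for example in a new Colab session after mounting Drive again.
from tensorflow import keras
# Reload the model saved above
loaded_model = keras.models.load_model('/content/drive/My Drive/mnist_model.h5')
loaded_model.summary()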
7.2 Saving PyTorch Models
torch.save(model.state_dict(), '/content/drive/My Drive/mnist_model.pth')
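To restore the PyTorch weights, recreate the model and load the state dict; this assumes the SimpleNN class defined above is available in the session.
# Recreate the architecture and load the saved weights
model = SimpleNN()
model.load_state_dict(torch.load('/content/drive/My Drive/mnist_model.pth', map_location='cpu'))
model.eval()  # switch to inference mode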
8. Sharing and Collaborating on Colab Notebooks
8.1 Share via Link
- Click on “Share” in the top-right corner.
- Set permissions (View/Edit/Comment).
- Share the generated link.
8.2 Download the Notebook
Click File > Download > Download .ipynb to save the notebook.
8.3 Convert Notebook to Python Script
!jupyter nbconvert --to script notebook.ipynb  # assumes notebook.ipynb is in the current working directory
9. Running Shell Commands in Google Colab
Colab allows executing shell commands by prefixing them with an exclamation mark (!).
!ls -la # List files
!pwd # Print working directory
!nvidia-smi # Check GPU status
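Because cells run in IPython, you can also capture a shell command's output in a Python variable, which is handy for scripting over files; a small example:
# Capture the output of a shell command as a Python list of lines
files = !ls /content
print(files)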
10. Running Google Colab in Dark Mode
Dark mode is enabled from the settings menu rather than from code:
- Click Tools > Settings.
- In the Site tab, set Theme to dark.
- Click Save.