
Training on CPU with Trainer

This guide explains how to train models on CPU using Trainer from qadence.ml_tools, covering single-process and multi-processing setups. The TrainConfig fields used throughout this guide are:

  • nprocs: Number of processes to run. To enable multi-processing and launch separate processes, set nprocs > 1.
  • compute_setup: The computational setup used for training. Options include cpu, gpu, and auto (see the short sketch after this list).
  • backend: The communication backend used when several processes are launched. gloo is the backend used for CPU multi-processing in this guide.
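
As a minimal sketch of these options, the following configuration leaves the device choice to the library via compute_setup="auto" (all other TrainConfig arguments keep their defaults; this is an illustration, not a full training setup):

from qadence.ml_tools import TrainConfig

# "auto" is one of the documented compute_setup options, alongside "cpu" and "gpu";
# it delegates the device choice to the library.
train_config = TrainConfig(
    compute_setup="auto",
)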

For more details on the advanced training options, please refer to the TrainConfig documentation.

By adjusting TrainConfig, you can seamlessly switch between single-core and multi-core CPU training. For single-core CPU training, set the following fields:

  • backend="cpu": Ensures training runs on the CPU.
  • nprocs=1: Uses one CPU core.
train_config = TrainConfig(
compute_setup="cpu",
)
  • backend="gloo": Uses the Gloo backend for CPU multi-processing.
  • nprocs=4: Utilizes 4 CPU cores.
train_config = TrainConfig(
compute_setup="cpu",
backend="gloo",
nprocs=4,
)

Single-Process Training: simple and suitable for small datasets. Use compute_setup="cpu".

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from qadence.ml_tools import TrainConfig, Trainer
from qadence.ml_tools.optimize_step import optimize_step

# Use gradient-based (autograd) optimization
Trainer.set_use_grad(True)

# Dataset, Model, and Optimizer
x = torch.linspace(0, 1, 100).reshape(-1, 1)
y = torch.sin(2 * torch.pi * x)
dataloader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = optim.SGD(model.parameters(), lr=0.01)
# Single-Process Training Configuration
train_config = TrainConfig(compute_setup="cpu", max_iter=5, print_every=1)
# Training
trainer = Trainer(model, optimizer, train_config, loss_fn="mse", optimize_step=optimize_step)
trainer.fit(dataloader)
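
After fit returns, the trained model is an ordinary torch.nn.Module and can be inspected in the same process. A minimal sketch of such a check, reusing the x and y tensors defined above (the final MSE value will vary from run to run):

# Evaluate the trained model on the training inputs (illustrative check only)
model.eval()
with torch.no_grad():
    predictions = model(x)
    final_mse = torch.mean((predictions - y) ** 2)
print(f"Final training MSE: {final_mse.item():.4f}")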

Multi-Processing Training: best suited for large datasets; uses multiple CPU processes. Use backend="gloo" and set nprocs > 1.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from qadence.ml_tools import TrainConfig, Trainer
from qadence.ml_tools.optimize_step import optimize_step

# Use gradient-based (autograd) optimization
Trainer.set_use_grad(True)

# Wrapping the script body in a __main__ guard is recommended for multi-processing:
# spawned worker processes re-import this module, and the guard prevents the
# training code from running again at import time.
if __name__ == "__main__":
    x = torch.linspace(0, 1, 100).reshape(-1, 1)
    y = torch.sin(2 * torch.pi * x)
    dataloader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)
    model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    optimizer = optim.SGD(model.parameters(), lr=0.01)

    # Multi-Process Training Configuration
    train_config = TrainConfig(
        compute_setup="cpu",
        backend="gloo",
        nprocs=4,
        max_iter=5,
        print_every=1,
    )

    # Training
    trainer = Trainer(model, optimizer, train_config, loss_fn="mse", optimize_step=optimize_step)
    trainer.fit(dataloader)
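
Since the number of usable cores differs between machines, a reasonable pattern is to cap nprocs by the cores actually available. A small sketch, where os.cpu_count comes from the standard library and the cap of 4 mirrors the example above:

import os

# Cap the number of worker processes at the number of available CPU cores.
available_cores = os.cpu_count() or 1
train_config = TrainConfig(
    compute_setup="cpu",
    backend="gloo",
    nprocs=min(4, available_cores),
    max_iter=5,
    print_every=1,
)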