Solve a QUBO problem
In this notebook, we solve a quadratic unconstrained binary optimization (QUBO) problem with Qadence. QUBOs are popular combinatorial optimization problems with a wide range of applications. Here, we solve the problem using the QAOA [1] variational algorithm by embedding the QUBO problem weights onto a register, as is standard for neutral-atom quantum devices.
Additional background information on QUBOs can be found in the Pulser documentation, where they are solved directly using the pulse-level interface Pulser.
Define and solve QUBO
Pre-requisite: optimal register coordinates for embedding the QUBO problem
import numpy as np
import numpy.typing as npt
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

from qadence import RydbergDevice
def qubo_register_coords(Q: np.ndarray, device: RydbergDevice) -> list:
    """Compute coordinates for register."""
    def evaluate_mapping(new_coords, *args):
        """Cost function to minimize. Ideally, the pairwise distances are conserved."""
        Q, shape = args
        new_coords = np.reshape(new_coords, shape)
        interaction_coeff = device.coeff_ising
        new_Q = squareform(interaction_coeff / pdist(new_coords) ** 6)
        return np.linalg.norm(new_Q - Q)

    shape = (len(Q), 2)
    np.random.seed(0)
    x0 = np.random.random(shape).flatten()
    res = minimize(
        evaluate_mapping,
        x0,
        args=(Q, shape),
        method="Nelder-Mead",
        tol=1e-6,
        options={"maxiter": 200000, "maxfev": None},
    )
    return [(x, y) for (x, y) in np.reshape(res.x, (len(Q), 2))]

With the embedding routine defined above, we can translate a matrix defining a QUBO problem into a set of atom coordinates for the register. The QUBO problem is initially defined by a graph of weighted edges and a cost function to be optimized. The weighted edges are represented by a real-valued symmetric matrix Q, which is used throughout the tutorial.
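As a quick, optional sanity check, the fitted coordinates can be fed back through the same C6/r^6 interaction formula used in evaluate_mapping and compared against the target weights. The snippet below is a minimal sketch on a hypothetical 3x3 matrix (not the tutorial's Q matrix) and assumes the qubo_register_coords function defined above is in scope.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from qadence import RydbergDevice

# Hypothetical small QUBO matrix, for illustration only.
Q_small = np.array(
    [
        [0.0, 10.0, 5.0],
        [10.0, 0.0, 2.0],
        [5.0, 2.0, 0.0],
    ]
)

dev = RydbergDevice(rydberg_level=70)
coords = np.array(qubo_register_coords(Q_small, dev))

# Rebuild the interaction matrix from the fitted coordinates: the off-diagonal
# entries should approximate the off-diagonal entries of Q_small.
reconstructed = squareform(dev.coeff_ising / pdist(coords) ** 6)
print(np.round(reconstructed, 2))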
import torch
from qadence import QuantumModel

seed = 0
np.random.seed(seed)
torch.manual_seed(seed)
# QUBO problem weights (real-valued symmetric matrix)
Q = np.array(
    [
        [-10.0, 19.7365809, 19.7365809, 5.42015853, 5.42015853],
        [19.7365809, -10.0, 20.67626392, 0.17675796, 0.85604541],
        [19.7365809, 20.67626392, -10.0, 0.85604541, 0.17675796],
        [5.42015853, 0.17675796, 0.85604541, -10.0, 0.32306662],
        [5.42015853, 0.85604541, 0.17675796, 0.32306662, -10.0],
    ]
)
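Since this instance has only five binary variables, the QUBO can also be brute-forced classically, which gives a useful reference for judging the quantum solution later on. This is an optional check, not part of the original workflow.

import itertools

# Evaluate the QUBO cost x^T Q x for all 2^5 = 32 assignments.
costs = {
    "".join(map(str, bits)): float(np.array(bits) @ Q @ np.array(bits))
    for bits in itertools.product((0, 1), repeat=len(Q))
}

# The two lowest-cost bitstrings are the known solutions highlighted at the end.
print(sorted(costs.items(), key=lambda kv: kv[1])[:2])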
# Loss function to guide the optimization routine
def loss(model: QuantumModel, *args) -> tuple[torch.Tensor, dict]:
    to_arr_fn = lambda bitstring: np.array(list(bitstring), dtype=int)
    cost_fn = lambda arr: arr.T @ Q @ arr
    samples = model.sample({}, n_shots=1000)[0]  # sample bitstrings from the model
    cost_value = sum(samples[key] * cost_fn(to_arr_fn(key)) for key in samples)
    return torch.tensor(cost_value / sum(samples.values())), {}

The QAOA algorithm needs a variational quantum circuit with optimizable parameters. For that purpose, we use a fully analog circuit composed of two global rotations per layer on different axes of the Bloch sphere. The first rotation corresponds to the mixing Hamiltonian and the second one to the embedding Hamiltonian [1]. In this setting, the embedding is realized by the appropriate register coordinates and the resulting qubit interactions. Details on the analog blocks used here can be found in the analog basics tutorial.
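Before building the circuit, the loss defined above can be made concrete by evaluating it by hand on a counts dictionary. The counts below are made up for illustration and do not come from a quantum model; they simply show that the loss is the QUBO cost averaged over the sampled bitstrings, weighted by their counts.

# Hypothetical counts, for illustration only.
mock_samples = {"01011": 600, "00000": 400}

to_arr = lambda bitstring: np.array(list(bitstring), dtype=int)
avg_cost = sum(
    counts * (to_arr(b) @ Q @ to_arr(b)) for b, counts in mock_samples.items()
) / sum(mock_samples.values())
print(avg_cost)  # weighted average of the QUBO cost over the two bitstrings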
Rydberg level
from qadence import QuantumCircuit, Register, RydbergDevice
from qadence import chain, AnalogRX, AnalogRZ

# Device specification and atomic register
device = RydbergDevice(rydberg_level=70)

reg = Register.from_coordinates(
    qubo_register_coords(Q, device), device_specs=device
)
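Optionally, the embedded atom layout can be visualized with matplotlib (imported again further below for the results plot); the coordinates are simply recomputed here for plotting, which is deterministic thanks to the fixed seed inside the embedding routine.

import matplotlib.pyplot as plt

coords = np.array(qubo_register_coords(Q, device))
plt.scatter(coords[:, 0], coords[:, 1])
for i, (x, y) in enumerate(coords):
    plt.annotate(str(i), (x, y))
plt.title("Atom coordinates embedding the QUBO weights")
plt.show()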
# Analog variational quantum circuit
layers = 2
block = chain(*[AnalogRX(f"t{i}") * AnalogRZ(f"s{i}") for i in range(layers)])
circuit = QuantumCircuit(reg, block)

By initializing the QuantumModel with this circuit, we can check the initial counts, where no clear solution stands out yet.
model = QuantumModel(circuit)
initial_counts = model.sample({}, n_shots=1000)[0]

initial_counts = OrderedCounter({'00000': 101, '10000': 87, '01000': 75, '00110': 72, '00100': 70, '01010': 64, '01001': 62, '00101': 53, '00010': 51, '00011': 48, '01011': 46, '00001': 45, '10010': 45, '00111': 38, '10001': 34, '11000': 29, '10100': 18, '01100': 14, '01110': 11, '01111': 10, '10110': 7, '11001': 7, '11010': 6, '01101': 4, '10101': 2, '10011': 1})

We can now proceed with the variational optimization. The cost function defined above is computed from sampled bitstrings and is therefore non-differentiable. We use the Qadence ML tools to run a gradient-free optimization with the nevergrad library.
from qadence.ml_tools import Trainer, TrainConfig, num_parameters
import nevergrad as ng
Trainer.set_use_grad(False)
config = TrainConfig(max_iter=100)
optimizer = ng.optimizers.NGOpt(
    budget=config.max_iter, parametrization=num_parameters(model)
)
trainer = Trainer(model, optimizer, config, loss)
trainer.fit()
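As a rough convergence check (not part of the original workflow), the trained model can be passed through the loss once more; the averaged QUBO cost should move towards the brute-force minimum of roughly -27.3 found earlier, although shot noise and the limited optimizer budget mean it will not reach it exactly.

# Re-evaluate the averaged QUBO cost with the trained variational parameters.
final_cost, _ = loss(model)
print(final_cost)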
optimal_counts = model.sample({}, n_shots=1000)[0]

Finally, let's plot the solution. The expected bitstrings are marked in red.
import matplotlib.pyplot as plt
# Known solutions to the QUBO problem.
solution_bitstrings = ["01011", "00111"]

def plot_distribution(C, ax, title):
    C = dict(sorted(C.items(), key=lambda item: item[1], reverse=True))
    color_dict = {key: "r" if key in solution_bitstrings else "b" for key in C}
    ax.set_xlabel("bitstrings")
    ax.set_ylabel("counts")
    ax.set_xticks([i for i in range(len(C.keys()))], C.keys(), rotation=90)
    ax.bar(C.keys(), C.values(), color=color_dict.values())
    ax.set_title(title)

fig, axs = plt.subplots(1, 2, figsize=(12, 4))
plot_distribution(initial_counts, axs[0], "Initial counts")
plot_distribution(optimal_counts, axs[1], "Optimal counts")