
Backends

Backends allow the execution of Qadence abstract quantum circuits. They can be chosen from a variety of simulators, emulators and hardware, and can enable circuit differentiability. The primary way to interact with and configure a backend is via the high-level API QuantumModel.

PyQTorch: An efficient, large-scale simulator designed for quantum machine learning, seamlessly integrated with the popular PyTorch deep learning framework for automatic differentiation. It also offers analog computing for time-dependent and time-independent pulses. See PyQTorchBackend.

Pulser: A Python library for pulse-level/analog control of neutral-atom devices. Execution is performed via QuTiP. See PulserBackend.

More: Proprietary Qadence extensions provide additional high-performance backends based on tensor networks or differentiation engines. For enquiries, please contact: info@pasqal.com.
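
As a quick orientation, the backend identifiers are collected in the BackendName enumeration. A minimal sketch (assuming the string enumeration shipped with the open-source Qadence package; proprietary extensions may register additional entries):

from qadence import BackendName
# List the backend identifiers known to Qadence.
print([backend.value for backend in BackendName])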

The DifferentiableBackend class enables different differentiation modes for a given backend, chosen from two types:

  • Automatic differentiation (AD): available for PyTorch-based backends (PyQTorch).
  • Parameter Shift Rules (PSR): available for all backends. See this section for more information on differentiability and PSR.

In practice, only a diff_mode needs to be provided to the QuantumModel. Please note that diff_mode defaults to AD:

import sympy
import torch
from qadence import Parameter, RX, RZ, Z, CNOT, QuantumCircuit, QuantumModel, chain, BackendName, DiffMode
x = Parameter("x", trainable=False)
y = Parameter("y", trainable=False)
fm = chain(
    RX(0, 3 * x),
    RX(0, x),
    RZ(1, sympy.exp(y)),
    RX(0, 3.14),
    RZ(1, "theta"),
)
ansatz = CNOT(0, 1)
block = chain(fm, ansatz)
circuit = QuantumCircuit(2, block)
observable = Z(0)
# DiffMode.GPSR is available for any backend.
# DiffMode.AD is only available for natively differentiable backends.
model = QuantumModel(circuit, observable, backend=BackendName.PYQTORCH, diff_mode=DiffMode.GPSR)
# Get some values for the feature parameters.
values = {"x": (x := torch.tensor([0.5], requires_grad=True)), "y": torch.tensor([0.1])}
# Compute the expectation value.
exp = model.expectation(values)
# Differentiate the expectation value with respect to x.
dexp_dx = torch.autograd.grad(exp, x, torch.ones_like(exp))
print(f"{dexp_dx = }")
dexp_dx = (tensor([3.6398]),)
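
For comparison, switching diff_mode to AD differentiates through the simulator itself; on a noiseless simulation both modes should agree up to numerical precision. A sketch reusing the circuit, observable and values defined above:

# Same circuit and observable, differentiated via PyTorch autograd.
model_ad = QuantumModel(circuit, observable, backend=BackendName.PYQTORCH, diff_mode=DiffMode.AD)
exp_ad = model_ad.expectation(values)
dexp_dx_ad = torch.autograd.grad(exp_ad, x, torch.ones_like(exp_ad))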

Every backend in Qadence inherits from the abstract Backend class and implements the following methods:

  • run: propagate the initial state according to the quantum circuit and return the final wavefunction object.
  • sample: sample measurement outcomes from a circuit.
  • expectation: compute the expectation value of a circuit given an observable.
  • convert: convert the abstract QuantumCircuit object to its backend-native representation, including a backend-specific parameter embedding function.

Backends are purely functional objects which take as input the values of the circuit parameters and return the desired output of a method call. In order to use a backend directly, embedded parameters must be supplied in the form returned by the backend-specific embedding function.

Here is a simple demonstration of the use of the PyQTorch backend to execute a circuit in non-differentiable mode:

from qadence import QuantumCircuit, FeatureParameter, RX, RZ, CNOT, hea, chain
# Construct a feature map.
x = FeatureParameter("x")
y = FeatureParameter("y")
fm = chain(RX(0, 3 * x), RZ(1, y), CNOT(0, 1))
# Construct a circuit with a hardware-efficient ansatz.
circuit = QuantumCircuit(3, fm, hea(3, 1))

The abstract QuantumCircuit can now be converted to its native representation via the PyQTorch backend.

from qadence import backend_factory
# Use only PyQTorch in non-differentiable mode:
backend = backend_factory("pyqtorch")
# The `Converted` object
# (contains a `ConvertedCircuit` with the original and native representation)
conv = backend.convert(circuit)
conv.circuit.original = ChainBlock(0,1,2)
├── ChainBlock(0,1)
│   ├── RX(0) [params: ['3*x']]
│   ├── RZ(1) [params: ['y']]
│   └── CNOT(0, 1)
└── ChainBlock(0,1,2) [tag: HEA]
    ├── ChainBlock(0,1,2)
    │   ├── KronBlock(0,1,2)
    │   │   ├── RX(0) [params: ['theta_0']]
    │   │   ├── RX(1) [params: ['theta_1']]
    │   │   └── RX(2) [params: ['theta_2']]
    │   ├── KronBlock(0,1,2)
    │   │   ├── RY(0) [params: ['theta_3']]
    │   │   ├── RY(1) [params: ['theta_4']]
    │   │   └── RY(2) [params: ['theta_5']]
    │   └── KronBlock(0,1,2)
    │       ├── RX(0) [params: ['theta_6']]
    │       ├── RX(1) [params: ['theta_7']]
    │       └── RX(2) [params: ['theta_8']]
    └── ChainBlock(0,1,2)
        ├── KronBlock(0,1)
        │   └── CNOT(0, 1)
        └── KronBlock(1,2)
            └── CNOT(1, 2)
conv.circuit.native = QuantumCircuit(
  (operations): ModuleList(
    (0): Sequence(
      (operations): ModuleList(
        (0): Sequence(
          (operations): ModuleList(
            (0): RX(target: (0,), param: a25217fb-3556-49b8-a44d-c92752daa906)
            (1): RZ(target: (1,), param: 19cc9211-cb2e-4de7-99d6-c31fd18ec2c9)
            (2): CNOT(control: (0,), target: (1,))
          )
        )
        (1): Sequence(
          (operations): ModuleList(
            (0): Sequence(
              (operations): ModuleList(
                (0): Merge(
                  (operations): ModuleList(
                    (0): RX(target: (0,), param: 2c782915-786b-4ebe-b9dd-c08fa2fa687e)
                    (1): RY(target: (0,), param: 548ef485-8c00-4858-bce2-78d6850fccce)
                    (2): RX(target: (0,), param: 944b50f0-3df6-4349-bf82-fe9aeb93a945)
                  )
                )
                (1): Merge(
                  (operations): ModuleList(
                    (0): RX(target: (1,), param: 44ccea8f-a203-4c28-8992-5837337e3e02)
                    (1): RY(target: (1,), param: 851fe64a-9b70-439b-9a66-de437d1b282c)
                    (2): RX(target: (1,), param: d98ad82c-cae3-43d7-8c19-077aa45e025f)
                  )
                )
                (2): Merge(
                  (operations): ModuleList(
                    (0): RX(target: (2,), param: 5c2c42fc-9064-4fb5-857b-3cbba3c12806)
                    (1): RY(target: (2,), param: 36d4c1a8-bec0-4b59-baeb-e18ac7d991b3)
                    (2): RX(target: (2,), param: 0d785898-5201-4213-a883-f25d587fcc8d)
                  )
                )
              )
            )
            (1): Sequence(
              (operations): ModuleList(
                (0): Sequence(
                  (operations): ModuleList(
                    (0): CNOT(control: (0,), target: (1,))
                  )
                )
                (1): Sequence(
                  (operations): ModuleList(
                    (0): CNOT(control: (1,), target: (2,))
                  )
                )
              )
            )
          )
        )
      )
    )
  )
)

Additionally, the Converted object contains all fixed and variational parameters, as well as an embedding function which accepts feature parameters to construct a dictionary of circuit-native parameters. These are needed because each backend uses a different representation of the circuit parameters:

import torch
# Contains the fixed and the variational parameters (the latter from the HEA).
conv.params
inputs = {"x": torch.tensor([1., 1.]), "y": torch.tensor([2., 2.])}
# Get all circuit parameters (including the feature parameters).
embedded = conv.embedding_fn(conv.params, inputs)
conv.params = {
  theta_6: tensor([0.1990], requires_grad=True)
  theta_5: tensor([0.5638], requires_grad=True)
  theta_0: tensor([0.3975], requires_grad=True)
  theta_7: tensor([0.0192], requires_grad=True)
  theta_2: tensor([0.0217], requires_grad=True)
  theta_4: tensor([0.0650], requires_grad=True)
  theta_8: tensor([0.2804], requires_grad=True)
  theta_3: tensor([0.9994], requires_grad=True)
  theta_1: tensor([0.4961], requires_grad=True)
}
embedded = {
  a25217fb-3556-49b8-a44d-c92752daa906: tensor([3., 3.], grad_fn=<ViewBackward0>)
  19cc9211-cb2e-4de7-99d6-c31fd18ec2c9: tensor([2., 2.])
  2c782915-786b-4ebe-b9dd-c08fa2fa687e: tensor([0.3975], grad_fn=<ViewBackward0>)
  548ef485-8c00-4858-bce2-78d6850fccce: tensor([0.9994], grad_fn=<ViewBackward0>)
  944b50f0-3df6-4349-bf82-fe9aeb93a945: tensor([0.1990], grad_fn=<ViewBackward0>)
  44ccea8f-a203-4c28-8992-5837337e3e02: tensor([0.4961], grad_fn=<ViewBackward0>)
  851fe64a-9b70-439b-9a66-de437d1b282c: tensor([0.0650], grad_fn=<ViewBackward0>)
  d98ad82c-cae3-43d7-8c19-077aa45e025f: tensor([0.0192], grad_fn=<ViewBackward0>)
  5c2c42fc-9064-4fb5-857b-3cbba3c12806: tensor([0.0217], grad_fn=<ViewBackward0>)
  36d4c1a8-bec0-4b59-baeb-e18ac7d991b3: tensor([0.5638], grad_fn=<ViewBackward0>)
  0d785898-5201-4213-a883-f25d587fcc8d: tensor([0.2804], grad_fn=<ViewBackward0>)
}

With the embedded parameters, the backend counterparts of the QuantumModel methods become accessible, starting with run:

output = backend.run(conv.circuit, embedded)
print(f"{output = }")
output = tensor([[ 0.1345-0.1265j, 0.0219-0.0564j, 0.1319+0.0874j, 0.2485+0.4139j,
-0.6961-0.3491j, -0.2552-0.0051j, 0.0017+0.0561j, -0.0691+0.1566j],
[ 0.1345-0.1265j, 0.0219-0.0564j, 0.1319+0.0874j, 0.2485+0.4139j,
-0.6961-0.3491j, -0.2552-0.0051j, 0.0017+0.0561j, -0.0691+0.1566j]],
grad_fn=<TBackward0>)
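
Sampling and expectation values follow the same functional pattern. A sketch, assuming the sample and expectation signatures of the abstract Backend class and that convert also accepts an observable (here Z(0), which is not part of the original example):

from qadence import Z
# Draw measurement samples from the circuit in the computational basis.
samples = backend.sample(conv.circuit, embedded, n_shots=100)
# For expectation values, the observable is converted together with the circuit.
conv_obs = backend.convert(circuit, Z(0))
embedded_obs = conv_obs.embedding_fn(conv_obs.params, inputs)
expectation = backend.expectation(conv_obs.circuit, conv_obs.observable, embedded_obs)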

If there is a requirement to work with a specific backend, it is possible to access the native circuit directly, for example to use PyQTorch noise features instead of the NoiseHandler interface from Qadence:

from pyqtorch.noise import Depolarizing
inputs = {"x": torch.rand(1), "y": torch.rand(1)}
embedded = conv.embedding_fn(conv.params, inputs)
# Define a depolarizing noise channel acting on qubit 0.
noise = Depolarizing(0, error_probability=0.1)
# Append the noise channel to the native circuit.
conv.circuit.native.operations.append(noise)

When running with noise, one can see that the output is a density matrix:

density_result = backend.run(conv.circuit, embedded)
print(density_result.shape)
torch.Size([1, 8, 8])
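
As a quick sanity check (plain PyTorch, independent of Qadence), each matrix in the returned batch should have unit trace:

# The trace of a valid density matrix is 1 (up to numerical precision).
print(torch.diagonal(density_result, dim1=-2, dim2=-1).sum(-1).real)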