API specification
The emu-mps API is based on the specification here. Concretely, the classes are as follows:
MPSBackend
Bases: EmulatorBackend
A backend for emulating Pulser sequences using Matrix Product States (MPS), aka tensor trains.
Source code in pulser/backend/abc.py
```python
def __init__(
    self,
    sequence: pulser.Sequence,
    *,
    config: EmulationConfig | None = None,
    mimic_qpu: bool = False,
) -> None:
    """Initializes the backend."""
    super().__init__(sequence, mimic_qpu=mimic_qpu)
    config = config or self.default_config
    if not isinstance(config, EmulationConfig):
        raise TypeError(
            "'config' must be an instance of 'EmulationConfig', "
            f"not {type(config)}."
        )
    # See the BackendConfig definition to see why this works
    self._config = type(self.default_config)(**config._backend_options)
```
resume(autosave_file)
staticmethod
Resume simulation from autosave file. Only resume simulations from data you trust! Unpickling of untrusted data is not safe.
Source code in emu_mps/mps_backend.py
```python
@staticmethod
def resume(autosave_file: str | pathlib.Path) -> Results:
    """
    Resume simulation from autosave file.

    Only resume simulations from data you trust!
    Unpickling of untrusted data is not safe.
    """
    if isinstance(autosave_file, str):
        autosave_file = pathlib.Path(autosave_file)

    if not autosave_file.is_file():
        raise ValueError(f"Not a file: {autosave_file}")

    with open(autosave_file, "rb") as f:
        impl: MPSBackendImpl = pickle.load(f)

    impl.autosave_file = autosave_file
    impl.last_save_time = time.time()
    impl.config.init_logging()  # FIXME: might be best to take logger object out of config.

    logging.getLogger("global_logger").warning(
        f"Resuming simulation from file {autosave_file}\n"
        f"Saving simulation state every {impl.config.autosave_dt} seconds"
    )

    return MPSBackend._run(impl)
```
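For example, a long-running emulation that was interrupted can be picked up from its autosave file. A minimal sketch; the file name below is hypothetical, since autosave files are created by the backend using the configured `autosave_prefix`:

```python
from emu_mps import MPSBackend

# Hypothetical file name: actual autosave files are generated by the backend,
# prefixed with config.autosave_prefix ("emu_mps_save_" by default).
results = MPSBackend.resume("emu_mps_save_1234.pickle")
```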
run()
Emulates the given sequence.
RETURNS | DESCRIPTION
---|---
Results | the simulation results
Source code in emu_mps/mps_backend.py
```python
def run(self) -> Results:
    """
    Emulates the given sequence.

    Returns:
        the simulation results
    """
    assert isinstance(self._config, MPSConfig)

    impl = create_impl(self._sequence, self._config)
    impl.init()  # This is separate from the constructor for testing purposes.

    results = self._run(impl)

    return impl.permute_results(results, self._config.optimize_qubit_ordering)
```
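A minimal end-to-end sketch, assuming `seq` is a previously built `pulser.Sequence` (not constructed here):

```python
from emu_mps import MPSBackend, MPSConfig

config = MPSConfig(dt=10, precision=1e-5)
backend = MPSBackend(seq, config=config)  # seq: an existing pulser.Sequence
results = backend.run()  # holds the configured observables, bitstrings by default
```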
MPSConfig
Bases: EmulationConfig
The configuration of the emu-mps MPSBackend. The kwargs passed to this class are passed on to the base class. See the API for that class for a list of available options.
PARAMETER | TYPE | DESCRIPTION
---|---|---
dt | int | the timestep size that the solver uses. Note that observables are only calculated if the evaluation_times are divisible by dt.
precision | float | up to what precision the state is truncated
max_bond_dim | int | the maximum bond dimension that the state is allowed to have
max_krylov_dim | int | the maximum size of the Krylov subspace that the Lanczos algorithm builds
extra_krylov_tolerance | float | the Lanczos algorithm uses this value times `precision` as its convergence tolerance
num_gpus_to_use | int | during the simulation, distribute the state over this many GPUs; 0 = all factors on the CPU. As shown in the benchmarks, using multiple GPUs can alleviate memory pressure per GPU, but the runtime should be similar.
optimize_qubit_ordering | bool | optimize the register ordering. Improves performance and accuracy, but disables certain features.
interaction_cutoff | float | set interaction coefficients below this value to 0
log_level | int | how much to log, as a standard `logging` level (defaults to `logging.INFO`)
log_file | pathlib.Path or None | if specified, log to this file rather than stdout
autosave_prefix | str | filename prefix for autosaving the simulation state to file
autosave_dt | int | minimum time interval in seconds between two autosaves. Saving the simulation state is only possible at specific times, so this interval is only a lower bound.
kwargs | Any | arguments that are passed to the base class
Examples:
```python
>>> num_gpus_to_use = 2  # use 2 gpus if available, otherwise 1 or cpu
>>> dt = 1  # this will impact the runtime
>>> precision = 1e-6  # smaller dt requires better precision, generally
>>> MPSConfig(num_gpus_to_use=num_gpus_to_use, dt=dt, precision=precision,
...     with_modulation=True)  # the last arg is taken from the base class
```
Source code in emu_mps/mps_config.py
```python
def __init__(
    self,
    *,
    dt: int = 10,
    precision: float = 1e-5,
    max_bond_dim: int = 1024,
    max_krylov_dim: int = 100,
    extra_krylov_tolerance: float = 1e-3,
    num_gpus_to_use: int = DEVICE_COUNT,
    optimize_qubit_ordering: bool = False,
    interaction_cutoff: float = 0.0,
    log_level: int = logging.INFO,
    log_file: pathlib.Path | None = None,
    autosave_prefix: str = "emu_mps_save_",
    autosave_dt: int = 600,  # 10 minutes
    **kwargs: Any,
):
    kwargs.setdefault("observables", [BitStrings(evaluation_times=[1.0])])
    super().__init__(
        dt=dt,
        precision=precision,
        max_bond_dim=max_bond_dim,
        max_krylov_dim=max_krylov_dim,
        extra_krylov_tolerance=extra_krylov_tolerance,
        num_gpus_to_use=num_gpus_to_use,
        optimize_qubit_ordering=optimize_qubit_ordering,
        interaction_cutoff=interaction_cutoff,
        log_level=log_level,
        log_file=log_file,
        autosave_prefix=autosave_prefix,
        autosave_dt=autosave_dt,
        **kwargs,
    )
    if self.optimize_qubit_ordering:
        self.check_permutable_observables()

    MIN_AUTOSAVE_DT = 10
    assert (
        self.autosave_dt > MIN_AUTOSAVE_DT
    ), f"autosave_dt must be larger than {MIN_AUTOSAVE_DT} seconds"

    self.monkeypatch_observables()

    self.logger = logging.getLogger("global_logger")
    if log_file is None:
        logging.basicConfig(
            level=log_level, format="%(message)s", stream=sys.stdout, force=True
        )  # default to stream = sys.stderr
    else:
        logging.basicConfig(
            level=log_level,
            format="%(message)s",
            filename=str(log_file),
            filemode="w",
            force=True,
        )
    if (self.noise_model.runs != 1 and self.noise_model.runs is not None) or (
        self.noise_model.samples_per_run != 1
        and self.noise_model.samples_per_run is not None
    ):
        self.logger.warning(
            "Warning: The runs and samples_per_run values of the NoiseModel are ignored!"
        )
```
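As the constructor shows, when no `observables` are passed, the config defaults to sampling bitstrings at the end of the sequence. A sketch of a more explicit configuration; it assumes `BitStrings` is importable from `emu_mps` (it may instead live in `pulser.backend`):

```python
import logging
import pathlib

from emu_mps import BitStrings, MPSConfig  # BitStrings import path is an assumption

config = MPSConfig(
    dt=10,
    precision=1e-6,
    max_bond_dim=512,
    observables=[BitStrings(evaluation_times=[1.0])],  # forwarded to the base class
    log_level=logging.WARNING,
    log_file=pathlib.Path("emulation.log"),  # log here instead of stdout
)
```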
MPS
Bases: State[complex, Tensor]
Matrix Product State, aka tensor train.
Each tensor has 3 dimensions ordered as such: (left bond, site, right bond).
Only qubits are supported.
This constructor creates an MPS directly from a list of tensors. It is for internal use only.
PARAMETER | TYPE | DESCRIPTION
---|---|---
factors | List[torch.Tensor] | the tensors for each site. WARNING: for efficiency in a lot of use cases, this list of tensors IS NOT DEEP-COPIED. Therefore, the new MPS object is not necessarily the exclusive owner of the list and its tensors. As a consequence, beware of potential external modifications affecting the list or the tensors. You are responsible for deciding whether to pass an exclusive copy of the data to this constructor, or some shared objects.
orthogonality_center | Optional[int] | the orthogonality center of the MPS, or None (in which case it will be orthogonalized when needed)
config | Optional[MPSConfig] | the emu-mps config object passed to the run method
num_gpus_to_use | Optional[int] | distribute the factors over this many GPUs; 0 = all factors on the CPU, None = keep the existing device assignment
Source code in emu_mps/mps.py
```python
def __init__(
    self,
    factors: List[torch.Tensor],
    /,
    *,
    orthogonality_center: Optional[int] = None,
    config: Optional[MPSConfig] = None,
    num_gpus_to_use: Optional[int] = DEVICE_COUNT,
    eigenstates: Sequence[Eigenstate] = ("r", "g"),
):
    """
    This constructor creates a MPS directly from a list of tensors.
    It is for internal use only.

    Args:
        factors: the tensors for each site
            WARNING: for efficiency in a lot of use cases, this list of
            tensors IS NOT DEEP-COPIED. Therefore, the new MPS object is not
            necessarily the exclusive owner of the list and its tensors.
            As a consequence, beware of potential external modifications
            affecting the list or the tensors. You are responsible for
            deciding whether to pass an exclusive copy of the data to this
            constructor, or some shared objects.
        orthogonality_center: the orthogonality center of the MPS, or None
            (in which case it will be orthogonalized when needed)
        config: the emu-mps config object passed to the run method
        num_gpus_to_use: distribute the factors over this many GPUs
            0=all factors to cpu, None=keep the existing device assignment.
    """
    super().__init__(eigenstates=eigenstates)
    self.config = config if config is not None else MPSConfig()
    assert all(
        factors[i - 1].shape[2] == factors[i].shape[0]
        for i in range(1, len(factors))
    ), "The dimensions of consecutive tensors should match"
    assert (
        factors[0].shape[0] == 1 and factors[-1].shape[2] == 1
    ), "The dimension of the left (right) link of the first (last) tensor should be 1"

    self.factors = factors
    self.num_sites = len(factors)
    assert self.num_sites > 1  # otherwise, do state vector

    assert (orthogonality_center is None) or (
        0 <= orthogonality_center < self.num_sites
    ), "Invalid orthogonality center provided"
    self.orthogonality_center = orthogonality_center

    if num_gpus_to_use is not None:
        assign_devices(self.factors, min(DEVICE_COUNT, num_gpus_to_use))
```
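Although this constructor is documented as internal (prefer `MPS.make` below), it illustrates the factor convention. A sketch building a two-qubit product state from explicit `(left bond, site, right bond)` tensors:

```python
import torch
from emu_mps import MPS

# Each factor has shape (1, 2, 1): trivial bonds around one qubit, with all
# amplitude on the first basis state of the site.
factors = [torch.tensor([[[1.0], [0.0]]], dtype=torch.complex128) for _ in range(2)]
state = MPS(factors, orthogonality_center=0)
```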
n_qudits
property
The number of qudits in the state.
__add__(other)
Returns the sum of two MPSs, computed with a direct algorithm. The resulting MPS is orthogonalized on the first site and truncated up to `self.config.precision`.
PARAMETER | TYPE | DESCRIPTION
---|---|---
other | State | the other state

RETURNS | DESCRIPTION
---|---
MPS | the summed state
Source code in emu_mps/mps.py
```python
def __add__(self, other: State) -> MPS:
    """
    Returns the sum of two MPSs, computed with a direct algorithm.
    The resulting MPS is orthogonalized on the first site and truncated
    up to `self.config.precision`.

    Args:
        other: the other state

    Returns:
        the summed state
    """
    assert isinstance(other, MPS), "Other state also needs to be an MPS"
    assert (
        self.eigenstates == other.eigenstates
    ), f"`Other` state has basis {other.eigenstates} != {self.eigenstates}"
    new_tt = add_factors(self.factors, other.factors)
    result = MPS(
        new_tt,
        config=self.config,
        num_gpus_to_use=None,
        orthogonality_center=None,  # Orthogonality is lost.
        eigenstates=self.eigenstates,
    )
    result.truncate()
    return result
```
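A sketch superposing two product states, using `MPS.make` and `apply` (both documented below); the sum has norm √2 until rescaled:

```python
import torch
from emu_mps import MPS

a = MPS.make(4)
b = MPS.make(4)
b.apply(0, torch.tensor([[0, 1], [1, 0]], dtype=torch.complex128))  # flip qubit 0
superposition = a + b  # truncated, orthogonality center on site 0
```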
__rmul__(scalar)
Multiply an MPS by a scalar.
PARAMETER | TYPE | DESCRIPTION
---|---|---
scalar | complex | the scale factor

RETURNS | DESCRIPTION
---|---
MPS | the scaled MPS
Source code in emu_mps/mps.py
```python
def __rmul__(self, scalar: complex) -> MPS:
    """
    Multiply an MPS by a scalar.

    Args:
        scalar: the scale factor

    Returns:
        the scaled MPS
    """
    which = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else 0  # No need to orthogonalize for scaling.
    )
    factors = scale_factors(self.factors, scalar, which=which)
    return MPS(
        factors,
        config=self.config,
        num_gpus_to_use=None,
        orthogonality_center=self.orthogonality_center,
        eigenstates=self.eigenstates,
    )
```
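Continuing the previous sketch, scalar multiplication renormalizes the sum:

```python
normalized = (1 / superposition.norm().item()) * superposition
print(normalized.norm())  # ~1.0
```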
apply(qubit_index, single_qubit_operator)
Apply the given single-qubit operator to qubit `qubit_index`, leaving the MPS orthogonalized on that qubit.
Source code in emu_mps/mps.py
```python
def apply(self, qubit_index: int, single_qubit_operator: torch.Tensor) -> None:
    """
    Apply given single qubit operator to qubit qubit_index,
    leaving the MPS orthogonalized on that qubit.
    """
    self.orthogonalize(qubit_index)

    self.factors[qubit_index] = (
        single_qubit_operator.to(self.factors[qubit_index].device)
        @ self.factors[qubit_index]
    )
```
entanglement_entropy(mps_site)
Returns the von Neumann entanglement entropy of the state at the bond between sites `mps_site` and `mps_site + 1`:

S = -Σᵢ sᵢ² log(sᵢ²),

where sᵢ are the singular values at the chosen bond.
Source code in emu_mps/mps.py
```python
def entanglement_entropy(self, mps_site: int) -> torch.Tensor:
    """
    Returns the Von Neumann entanglement entropy of the state `mps`
    at the bond between sites b and b+1

        S = -Σᵢ sᵢ² log(sᵢ²),

    where sᵢ are the singular values at the chosen bond.
    """
    self.orthogonalize(mps_site)

    # perform svd on reshaped matrix at site b
    matrix = self.factors[mps_site].flatten(end_dim=1)
    s = torch.linalg.svdvals(matrix)

    s_e = torch.Tensor(torch.special.entr(s**2))
    s_e = torch.sum(s_e)

    self.orthogonalize(0)
    return s_e.cpu()
```
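For example, the entropy at the middle bond of an MPS `state` (assumed defined); note that the method leaves the orthogonality center on site 0:

```python
bond = state.num_sites // 2 - 1  # bond between sites bond and bond + 1
entropy = state.entanglement_entropy(bond)  # scalar torch.Tensor on the CPU
```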
expect_batch(single_qubit_operators)
Computes expectation values for each qubit and each single qubit operator in the batched input tensor.
Returns a tensor T such that T[q, i] is the expectation value for qubit #q and operator single_qubit_operators[i].
Source code in emu_mps/mps.py
```python
def expect_batch(self, single_qubit_operators: torch.Tensor) -> torch.Tensor:
    """
    Computes expectation values for each qubit and each single qubit
    operator in the batched input tensor.

    Returns a tensor T such that T[q, i] is the expectation value for
    qubit #q and operator single_qubit_operators[i].
    """
    orthogonality_center = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else self.orthogonalize(0)
    )

    result = torch.zeros(
        self.num_sites, single_qubit_operators.shape[0], dtype=torch.complex128
    )

    center_factor = self.factors[orthogonality_center]
    for qubit_index in range(orthogonality_center, self.num_sites):
        temp = torch.tensordot(center_factor.conj(), center_factor, ([0, 2], [0, 2]))

        result[qubit_index] = torch.tensordot(
            single_qubit_operators.to(temp.device), temp, dims=2
        )

        if qubit_index < self.num_sites - 1:
            _, r = torch.linalg.qr(center_factor.view(-1, center_factor.shape[2]))
            center_factor = torch.tensordot(
                r, self.factors[qubit_index + 1].to(r.device), dims=1
            )

    center_factor = self.factors[orthogonality_center]
    for qubit_index in range(orthogonality_center - 1, -1, -1):
        _, r = torch.linalg.qr(
            center_factor.view(center_factor.shape[0], -1).mT,
        )
        center_factor = torch.tensordot(
            self.factors[qubit_index],
            r.to(self.factors[qubit_index].device),
            ([2], [1]),
        )

        temp = torch.tensordot(center_factor.conj(), center_factor, ([0, 2], [0, 2]))

        result[qubit_index] = torch.tensordot(
            single_qubit_operators.to(temp.device), temp, dims=2
        )

    return result
```
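A sketch evaluating two Hermitian 2x2 operators on every qubit of an MPS `state` in a single call; which diagonal entry corresponds to which eigenstate depends on the basis ordering of the state:

```python
import torch

proj = torch.tensor([[1, 0], [0, 0]], dtype=torch.complex128)  # projector on basis state 0
sx = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex128)
ops = torch.stack([proj, sx])  # shape (2, 2, 2): a batch of two operators
values = state.expect_batch(ops)  # values[q, i] is <op_i> on qubit q
```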
get_correlation_matrix(*, operator=n_operator)
Efficiently computes the symmetric correlation matrix C_ij = ⟨self|opᵢ opⱼ|self⟩ in basis ("r", "g").

PARAMETER | TYPE | DESCRIPTION
---|---|---
operator | torch.Tensor | a 2x2 Torch tensor to use

RETURNS | DESCRIPTION
---|---
Tensor | the corresponding correlation matrix
Source code in emu_mps/mps.py
```python
def get_correlation_matrix(
    self, *, operator: torch.Tensor = n_operator
) -> torch.Tensor:
    """
    Efficiently compute the symmetric correlation matrix
    C_ij = <self|op_i op_j|self> in basis ("r", "g").

    Args:
        operator: a 2x2 Torch tensor to use

    Returns:
        the corresponding correlation matrix
    """
    assert operator.shape == (2, 2)

    result = torch.zeros(self.num_sites, self.num_sites, dtype=torch.complex128)

    for left in range(0, self.num_sites):
        self.orthogonalize(left)
        accumulator = torch.tensordot(
            self.factors[left],
            operator.to(self.factors[left].device),
            dims=([1], [0]),
        )
        accumulator = torch.tensordot(
            accumulator, self.factors[left].conj(), dims=([0, 2], [0, 1])
        )
        result[left, left] = accumulator.trace().item().real
        for right in range(left + 1, self.num_sites):
            partial = torch.tensordot(
                accumulator.to(self.factors[right].device),
                self.factors[right],
                dims=([0], [0]),
            )
            partial = torch.tensordot(
                partial, self.factors[right].conj(), dims=([0], [0])
            )

            result[left, right] = (
                torch.tensordot(
                    partial, operator.to(partial.device), dims=([0, 2], [0, 1])
                )
                .trace()
                .item()
                .real
            )
            result[right, left] = result[left, right]
            accumulator = tensor_trace(partial, 0, 2)

    return result
```
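Usage sketch, with the default number operator or a custom 2x2 operator, for an MPS `state` (assumed defined):

```python
import torch

corr = state.get_correlation_matrix()  # defaults to the number operator
sx = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex128)
corr_x = state.get_correlation_matrix(operator=sx)  # symmetric (N, N) tensor
```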
get_max_bond_dim()
Return the max bond dimension of this MPS.
RETURNS | DESCRIPTION
---|---
int | the largest bond dimension in the state
Source code in emu_mps/mps.py
```python
def get_max_bond_dim(self) -> int:
    """
    Return the max bond dimension of this MPS.

    Returns:
        the largest bond dimension in the state
    """
    return max((x.shape[2] for x in self.factors), default=0)
```
get_memory_footprint()
Returns the number of MBs of memory occupied to store the state.

RETURNS | DESCRIPTION
---|---
float | the memory in MBs
Source code in emu_mps/mps.py
```python
def get_memory_footprint(self) -> float:
    """
    Returns the number of MBs of memory occupied to store the state

    Returns:
        the memory in MBs
    """
    return (  # type: ignore[no-any-return]
        sum(factor.element_size() * factor.numel() for factor in self.factors)
        * 1e-6
    )
```
inner(other)
Compute the inner product between this state and `other`. Note that `self` is the left state in the inner product, so this function is linear in `other` and anti-linear in `self`.
PARAMETER | TYPE | DESCRIPTION
---|---|---
other | State | the other state

RETURNS | DESCRIPTION
---|---
Tensor | the inner product
Source code in emu_mps/mps.py
```python
def inner(self, other: State) -> torch.Tensor:
    """
    Compute the inner product between this state and other.
    Note that self is the left state in the inner product,
    so this function is linear in other, and anti-linear in self.

    Args:
        other: the other state

    Returns:
        inner product
    """
    assert isinstance(other, MPS), "Other state also needs to be an MPS"
    assert (
        self.num_sites == other.num_sites
    ), "States do not have the same number of sites"

    acc = torch.ones(1, 1, dtype=self.factors[0].dtype, device=self.factors[0].device)

    for i in range(self.num_sites):
        acc = acc.to(self.factors[i].device)
        acc = torch.tensordot(acc, other.factors[i].to(acc.device), dims=1)
        acc = torch.tensordot(self.factors[i].conj(), acc, dims=([0, 1], [0, 1]))

    return acc.view(1)[0].cpu()
```
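A sketch, assuming `a` and `b` are MPS objects with the same number of sites:

```python
ip = a.inner(b)    # <a|b>, a scalar torch.Tensor on the CPU
ov = a.overlap(b)  # |<a|b>|^2, see overlap below
```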
make(num_sites, config=None, num_gpus_to_use=DEVICE_COUNT, eigenstates=['0', '1'])
classmethod
Returns an MPS in the ground state |000..0>.
PARAMETER | TYPE | DESCRIPTION
---|---|---
num_sites | int | the number of qubits
config | Optional[MPSConfig] | the MPSConfig
num_gpus_to_use | int | distribute the factors over this many GPUs; 0 = all factors on the CPU
Source code in emu_mps/mps.py
```python
@classmethod
def make(
    cls,
    num_sites: int,
    config: Optional[MPSConfig] = None,
    num_gpus_to_use: int = DEVICE_COUNT,
    eigenstates: Sequence[Eigenstate] = ["0", "1"],
) -> MPS:
    """
    Returns a MPS in ground state |000..0>.

    Args:
        num_sites: the number of qubits
        config: the MPSConfig
        num_gpus_to_use: distribute the factors over this many GPUs
            0=all factors to cpu
    """
    config = config if config is not None else MPSConfig()

    if num_sites <= 1:
        raise ValueError("For 1 qubit states, do state vector")

    return cls(
        [
            torch.tensor([[[1.0], [0.0]]], dtype=torch.complex128)
            for _ in range(num_sites)
        ],
        config=config,
        num_gpus_to_use=num_gpus_to_use,
        orthogonality_center=0,  # Arbitrary: every qubit is an orthogonality center.
        eigenstates=eigenstates,
    )
```
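For example, a 4-qubit ground state is a product state with trivial bonds:

```python
from emu_mps import MPS

state = MPS.make(4)
print(state.get_max_bond_dim())  # 1: a product state has trivial bonds
print(state.norm())              # ~1.0
```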
norm()
Computes the norm of the MPS.
Source code in emu_mps/mps.py
```python
def norm(self) -> torch.Tensor:
    """Computes the norm of the MPS."""
    orthogonality_center = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else self.orthogonalize(0)
    )
    # the torch.norm function is not properly typed.
    return self.factors[orthogonality_center].norm().cpu()  # type: ignore[no-any-return]
```
orthogonalize(desired_orthogonality_center=0)
Orthogonalize the state on the given orthogonality center.
Returns the new orthogonality center index as an integer; this is convenient for type-checking purposes.
Source code in emu_mps/mps.py
```python
def orthogonalize(self, desired_orthogonality_center: int = 0) -> int:
    """
    Orthogonalize the state on the given orthogonality center.

    Returns the new orthogonality center index as an integer,
    this is convenient for type-checking purposes.
    """
    assert (
        0 <= desired_orthogonality_center < self.num_sites
    ), f"Cannot move orthogonality center to nonexistent qubit #{desired_orthogonality_center}"

    lr_swipe_start = (
        self.orthogonality_center if self.orthogonality_center is not None else 0
    )

    for i in range(lr_swipe_start, desired_orthogonality_center):
        q, r = torch.linalg.qr(self.factors[i].view(-1, self.factors[i].shape[2]))
        self.factors[i] = q.view(self.factors[i].shape[0], 2, -1)
        self.factors[i + 1] = torch.tensordot(
            r.to(self.factors[i + 1].device), self.factors[i + 1], dims=1
        )

    rl_swipe_start = (
        self.orthogonality_center
        if self.orthogonality_center is not None
        else (self.num_sites - 1)
    )

    for i in range(rl_swipe_start, desired_orthogonality_center, -1):
        q, r = torch.linalg.qr(
            self.factors[i].view(self.factors[i].shape[0], -1).mT,
        )
        self.factors[i] = q.mT.view(-1, 2, self.factors[i].shape[2])
        self.factors[i - 1] = torch.tensordot(
            self.factors[i - 1], r.to(self.factors[i - 1].device), ([2], [1])
        )

    self.orthogonality_center = desired_orthogonality_center

    return desired_orthogonality_center
```
overlap(other)
Compute the overlap of this state and `other`. This is defined as |⟨self|other⟩|².
Source code in emu_mps/mps.py
```python
def overlap(self, other: State, /) -> torch.Tensor:
    """
    Compute the overlap of this state and other. This is defined as
    $|\\langle self | other \\rangle |^2$
    """
    return torch.abs(self.inner(other)) ** 2  # type: ignore[no-any-return]
```
sample(*, num_shots, one_state=None, p_false_pos=0.0, p_false_neg=0.0)
Samples bitstrings, taking into account the specified error rates.
PARAMETER | TYPE | DESCRIPTION
---|---|---
num_shots | int | how many bitstrings to sample
p_false_pos | float | the rate at which a 0 is read as a 1
p_false_neg | float | the rate at which a 1 is read as a 0

RETURNS | DESCRIPTION
---|---
Counter[str] | the measured bitstrings, by count
Source code in emu_mps/mps.py
```python
def sample(
    self,
    *,
    num_shots: int,
    one_state: Eigenstate | None = None,
    p_false_pos: float = 0.0,
    p_false_neg: float = 0.0,
) -> Counter[str]:
    """
    Samples bitstrings, taking into account the specified error rates.

    Args:
        num_shots: how many bitstrings to sample
        p_false_pos: the rate at which a 0 is read as a 1
        p_false_neg: the rate at which a 1 is read as a 0

    Returns:
        the measured bitstrings, by count
    """
    assert one_state in {None, "r", "1"}
    self.orthogonalize(0)

    rnd_matrix = torch.rand(num_shots, self.num_sites).to(self.factors[0].device)

    bitstrings: Counter[str] = Counter()

    # Shots are performed in batches.
    # Larger max_batch_size is faster but uses more memory.
    max_batch_size = 32

    shots_done = 0
    while shots_done < num_shots:
        batch_size = min(max_batch_size, num_shots - shots_done)
        batched_accumulator = torch.ones(
            batch_size, 1, dtype=torch.complex128, device=self.factors[0].device
        )

        batch_outcomes = torch.empty(batch_size, self.num_sites, dtype=torch.bool)

        for qubit, factor in enumerate(self.factors):
            batched_accumulator = torch.tensordot(
                batched_accumulator.to(factor.device), factor, dims=1
            )

            # Probability of measuring qubit == 0 for each shot in the batch
            probas = (
                torch.linalg.vector_norm(batched_accumulator[:, 0, :], dim=1) ** 2
            )

            outcomes = (
                rnd_matrix[shots_done : shots_done + batch_size, qubit].to(
                    factor.device
                )
                > probas
            )
            batch_outcomes[:, qubit] = outcomes

            # Batch collapse qubit
            tmp = torch.stack((~outcomes, outcomes), dim=1).to(dtype=torch.complex128)

            batched_accumulator = (
                torch.tensordot(batched_accumulator, tmp, dims=([1], [1]))
                .diagonal(dim1=0, dim2=2)
                .transpose(1, 0)
            )
            batched_accumulator /= torch.sqrt(
                (~outcomes) * probas + outcomes * (1 - probas)
            ).unsqueeze(1)

        shots_done += batch_size

        for outcome in batch_outcomes:
            bitstrings.update(["".join("0" if x == 0 else "1" for x in outcome)])

    if p_false_neg > 0 or p_false_pos > 0:
        bitstrings = apply_measurement_errors(
            bitstrings,
            p_false_pos=p_false_pos,
            p_false_neg=p_false_neg,
        )
    return bitstrings
```
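A sketch sampling with readout errors from an MPS `state` (assumed defined):

```python
counts = state.sample(num_shots=1000, p_false_pos=0.01, p_false_neg=0.05)
print(counts.most_common(3))  # the three most frequent bitstrings
```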
truncate()
SVD-based truncation of the state. Puts the orthogonality center at the first qubit: calls orthogonalize on the last qubit, then sweeps a series of SVDs right to left. Uses `self.config` to determine the accuracy. An in-place operation.
Source code in emu_mps/mps.py
```python
def truncate(self) -> None:
    """
    SVD based truncation of the state. Puts the orthogonality center at the
    first qubit. Calls orthogonalize on the last qubit, and then sweeps a
    series of SVDs right-left. Uses self.config for determining accuracy.
    An in-place operation.
    """
    self.orthogonalize(self.num_sites - 1)
    truncate_impl(self.factors, config=self.config)
    self.orthogonality_center = 0
```
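Truncation is mostly invoked internally (for instance by `__add__` and `apply_to`), but it can also be called manually after operations that grow the bond dimension:

```python
state.truncate()                   # in place, using state.config for the accuracy
print(state.orthogonality_center)  # 0 after truncation
```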
inner(left, right)
Wrapper around MPS.inner.
PARAMETER | TYPE | DESCRIPTION
---|---|---
left | MPS | the anti-linear argument
right | MPS | the linear argument

RETURNS | DESCRIPTION
---|---
Tensor | the inner product
Source code in emu_mps/mps.py
```python
def inner(left: MPS, right: MPS) -> torch.Tensor:
    """
    Wrapper around MPS.inner.

    Args:
        left: the anti-linear argument
        right: the linear argument

    Returns:
        the inner product
    """
    return left.inner(right)
```
MPO
Bases: Operator[complex, Tensor, MPS]
Matrix Product Operator.
Each tensor has 4 dimensions ordered as such: (left bond, output, input, right bond).
PARAMETER | TYPE | DESCRIPTION
---|---|---
factors | List[torch.Tensor] | the tensors making up the MPO
Source code in emu_mps/mpo.py
```python
def __init__(
    self,
    factors: List[torch.Tensor],
    /,
    num_gpus_to_use: Optional[int] = None,
):
    self.factors = factors
    self.num_sites = len(factors)
    if not self.num_sites > 1:
        raise ValueError("For 1 qubit states, do state vector")
    if factors[0].shape[0] != 1 or factors[-1].shape[-1] != 1:
        raise ValueError(
            "The dimension of the left (right) link of the first (last) tensor should be 1"
        )
    assert all(
        factors[i - 1].shape[-1] == factors[i].shape[0]
        for i in range(1, self.num_sites)
    )

    if num_gpus_to_use is not None:
        assign_devices(self.factors, min(DEVICE_COUNT, num_gpus_to_use))
```
__add__(other)
Returns the sum of two MPOs, computed with a direct algorithm. The result is currently not truncated.
PARAMETER | TYPE | DESCRIPTION
---|---|---
other | MPO | the other operator

RETURNS | DESCRIPTION
---|---
MPO | the summed operator
Source code in emu_mps/mpo.py
```python
def __add__(self, other: MPO) -> MPO:
    """
    Returns the sum of two MPOs, computed with a direct algorithm.
    The result is currently not truncated.

    Args:
        other: the other operator

    Returns:
        the summed operator
    """
    assert isinstance(other, MPO), "MPO can only be added to another MPO"
    sum_factors = add_factors(self.factors, other.factors)
    return MPO(sum_factors)
```
__matmul__(other)
Compose two operators. The ordering is that self is applied after other.
PARAMETER | TYPE | DESCRIPTION
---|---|---
other | MPO | the operator to compose with self

RETURNS | DESCRIPTION
---|---
MPO | the composed operator
Source code in emu_mps/mpo.py
```python
def __matmul__(self, other: MPO) -> MPO:
    """
    Compose two operators. The ordering is that self is applied after other.

    Args:
        other: the operator to compose with self

    Returns:
        the composed operator
    """
    assert isinstance(other, MPO), "MPO can only be applied to another MPO"
    factors = zip_right(self.factors, other.factors)
    return MPO(factors)
```
__rmul__(scalar)
Multiply an MPO by a scalar. Assumes the orthogonality center is on the first factor.
PARAMETER | TYPE | DESCRIPTION
---|---|---
scalar | complex | the scale factor to multiply with

RETURNS | DESCRIPTION
---|---
MPO | the scaled MPO
Source code in emu_mps/mpo.py
```python
def __rmul__(self, scalar: complex) -> MPO:
    """
    Multiply an MPO by scalar. Assumes the orthogonal centre is on the first factor.

    Args:
        scalar: the scale factor to multiply with

    Returns:
        the scaled MPO
    """
    factors = scale_factors(self.factors, scalar, which=0)
    return MPO(factors)
```
apply_to(other)
Applies this MPO to the given MPS. The returned MPS is:

- orthogonal on the first site
- truncated up to `other.config.precision`
- distributed on the same devices as `other`
PARAMETER | TYPE | DESCRIPTION
---|---|---
other | MPS | the state to apply this operator to

RETURNS | DESCRIPTION
---|---
MPS | the resulting state
Source code in emu_mps/mpo.py
```python
def apply_to(self, other: MPS) -> MPS:
    """
    Applies this MPO to the given MPS. The returned MPS is:

    - orthogonal on the first site
    - truncated up to `other.precision`
    - distributed on the same devices as `other`

    Args:
        other: the state to apply this operator to

    Returns:
        the resulting state
    """
    assert isinstance(other, MPS), "MPO can only be multiplied with MPS"
    factors = zip_right(
        self.factors,
        other.factors,
        config=other.config,
    )
    return MPS(factors, orthogonality_center=0, eigenstates=other.eigenstates)
```
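A sketch applying an identity MPO to a 3-qubit state; the factor shape follows the documented `(left bond, output, input, right bond)` convention:

```python
import torch
from emu_mps import MPO, MPS

state = MPS.make(3)
# Identity MPO: one (1, 2, 2, 1)-shaped identity factor per site.
eye = torch.eye(2, dtype=torch.complex128).reshape(1, 2, 2, 1)
identity = MPO([eye.clone() for _ in range(3)])
new_state = identity.apply_to(state)  # truncated, orthogonal on the first site
```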
expect(state)
Compute the expectation value of self on the given state.
PARAMETER | TYPE | DESCRIPTION
---|---|---
state | State | the state with which to compute

RETURNS | DESCRIPTION
---|---
Tensor | the expectation
Source code in emu_mps/mpo.py
```python
def expect(self, state: State) -> torch.Tensor:
    """
    Compute the expectation value of self on the given state.

    Args:
        state: the state with which to compute

    Returns:
        the expectation
    """
    assert isinstance(
        state, MPS
    ), "currently, only expectation values of MPSs are supported"
    acc = torch.ones(
        1, 1, 1, dtype=state.factors[0].dtype, device=state.factors[0].device
    )
    n = len(self.factors) - 1
    for i in range(n):
        acc = new_left_bath(acc, state.factors[i], self.factors[i]).to(
            state.factors[i + 1].device
        )
    acc = new_left_bath(acc, state.factors[n], self.factors[n])
    return acc.view(1)[0].cpu()
```
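Reusing the identity MPO from the previous sketch, the expectation value on a normalized state is 1:

```python
value = identity.expect(state)  # ~1+0j for a normalized state
```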