diff --git a/tutorials-v4/pulse-level-circuit-simulation/qip-customize-device.md b/tutorials-v4/pulse-level-circuit-simulation/qip-customize-device.md
index 470d8f81..02ea4f7a 100644
--- a/tutorials-v4/pulse-level-circuit-simulation/qip-customize-device.md
+++ b/tutorials-v4/pulse-level-circuit-simulation/qip-customize-device.md
@@ -12,10 +12,10 @@ jupyter:
     name: python3
---
-# Custimize the pulse-level simulation
+# Customize the pulse-level simulation
Author: Boxi Li (etamin1201@gmail.com)
-In this note, we demonstrate examples of customizing the pulse-level simulator in qutip-qip.The notebook is divided into three parts:
+In this note, we demonstrate examples of customizing the pulse-level simulator in qutip-qip. The notebook is divided into three parts:
1. Customizing the Hamiltonian model
2. Customizing the compiler
3. Customizing the noise
@@ -145,7 +145,7 @@ class MyProcessor(ModelProcessor):
         super(MyProcessor, self).__init__(
             num_qubits, t1=t1, t2=t2
         )  # call the parent class initializer
-        # The control pulse is discrete or continous.
+        # The control pulse is discrete or continuous.
         self.pulse_mode = "discrete"
         self.model.params.update(
             {
@@ -193,7 +193,7 @@ circuit
For circuit plotting, see [this notebook](../quantum-circuits/quantum-gates.md).
-To convert a quantum circuit into the Hamiltonian model, we need a compiler. The custom definition of a compiler will be discussed in details in the next section. Because we used the Hamiltonian model of the spin chain, we here simply "borrow" the compiler of the spin chain model.
+To convert a quantum circuit into the Hamiltonian model, we need a compiler. The custom definition of a compiler will be discussed in detail in the next section. Because we used the Hamiltonian model of the spin chain, here we simply "borrow" the compiler of the spin chain model.
```python
processor = ModelProcessor(model=MyModel(num_qubits, h_x=1.0, h_z=1.0, g=0.1))
@@ -468,8 +468,8 @@ print(
)
```
-### Pulse dependent noise
-In this second example, we demonstrate how to add an additional amplitude damping channel on the qubits. The amplitude of this decay is linearly dependent on the control pulse "sx", i.e. whenever the pulse "sx" is turned on, the decoherence is also turned on. The corresponding annihilation operator has a coefficient proportional to the control pulse amplitude. This noise can be added on top of the default T1, T2 noise.
+### Pulse-dependent noise
+In this second example, we demonstrate how to add an additional amplitude-damping channel on the qubits. The amplitude of this decay is linearly dependent on the control pulse "sx", i.e. whenever the pulse "sx" is turned on, the decoherence is also turned on. The corresponding annihilation operator has a coefficient proportional to the control pulse amplitude. This noise can be added on top of the default T1, T2 noise.
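A minimal sketch of what such a pulse-dependent noise class can look like, assuming the qutip-qip `Pulse`/`Noise` API used elsewhere in this tutorial (the class name and the `ratio` parameter are illustrative, not the tutorial's own definition):

```python
from qutip import destroy
from qutip_qip.noise import Noise


class PulseProportionalDecay(Noise):  # hypothetical name
    def __init__(self, ratio):
        self.ratio = ratio  # decay strength per unit pulse amplitude

    def get_noisy_dynamics(self, dims, pulses, systematic_noise):
        for pulse in pulses:
            if "sx" not in pulse.label:
                continue  # only the "sx" control triggers the extra decay
            pulse.add_lindblad_noise(
                destroy(2),
                targets=pulse.targets,
                tlist=pulse.tlist,
                coeff=self.ratio * pulse.coeff,  # decay follows the amplitude
            )
        return pulses, systematic_noise
```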
```python
class Extral_decay_2(Noise):
@@ -505,7 +505,7 @@ tlist, coeff = processor.load_circuit(circuit, compiler=gauss_compiler)
result = processor.run_state(init_state=basis([2, 2], [0, 0]))
print(
-    "Final fidelity with pulse dependent decoherence:",
+    "Final fidelity with pulse-dependent decoherence:",
    fidelity(result.states[-1], basis([2, 2], [1, 1])),
)
```
diff --git a/tutorials-v5/optimal-control/01-optimal-control-overview.md b/tutorials-v5/optimal-control/01-optimal-control-overview.md
index 50c91bea..7096dea2 100644
--- a/tutorials-v5/optimal-control/01-optimal-control-overview.md
+++ b/tutorials-v5/optimal-control/01-optimal-control-overview.md
@@ -22,24 +22,24 @@ Jonathan Zoller (jonathan.zoller@uni-ulm.de)
# Introduction
-In quantum control we look to prepare some specific state, effect some state-to-state transfer, or effect some transformation (or gate) on a quantum system. For a given quantum system there will always be factors that effect the dynamics that are outside of our control. As examples, the interactions between elements of the system or a magnetic field required to trap the system. However, there may be methods of affecting the dynamics in a controlled way, such as the time varying amplitude of the electric component of an interacting laser field. And so this leads to some questions; given a specific quantum system with known time-independent dynamics generator (referred to as the *drift* dynamics generators) and set of externally controllable fields for which the interaction can be described by *control* dynamics generators:
-1. what states or transformations can we achieve (if any)?
-2. what is the shape of the control pulse required to achieve this?
+In quantum control, we look to prepare some specific state, effect some state-to-state transfer, or effect some transformation (or gate) on a quantum system. For a given quantum system, there will always be factors that affect the dynamics that are outside of our control. Examples include the interactions between elements of the system, or a magnetic field required to trap the system. However, there may be methods of affecting the dynamics in a controlled way, such as the time-varying amplitude of the electric component of an interacting laser field. And so this leads to some questions: given a specific quantum system with a known time-independent dynamics generator (referred to as the *drift* dynamics generator) and a set of externally controllable fields for which the interaction can be described by *control* dynamics generators:
+1. What states or transformations can we achieve (if any)?
+2. What is the shape of the control pulse required to achieve this?
-These questions are addressed as *controllability* and *quantum optimal control* [1]. The answer to question of *controllability* is determined by the commutability of the dynamics generators and is formalised as the *Lie Algebra Rank Criterion* and is discussed in detail in [1]. The solutions to the second question can be determined through optimal control algorithms, or control pulse optimisation.
+These questions are addressed as *controllability* and *quantum optimal control* [1]. The answer to the question of *controllability* is determined by the commutability of the dynamics generators; it is formalised as the *Lie Algebra Rank Criterion* and discussed in detail in [1]. The solutions to the second question can be determined through optimal control algorithms or control pulse optimisation.
![qc_shematic](./images/quant_optim_ctrl.png "Schematic showing the principle of quantum control") -Quantum Control has many applications including NMR, *quantum metrology*, *control of chemical reactions*, and *quantum information processing*. +Quantum Control has many applications, including NMR, *quantum metrology*, *control of chemical reactions*, and *quantum information processing*. -To explain the physics behind these algorithms we will first consider only finite-dimensional, closed quantum systems. +To explain the physics behind these algorithms, we will first consider only finite-dimensional, closed quantum systems. # Closed Quantum Systems -In closed quantum systems the states can be represented by kets, and the transformations on these states are unitary operators. The dynamics generators are Hamiltonians. The combined Hamiltonian for the system is given by +In closed quantum systems, the states can be represented by kets, and the transformations on these states are unitary operators. The dynamics generators are Hamiltonians. The combined Hamiltonian for the system is given by $ H(t) = H_0 + \sum_{j=1} u_j(t) H_j $ -where $H_0$ is the drift Hamiltonian and the $H_j$ are the control Hamiltonians. The $u_j$ are time varying amplitude functions for the specific control. +where $H_0$ is the drift Hamiltonian and the $H_j$ are the control Hamiltonians. The $u_j$ are time-varying amplitude functions for the specific control. The dynamics of the system are governed by *Schrödingers equation*. @@ -61,7 +61,7 @@ The GRadient Ascent Pulse Engineering was first proposed in [2]. Solutions to Sc $H(t) \approx H(t_k) = H_0 + \sum_{j=1}^N u_{jk} H_j\quad$ -where $k$ is a timeslot index, $j$ is the control index, and $N$ is the number of controls. Hence $t_k$ is the evolution time at the start of the timeslot, and $u_{jk}$ is the amplitude of control $j$ throughout timeslot $k$. The time evolution operator, or propagator, within the timeslot can then be calculated as: +where $k$ is a timeslot index, $j$ is the control index, and $N$ is the number of controls. Hence, $t_k$ is the evolution time at the start of the timeslot, and $u_{jk}$ is the amplitude of control $j$ throughout timeslot $k$. The time evolution operator, or propagator, within the timeslot can then be calculated as: $X_k:=e^{-iH(t_k)\Delta t_k}$ @@ -75,15 +75,15 @@ A *figure of merit* or *fidelity* is some measure of how close the evolution is $\newcommand{\tr}[0]{\operatorname{tr}} f_{PSU} = \tfrac{1}{d} \big| \tr \{X_{targ}^{\dagger} X(T)\} \big|$ -where $d$ is the system dimension. In this figure of merit the absolute value is taken to ignore any differences in global phase, and $0 \le f \le 1$. Typically the fidelity error (or *infidelity*) is more useful, in this case defined as $\varepsilon = 1 - f_{PSU}$. There are many other possible objectives, and hence figures of merit. +where $d$ is the system dimension. In this figure of merit, the absolute value is taken to ignore any differences in global phase, and $0 \le f \le 1$. Typically, the fidelity error (or *infidelity*) is more useful, in this case defined as $\varepsilon = 1 - f_{PSU}$. There are many other possible objectives, and hence figures of merit. -As there are now $N \times M$ variables (the $u_{jk}$) and one parameter to minimise $\varepsilon$, then the problem becomes a finite multi-variable optimisation problem, for which there are many established methods, often referred to as 'hill-climbing' methods. 
The simplest of these to understand is that of steepest ascent (or descent). The gradient of the fidelity with respect to all the variables is calculated (or approximated) and a step is made in the variable space in the direction of steepest ascent (or descent). This method is a first order gradient method. In two dimensions this describes a method of climbing a hill by heading in the direction where the ground rises fastest. This analogy also clearly illustrates one of the main challenges in multi-variable optimisation, which is that all methods have a tendency to get stuck in local maxima. It is hard to determine whether one has found a global maximum or not - a local peak is likely not to be the highest mountain in the region. In quantum optimal control we can typically define an infidelity that has a lower bound of zero. We can then look to minimise the infidelity (from here on we will only consider optimising for infidelity minima). This means that we can terminate any pulse optimisation when the infidelity reaches zero (to a sufficient precision). This is however only possible for fully controllable systems; otherwise it is hard (if not impossible) to know that the minimum possible infidelity has been achieved. In the hill walking analogy the step size is roughly fixed to a stride, however, in computations the step size must be chosen. Clearly there is a trade-off here between the number of steps (or iterations) required to reach the minima and the possibility that we might step over a minima. In practice it is difficult to determine an efficient and effective step size.
+As there are now $N \times M$ variables (the $u_{jk}$) and one parameter, $\varepsilon$, to minimise, the problem becomes a finite multi-variable optimisation problem, for which there are many established methods, often referred to as 'hill-climbing' methods. The simplest of these to understand is that of steepest ascent (or descent). The gradient of the fidelity with respect to all the variables is calculated (or approximated), and a step is made in the variable space in the direction of steepest ascent (or descent). This method is a first-order gradient method. In two dimensions, this describes a method of climbing a hill by heading in the direction where the ground rises fastest. This analogy also clearly illustrates one of the main challenges in multi-variable optimisation, which is that all methods have a tendency to get stuck in local maxima. It is hard to determine whether one has found a global maximum or not - a local peak is likely not to be the highest mountain in the region. In quantum optimal control, we can typically define an infidelity that has a lower bound of zero. We can then look to minimise the infidelity (from here on we will only consider optimising for infidelity minima). This means that we can terminate any pulse optimisation when the infidelity reaches zero (to a sufficient precision). This is, however, only possible for fully controllable systems; otherwise it is hard (if not impossible) to know that the minimum possible infidelity has been achieved. In the hill-walking analogy, the step size is roughly fixed to a stride; in computations, however, the step size must be chosen. Clearly, there is a trade-off here between the number of steps (or iterations) required to reach the minimum and the possibility that we might step over a minimum. In practice, it is difficult to determine an efficient and effective step size.
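As a toy illustration of this first-order method (plain NumPy, not the QuTiP implementation; the two-variable landscape and all names are hypothetical), note how a fixed step size plays the role of the stride:

```python
import numpy as np


def infidelity(u):
    # hypothetical two-variable infidelity landscape with lower bound 0
    return 1 - np.exp(-((u[0] - 0.3) ** 2 + (u[1] + 0.1) ** 2))


def approx_gradient(f, u, eps=1e-7):
    # central finite differences, one variable at a time
    grad = np.zeros_like(u)
    for j in range(len(u)):
        du = np.zeros_like(u)
        du[j] = eps
        grad[j] = (f(u + du) - f(u - du)) / (2 * eps)
    return grad


u = np.array([1.0, -1.0])  # initial "pulse amplitudes"
step_size = 0.5  # the fixed "stride"
for _ in range(100):
    u = u - step_size * approx_gradient(infidelity, u)
    if infidelity(u) < 1e-8:  # terminate near the known lower bound
        break
print("infidelity:", infidelity(u), "at u =", u)
```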
-The second order differentials of the infidelity with respect to the variables can be used to approximate the local landscape to a parabola. This way a step (or jump) can be made to where the minima would be if it were parabolic. This typically vastly reduces the number of iterations, and removes the need to guess a step size. The method where all the second differentials are calculated explicitly is called the *Newton-Raphson* method. However, calculating the second-order differentials (the Hessian matrix) can be computationally expensive, and so there are a class of methods known as *quasi-Newton* that approximate the Hessian based on successive iterations. The most popular of these (in quantum optimal control) is the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS). The default method in the QuTiP Qtrl GRAPE implementation is the L-BFGS-B method in Scipy, which is a wrapper to the implementation described in [3]. This limited memory and bounded method does not need to store the entire Hessian, which reduces the computer memory required, and allows bounds to be set for variable values, which considering these are field amplitudes is often physical.
+The second-order differentials of the infidelity with respect to the variables can be used to approximate the local landscape by a parabola. This way, a step (or jump) can be made to where the minimum would be if the landscape were parabolic. This typically vastly reduces the number of iterations, and removes the need to guess a step size. The method where all the second differentials are calculated explicitly is called the *Newton-Raphson* method. However, calculating the second-order differentials (the Hessian matrix) can be computationally expensive, and so there is a class of methods known as *quasi-Newton* that approximate the Hessian based on successive iterations. The most popular of these (in quantum optimal control) is the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS). The default method in the QuTiP Qtrl GRAPE implementation is the L-BFGS-B method in Scipy, which is a wrapper to the implementation described in [3]. This limited-memory, bounded method does not need to store the entire Hessian, which reduces the computer memory required, and allows bounds to be set for variable values, which, considering these are field amplitudes, is often physical.
-The pulse optimisation is typically far more efficient if the gradients can be calculated exactly, rather than approximated. For simple fidelity measures such as $f_{PSU}$ this is possible. Firstly the propagator gradient for each timeslot with respect to the control amplitudes is calculated. For closed systems, with unitary dynamics, a method using the eigendecomposition is used, which is efficient as it is also used in the propagator calculation (to exponentiate the combined Hamiltonian). More generally (for example open systems and symplectic dynamics) the Frechet derivative (or augmented matrix) method is used, which is described in [4]. For other optimisation goals it may not be possible to calculate analytic gradients. In these cases it is necessary to approximate the gradients, but this can be very expensive, and can lead to other algorithms out-performing GRAPE.
+The pulse optimisation is typically far more efficient if the gradients can be calculated exactly, rather than approximated. For simple fidelity measures such as $f_{PSU}$, this is possible. Firstly, the propagator gradient for each timeslot with respect to the control amplitudes is calculated.
For closed systems, with unitary dynamics, a method using the eigendecomposition is used, which is efficient as it is also used in the propagator calculation (to exponentiate the combined Hamiltonian). More generally (for example, open systems and symplectic dynamics), the Frechet derivative (or augmented matrix) method is used, which is described in [4]. For other optimisation goals, it may not be possible to calculate analytic gradients. In these cases, it is necessary to approximate the gradients, but this can be very expensive, and can lead to other algorithms outperforming GRAPE.
-QuTiP examples of GRAPE using second order gradient ascent methods are given in:
+QuTiP examples of GRAPE using second-order gradient ascent methods are given in:
- [pulseoptim Hadamard](./02-cpo-GRAPE-Hadamard.ipynb)
- [pulseoptim QFT](./03-cpo-GRAPE-QFT.ipynb)
- Open systems: [pulseoptim - Lindbladian](./04-cpo-GRAPE-QFT.ipynb)
@@ -99,7 +99,7 @@ QuTiP examples of GRAPE using second order gradient ascent methods are given in:
# The CRAB Algorithm
It has been shown [5], the dimension of a quantum optimal control problem is a polynomial function of the dimension of the manifold of the time-polynomial reachable states, when allowing for a finite control precision and evolution time. You can think of this as the information content of the pulse (as being the only effective input) being very limited e.g. the pulse is compressible to a few bytes without loosing the target.
-This is where the Chopped RAndom Basis (CRAB) algorithm [6,7] comes into play: Since the pulse complexity is usually very low, it is sufficient to transform the optimal control problem to a few parameter search by introducing a physically motivated function basis that builds up the pulse. Compared to the number of time slices needed to accurately simulate quantum dynamics (often equals basis dimension for Gradient based algorithms), this number is lower by orders of magnitude, allowing CRAB to efficiently optimize smooth pulses with realistic experimental constraints. It is important to point out, that CRAB does not make any suggestion on the basis function to be used. The basis must be chosen carefully considered, taking into account a priori knowledge of the system (such as symmetries, magnitudes of scales,...) and solution (e.g.
+This is where the Chopped RAndom Basis (CRAB) algorithm [6,7] comes into play: since the pulse complexity is usually very low, it is sufficient to transform the optimal control problem into a search over a few parameters by introducing a physically motivated function basis that builds up the pulse. Compared to the number of time slices needed to accurately simulate quantum dynamics (which often equals the basis dimension for gradient-based algorithms), this number is lower by orders of magnitude, allowing CRAB to efficiently optimize smooth pulses with realistic experimental constraints. It is important to point out that CRAB does not make any suggestion on the basis function to be used. The basis must be chosen carefully, taking into account a priori knowledge of the system (such as symmetries, magnitudes of scales,...) and solution (e.g.
sign, smoothness, bang-bang behavior, singularities, maximum excursion or rate of change,....). By doing so, this algorithm allows for native integration of experimental constraints such as maximum frequencies allowed, maximum amplitude, smooth ramping up and down of the pulse, and many more. Moreover, initial guesses, if they are available, can (but do not have to) be included to speed up convergence.
As mentioned in the GRAPE paragraph, for CRAB local minima arising from algorithmic design can occur, too. However, for CRAB a 'dressed' version has recently been introduced [8] that allows to escape local minima.
@@ -112,32 +112,32 @@ QuTiP examples of CRAB control are given in:
# The QuTiP optimal control implementation
-There are two separate implementations of optimal control inside QuTiP. The first is an implementation of first order GRAPE, and is not further described here, but there are the example notebooks listed above. The second is referred to as Qtrl (when a distinction needs to be made) as this was its name before it was integrated into QuTiP. Qtrl uses the Scipy optimize functions to perform the multi-variable optimisation, typically the L-BFGS-B method for GRAPE and Nelder-Mead for CRAB. The GRAPE implementation in Qtrl was initially based on the open-source package DYNAMO, which is a MATLAB implementation, and is described in [9]. It has since been restructured and extended for flexibility and compatibility within QuTiP. Merging the GRAPE implementations is part of the near future plans. An implementation of the 'dressed' CRAB algorithm is also planned for the near future.
+There are two separate implementations of optimal control inside QuTiP. The first is an implementation of first-order GRAPE; it is not described further here, but example notebooks are listed above. The second is referred to as Qtrl (when a distinction needs to be made), as this was its name before it was integrated into QuTiP. Qtrl uses the Scipy optimize functions to perform the multi-variable optimisation, typically the L-BFGS-B method for GRAPE and Nelder-Mead for CRAB. The GRAPE implementation in Qtrl was initially based on the open-source package DYNAMO, which is a MATLAB implementation, and is described in [9]. It has since been restructured and extended for flexibility and compatibility within QuTiP. Merging the GRAPE implementations is part of the near-future plans. An implementation of the 'dressed' CRAB algorithm is also planned for the near future.
The rest of this section describes the Qtrl implementation and how to use it.
## Object Model
-The Qtrl code is organised in a hierarchical object model in order to try and maximise configurability whilst maintaining some clarity. It is not necessary to understand the model in order to use the pulse optimisation functions, but it is the most flexible method of using Qtrl. If you just want to use a simple single function call interface (as in the notebook examples) then skip to the section on 'Using the pulseoptim functions'.
+The Qtrl code is organised in a hierarchical object model in order to try and maximise configurability whilst maintaining some clarity. It is not necessary to understand the model in order to use the pulse optimisation functions, but it is the most flexible method of using Qtrl. If you just want to use a simple single-function-call interface (as in the notebook examples), then skip to the section on 'Using the pulseoptim functions'.
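As a rough sketch of driving the object model directly — the pattern used in the pulseoptim QFT notebook — under the assumption that the qutip-qtrl import path applies and with a Hadamard target chosen purely for illustration:

```python
import numpy as np
from qutip import Qobj, identity, sigmax, sigmaz
import qutip_qtrl.pulseoptim as cpo  # assumed import path for QuTiP v5

U_targ = Qobj([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard as an example target

# create and configure the Optimizer, Dynamics, PulseGen, ... objects
optim = cpo.create_pulse_optimizer(
    sigmaz(), [sigmax()], identity(2), U_targ,
    num_tslots=10, evo_time=10, fid_err_targ=1e-10,
    init_pulse_type="RND",
)
dyn = optim.dynamics
# generate and load an initial pulse for the single control
init_amps = np.array([optim.pulse_generator.gen_pulse()]).T
dyn.initialize_controls(init_amps)
result = optim.run_optimization()
print(result.fid_err)
```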
![qtrl-code_obj_model](./images/qtrl-code_object_model.png "Qtrl code object model")
-The object's properties and methods are described in detail in the documentation, so that will not be repeated here.
+The objects' properties and methods are described in detail in the documentation, so they will not be repeated here.
### OptimConfig
-The OptimConfig object is used simply to hold configuration parameters used by all the objects. Typically this is the subclass types for the other objects and parameters for the users specific requirements. The loadparams module can be used read parameter values from a configuration file.
+The OptimConfig object is used simply to hold configuration parameters used by all the objects. Typically, these are the subclass types for the other objects and parameters for the user's specific requirements. The loadparams module can be used to read parameter values from a configuration file.
### Optimizer
-This acts as a wrapper to the Scipy.optimize functions that perform the work of the pulse optimisation algorithms. Using the main classes the user can specify which of the optimisation methods are to be used. There are subclasses specifically for the BFGS and L-BFGS-B methods. There is another subclass for using the CRAB algorithm.
+This acts as a wrapper to the Scipy.optimize functions that perform the work of the pulse optimisation algorithms. Using the main classes, the user can specify which of the optimisation methods are to be used. There are subclasses specifically for the BFGS and L-BFGS-B methods. There is another subclass for using the CRAB algorithm.
### Dynamics
This is mainly a container for the lists that hold the dynamics generators, propagators, and time evolution operators in each timeslot. The combining of dynamics generators is also complete by this object. Different subclasses support a range of types of quantum systems, including closed systems with unitary dynamics, systems with quadratic Hamiltonians that have Gaussian states and symplectic transforms, and a general subclass that can be used for open system dynamics with Lindbladian operators.
### PulseGen
-There are many subclasses that of pulse generators that generate different types of pulses as the initial amplitudes for the optimisation. Often the goal cannot be achieved from all starting conditions, and then typically some kind of random pulse is used and repeated optimisations are performed until the desired infidelity is reached or the minimum infidelity found is reported.
+There are many subclasses of pulse generators that generate different types of pulses as the initial amplitudes for the optimisation. Often, the goal cannot be achieved from all starting conditions, in which case typically some kind of random pulse is used, and repeated optimisations are performed until the desired infidelity is reached or the minimum infidelity found is reported.
There is a specific subclass that is used by the CRAB algorithm to generate the pulses based on the basis coefficients that are being optimised.
### TerminationConditions
-This is simply a convenient place to hold all the properties that will determine when the single optimisation run terminates. Limits can be set for number of iterations, time, and of course the target infidelity.
+This is simply a convenient place to hold all the properties that will determine when the single optimisation run terminates. Limits can be set for the number of iterations, time, and, of course, the target infidelity.
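In the pulseoptim interface, these limits surface as keyword arguments; a hedged sketch (assuming the qutip-qtrl import path used in these notebooks, with placeholder values):

```python
import numpy as np
from qutip import Qobj, identity, sigmax, sigmaz
import qutip_qtrl.pulseoptim as cpo  # assumed import path for QuTiP v5

U_targ = Qobj([[1, 1], [1, -1]]) / np.sqrt(2)  # example target
result = cpo.optimize_pulse_unitary(
    sigmaz(), [sigmax()], identity(2), U_targ,
    num_tslots=10, evo_time=10,
    fid_err_targ=1e-10,  # target infidelity
    max_iter=500,        # iteration limit
    max_wall_time=180,   # wall-clock limit in seconds
    min_grad=1e-20,      # stop when the gradient norm falls below this
    init_pulse_type="RND",
)
print(result.termination_reason)
```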
### Stats
Performance data are optionally collected during the optimisation. This object is shared to a single location to store, calculate and report run statistics.
@@ -146,7 +146,7 @@ Performance data are optionally collected during the optimisation. This object i
The subclass of the fidelity computer determines the type of fidelity measure. These are closely linked to the type of dynamics in use. These are also the most commonly user customised subclasses.
### PropagatorComputer
-This object computes propagators from one timeslot to the next and also the propagator gradient. The options are using the spectral decomposition or Frechet derivative, as discussed above.
+This object computes propagators from one timeslot to the next and also the propagator gradient. The options are using the spectral decomposition or the Frechet derivative, as discussed above.
### TimeslotComputer
Here the time evolution is computed by calling the methods of the other computer objects.
@@ -155,9 +155,9 @@ Here the time evolution is computed by calling the methods of the other computer
The result of a pulse optimisation run is returned as an object with properties for the outcome in terms of the infidelity, reason for termination, performance statistics, final evolution, and more.
## Using the pulseoptim functions
-The simplest method for optimising a control pulse is to call one of the functions in the pulseoptim module. This automates the creation and configuration of the necessary objects, generation of initial pulses, running the optimisation and returning the result. There are functions specifically for unitary dynamics, and also specifically for the CRAB algorithm (GRAPE is the default). The optimise_pulse function can in fact be used for unitary dynamics and / or the CRAB algorithm, the more specific functions simply have parameter names that are more familiar in that application.
+The simplest method for optimising a control pulse is to call one of the functions in the pulseoptim module. This automates the creation and configuration of the necessary objects, generation of initial pulses, running the optimisation, and returning the result. There are functions specifically for unitary dynamics, and also specifically for the CRAB algorithm (GRAPE is the default). The optimize_pulse function can, in fact, be used for unitary dynamics and/or the CRAB algorithm; the more specific functions simply have parameter names that are more familiar in that application.
-A semi-automated method is to use the create_optimizer_objects function to generate and configure all the objects, then manually set the initial pulse and call the optimisation. This would be more efficient when repeating runs with different starting conditions. A example of this method is given in [pulseoptim QFT](./03-cpo-GRAPE-QFT.ipynb)
+A semi-automated method is to use the create_pulse_optimizer function to generate and configure all the objects, then manually set the initial pulse and call the optimisation. This would be more efficient when repeating runs with different starting conditions.
An example of this method is given in [pulseoptim QFT](./03-cpo-GRAPE-QFT.ipynb)
diff --git a/tutorials-v5/optimal-control/02-cpo-GRAPE-Hadamard.md b/tutorials-v5/optimal-control/02-cpo-GRAPE-Hadamard.md
index ea9894a4..5a4fd7b3 100644
--- a/tutorials-v5/optimal-control/02-cpo-GRAPE-Hadamard.md
+++ b/tutorials-v5/optimal-control/02-cpo-GRAPE-Hadamard.md
@@ -21,19 +21,19 @@ Alexander Pitchford (agp1@aber.ac.uk)
Example to demonstrate using the control library to determine control
pulses using the ctrlpulseoptim.optimize_pulse_unitary function.
The (default) L-BFGS-B algorithm is used to optimise the pulse to
-minimise the fidelity error, which is equivalent maximising the fidelity
+minimise the fidelity error, which is equivalent to maximising the fidelity
to an optimal value of 1.
The system in this example is a single qubit in a constant field in z
with a variable control field in x
-The target evolution is the Hadamard gate irrespective of global phase
+The target evolution is the Hadamard gate, irrespective of global phase
The user can experiment with the timeslicing, by means of changing the
number of timeslots and/or total time for the evolution.
Different initial (starting) pulse types can be tried.
The initial and final pulses are displayed in a plot
-An in depth discussion of using methods of this type can be found in [1]
+An in-depth discussion of using methods of this type can be found in [1]
```python
import datetime
@@ -82,10 +82,10 @@ n_ts = 10
evo_time = 10
```
-### Set the conditions which will cause the pulse optimisation to terminate
+### Set the conditions that will cause the pulse optimisation to terminate
-At each iteration the fidelity of the evolution is tested by comparaing the calculated evolution U(T) with the target U_targ. For unitary systems such as this one this is typically:
+At each iteration, the fidelity of the evolution is tested by comparing the calculated evolution U(T) with the target U_targ. For unitary systems such as this one, this is typically:
f = normalise(overlap(U(T), U_targ))
For details of the normalisation see [1] or the source code. The maximum fidelity (for a unitary system) calculated this way would be 1, and hence the error is calculated as fid_err = 1 - fidelity. As such the optimisation is considered completed when the fid_err falls below such a target value.
@@ -126,7 +126,7 @@ f_ext = "{}_n_ts{}_ptype{}.txt".format(example_name, n_ts, p_type)
### Run the optimisation
-In this step the L-BFGS-B algorithm is invoked. At each iteration the gradient of the fidelity error w.r.t. each control amplitude in each timeslot is calculated using an exact gradient method (see [1]). Using the gradients the algorithm will determine a set of piecewise control amplitudes that reduce the fidelity error. With repeated iterations an approximation of the Hessian matrix (the 2nd order differentials) is calculated, which enables a quasi 2nd order Newton method for finding a minima. The algorithm continues until one of the termination conditions defined above has been reached.
+In this step, the L-BFGS-B algorithm is invoked. At each iteration, the gradient of the fidelity error w.r.t. each control amplitude in each timeslot is calculated using an exact gradient method (see [1]). Using the gradients, the algorithm will determine a set of piecewise control amplitudes that reduce the fidelity error.
With repeated iterations, an approximation of the Hessian matrix (the 2nd-order differentials) is calculated, which enables a quasi-2nd-order Newton method for finding a minimum. The algorithm continues until one of the termination conditions defined above has been reached.
```python
result = cpo.optimize_pulse_unitary(
@@ -149,11 +149,11 @@ result = cpo.optimize_pulse_unitary(
### Report the results
-Firstly the performace statistics are reported, which gives a breadown of the processing times. The times given are those that are associated with calculating the fidelity and the gradients. Any remaining processing time can be assumed to be used by the optimisation algorithm (L-BFGS-B) itself. In this example it can be seen that the majority of time is spent calculating the propagators, i.e. exponentiating the combined Hamiltonian.
+Firstly, the performance statistics are reported, which give a breakdown of the processing times. The times given are those that are associated with calculating the fidelity and the gradients. Any remaining processing time can be assumed to be used by the optimisation algorithm (L-BFGS-B) itself. In this example, it can be seen that the majority of time is spent calculating the propagators, i.e., exponentiating the combined Hamiltonian.
The optimised U(T) is reported as the 'final evolution', which is essentially the string representation of the Qobj that holds the full time evolution at the point when the optimisation is terminated.
-The key information is in the summary (given) last. Here the final fidelity is reported and the reasonn for termination of the algorithm.
+The key information is in the summary, given last. Here, the final fidelity is reported, along with the reason for termination of the algorithm.
diff --git a/tutorials-v5/pulse-level-circuit-simulation/qip-customize-device.md b/tutorials-v5/pulse-level-circuit-simulation/qip-customize-device.md
index 470d8f81..f1996acc 100644
--- a/tutorials-v5/pulse-level-circuit-simulation/qip-customize-device.md
+++ b/tutorials-v5/pulse-level-circuit-simulation/qip-customize-device.md
@@ -12,10 +12,10 @@ jupyter:
     name: python3
---
-# Custimize the pulse-level simulation
+# Customize the pulse-level simulation
Author: Boxi Li (etamin1201@gmail.com)
-In this note, we demonstrate examples of customizing the pulse-level simulator in qutip-qip.The notebook is divided into three parts:
+In this note, we demonstrate examples of customizing the pulse-level simulator in qutip-qip. The notebook is divided into three parts:
1. Customizing the Hamiltonian model
2. Customizing the compiler
3. Customizing the noise
@@ -70,14 +70,14 @@ class MyModel(Model):
     def get_control(self, label):
         """
-        The mandatory method. It Returns a pair of Qobj and int representing
+        The mandatory method. It returns a pair of Qobj and int representing
         the control Hamiltonian and the target qubit.
         """
         return self.controls[label]

     def get_control_labels(self):
         """
-        It returns all the labels of availble controls.
+        It returns all the labels of available controls.
""" return self.controls.keys() @@ -95,14 +95,14 @@ class MyModel(Model): ] ``` -This is a quantum system of $n$ qubits arranged in a chain (same as the [spin chain model](https://qutip-qip.readthedocs.io/en/stable/apidoc/qutip_qip.device.html?highlight=spinchain#qutip_qip.device.SpinChainModel)), where we have control over three Hamiltonian: $\sigma_x$, $\sigma_z$ on each qubit, and neighbouring-qubits interaction $\sigma_x\sigma_x+\sigma_y\sigma_y$: +This is a quantum system of $n$ qubits arranged in a chain (same as the [spin chain model](https://qutip-qip.readthedocs.io/en/stable/apidoc/qutip_qip.device.html?highlight=spinchain#qutip_qip.device.SpinChainModel)), where we have control over three Hamiltonians: $\sigma_x$, $\sigma_z$ on each qubit, and neighbouring-qubits interaction $\sigma_x\sigma_x+\sigma_y\sigma_y$: $$ H = \sum_{j=0}^{n-1} c_{1,j}(t) \cdot h_x^{j}\sigma_x^{j} + \sum_{j=0}^{n-1} c_{2,j}(t) \cdot h_z^{j}\sigma_z^{j} + \sum_{j=0}^{n-2} c_{3,j}(t)\cdot g^{j}(\sigma_x^{j}\sigma_x^{j+1}+\sigma_y^{j}\sigma_y^{j+1}) $$ -where $h_x$, $h_z$, $g$ are the hardware parameters and $c_{i,j}(t)$ are the time-dependent control pulse coefficients. This Hamiltonian is the same as the one for the linear spin chain model in QuTiP. In general, the hardware parameters will not be identical for each qubit, but here, for simplicity, we represent them by three numbers: $h_x$, $h_z$ and $g$. +where $h_x$, $h_z$, $g$ are the hardware parameters and $c_{i,j}(t)$ are the time-dependent control pulse coefficients. This Hamiltonian is the same as the one for the linear spin chain model in QuTiP. In general, the hardware parameters will not be identical for each qubit, but here, for simplicity, we represent them by three numbers: $h_x$, $h_z$, and $g$. To simulate a custom quantum device, we provide the model to `ModelProcessor`, which is used for simulators based on a concrete physics model (in contrast to optimal control for arbitrary Hamiltonians). In this way, we inherit the necessary methods from `ModelProcessor` used in the simulation. @@ -145,7 +145,7 @@ class MyProcessor(ModelProcessor): super(MyProcessor, self).__init__( num_qubits, t1=t1, t2=t2 ) # call the parent class initializer - # The control pulse is discrete or continous. + # The control pulse is discrete or continuous. self.pulse_mode = "discrete" self.model.params.update( { @@ -220,9 +220,9 @@ This is a rectangular pulse that starts from time 0 and ends at time 0.25. #### Note -For discrete pulse, the time sequence is one element shorter than the pulse coefficient because we need to specify the start and the end of the pulse. If two sequences are of the same length, the last element of `coeff` will be neglected. Later, we will see continuous pulse where `coeff` and `tlist` have the same length. +For a discrete pulse, the time sequence is one element shorter than the pulse coefficient because we need to specify the start and the end of the pulse. If two sequences are of the same length, the last element of `coeff` will be neglected. Later, we will see a continuous pulse where `coeff` and `tlist` have the same length. -To give an intuitive illustration of the control pulses, we give each pulse a latex label by defining a method `get_operators_labels` and then plot the compiled pulses. +To give an intuitive illustration of the control pulses, we give each pulse a LaTeX label by defining a method `get_operators_labels` and then plot the compiled pulses. 
```python
processor.plot_pulses()
@@ -231,11 +231,11 @@ plt.show()
## Customizing the compiler
-How the quantum gates are implemented on hardware varies on different quantum systems. Even on the same physical platform, different implementation will yield different performance. The simplest way of implementation is to define a rectangular pulse like the one above. However, in reality, the control signal will have a continuous shape. In the following, we show how to customize the compiler with a gaussian pulse.
+How the quantum gates are implemented on hardware varies across different quantum systems. Even on the same physical platform, different implementations will yield different performance. The simplest implementation is to define a rectangular pulse like the one above. However, in reality, the control signal will have a continuous shape. In the following, we show how to customize the compiler with a Gaussian pulse.
A typical gate compiler function looks like the one in the following cell, with the form ``XX_compiler(self, gate, args)``. It takes two arguments, `gate` and `args`: `gate` is the quantum gate to be compiled and `args` is a dictionary for additional parameters, for instance, parameters we defined in `Processor.params`.
-For each gate, the function returns the input gate, the time sequence and the pulse coefficients in an `Instruction` object.
+For each gate, the function returns the input gate, the time sequence, and the pulse coefficients in an `Instruction` object.
Below is an example of a rectangular pulse.
@@ -405,11 +405,11 @@ plt.show()
## Customizing the noise
Apart from pre-defined noise such as T1, T2 noise and random noise in the control pulse amplitude (see this [guide](https://qutip-qip.readthedocs.io/en/stable/qip-processor.html), one can also define custom noise. Here we will see two examples of customizing noise, one systematic (pulse-independent) noise and one pulse-dependent noise.
-To understand how noise is processed, we briefly introduced the data structure of the simulation framework. The control elements are stored as a list of `Pulse` objects in the Processor. In each Pulse contains the idea pulse, the control noise part and the decoherence part. For systematic noise, it is saved under the `Pulse` representation labelled `"system"`, which represents the intrinsic dynamics of the quantum system. For pulse-dependent noise, we will add them to their corresponding control `Pulse`.
+To understand how noise is processed, we briefly introduce the data structure of the simulation framework. The control elements are stored as a list of `Pulse` objects in the Processor. Each `Pulse` contains the ideal pulse, the control noise part, and the decoherence part. Systematic noise is saved under the `Pulse` representation labelled `"system"`, which represents the intrinsic dynamics of the quantum system. Pulse-dependent noise is added to the corresponding control `Pulse`.
The definition of noise is realized by a subclass of `UserNoise`, including two methods:
- the initialization method containing the property of the noise, such as frequency or amplitude.
-- the method `get_noisy_dynamics` that takes all the control pulse `pulses`, a dummy `Pulse` object representing systematic noise and the dimension of the system (here two qubits `[2,2]`).
+- the method `get_noisy_dynamics` that takes all the control pulses `pulses`, a dummy `Pulse` object representing systematic noise, and the dimension of the system (here two qubits `[2,2]`); a minimal sketch of this structure is given below.
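A minimal sketch of this two-method structure, assuming the `Pulse.add_control_noise` call referenced later in this tutorial (the class name and coupling strength are illustrative):

```python
from qutip import sigmaz, tensor
from qutip_qip.noise import Noise


class MyZZNoise(Noise):  # hypothetical name
    def __init__(self, strength):
        # property of the noise: a constant ZZ coupling strength
        self.strength = strength

    def get_noisy_dynamics(self, dims, pulses, systematic_noise):
        # attach a constant, always-on ZZ term to the systematic-noise Pulse
        zz = self.strength * tensor([sigmaz(), sigmaz()])
        systematic_noise.add_control_noise(
            zz, targets=[0, 1], tlist=None, coeff=True
        )
        return pulses, systematic_noise
```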
```python
@@ -427,7 +427,7 @@ class Extral_decay(Noise):
We first show an example of systematic noise. Here, we introduce a ZZ crosstalk noise between neighbouring qubits with a constant strength. It is done in three steps:
- Define the noise class.
-- Initialize the noise object with given coupling strength.
+- Initialize the noise object with the given coupling strength.
- Define the Processor as usual and add the noise to the processor.
In the following example, we check the fidelity of the same circuit of two X gates, but now with this additional noise.
@@ -468,8 +468,8 @@ print(
)
```
-### Pulse dependent noise
-In this second example, we demonstrate how to add an additional amplitude damping channel on the qubits. The amplitude of this decay is linearly dependent on the control pulse "sx", i.e. whenever the pulse "sx" is turned on, the decoherence is also turned on. The corresponding annihilation operator has a coefficient proportional to the control pulse amplitude. This noise can be added on top of the default T1, T2 noise.
+### Pulse-dependent noise
+In this second example, we demonstrate how to add an additional amplitude-damping channel on the qubits. The amplitude of this decay is linearly dependent on the control pulse "sx", i.e. whenever the pulse "sx" is turned on, the decoherence is also turned on. The corresponding annihilation operator has a coefficient proportional to the control pulse amplitude. This noise can be added on top of the default T1, T2 noise.
```python
class Extral_decay_2(Noise):
@@ -491,7 +491,7 @@ class Extral_decay_2(Noise):
                 coeff=self.ratio * pulse.coeff,
             )
             # One can also use add_control_noise here
-            # to add addtional hamiltonian as noise (see next example).
+            # to add an additional Hamiltonian as noise (see next example).

extral_decay = Extral_decay_2(0.3)
@@ -505,7 +505,7 @@ tlist, coeff = processor.load_circuit(circuit, compiler=gauss_compiler)
result = processor.run_state(init_state=basis([2, 2], [0, 0]))
print(
-    "Final fidelity with pulse dependent decoherence:",
+    "Final fidelity with pulse-dependent decoherence:",
    fidelity(result.states[-1], basis([2, 2], [1, 1])),
)
```