This workbook is an introduction to the basic concepts and designs relating to the paper
Fast estimation of sparse quantum noise by Harper, Yu and Flammia
It goes through the basic ideas behind the experimental design for a trivial 3-qubit system, using qiskit to simulate it, and shows how to build the local circuits in qasm.
This is only 3 qubits, to keep the simulation time nice and short. Of course, 3 qubits is far too small for this protocol to be of any use: we are looking at recovering Paulis on the order of $4^{\delta n}$, where $\delta$ is about 0.25 (up to a maximum of, say, 0.5). Here, with n = 3, that is only about 4 values. We do in fact recover more (about 10), but the high-weight Paulis come to $4 \times 3 = 12$, so really you wouldn't use this protocol here; in fact, with the number of experiments we do, you could almost certainly use a more tomography-style approach to recover the 64 actual Paulis.
For this introductory notebook we need minimal software; all of the packages used should be available through the Julia package manager. We will, however, need a few of them to help with the simulation and so on.
If you get an error trying to "use" one of them, the error message tells you how to install it.
using Hadamard
using PyPlot
# convenience (type \otimes<tab>) - <tab> is the "tab" key.
⊗ = kron
# type \oplus<tab>
⊕ = (x,y)->mod.(x+y,2)
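# A quick, made-up sanity check of the two conveniences:
[1 0; 0 1] ⊗ [0 1; 1 0]        # kron of two 2x2 matrices -> a 4x4 matrix
[1, 0, 1] ⊕ [1, 1, 0]          # bitwise addition mod 2 -> [0, 1, 1]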
# I love this add on, especially as some of the simulations take a noticeable time.
using ProgressMeter
# We are going to need some more of my code
# You can get it by doing the following, in the main julia repl (hit ']' to get to the package manager)
# pkg> add https://github.com/rharper2/Juqst.jl
# Currently there is a harmless warning re overloading Hadamard.
using Juqst
# This is the code in this github that implements the various peeling algorithms for us.
include("peel.jl")
using Main.PEEL
include("./localPeelFunctions.jl")
(Instructions for getting qiskit working under Julia.)
We need to install Conda and PyCall using the Julia package manager.
The easiest way is via the REPL (hit ], then add Conda, add PyCall), or...
using Pkg
Pkg.add("Conda")
Pkg.add("PyCall")
then use Conda to install pip
using Conda
Conda.add("pip")
Once that is done, we can use pip to install qiskit (in a place Julia can find it).
using PyCall
run(`$(PyCall.pyprogramname) -m pip install qiskit`)
Then we can use qiskit from Julia!
using PyCall
qiskit = pyimport("qiskit")
One of the main ideas behind the papers is that we can use the protocols in Efficient learning of quantum channels and Efficient learning of quantum noise to learn the eigenvalues of an entire stabiliser group ($2^n$ entries) at once, to arbitrary precision. While it might be quite difficult to learn the eigenvalues of an arbitrary group, since that would require an arbitrary $n$-qubit Clifford gate (which can be a lot of primitive gates!), even today's noisy devices can quite easily create a 2-local stabiliser group over $n$ qubits.
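To make the $2^n$ counting concrete, here is a small illustrative sketch (it is not part of the workbook's protocol code, and the function name enumerateStabiliserGroup is made up for this example): if we write $n$ independent, commuting Pauli generators as binary symplectic $(x|z)$ vectors, the group they generate is every GF(2) linear combination of them, $2^n$ elements in all, which is why one such experiment hands us $2^n$ eigenvalues at once.
# Illustrative only: enumerate the 2^n element stabiliser group generated by
# n independent commuting Paulis given as binary symplectic (x|z) vectors.
function enumerateStabiliserGroup(generators)
    n = length(generators)
    group = []
    for mask in 0:(2^n - 1)
        element = zeros(Int, length(generators[1]))
        for (i, g) in enumerate(generators)
            if (mask >> (i - 1)) & 1 == 1
                element = element ⊕ g   # mod-2 addition defined above
            end
        end
        push!(group, element)
    end
    return group
end
# For example, XI and IZ on two qubits ((x|z) form, x bits first) generate II, XI, IZ and XZ:
enumerateStabiliserGroup([[1,0,0,0], [0,0,0,1]])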
Here we are simulating the recovery in a 3-qubit system, so our (Pauli) bitstrings are 6 bits long.
Our experiments will need two sub-sampling groups. The first sub-sampling group puts one of our potential two-qubit MUBs (set out below, selected randomly) on the first pair of qubits and a single-qubit MUB (potentialSingles below) on the remaining qubit. The second sub-sampling group puts the single-qubit MUB on one qubit and a two-qubit MUB on the other pair (compare the [(2,1),(1,2)] pattern passed to generateSensibleSubsamples below).
This maximises the separation of the Paulis achievable with local (two-qubit) stabiliser groups.
all2QlMuBs
(experiments,paulisAll,ds) = generateSensibleSubsamples([(2,1),(1,2)])
print("$experiments\n")
print("$paulisAll\n")
noOfQubits = 3
# Here there are two sets of experiments (one per sub-sampling group).
# The tuple for each qubit group is (number of qubits, experiment type);
# the numbers will add up to 3 (since we have 3 qubits).
experiments[1]
Let's pull in the figure from the paper which shows all the experimental designs:
Okay, that's a bit busy; let's break it down.
Step 1 was to choose one MUB from the set for each group of qubits. Well, that is what we did in paulisAll; let's look at the first element of that:
paulisAll[1]
So, because it was randomly chosen, you will have to look at that output and convince yourself that each of the two elements shown above is indeed one of these four sets of MUBs.
So the first experiment (the top row of (2)) is just an experiment to extract those eigenvalues.
How do we do that?
Well we have the circuits we need in the appendix of the paper - they look like this:
What we are going to need on each two-qubit pair is either (a) or (c), depending on the MUB randomly selected.
For the two-qubit experiments, circuit 1 is (a) above and circuit 2 is (c) above.
Effectively we take an input state in the computational basis, apply these circuits, do a Pauli twirl of varying gates, reverse the circuit, and measure in the computational basis.
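Before we build the real thing, here is a minimal, purely illustrative single-qubit sketch of that structure (sketchTwirlCircuit is not part of the workbook's code, and it is deliberately simplified: the real circuits below act on the 2-local groups and also account for the net effect of the random Paulis, either by undoing them or by correcting the measured bits in post-processing).
function sketchTwirlCircuit(m)
    qc = qiskit.QuantumCircuit(1, 1)
    qc.h(0)                              # basis-change circuit (here: into the X eigenbasis)
    for _ in 1:m
        p = rand(["i", "x", "y", "z"])   # one layer of the random Pauli twirl
        if p == "x"
            qc.x(0)
        elseif p == "y"
            qc.y(0)
        elseif p == "z"
            qc.z(0)
        else
            qc.id(0)
        end
        qc.barrier()
    end
    qc.h(0)                              # reverse the basis change
    qc.measure(0, 0)                     # measure in the computational basis
    return qc
end
sketchTwirlCircuit(3).draw()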
Some code to set up the circuits on the ... err ... circuit
pX = [0 1;1 0]
pY = [0 -im;im 0]
pZ = [1 0;0 -1]
pI = [1 0;0 1]
paulis = [pI,pX,pY,pZ]
superPaulis = [makeSuper(x) for x in paulis];
experiments[1]
numberOfQubits = sum(x->x[1], experiments[1])
# Sanity check on a noiseless simulation
circuit = getCircuit(experiments[1],12)
qiskit.execute(circuit,qiskit.Aer.get_backend("qasm_simulator"),shots=100).result().get_counts()
circuit.draw()
# Creates a primitive noise ansatz
""" get_noise - get a primitive noise model reflecting the passed in parameters
Parameters:
- p_meas: the probability of measurement error.
- p_gate: the probability of a single qubit gate error.
- p_gate2: the probability of a cx/cz error
Returns: the noise_model.
"""
function get_noise(p_meas,p_gate,p_gate2)
error_meas = qiskit.providers.aer.noise.errors.pauli_error([("X",p_meas), ("I", 1 - p_meas)])
error_gate1 = qiskit.providers.aer.noise.errors.depolarizing_error(p_gate, 1)
error_gate2 = qiskit.providers.aer.noise.errors.depolarizing_error(p_gate2,2)
noise_model = qiskit.providers.aer.noise.NoiseModel()
noise_model.add_all_qubit_quantum_error(error_meas, "measure") # measurement error is applied to measurements
noise_model.add_all_qubit_quantum_error(error_gate1, ["x","h","s","z","u3","id"]) # single qubit gate error is applied to the listed single-qubit gates
noise_model.add_all_qubit_quantum_error(error_gate2, ["cx"]) # two qubit gate error is applied to cx gates
return noise_model
end
noise_model = get_noise(0.02,0.01,0.015)
lengths=[2,4,10,20,60,100,200]
@time res = doASeries(experiments[1],lengths,50,2000,noise_model)
cc = getCounts(res,lengths,noOfQubits)
eigs = extractEigenvalues(cc,lengths)
indices = generateEigenvalues(getStabilizerForExperiment(experiments[1]))
for (ix,e) in enumerate(eigs)
print("$(PEEL.fidelityLabels(indices[ix]-1,qubits=3)): $e\n")
end
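# recoveredEigenvalues maps each eigenvalue index to the list of estimates we collect for it;
# the repeated experiments below push further estimates onto these lists, and they are averaged
# later to build the oracle that the peeling step samples from.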
recoveredEigenvalues = Dict()
indices = generateEigenvalues(getStabilizerForExperiment(experiments[1]))
for i in 1:2^noOfQubits
recoveredEigenvalues[indices[i]] = push!(get(recoveredEigenvalues,indices[i],[]),eigs[i])
end
In the graph below we can see the effect of the primitive noise model we employ. With this depolarizing-style noise, Paulis of the same weight have (roughly) the same eigenvalues (also known as Pauli fidelities), so we get distinct bands depending on whether the Pauli acts non-trivially on one, two or three qubits. On a real device the distinction would not be this clean.
experiment1_SpamyEigenvalues = map(ifwht_natural,cc);
# Don't bother with the first element as it's all 1s (the identity).
for x = 2:2^noOfQubits
toPlot = [d[x] for d in experiment1_SpamyEigenvalues]
plot(lengths,toPlot,label=PEEL.fidelityLabels(indices[x]-1,qubits=3))
end
legend()
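Conceptually, extractEigenvalues above turns each of these decay curves into a single eigenvalue while absorbing the state preparation and measurement (SPAM) errors into the prefactor. Purely as an illustration (fitDecay is not the workbook's code, and it assumes the plotted values are positive and of the form $A p^m$), a log-linear least-squares fit over the shorter sequence lengths recovers $p$:
function fitDecay(ms, ys)
    # fit log(y) = log(A) + m*log(p) by least squares and return p
    X = hcat(ones(length(ms)), ms)
    coeffs = X \ log.(ys)
    return exp(coeffs[2])
end
# e.g. the second curve plotted above, using the shorter lengths where the signal is well above shot noise
fitDecay(lengths[1:4], [d[2] for d in experiment1_SpamyEigenvalues[1:4]])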
recoveredEigenvalues
experiments[2]
# Cycle through experiments... to generate offsets.
shots = 2000
sequences = 40
experi = experiments[1]
# This is step 2 of the figure - we move through the different qubit groups.
@showprogress 1 "QubitGroups" for (ix,stabilizerSelect) in enumerate(experi)
# If it is two qubits we have 5 stabilizer experiments/circuits
range = 1:5
if stabilizerSelect[1] == 1 # one qubit only so only 3 stabiliser experiments
range = 1:3
end
@showprogress 1 "Experiment: " for experimentType in range
newStab = stabilizerSelect
# Check if we have already done this one.
if newStab[2] != experimentType # New experiment needed.
experimentToDo = copy(experi)
newStab = (newStab[1],experimentType)
experimentToDo[ix]=newStab
# Work out the Paulis for that experiment type (we need this later)
paulisForExperiment = getStabilizerForExperiment(experimentToDo)
# Run the experiment for the requisite sequences and shots.
results = doASeries(experimentToDo,lengths,sequences,shots,noise_model)
# Extract the estimated eigenvalues (2^noOfQubits)
cc = getCounts(results,lengths,noOfQubits)
eigs = extractEigenvalues(cc,lengths)
# Work out which eigenvalues we have calculated
indices = generateEigenvalues(paulisForExperiment)
#print("We have experiment $experimentToDo\nWIth indices $indices\n")
# Put them into our 'recoveredEigenvalue' oracle.
for i in 1:2^noOfQubits
recoveredEigenvalues[indices[i]] = push!(get(recoveredEigenvalues,indices[i],[]),eigs[i])
end
end
end
end
for k in sort(collect(keys(recoveredEigenvalues)))
print("$(PEEL.fidelityLabels(k-1,qubits=3)): $(round.(recoveredEigenvalues[k],digits=5))\n")
end
experiments
# Now we need to repeat, but this time for experiment 2
res2 = doASeries(experiments[2],lengths,sequences,shots,noise_model)
cc2 = getCounts(res2,lengths,noOfQubits)
eigs2 = extractEigenvalues(cc2,lengths)
indices2 = generateEigenvalues(getStabilizerForExperiment(experiments[2]))
print("$indices2\n")
for i in 1:2^noOfQubits
recoveredEigenvalues[indices2[i]] = push!(get(recoveredEigenvalues,indices2[i],[]),eigs2[i])
end
for k in sort(collect(keys(recoveredEigenvalues)))
print("$(PEEL.fidelityLabels(k-1,qubits=3)): $(round.(recoveredEigenvalues[k],digits=5))\n")
end
# And then cycle through experiments for experiments[2] ... to generate offsets.
experi = experiments[2]
# This is step 2 of the figure - we move through the different qubit groups.
@showprogress 1 "QubitGroups" for (ix,stabilizerSelect) in enumerate(experi)
# If it is two qubits we have 5 stabilizer experiments/circuits
range = 1:5
if stabilizerSelect[1] == 1 # one qubit only so only 3 stabiliser experiments
range = 1:3
end
@showprogress 1 "Experiment: " for experimentType in range
newStab = stabilizerSelect
# Check if we have already done this one.
if newStab[2] != experimentType # New experiment needed.
experimentToDo = copy(experi)
newStab = (newStab[1],experimentType)
experimentToDo[ix]=newStab
# Work out the Paulis for that experiment type (we need this later)
paulisForExperiment = getStabilizerForExperiment(experimentToDo)
# Run the experiment for the requisite sequences and shots.
results = doASeries(experimentToDo,lengths,sequences,shots,noise_model)
# Extract the estimated eigenvalues (2^noOfQubits)
cc = getCounts(results,lengths,noOfQubits)
eigs = extractEigenvalues(cc,lengths)
# Work out which eigenvalues we have calculated
indices = generateEigenvalues(paulisForExperiment)
# print("We have experiment $experimentToDo\nWIth indices $indices\n")
# Put them into our 'recoveredEigenvalue' oracle.
for i in 1:2^noOfQubits
recoveredEigenvalues[indices[i]] = push!(get(recoveredEigenvalues,indices[i],[]),eigs[i])
end
end
end
end
for k in sort(collect(keys(recoveredEigenvalues)))
print("$(PEEL.fidelityLabels(k-1,qubits=3)): $(round.(recoveredEigenvalues[k],digits=5))\n")
end
using Statistics
fids= Float64[]
# Compared to the total number of eigenvalues, which is 4^3 = 64.
for i in sort(collect(keys(recoveredEigenvalues)))
print("$(PEEL.fidelityLabels(i-1,qubits=3)) $(round(mean(recoveredEigenvalues[i]),digits=4))\n")
push!(fids,round(mean(recoveredEigenvalues[i]),digits=4))
end
# Average them out
using Statistics
oracleToUse = Dict()
for k in keys(recoveredEigenvalues)
oracleToUse[k]=mean(recoveredEigenvalues[k])
end
# Generate the samples directly
samples = []
for (ix,x) in enumerate(paulisAll)
# Similarly, if we reversed the order above (right hand least significant), then we reverse here as well.
push!(samples,[[y for y in generateFromPVecSamples4N(reverse(x),d)] for d in ds])
end
listOfX = [[fwht_natural([oracleToUse[x+1] for x in y]) for y in s] for s in samples];
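# listOfX[g][d] now holds, for sub-sampling group g and offset d, the transform of the
# sub-sampled eigenvalues: each entry is a 'bin' which (up to noise) collects the probabilities
# of the Pauli errors that hash into it, with signs determined by the offset d. The peeling
# step below works directly on these bins.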
# Some functions to give us labels:
function probabilityLabels(x;qubits=2)
str = string(x,base=4,pad=qubits)
paulis = ['I','Y','X','Z']
return map(x->paulis[parse(Int,x)+1],str)
end
function fidelityLabels(x;qubits=2)
str = string(x,base=4,pad=qubits)
paulis = ['I','X','Y','Z']
return map(x->paulis[parse(Int,x)+1],str)
end
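# A quick check of the label helpers (note the different X/Y ordering in the two conventions):
fidelityLabels(5, qubits=3)      # "IXX"
probabilityLabels(5, qubits=3)   # "IYY"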
# Use the patterns to create listOfPs.
# What is this? It tells us which Pauli error falls into which bin, given our choice of experiments.
# As an example, look at the first vector (listOfPs[1][1]).
listOfPs=[]
for p in paulisAll
hMap = []
# Because here we use right hand least significant - we just reverse the order we stored the experiments.
for i in reverse(p)
#print("Length $(length(i))\n")
if length(i) == 2
push!(hMap,twoPattern(i))
elseif length(i) == 4
push!(hMap,fourPattern([i])[1])
else # Assume a binary bit pattern
push!(hMap,[length(i)])
end
end
push!(listOfPs,hMap)
end
listOfPs
using LinearAlgebra
maxPass = 200
# 'singletons' is the noise threshold below which we declare that we have found a singleton.
# It will be related to the measurement accuracy and the number of bins.
# Here we base it off the shotsToDo variance, on the basis of our hoped-for recovery.
# We start it low and then slowly increase it, meaning we become more likely to accept a bin as a singleton.
# If you have a certain probability distribution and this ansatz is not working, set it
# so that you get a reasonable number of hits in the first round.
singletons = (0.002*.999)/30000
singletonsInc = singletons/2
# zerosC is set high - we don't want to accept bins with very low values, as they are probably just noise.
# If the sum over all the offsets of (mean - value)^2 is below this number, we ignore the bin.
# But then we lower it, meaning we are less likely to decide that a bin has nothing in it.
# Obviously it should never be negative.
zerosC = (0.03*.999)/20000*2*1.1
zerosDec = (zerosC*0.99)/maxPass
mappings=[]
prevFound = 0
qubitSize = 3
listOfX = [[fwht_natural([oracleToUse[x+1] for x in y]) for y in s] for s in samples]
found = Dict()
rmappings = []
for x in mappings
if length(x) == 0
push!(rmappings,x)
else
ralt = Dict()
for i in keys(x)
ralt[x[i]]= i
end
push!(rmappings,ralt)
end
end
prevFound = 0
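# Each pass below scans every bin of every sub-sampling group. Bins that are not close to zero
# and that look like they contain a single Pauli ('singletons') are extracted, and peelBack
# subtracts their contribution from every other bin they hash into. If a pass finds nothing new,
# the thresholds are relaxed (singletons raised, zerosC lowered) so more bins are considered;
# we stop when zerosC hits zero or the recovered probabilities sum to (almost) one.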
for i in 1:maxPass
for co = 1:length(listOfX)
bucketSize = length(listOfX[co][1])
for extractValue = 1:bucketSize
extracted = [x[extractValue] for x in listOfX[co]]
if !(PEEL.closeToZero(extracted,qubitSize*2,cutoff= zerosC))
(isit,bits,val) = PEEL.checkAndExtractSingleton([extracted],qubitSize*2,cutoff=singletons)
if isit
#print("$bits\n")
#pval = binaryArrayToNumber(j6*[x == '0' ? 0 : 1 for x in bits])
vval = parse(Int,bits,base=2)
#print("$bits, $vval $(round(dist[vval+1],digits=5)) and $(round(val,digits=5))\n")
PEEL.peelBack(listOfX,listOfPs,bits,val,found,ds,rmappings)
end
end
end
end
if length(found) > prevFound
prevFound = length(found)
else
singletons += singletonsInc
zerosC -=zerosDec
if (zerosC <= 0)
break
end
end
if length(found) > 0
print("Pass $i, $(length(found)) $(sum([mean(found[x]) for x in keys(found)]))\n")
if sum([mean(found[x]) for x in keys(found)]) >= 0.999995
break
end
end
end
for k in keys(found)
print("$(probabilityLabels(parse(Int,k,base=2),qubits=3)) -- $(round.(found[k],digits=4))\n")
end
What do we see? Well, we recovered 10 values, which is more than advertised. They were all single-Pauli errors, and whilst there is some variation, they are all in the right ballpark. 40 sequences of 2,000 shots is a relatively small amount of data to recover to this precision, and it only gets better with more qubits and more sequences.