Probability Distribution Commands

Reading in device data and fitting it

Juqst.readInCSVFile - Function

readInCSVFile(filename)

Arguments

`filename`: the full name of a CSV-delimited file

File Format

Assumes that each sequence appears on a new line and that each line has the same number of entries: a raw count for each of the possible measurement outcomes. Note that the order of the outcomes is significant. Although this can be overridden, it is assumed that they are stored as:

    00000000
    00000001
    00000010
    00000011

    ...
    11111110
    11111111

i.e. with the least significant bit (lsb) to the right.

Returns a rows × cols matrix of Ints, sized by the number of entries in the file. For a 16-qubit machine where the data was taken over 6 sequences, this will be a 6×65536 array.
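
As a quick illustration of this ordering, here is a sketch (not a Juqst function) mapping an outcome string to its column index:

    # With the lsb on the right, an outcome string is just its binary value;
    # + 1 because Julia arrays are 1-indexed.
    outcomeIndex(s::AbstractString) = parse(Int, s; base = 2) + 1

    outcomeIndex("00000011")   # 4: the fourth outcome in the ordering above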

Juqst.transformToFidelity - Function

transformToFidelity(data)

Arguments:

`data`: as returned by readInCSVFile - see the help for that function.

Returns:

The Hadamard transformed fidelities for each sequence as a list of lists.
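
The transform in question is the Walsh-Hadamard transform. A minimal unnormalised sketch (not Juqst's implementation, whose normalisation and ordering may differ):

    # Butterfly Walsh-Hadamard transform on a copy of the input;
    # the length must be a power of two.
    function whtSketch(v::Vector{Float64})
        w = copy(v); n = length(w); h = 1
        while h < n
            for i in 1:2h:n, j in i:i+h-1
                a, b = w[j], w[j+h]
                w[j], w[j+h] = a + b, a - b
            end
            h *= 2
        end
        return w
    end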

Juqst.fitTheFidelities - Function

fitTheFidelities(lengths, data; no = 0)

Arguments

lengths: Array{Int64,1}, e.g. collect(1:2:24) - the sequence lengths at which each of the above sets of data was gathered.

data: Array{Array{Float64,1},1}, a list of the observed probabilities at the different sequence lengths.

no: printed with the warning message if we fail to fit, allowing identification of the problem sequence in "batch" fits.

Takes the data, transforms it and sees if we can fit it to an exponential decay. This proves problematic where the data is a bit sparse, as the decays can be all over the place. In general it is extremely helpful to have good 'low' gate-sequence data, as that helps to anchor the fits. If we have an extremely high decay rate, the intercept parameter is difficult to find.
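
Concretely, given the $[A,p]$ parameterisation returned below (with $A$ the y-axis intercept and $p$ the decay), the model being fit is presumably the standard exponential decay in the sequence length $m$:

$f(m) = A\,p^m$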

Here we check for convergence, and warn if it doesn't converge.

This enforces a cut-off to the data used for the fit. Specifically it gets rid of the 'tail' of data.

Returns

three things:

  • the fitting parameters, in the form [A,p], where A is the y-axis intercept and p is the decay probability.
  • the length of the sequence that was used for each parameter.
  • the indices of those that still failed to converge.

Typical usage

    params,_ = fitTheFidelities(collect(1:2:22),data)

Juqst.convertAndProject - Function

convertAndProject(params)

Arguments

params: the parameters that come from the fitTheFidelities method (a list of A and p)

Converts and projects the fitted fidelities, to retrieve the 'p's representing the joint probabilities.

Returns:

the projected probability distribution.

Projecting probabilities and marginalising probability distributions

Juqst.projectSimplex - Function

projectSimplex(probs)

Arguments

probs: a vector that is almost, but not quite, a probability distribution

Takes the proffered probability distribution and projects it onto the closest genuine probability distribution (all entries >= 0, summing to 1). Uses an algorithm similar to [Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application](https://arxiv.org/abs/1309.1541).

Returns:

the projected probability distribution.
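
A minimal sketch of the sort-based projection from that paper (Juqst's projectSimplex may differ in detail):

    function projectSimplexSketch(y::Vector{Float64})
        n = length(y)
        u = sort(y; rev = true)
        css = cumsum(u)
        # largest k with u[k] + (1 - css[k]) / k > 0; always at least 1
        k = findlast(i -> u[i] + (1 - css[i]) / i > 0, 1:n)
        λ = (1 - css[k]) / k
        return max.(y .+ λ, 0.0)   # entries clipped to ≥ 0, summing to 1
    end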

Juqst.marginalise - Function

marginalise(q,pps)

Arguments

q: the qubits you want to marginalise to, expressed as a list, e.g. [1,2]; everything else is marginalised away.

pps: the probability distribution you want to marginalise.

Example: marginalise([1,3,5],pps) returns the joint probability distribution of qubits 1, 3 and 5, with all other qubits marginalised out.

Returns

the marginalised joint probability distribution. You may need to vec this.
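
A hypothetical usage sketch (the uniform 3-qubit distribution here is made up for illustration):

    pps = fill(1/8, 8)              # uniform distribution over 3 qubits
    p13 = marginalise([1,3], pps)   # joint distribution over qubits 1 and 3
    sum(vec(p13)) ≈ 1.0             # a marginal is still normalised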

Measuring correlations and various other metrics

Juqst.covarianceMatrix - Function

covarianceMatrix(p)

Arguments

p: a probability vector

Discussion

Computes the covariance matrix between the 0/1 random variables representing no error / error. If $x$ is a column vector of bits representing an error pattern, then we compute the matrix

$\mathbb{E}_p[(x-\mu)(x-\mu)^T]$

where $\mu = \mathbb{E}_p[x]$.

If the keyword reverse is set to true, the digits are reversed. Given the way IBM stores its digits you wouldn't normally want to reverse them, but in general this might not be true.

Returns

the matrix $\mathbb{E}_p[(x-\mu)(x-\mu)^T]$
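
A sketch of this computation (illustrative names, not Juqst's internals), assuming the documented ordering with the lsb on the right:

    function covarianceSketch(p::Vector{Float64})
        n = round(Int, log2(length(p)))
        # X[k+1, i] = i-th bit (from the left) of outcome pattern k
        X = Float64[(k >> (n - i)) & 1 for k in 0:length(p)-1, i in 1:n]
        μ = X' * p                      # mean of each bit
        return X' * (X .* p) .- μ * μ'  # E[x xᵀ] - μ μᵀ
    end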

Juqst.correlationMatrix - Function

correlationMatrix(p)

Correlation

Computes the covariance matrix $\Sigma$ and then lets

$D = \sqrt{\mathrm{diag}(\Sigma)}$

Returns $D^{-1}\Sigma D^{-1}$
See also: [`covarianceMatrix`](@ref)
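
A sketch of the $D^{-1}\Sigma D^{-1}$ construction (not Juqst's code):

    using LinearAlgebra

    function correlationSketch(Σ::AbstractMatrix)
        d = sqrt.(diag(Σ))     # standard deviation of each bit
        return Σ ./ (d * d')   # entry (i,j) divided by d_i d_j
    end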

Juqst.mutualInformation - Method

mutualInformation(p1,p2,p)

Arguments

p1: a qubit position
p2: a qubit position
p: the global probability distribution.

Returns

the mutual information between the marginal distributions at p1 and p2, given the probability distribution p.

Comment

Makes use of the marginalise function to obtain the required marginal probabilities.

$MI(p1,p2) = \sum_{x\in p1,\,y\in p2} p(x,y)\log\left[\frac{p(x,y)}{p(x)\,p(y)}\right]$


Some gotchas:

  • Returns -1 if p1 == p2; this allows identification of the qubit when looping over pairs.

  • If p(x,y), p(x) or p(y) is 0, that term of the sum is defined to be zero, even though the log is undefined.
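
A minimal sketch of the computation, built on marginalise; it assumes the 4-entry joint from marginalise([q1,q2],p) is ordered 00,01,10,11 with q1 as the leftmost bit, which may need checking against your data:

    function miSketch(q1::Int, q2::Int, p)
        q1 == q2 && return -1.0              # the documented sentinel value
        pj = vec(marginalise([q1, q2], p))   # joint over (q1, q2)
        p1 = vec(marginalise([q1], p))
        p2 = vec(marginalise([q2], p))
        mi = 0.0
        for x in 0:1, y in 0:1
            pxy = pj[2x + y + 1]
            # zero terms are defined to contribute nothing to the sum
            (pxy == 0 || p1[x+1] == 0 || p2[y+1] == 0) && continue
            mi += pxy * log(pxy / (p1[x+1] * p2[y+1]))
        end
        return mi
    end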

Juqst.relativeEntropy - Method

relativeEntropy(P, Q)

Calculates the relative entropy between two joint probability distributions.
$D(P||Q) = \sum_j p_j \log(p_j/q_j)$

Undefined if any of the $q_j$ are zero, unless the corresponding $p_j$ is also zero.
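
A direct sketch of this definition (zero $p_j$ terms contribute nothing; a zero $q_j$ against a nonzero $p_j$ yields Inf):

    relativeEntropySketch(P, Q) =
        sum(p == 0 ? 0.0 : p * log(p / q) for (p, q) in zip(P, Q))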

Juqst.JSD - Method

JSD(dist1,dist2)

Arguments

dist1: a probability distribution
dist2: a probability distribution

Discussion

Calculates the Jensen-Shannon divergence between the two distributions. This is symmetric and well defined even if the distributions do not share support. Its square root is a metric.

$JSD(P||Q) = \frac{1}{2}D(P||M)+\frac{1}{2}D(Q||M)$, where

$M =\frac{1}{2}(P+Q)$

Returns $JSD(P||Q)$
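
A sketch in terms of the relative entropy, reusing the relativeEntropySketch from above (Juqst's JSD may use a different log base):

    function jsdSketch(P, Q)
        M = (P .+ Q) ./ 2    # the mixture distribution
        return 0.5 * relativeEntropySketch(P, M) + 0.5 * relativeEntropySketch(Q, M)
    end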

Juqst.conditionalMutualInfo - Method

conditionalMutualInfo: pass in vectors of qubits for X, Y and Z, and the distribution to analyse.

Discussion

Gives the conditional mutual information for the qubits supplied in X, Y and Z, where we want $I(XY|Z)$.

$I(XY|Z) = \sum_{x,y,z} P(x,y,z)\log_2\left(\frac{P(z)\,P(x,y,z)}{P(x,z)\,P(y,z)}\right)$

Returns

$I(XY|Z)$
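
A hypothetical call, assuming the arguments are passed in the order X, Y, Z, distribution as described above:

    # I(XY|Z) with X = qubit 1, Y = qubit 2, conditioned on Z = qubit 3
    cmi = conditionalMutualInfo([1], [2], [3], p)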

Gibbs Random Fields (one dimensional)

Juqst.gibbsRandomField - Function

gibbsRandomField(pps,constraints)

Arguments

  • pps: Array{Float64,1}. For example the output of convertAndProject.

  • constraints: Array{Array{Int64,1},1}: the division of the Gibbs field. See below for an example.

    Takes a joint probability distribution and the Gibbs variable constraints, and returns a vector of reduced probability distributions. For example, the constraints might be [[1,2,3,4],[3,4,5,6],[5,6,7,8]] over a field of 8 qubits. This would return an array of 3 ϕ's, each 16 long (2^4), from which the joint probability can be extracted on the assumption that, say, qubits 1,2 are independent of qubits 5-8. Note: the assumption built in here is that consecutive constraints end and start with qubits in common, e.g. [1,2,->3,4],[3,4<-,5,6]. A usage sketch follows the Returns section below.

Note: the other form of gibbsRandomField, where the constraints are of type Array{Array{Array{Any,1},1},1}, is probably (now) preferred; it lets you specify the constraints more generally, in particular as tuples of qubits and the qubits they are conditioned on.

Returns

ϕ as a series of factors, depending on the constraints.
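
A hypothetical usage sketch with the 8-qubit example constraints above (pps might come from convertAndProject):

    constraints = [[1,2,3,4],[3,4,5,6],[5,6,7,8]]
    ϕ = gibbsRandomField(pps, constraints)   # three factors, each 2^4 long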

gibbsRandomField(pps,gen_constraints)

Arguments

  • pps: Array{Float64,1}. For example the output of convertAndProject.

  • constraints: Array{Array{Array{Any,1},1},1}: the division of the Gibbs field. See below for an example.

    Takes a joint probability distribution and the conditioned variable constraints, and returns a vector of reduced probability distributions. It is up to the user to make sure that the factorization is complete and makes sense. For example, the constraints might be [[(1,2),(3,4)],[(3,),(4,5,6)],[(4,5,6,7,8),()]] over a field of 8 qubits. This would return an array of 3 ϕ's, the first 16 long (2^4), the second 2^4 long and the third 2^5 long, from which the joint probability can be extracted on the assumption that, say, qubits 1,2 are independent of qubits 5-8. A usage sketch follows the Returns section below.

Returns

ϕ as a series of factors, depending on the constraints.
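
A hypothetical usage sketch of this generalised form, with the example constraints above (each constraint is a pair: qubits, then the qubits they are conditioned on):

    gen_constraints = [[(1,2),(3,4)], [(3,),(4,5,6)], [(4,5,6,7,8),()]]
    ϕ = gibbsRandomField(pps, gen_constraints)   # factors of length 2^4, 2^4 and 2^5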

Juqst.getGrainedP - Method

getGrainedP(ϕ,tomatch,graining)

Arguments

ϕ: the factor graph (possibly obtained through gibbsRandomField)
tomatch: the bit pattern we want
graining: the split for the factor graph.

Description

For instance, say we want all bits zero apart from qubit 3 (which = 1). Then for ϕ[1], which represents bits [2,1,3,6], we need the entry corresponding to the decimal equivalent of reverse(0,0,1,0), plus one (because 0 0 0 0 is the first entry, i.e. index 1). There are length(ϕ) of these entries, which gives us the perVecIndex. Then we just read them out of ϕ with a map, using foldl to multiply them together.

Example use

    generalisedConstraints = [
                        [[1,],[2,14]],
                        [[2,14],[13,3]],
                        [[13,3],[4,12]],
                        [[4,12],[5,11]],
                        [[5,11],[6,10]],
                        [[6,10],[7,9]],
                        [[7,9,8],[]]]
    ϕ = Marginal.gibbsRandomField(singlePps, generalisedConstraints)
    reconstructed = [Marginal.getGrainedP(ϕ, tomatch, [vcat(x[1],x[2]) for x in generalisedConstraints]) for tomatch = 0:(2^14-1)]

This takes a global probability distribution, uses it to fill in the factor-graph variables specified in generalisedConstraints, and then reconstructs the probability distribution as given by the factor graph.

Returns:

the extracted probability for the given tomatch bit pattern.

Juqst.reconstruct - Method

Given a distribution and a set of constraints, returns the reconstruction of the distribution parameterized by the constraints.

Juqst.reconstructedJS - Method

reconstructedJS

Arguments

distribution: a probability distribution
constraints: a set of constraints (see first form of gibbsRandomField)

Discussion

Given a distribution and a set of constraints, reconstructs the distribution using the constraints.

Returns

The JSD of the reconstructed distribution and the original distribution.
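
A hypothetical call, scoring how well a factorisation captures the original distribution:

    jsd = reconstructedJS(pps, [[1,2,3,4],[3,4,5,6],[5,6,7,8]])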

Juqst.marginaliseFromRawData - Function

marginaliseFromRawData

rawData - all of the observations, i.e. an array indexed 1..|lengths|, each entry being a $2^n$ vector.
constraints - currently a list of constraints in the same form as [`gibbsRandomField(pps,constraints)`](@ref).
lengths - the lengths the data sets were taken at.

Takes in the raw observations for each of the lengths specified in lengths.
Extracts the joint probabilities needed from the constraints.
Marginalises the observations, transforms, fits, transforms and fills in the joint probabilities.
Returns the completed Gibbs factors.

Note: if your $2^n$ vectors are too big, simply rewrite this to pass in the actual observations and work out the marginalised observations as you scan through them. Sparse arrays will potentially help (although it is not certain that marginalise will work with them).