Classification

This tutorial for MPSTime will take you through the basic steps needed to fit an MPS to a time-series dataset.

Demo dataset

First, import or generate your data. Here, we generate a two-class "noisy trendy sine" dataset for the sake of demonstration, but if you have a dataset in mind, you can skip to the next section. Our demonstration dataset consists of a sine function with a randomised phase, plus a linear trend, plus some normally distributed noise. Each time series in class $c$ is given at time $t$ by:

\[x^c_t = \sin{\left(\frac{2\pi}{20}t + \psi\right)} + \frac{mt}{T} + \sigma_c n_t\,,\]

where $m$ is the slope of a linear trend, $\psi \in [0, 2\pi)$ is a uniformly random phase offset, $\sigma_c$ is the noise scale, and $n_t \sim \mathcal{N}(0,1)$ are normally distributed random variables.

The two classes will be distinguished by their noise levels. The class one time series $x^1$ have $\sigma_1 = 0.1$, and the class two time series $x^2$ have $\sigma_2 = 0.9$.

The code below sets this up:

julia> using Random

julia> rng = Xoshiro(1); # fix the rng seed

julia> function trendy_sine(T::Integer, n_inst::Integer, noise_std::Real, rng)
           X = Matrix{Float64}(undef, n_inst, T)
           ts = 1:T
           for series in eachrow(X)
               phase = 2 * pi * rand(rng) # uniformly random phase offset ψ
               series .= sin.(pi / 10 .* ts .+ phase) .+ 3 .* ts ./ T .+ noise_std .* randn(rng, T)
           end
           return X
       end;
julia> ntimepoints = 100; # specify number of samples per instance
julia> ntrain_instances = 300; # specify num training instances
julia> ntest_instances = 200; # specify num test instances
julia> X_train = vcat( trendy_sine(ntimepoints, ntrain_instances ÷ 2, 0.1, rng), trendy_sine(ntimepoints, ntrain_instances ÷ 2, 0.9, rng) );
julia> y_train = vcat( fill(1, ntrain_instances ÷ 2), fill(2, ntrain_instances ÷ 2) );
julia> X_test = vcat( trendy_sine(ntimepoints, ntest_instances ÷ 2, 0.1, rng), trendy_sine(ntimepoints, ntest_instances ÷ 2, 0.9, rng) );
julia> y_test = vcat( fill(1, ntest_instances ÷ 2), fill(2, ntest_instances ÷ 2) );
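
To visualise a few time-series instances from each class, you can plot some rows of X_train. A minimal sketch, assuming the Plots.jl package is installed (the colours and labels are arbitrary):

julia> using Plots

julia> p = plot(X_train[1:3, :]'; c=:royalblue, label=["class 1 (σ₁ = 0.1)" "" ""], xlabel="t", ylabel="xₜ"); # first 150 rows are class 1

julia> plot!(p, X_train[151:153, :]'; c=:crimson, label=["class 2 (σ₂ = 0.9)" "" ""]) # remaining rows are class 2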

Training an MPS

For the most basic use of fitMPS, select your hyperparameters and run the fitMPS function. Some (truncated) output from our noisy trendy sine data with default hyperparameters is given below.

julia> opts = MPSOptions(); # calling this with no arguments gives default hyperparameters
julia> mps, info, test_states = fitMPS(X_train, y_train, X_test, y_test, opts);
Generating initial weight MPS with bond dimension χ_init = 4
        using random state 1234.
Initialising train states.
Initialising test states.
Using 1 iterations per update.
Training KL Div. 122.43591167452153 | Training acc. 0.51.
Test KL Div. 121.83350501986212 | Testing acc. 0.55.

Test conf: [55 45; 45 55].
Using optimiser CustomGD with the "TSGO" algorithm
Starting backward sweeep: [1/5]
Backward sweep finished.
Starting forward sweep: [1/5]
    ...

MPS normalised!

Training KL Div. -18.149569463050405 | Training acc. 1.0.
Test KL Div. -1.2806885386973699 | Testing acc. 0.925.

Test conf: [100 0; 15 85]. 

fitMPS doesn't use X_test or y_test for anything except printing performance evaluations, so it is safe to leave them blank. For unsupervised learning, input a dataset with only one class, or pass only X_train (y_train has a default value of zeros(Int, size(X_train, 1))).
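
A minimal unsupervised call therefore needs only the data matrix:

julia> mps, info, _ = fitMPS(X_train); # y_train defaults to a single class of zeros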

The mps::TrainedMPS can be passed directly to classify for classification, or to init_imputation_problem to set up an imputation problem. The info Dict provides a short training summary, which can be pretty-printed with sweep_summary.
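
For example, after the fit above:

julia> sweep_summary(info) # pretty-prints per-sweep accuracies, losses, and timings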

You can use test_states to print a summary of the MPS performance on the test set.

julia> get_training_summary(mps, test_states; print_stats=true)

         Overlap Matrix
┌──────┬───────────┬───────────┐
│      │   |ψ1⟩    │   |ψ2⟩    │
├──────┼───────────┼───────────┤
│ ⟨ψ1| │ 5.022e-01 │ 2.216e-04 │
├──────┼───────────┼───────────┤
│ ⟨ψ2| │ 2.216e-04 │ 4.978e-01 │
└──────┴───────────┴───────────┘
          Confusion Matrix
┌──────────┬───────────┬───────────┐
│          │ Pred. |1⟩ │ Pred. |2⟩ │
├──────────┼───────────┼───────────┤
│ True |1⟩ │       100 │         0 │
├──────────┼───────────┼───────────┤
│ True |2⟩ │        15 │        85 │
└──────────┴───────────┴───────────┘
┌───────────────────┬───────────┬──────────┬──────────┬─────────────┬─────────┬───────────┐
│ test_balanced_acc │ train_acc │ test_acc │ f1_score │ specificity │  recall │ precision │
│           Float64 │   Float64 │  Float64 │  Float64 │     Float64 │ Float64 │   Float64 │
├───────────────────┼───────────┼──────────┼──────────┼─────────────┼─────────┼───────────┤
│             0.925 │       1.0 │    0.925 │ 0.924576 │       0.925 │   0.925 │  0.934783 │
└───────────────────┴───────────┴──────────┴──────────┴─────────────┴─────────┴───────────┘
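
The summary statistics are also returned as a Dict, so they can be captured programmatically:

julia> stats = get_training_summary(mps, test_states); # omit print_stats=true to suppress the tables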

Hyperparameters

There are a number of hyperparameters and data preprocessing options that can be specified using MPSOptions(; key=value).
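
For example (the values here are illustrative, not recommendations):

julia> opts = MPSOptions(; nsweeps=10, chi_max=40, eta=0.005, d=6);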

MPSTime.MPSOptions - Type
MPSOptions(; <Keyword Arguments>)

Set the hyperparameters and other options for fitMPS.

Fields:

Logging

  • verbosity::Int=1: How much debug/progress info to print to the terminal while optimising the MPS. Higher numbers mean more output
  • log_level::Int=3: How much statistical output. 0 for nothing, >0 to print losses, accuracies, and the confusion matrix at each step (noticeable computational overhead). #TODO implement finer grain control
  • track_cost::Bool=false: Whether to print the cost at each Bond tensor site to the terminal while training, mostly useful for debugging new cost functions or optimisers (HUGE computational overhead)

MPS Training Hyperparameters

  • nsweeps::Int=5: Number of MPS optimisation sweeps to perform (both forwards and backwards)
  • chi_max::Int=25: Maximum bond dimension allowed within the MPS during the SVD step
  • eta::Float64=0.01: The learning rate. For gradient descent methods, this is the step size. For Optim and OptimKit this serves as the initial step size guess input into the linesearch
  • d::Int=5: The dimension of the feature map or "Encoding". This is the true maximum dimension of the feature vectors. For a splitting encoding, d = num_splits * aux_basis_dim
  • cutoff::Float64=1E-10: Size-based cutoff for the number of singular values in the SVD (see the ITensors SVD documentation)
  • dtype::DataType=Float64 or ComplexF64 depending on encoding: The datatype of the elements of the MPS. Supports arbitrary precision types such as BigFloat and Complex{BigFloat}
  • exit_early::Bool=false: Stops training if training accuracy is 1 at the end of any sweep.

Encoding Options

  • encoding::Symbol=:Legendre: The encoding to use, including :Stoudenmire, :Fourier, :Legendre, :SLTD, :Custom, etc.; see the Encoding docs for a complete list. Can be just a time-(in)dependent orthonormal basis, or such a basis mapped onto a number of "splits", which distribute tighter basis functions where the sites of a time series are more likely to be measured.
  • projected_basis::Bool=false: Whether to project a basis onto the training data at each time point. Normally, when specifying a basis of dimension d, the first d lowest-order terms are used. When projected_basis=true, the training data is used to construct a PDF of the possible time-series amplitudes at each time point, and the first d largest terms of this PDF's series expansion are used to select the basis terms (see the sketch after this list).
  • aux_basis_dim::Int=2: Unused for standard encodings. If the encoding is a SplitBasis, this serves as the auxiliary dimension of a basis mapped onto the split encoding, so that the number of histogram bins = d / aux_basis_dim.
  • encode_classes_separately::Bool=false: Only relevant for data-driven bases. If true, the data is split up by class before being encoded. Functionally, this causes the encoding method to vary depending on the class
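
For instance, a data-driven projected basis can be requested like so (the keyword values are illustrative):

julia> opts = MPSOptions(; encoding=:Legendre, projected_basis=true, d=8);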

Data Preprocessing and MPS Initialisation

  • sigmoid_transform::Bool: Whether to apply a sigmoid transform to the data before minmaxing. This has the form

\[\boldsymbol{X'} = \left(1 + \exp\left(-\frac{\boldsymbol{X}-m_{\boldsymbol{X}}}{r_{\boldsymbol{X}} / 1.35}\right)\right)^{-1},\]

where $\boldsymbol{X}$ is the un-normalised time-series data matrix, $m_{\boldsymbol{X}}$ is the median of $\boldsymbol{X}$, and $r_{\boldsymbol{X}}$ is its interquartile range.

  • minmax::Bool: Whether to apply a minmax norm to [0,1] before encoding. This has the form

\[\boldsymbol{X''} = \frac{\boldsymbol{X'} - x'_{\text{min}}}{x'_{\text{max}} - x'_{\text{min}}},\]

where $\boldsymbol{X'}$ is the robust-sigmoid transformed data matrix from above, and $x'_\text{min}$ and $x'_\text{max}$ are its minimum and maximum; a sketch of both transforms follows this list.

  • data_bounds::Tuple{Float64, Float64} = (0.,1.): The region to bound the data to if minmax=true. This is separate from the encoding domain. All encodings expect data to be scaled between 0 and 1. Setting the data bounds slightly away from [0,1] can help when your basis has poor support near its boundaries.

  • init_rng::Int: Random seed used to generate the initial MPS

  • chi_init::Int: Initial bond dimension of the random MPS
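
As a reference for the two transforms defined above, here is a minimal sketch that applies the robust sigmoid followed by the minmax rescaling (an illustrative re-implementation of the formulas, not MPSTime's internal code):

julia> using Statistics

julia> function robust_sigmoid_minmax(X::AbstractMatrix)
           m = median(X)                                       # median of the full data matrix
           r = quantile(vec(X), 0.75) - quantile(vec(X), 0.25) # interquartile range
           Xs = @. 1 / (1 + exp(-(X - m) / (r / 1.35)))        # robust sigmoid transform
           return (Xs .- minimum(Xs)) ./ (maximum(Xs) - minimum(Xs)) # minmax to [0, 1]
       end;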

Loss Functions and Optimisation Methods

  • loss_grad::Symbol=:KLD: The type of cost function to use for training the MPS, typically Mean Squared Error (:MSE) or KL Divergence (:KLD), but can also be a weighted sum of the two (:Mixed)

  • bbopt::Symbol=:TSGO: Which local optimiser to use. Built-in options are simple gradient descent (:GD) or gradient descent with the TSGO update rule (:TSGO). Other options are conjugate gradient descent using either the Optim or OptimKit packages (:Optim or :OptimKit respectively); see the example after this list. The CGD methods work well for MSE-based loss functions, but seem to perform poorly for KLD-based loss functions.

  • rescale::Tuple{Bool,Bool}=(false,true): Has the form rescale = (before::Bool, after::Bool), specifying when to enforce the normalisation of the MPS during training: normalise!(BT) is called either before or after the bond tensor BT is updated. Note that for an MPS that starts in canonical form, rescale = (true,true) will train identically to rescale = (false,true), but may be less performant.

  • update_iters::Int=1: Maximum number of optimiser iterations to perform for each bond tensor optimisation, e.g. the number of steps of (conjugate) gradient descent used by TSGO, Optim, or OptimKit

  • train_classes_separately::Bool=false: Whether the trainer optimises the total MPS loss over all classes, or treats each class as a separate problem. Should make very little difference
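
For example, to pair an MSE loss with the Optim-based conjugate gradient optimiser and more iterations per update (an illustrative combination):

julia> opts = MPSOptions(; loss_grad=:MSE, bbopt=:Optim, update_iters=10);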

Debug

  • return_encoding_meta_info::Bool=false: Debug flag: Whether to return the normalised data as well as the histogram bins for the splitbasis types

A second method of MPSOptions converts MPSTime's internal Options type into a serialisable MPSOptions.

You can also print a formatted table of options with print_opts (beware: long output):

julia> print_opts(opts)
┌────────────┬──────────────┬──────────────────────────┬────────┬───────────────────────────┬──────────┬─────────┬─────────┬──────────┬──────────────────┬───────────────────┬───────────┬─────────────────────────┬──────────┬───────┬────────────┬───────────────────────────┬─────────┬───────────────────┬───────────────┬────────┬───────────┬───────────┬─────────────────┬─────────┐
│ track_cost │ update_iters │ train_classes_separately │ minmax │ return_encoding_meta_info │    dtype │ nsweeps │  cutoff │ chi_init │         encoding │           rescale │ loss_grad │             data_bounds │ init_rng │     d │ exit_early │ encode_classes_separately │     eta │ sigmoid_transform │ aux_basis_dim │  bbopt │ log_level │ verbosity │ projected_basis │ chi_max │
│       Bool │        Int64 │                     Bool │   Bool │                      Bool │ DataType │   Int64 │ Float64 │    Int64 │           Symbol │ Tuple{Bool, Bool} │    Symbol │ Tuple{Float64, Float64} │    Int64 │ Int64 │       Bool │                      Bool │ Float64 │              Bool │         Int64 │ Symbol │     Int64 │     Int64 │            Bool │   Int64 │
├────────────┼──────────────┼──────────────────────────┼────────┼───────────────────────────┼──────────┼─────────┼─────────┼──────────┼──────────────────┼───────────────────┼───────────┼─────────────────────────┼──────────┼───────┼────────────┼───────────────────────────┼─────────┼───────────────────┼───────────────┼────────┼───────────┼───────────┼─────────────────┼─────────┤
│      false │            1 │                    false │   true │                     false │  Float64 │       5 │ 1.0e-10 │        4 │ Legendre_No_Norm │     (false, true) │       KLD │              (0.0, 1.0) │     1234 │     5 │      false │                     false │    0.01 │              true │             2 │   TSGO │         3 │         1 │           false │      25 │
└────────────┴──────────────┴──────────────────────────┴────────┴───────────────────────────┴──────────┴─────────┴─────────┴──────────┴──────────────────┴───────────────────┴───────────┴─────────────────────────┴──────────┴───────┴────────────┴───────────────────────────┴─────────┴───────────────────┴───────────────┴────────┴───────────┴───────────┴─────────────────┴─────────┘


Classification

To predict the class of unseen data, use the classify function.

MPSTime.classify - Method
classify(mps::TrainedMPS, X_test::AbstractMatrix) -> predictions::Vector

Use the mps to predict the class of each row of X_test by selecting the class whose MPS state has the maximum overlap with the encoded time series.

Example

julia> W, info, test_states = fitMPS( X_train, y_train);
julia> preds  = classify(W, X_test); # make some predictions
julia> mean(preds .== y_test)
0.9504373177842566

For example, for the noisy trendy sine from earlier:

julia> predictions = classify(mps, X_test);
julia> using Statistics
julia> mean(predictions .== y_test)
0.925
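
To break the accuracy down by class, something along these lines works (using the labels 1 and 2 from earlier):

julia> [mean(predictions[y_test .== c] .== c) for c in (1, 2)] # per-class accuracy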

Training with a custom basis

To train with a custom basis, first declare a custom basis with function_basis, then pass it in as the last argument to fitMPS. For this to work, the encoding hyperparameter must be set to :Custom in MPSOptions:

encoding = function_basis(...)
fitMPS(X_train, y_train, X_test, y_test, MPSOptions(; encoding=:Custom), encoding)

Docstrings

MPSTime.fitMPS - Method
fitMPS(X_train::AbstractMatrix, 
       y_train::AbstractVector=zeros(Int, size(X_train, 1)), 
       X_test::AbstractMatrix=zeros(0,0), 
       y_test::AbstractVector=zeros(Int, 0), 
       opts::AbstractMPSOptions=MPSOptions(),
       custom_encoding::Union{Encoding, Nothing}=nothing) -> (MPS::TrainedMPS, training_info::Dict, encoded_test_states::EncodedTimeSeriesSet)

Train an MPS on the data X_train using the hyperparameters opts, see MPSOptions. The number of classes is determined by the entries of y_train.

Returns a trained MPS, a dictionary containing training info, and the encoded test states. X_test and y_test are used only to print performance evaluations, and may be empty. The custom_encoding argument allows the use of user-defined custom encodings, see function_basis. This requires that encoding=:Custom is specified in MPSOptions.

NOTE: the return value encoded_test_states will be sorted by class, so predictions shouldn't be compared directly to y_test.

See also: Encoding

Example

See ??fitMPS for a more verbose example.

julia> opts = MPSOptions(; d=5, chi_max=30, encoding=:Legendre, eta=0.05);
julia> print_opts(opts) # Prints options as a table
       ...
julia> W, info, test_states = fitMPS( X_train, y_train, X_test, y_test, opts);
Generating initial weight MPS with bond dimension χ_init = 4
        using random state 1234.
Initialising train states.
Using 1 iterations per update.
Training KL Div. 28.213032851945012 | Training acc. 0.31343283582089554.
Using optimiser CustomGD with the "TSGO" algorithm
Starting backward sweeep: [1/5]
        ...

Starting forward sweep: [5/5]
Finished sweep 5. Time for sweep: 0.76s
Training KL Div. -12.577920427063361 | Training acc. 1.0.

MPS normalised!

Training KL Div. -12.57792042706337 | Training acc. 1.0.
Test KL Div. -9.815236609211746 | Testing acc. 0.9504373177842566.

Test conf: [497 16; 35 481].


Extended help

julia> using JLD2 # load some data

julia> dloc = "test/Data/italypower/datasets/ItalyPowerDemandOrig.jld2";

julia> X_train, y_train, X_test, y_test = jldopen(dloc, "r") do f
           read(f, "X_train"), read(f, "y_train"), read(f, "X_test"), read(f, "y_test")
       end;
julia> opts = MPSOptions(; d=5, chi_max=30, encoding=:Legendre, eta=0.05);
julia> print_opts(opts) # Prints options as a table
       ...
julia> W, info, test_states = fitMPS( X_train, y_train, X_test, y_test, opts);
Generating initial weight MPS with bond dimension χ_init = 4
        using random state 1234.
Initialising train states.
Using 1 iterations per update.
Training KL Div. 28.213032851945012 | Training acc. 0.31343283582089554.
Using optimiser CustomGD with the "TSGO" algorithm
Starting backward sweeep: [1/5]
        ...

Starting forward sweep: [5/5]
Finished sweep 5. Time for sweep: 0.76s
Training KL Div. -12.577920427063361 | Training acc. 1.0.

MPS normalised!

Training KL Div. -12.57792042706337 | Training acc. 1.0.
Test KL Div. -9.815236609211746 | Testing acc. 0.9504373177842566.

Test conf: [497 16; 35 481].

julia> get_training_summary(W, test_states; print_stats=true);
         Overlap Matrix
┌──────┬───────────┬───────────┐
│      │   |ψ0⟩    │   |ψ1⟩    │
├──────┼───────────┼───────────┤
│ ⟨ψ0| │ 5.074e-01 │ 1.463e-02 │
├──────┼───────────┼───────────┤
│ ⟨ψ1| │ 1.463e-02 │ 4.926e-01 │
└──────┴───────────┴───────────┘
          Confusion Matrix
┌──────────┬───────────┬───────────┐
│          │ Pred. |0⟩ │ Pred. |1⟩ │
├──────────┼───────────┼───────────┤
│ True |0⟩ │       497 │        16 │
├──────────┼───────────┼───────────┤
│ True |1⟩ │        35 │       481 │
└──────────┴───────────┴───────────┘
┌───────────────────┬───────────┬──────────┬──────────┬─────────────┬──────────┬───────────┐
│ test_balanced_acc │ train_acc │ test_acc │ f1_score │ specificity │   recall │ precision │
│           Float64 │   Float64 │  Float64 │  Float64 │     Float64 │  Float64 │   Float64 │
├───────────────────┼───────────┼──────────┼──────────┼─────────────┼──────────┼───────────┤
│          0.950491 │       1.0 │ 0.950437 │ 0.950425 │    0.950491 │ 0.950491 │  0.951009 │
└───────────────────┴───────────┴──────────┴──────────┴─────────────┴──────────┴───────────┘

julia> sweep_summary(info)
┌────────────────┬──────────┬───────────────┬───────────────┬───────────────┬───────────────┬───────────────┬────────────┬──────────┐
│                │ Initial  │ After Sweep 1 │ After Sweep 2 │ After Sweep 3 │ After Sweep 4 │ After Sweep 5 │ After Norm │   Mean   │
├────────────────┼──────────┼───────────────┼───────────────┼───────────────┼───────────────┼───────────────┼────────────┼──────────┤
│ Train Accuracy │ 0.313433 │      1.0      │      1.0      │      1.0      │      1.0      │      1.0      │    1.0     │   1.0    │
├────────────────┼──────────┼───────────────┼───────────────┼───────────────┼───────────────┼───────────────┼────────────┼──────────┤
│  Test Accuracy │ 0.409135 │   0.947522    │   0.951409    │   0.948494    │   0.948494    │   0.950437    │  0.950437  │ 0.949271 │
├────────────────┼──────────┼───────────────┼───────────────┼───────────────┼───────────────┼───────────────┼────────────┼──────────┤
│  Train KL Div. │  28.213  │   -11.7855    │    -12.391    │   -12.4831    │   -12.5466    │   -12.5779    │  -12.5779  │ -12.3568 │
├────────────────┼──────────┼───────────────┼───────────────┼───────────────┼───────────────┼───────────────┼────────────┼──────────┤
│   Test KL Div. │ 27.7435  │   -9.12893    │   -9.73479    │   -9.79248    │    -9.8158    │   -9.81524    │  -9.81524  │ -9.65745 │
├────────────────┼──────────┼───────────────┼───────────────┼───────────────┼───────────────┼───────────────┼────────────┼──────────┤
│     Time taken │   0.0    │   0.658366    │    0.75551    │   0.719035    │   0.718444    │    1.16256    │    NaN     │ 0.802783 │
└────────────────┴──────────┴───────────────┴───────────────┴───────────────┴───────────────┴───────────────┴────────────┴──────────┘
MPSTime.get_training_summary - Method
get_training_summary(mps::TrainedMPS, 
                     test_states::EncodedTimeSeriesSet;  
                     print_stats::Bool=false, 
                     io::IO=stdout) -> stats::Dict

Print a summary of the training process of mps, with performance evaluated on test_states.

MPSTime.print_opts - Function
print_opts(opts::AbstractMPSOptions; io::IO=stdout)

Print the MPSOptions struct in a table.
